Need guidance on searching for a list of servers that have software X installed, using Kibana/Logstash/Filebeat

Hi,

I'm not sure whether this should go in the Logstash, Elasticsearch, or another forum.

My objective:
To get a list of servers that have software X installed. Can someone please advise on the best way to achieve this?

Thanks!
Weng Sheng Lee

Can someone please give me an idea? It's been a week.

You can use Metricbeat, Elasticsearch, and Kibana.

Thanks for the response, David.

What string should I be searching for? I have 3 nodes with Zabbix installed. If I want to get a list of servers that have Zabbix installed, what syntax should I use in the search? Also, what parameters should I add to the config file so that Metricbeat, Logstash, Filebeat, etc. can find the nodes that have Zabbix installed?

Thanks!
Lee Weng Sheng

This is on the Elasticsearch forum; hopefully someone can get back to me with guidance.

Read this and specifically the "Also be patient" part.

It's fine to answer on your own thread after 2 or 3 days (not including weekends) if you don't have an answer.

Just install Metricbeat on every server, configure it to send the data to your Elasticsearch node, and configure the Kibana endpoint as well.

Then open the Metricbeat system dashboard and you should see all the processes running on all nodes.

Search for the one you want (I don't know what the Linux process name for Zabbix is, probably zabbix) and you will probably see where it is running.
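For example, once the process metricset data is flowing, a search along these lines in Kibana's Discover (against the metricbeat-* index pattern) would list the hosts running it. This is just a sketch: the exact process name (zabbix, zabbix_agentd, zabbix_server, ...) and the hostname field (host.name vs. beat.hostname, depending on the Metricbeat version) are assumptions you would need to adjust.

```
metricset.name: process AND system.process.name: zabbix*
```

The hostname field on the matching documents then tells you which servers the process is running on.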

Thank you, David. I will give it a try and should be able to respond by tomorrow. Thanks!

Hi,

I've just added the following to the .yml file:
```
#------------------------------- System Module -------------------------------
- module: system
  metricsets:
    # CPU stats
    - cpu

    # System Load stats
    - load

    # Per CPU core stats
    #- core

    # IO stats
    #- diskio

    # Per filesystem stats
    - filesystem

    # File system summary stats
    - fsstat

    # Memory stats
    - memory

    # Network stats
    - network

    # Per process stats
    - process

    # Sockets (linux only)
    #- socket
  enabled: true
  period: 10s
  processes: ['.*']

- module: apache
  metricsets: ["status"]
  enabled: true
  period: 1s
  hosts: ["http://127.0.0.1"]
```

When I restarted the Metricbeat agent, it gave me:

```
[root@mhlinux151 ~]# service metricbeat restart
Exiting: error loading config file: yaml: line 24: did not find expected key
```

Do you know the solution for this?

Thanks!

Please format your code, logs, or configuration files using the </> icon as explained in this guide, and not the citation button. It will make your post more readable.

Or use markdown style like:

```
CODE
```

This is the icon to use if you are not using markdown format:

There's a live preview panel for exactly this reason.

Lots of people read these forums, and many of them will simply skip over a post that is difficult to read, because it's just too large an investment of their time to try and follow a wall of badly formatted text.
If your goal is to get an answer to your questions, it's in your interest to make it as easy to read and understand as possible.
Please update your post.


Sorry for the late reply.

Here's the error:
```
[root@mhlinux151 metricbeat]# service metricbeat restart
Exiting: error loading config file: yaml: line 118: did not find expected key
```

Here's the code:
```
###################### Metricbeat Configuration Example #######################

# This file is an example configuration file highlighting only the most common
# options. The metricbeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/metricbeat/index.html

#==========================  Modules configuration ============================

metricbeat.config.modules:
  # Glob pattern for configuration loading
  path: /etc/metricbeat/modules.d/*.yml
  path: ${path.config}/modules.d/*.yml

metricbeat.modules:

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 1
  index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here, or by using the -setup CLI flag or the setup command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#============================= Elastic Cloud ==================================

# These settings simplify using metricbeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the output.elasticsearch.hosts and
# setup.kibana.host options.
# You can find the cloud.id in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the output.elasticsearch.username and
# output.elasticsearch.password settings. The format is <user>:<pass>.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]

  # Enabled ilm (beta) to use index lifecycle management instead daily indices.
  #ilm.enabled: false

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
#output.logstash:
  # The Logstash hosts
  hosts: ["10.139.16.43:5004"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
  #ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  # ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

#============================== Xpack Monitoring ===============================
# metricbeat can export internal metrics to a central Elasticsearch monitoring
# cluster. This requires xpack monitoring to be enabled in Elasticsearch. The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#xpack.monitoring.enabled: false

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well. Any setting that is not set is
# automatically inherited from the Elasticsearch output configuration, so if you
# have the Elasticsearch output configured, you can simply uncomment the
# following line.
#xpack.monitoring.elasticsearch:
```

By the way, here's line 118:

```
113 #----------------------------- Logstash output --------------------------------
118
119 hosts: ["10.139.16.43:5004"]
120
121 # Optional SSL. By default is off.
122 # List of root certificates for HTTPS server verifications
123 #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
124 #ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
125
126 # Certificate for SSL client authentication
127 #ssl.certificate: "/etc/pki/client/cert.pem"
128
129 # Client Certificate Key
130 # ssl.key: "/etc/pki/client/cert.key"
131
```
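For what it's worth, the likely culprit in that excerpt is that output.logstash: is commented out (as in the full config above) while the hosts: line at 119 is not, so the YAML parser finds a setting with no parent section; that is a classic cause of "did not find expected key". Assuming the intent is to ship through Logstash, that part of the file would normally look roughly like the sketch below (Beats only allows one enabled output, so output.elasticsearch: would then have to be commented out). Alternatively, if the Elasticsearch output is the one you want, just comment the stray hosts: line back out.

```
#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["10.139.16.43:5004"]
```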

Now I've fixed the "did not find expected key" error, but I got a new error, shown below:

```
2019-04-01T19:01:55.626-0500 INFO instance/beat.go:281 Setup Beat: metricbeat; Version: 6.6.2
2019-04-01T19:01:58.628-0500 INFO add_cloud_metadata/add_cloud_metadata.go:319 add_cloud_metadata: hosting provider type not detected.
2019-04-01T19:01:58.629-0500 ERROR instance/beat.go:911 Exiting: error initializing publisher: missing required field accessing 'output.elasticsearch.hosts'
Exiting: error initializing publisher: missing required field accessing 'output.elasticsearch.hosts'
```

I just resolved the error "Exiting: error initializing publisher: missing required field accessing 'output.elasticsearch.hosts'"
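For reference, that error usually means the hosts: entry under output.elasticsearch: is commented out, missing, or not indented under that key. A minimal working Elasticsearch output section looks roughly like this (the host below is a placeholder):

```
output.elasticsearch:
  # Array of hosts to connect to (Elasticsearch HTTP port, 9200 by default)
  hosts: ["<your-es-host>:9200"]
```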

I restarted the service and it didn't give me an error. The service claims it's up:

```
["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read","38","39","40","41","42","43","44","45","46","47","48","49","50","51","52","53","54","55","56","57","58","59","60","61","62","63"],"ambient":null}, "cwd": "/", "exe": "/usr/share/metricbeat/bin/metricbeat", "name": "metricbeat", "pid": 6009, "ppid": 6008, "seccomp": {"mode":""}, "start_time": "2019-04-02T16:12:28.440-0500"}}}
2019-04-02T16:12:28.586-0500 INFO instance/beat.go:280 Setup Beat: metricbeat; Version: 6.7.0
2019-04-02T16:12:28.589-0500 INFO elasticsearch/client.go:164 Elasticsearch url: http://10.139.16.??:5044
2019-04-02T16:12:28.592-0500 INFO [publisher] pipeline/module.go:110 Beat name: linux1
Config OK
[ OK ]
```

However, I still can't find the host in the Metricbeat data! Do you know how to troubleshoot this? Is my expectation correct? I expect to see mhlinux151 as the hostname in Kibana, but I can't find it; I can only see other hostnames. I refreshed the browser at least three times and still can't find the hostname.

(screenshot: metricbeat)

I found this in the Metricbeat log file:

```
2019-04-02T16:31:58.660-0500 INFO [monitoring] log/log.go:144 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":13240,"time":{"ms":308}},"total":{"ticks":33470,"time":{"ms":800},"value":33470},"user":{"ticks":20230,"time":{"ms":492}}},"handles":{"limit":{"hard":4096,"soft":1024},"open":4},"info":{"ephemeral_id":"da8b4377-3baf-420d-a16c-c4460e0e4e8c","uptime":{"ms":1170045}},"memstats":{"gc_next":31056528,"memory_alloc":22266128,"memory_total":3445476984,"rss":196608}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":6,"events":{"active":2335,"published":67,"retry":34,"total":67}}},"metricbeat":{"system":{"cpu":{"events":3,"success":3},"filesystem":{"events":12,"success":12},"fsstat":{"events":1,"success":1},"load":{"events":3,"success":3},"memory":{"events":3,"success":3},"network":{"events":15,"success":15},"process":{"events":24,"success":24},"process_summary":{"events":3,"success":3},"socket_summary":{"events":3,"success":3}}},"system":{"load":{"1":0.34,"15":0.51,"5":0.34,"norm":{"1":0.0213,"15":0.0319,"5":0.0213}}}}}}
2019-04-02T16:32:25.935-0500 ERROR pipeline/output.go:100 Failed to connect to backoff(elasticsearch(http://10.139.16.43:5044)): Get http://10.139.16.:5044: dial tcp 10.139.16.43:5044: connect: connection refused
2019-04-02T16:32:25.935-0500 INFO pipeline/output.go:93 Attempting to reconnect to backoff(elasticsearch(http://10.139.16.43:5044)) with 30 reconnect attempt(s)
2019-04-02T16:32:25.935-0500 INFO [publish] pipeline/retry.go:189 retryer: send unwait-signal to consumer
2019-04-02T16:32:25.935-0500 INFO [publish] pipeline/retry.go:191 done
2019-04-02T16:32:25.935-0500 INFO [publish] pipeline/retry.go:166 retryer: send wait signal to consumer
2019-04-02T16:32:25.935-0500 INFO [publish] pipeline/retry.go:168 done
2019-04-02T16:32:28.660-0500 INFO [monitoring] log/log.go:144 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":13550,"time":{"ms":318}},"total":{"ticks":34310,"time":{"ms":845},"value":34310},"user":{"ticks":20760,"time":{"ms":527}}},"handles":{"limit":{"hard":4096,"soft":1024},"open":4},"info":{"ephemeral_id":"da8b4377-3baf-420d-a16c-c4460e0e4e8c","uptime":{"ms":1200044}},"memstats":{"gc_next":31576496,"memory_alloc":21224488,"memory_total":3533775688,"rss":684032}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":6,"events":{"active":2389,"published":54,"retry":34,"total":54}}},"metricbeat":{"system":{"cpu":{"events":3,"success":3},"load":{"events":3,"success":3},"memory":{"events":3,"success":3},"network":{"events":15,"success":15},"process":{"events":24,"success":24},"process_summary":{"events":3,"success":3},"socket_summary":{"events":3,"success":3}}},"system":{"load":{"1":0.59,"15":0.53,"5":0.4,"norm":{"1":0.0369,"15":0.0331,"5":0.025}}}}}}
```

Please advise whether I need to start another service, because the Metricbeat service is started and I can ping the IP address:

```
[root@linux metricbeat]# ping 10.139.16.??
PING 10.139.16.??(10.139.16.??) 56(84) bytes of data.
64 bytes from 10.139.16.??: icmp_seq=1 ttl=51 time=169 ms
64 bytes from 10.139.16.??: icmp_seq=2 ttl=51 time=169 ms

[root@linu151x metricbeat]# service metricbeat status
metricbeat-god (pid 6039) is running...

2019-04-02T16:57:37.071-0500 ERROR pipeline/output.go:100 Failed to connect to backoff(elasticsearch(http://10.139.16.43:5044)): Get http://10.139.16.??:5044: dial tcp 10.139.16.??:5044: connect: connection refused
2019-04-02T16:57:37.071-0500 INFO pipeline/output.go:93 Attempting to reconnect to backoff(elasticsearch(http://10.139.16.??:5044)) with 22 reconnect attempt(s)

[root@linux metricbeat]# lsof -i | grep 5044
[root@linux metricbeat]#
```

Can you tell me what application I need to run in order to listen on port 5044, or what configuration I have to do in order to achieve that? Thanks!
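A note on the ports, since they are easy to mix up: 9200 is the port Elasticsearch itself listens on, while 5044 (or 5004) is only open if a Logstash instance with a Beats input is running and bound to it. So either point output.elasticsearch at the Elasticsearch HTTP port, or keep the 5044/5004 port but switch to output.logstash and make sure Logstash is actually listening there. A sketch of the first option, with a placeholder host:

```
output.elasticsearch:
  # Elasticsearch REST API port, not the Logstash beats port
  hosts: ["<your-es-host>:9200"]
```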

I managed to solve the error by altering the config file. That error is now gone, but I'm still not seeing the beat name mhlinux151 in Kibana.

```
2019-04-02T18:50:30.362-0500 INFO instance/beat.go:273 Setup Beat: metricbeat; Version: 6.4.2
2019-04-02T18:50:30.363-0500 INFO pipeline/module.go:98 Beat name: mhlinux151
2019-04-02T18:50:30.363-0500 INFO instance/beat.go:367 metricbeat start running.
2019-04-02T18:50:30.363-0500 INFO [monitoring] log/log.go:114 Starting metrics logging every 30s
2019-04-02T18:50:30.367-0500 INFO cfgfile/reload.go:196 Loading of config files completed.
2019-04-02T18:50:31.366-0500 INFO pipeline/output.go:95 Connecting to backoff(async(tcp://10.139.16.??:5004))
2019-04-02T18:50:31.981-0500 INFO pipeline/output.go:105 Connection to backoff(async(tcp://10.139.16.??:5004)) established
```

Please advise!
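Based on that log, the beat is now publishing through the Logstash output on port 5004. For the data to reach Kibana, the Logstash pipeline on that host must have a Beats input listening on that port and an Elasticsearch output writing to an index that the Kibana index pattern covers. Roughly, the relevant Metricbeat side looks like the sketch below (the host is masked here the same way as in the logs above):

```
output.logstash:
  hosts: ["10.139.16.??:5004"]

#output.elasticsearch:
#  hosts: ["localhost:9200"]
```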

I'm still not seeing the hostname in Kibana, and the log doesn't throw any errors.

```
2019-04-08T15:39:00.365-0500 INFO [monitoring] log/log.go:141 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":5018840,"time":{"ms":298}},"total":{"ticks":13531990,"time":{"ms":785},"value":13531990},"user":{"ticks":8513150,"time":{"ms":487}}},"info":{"ephemeral_id":"a08b8df5-0511-444a-9947-8ca1de712c72","uptime":{"ms":506910040}},"memstats":{"gc_next":6951984,"memory_alloc":6018320,"memory_total":1357883042576}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":64,"batches":3,"total":64},"read":{"bytes":105},"write":{"bytes":9020}},"pipeline":{"clients":6,"events":{"active":0,"published":64,"total":64},"queue":{"acked":64}}},"metricbeat":{"system":{"cpu":{"events":3,"success":3},"filesystem":{"events":12,"success":12},"fsstat":{"events":1,"success":1},"load":{"events":3,"success":3},"memory":{"events":3,"success":3},"network":{"events":15,"success":15},"process":{"events":24,"success":24},"process_summary":{"events":3,"success":3}}},"system":{"load":{"1":0.01,"15":0.08,"5":0.05,"norm":{"1":0.0006,"15":0.005,"5":0.0031}}}}}}
```

I expect to see mhlinux151, but I can't see it. Kibana shows other server names, just not mhlinux151.
(screenshot: beatname)
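Two things worth checking, as a guess: that the index pattern selected in Kibana actually covers the index this beat's events end up in (when shipping through Logstash, the Logstash pipeline decides the index name, which may not match metricbeat-*), and that the Discover time range includes the last few minutes. With that in place, a query like the one below should show whether any documents from this host are arriving at all; the exact hostname field varies by version:

```
beat.name: "mhlinux151" OR host.name: "mhlinux151" OR beat.hostname: "mhlinux151"
```

If nothing comes back, the events are most likely stopping at Logstash rather than reaching Elasticsearch.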

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.