Logstash using high CPU

The other related topics didn't seem similar enough to this, so I'm posting.

I've installed Kolide Fleet, Elasticsearch, Kibana, and Logstash all on the same server. Everything is fine (Kolide Fleet, Elasticsearch, and Kibana are all running) and the server hums along at under 10% utilization, but as soon as I start Logstash the server goes to nearly 100% utilization and becomes very sluggish. Logstash has created /var/log/logstash, but it never writes anything into it, so I have no logs to check to see what is going on. I suspect there is a problem with my Logstash configuration, as I've not really been able to get a handle on how it should be done. Most people seem to use Filebeat to send the osquery logs to Logstash, which then sends them to Elasticsearch, but I read that Filebeat is not necessary if Logstash runs on the same machine where the osquery logs are stored. This is my Logstash configuration file (/etc/logstash/conf.d/47-osquery-output.conf):

input {
  file {
    path => "/var/log/osquery/osqueryd.results.log"
    type => "osquery_json"
    codec => "json"
  }
}

filter {
  if [type] == "osquery_json" {
    date {
      match => [ "unixTime", "UNIX" ]
    }
  }
}

output {
  stdout {}
  elasticsearch {
    hosts => "127.0.0.1:9200"
  }
}

Can anyone suggest why Logstash is overloading my server, and why it is not creating any logs?
Can anyone confirm that the configuration in this conf file is correct? /var/log/osquery/osqueryd.results.log is the file being written out by Kolide Fleet. I notice it is owned by root and in the root group, but it is readable by owner, group, and other, so that shouldn't be a problem. And Elasticsearch is listening on 127.0.0.1:9200. I don't understand what the filter in the middle of the config is for, though. Is it really necessary? What does it do?
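
As a sanity check, I understand the pipeline syntax can be validated without starting the service. This assumes the default deb-package layout (binary at /usr/share/logstash/bin/logstash, settings in /etc/logstash); the paths may differ on other installs:

  # Validate the pipeline configuration only; does not start processing
  sudo -u logstash /usr/share/logstash/bin/logstash \
    --path.settings /etc/logstash \
    -f /etc/logstash/conf.d/47-osquery-output.conf \
    --config.test_and_exit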

Thanks in advance!

I also discovered this in syslog. I'm not yet sure what it might mean, but I'm sharing it in case it confirms someone's suspicions about the source of my problem:
syslog.1:Mar 12 07:13:19 ip-172-30-0-162 kibana[1044]: {"type":"log","@timestamp":"2019-03-12T07:13:19Z","tags":["status","plugin:logstash@6.6.1","info"],"pid":1044,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
syslog.1:Mar 12 07:13:19 ip-172-30-0-162 kibana[1044]: {"type":"log","@timestamp":"2019-03-12T07:13:19Z","tags":["status","plugin:logstash@6.6.1","error"],"pid":1044,"state":"red","message":"Status changed from yellow to red - Unable to connect to Elasticsearch.","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
syslog.1:Mar 12 07:13:19 ip-172-30-0-162 kibana[1044]: {"type":"log","@timestamp":"2019-03-12T07:13:19Z","tags":["status","plugin:logstash@6.6.1","error"],"pid":1044,"state":"red","message":"Status changed from red to red - No Living connections","prevState":"red","prevMsg":"Unable to connect to Elasticsearch."}
syslog.1:Mar 12 07:13:35 ip-172-30-0-162 kibana[1044]: {"type":"log","@timestamp":"2019-03-12T07:13:35Z","tags":["status","plugin:logstash@6.6.1","info"],"pid":1044,"state":"green","message":"Status changed from red to green - Ready","prevState":"red","prevMsg":"No Living connections"}
syslog.1:Mar 12 09:36:54 ip-172-30-0-162 systemd[1]: Started logstash.
syslog.1:Mar 12 09:41:29 ip-172-30-0-162 systemd[1]: Stopping logstash...
syslog.1:Mar 12 09:42:33 ip-172-30-0-162 systemd[1]: logstash.service: Main process exited, code=exited, status=143/n/a
syslog.1:Mar 12 09:42:33 ip-172-30-0-162 kibana[1044]: {"type":"log","@timestamp":"2019-03-12T09:42:31Z","tags":["status","plugin:logstash@6.6.1","error"],"pid":1044,"state":"red","message":"Status changed from green to red - Request Timeout after 30000ms","prevState":"green","prevMsg":"Ready"}
syslog.1:Mar 12 09:42:33 ip-172-30-0-162 systemd[1]: logstash.service: Failed with result 'exit-code'.
syslog.1:Mar 12 09:42:33 ip-172-30-0-162 systemd[1]: Stopped logstash.
syslog.1:Mar 12 09:42:33 ip-172-30-0-162 kibana[1044]: {"type":"log","@timestamp":"2019-03-12T09:42:32Z","tags":["status","plugin:logstash@6.6.1","info"],"pid":1044,"state":"green","message":"Status changed from red to green - Ready","prevState":"red","prevMsg":"Request Timeout after 30000ms"}
syslog.1:Mar 12 10:47:15 ip-172-30-0-162 kibana[1044]: {"type":"log","@timestamp":"2019-03-12T10:47:13Z","tags":["status","plugin:logstash@6.6.1","error"],"pid":1044,"state":"red","message":"Status changed from green to red - Request Timeout after 3000ms","prevState":"green","prevMsg":"Ready"}
syslog.1:Mar 12 10:47:15 ip-172-30-0-162 kibana[1044]: {"type":"log","@timestamp":"2019-03-12T10:47:14Z","tags":["status","plugin:logstash@6.6.1","info"],"pid":1044,"state":"green","message":"Status changed from red to green - Ready","prevState":"red","prevMsg":"Request Timeout after 3000ms"}
syslog.1:Mar 12 10:48:06 ip-172-30-0-162 kibana[1044]: {"type":"log","@timestamp":"2019-03-12T10:47:51Z","tags":["status","plugin:logstash@6.6.1","error"],"pid":1044,"state":"red","message":"Status changed from green to red - Request Timeout after 3000ms","prevState":"green","prevMsg":"Ready"}
syslog.1:Mar 12 10:49:22 ip-172-30-0-162 kibana[1044]: {"type":"log","@timestamp":"2019-03-12T10:49:21Z","tags":["status","plugin:logstash@6.6.1","error"],"pid":1044,"state":"red","message":"Status changed from red to red - Request Timeout after 30000ms","prevState":"red","prevMsg":"Request Timeout after 3000ms"}
syslog.1:Mar 12 10:50:00 ip-172-30-0-162 kibana[1044]: {"type":"log","@timestamp":"2019-03-12T10:49:25Z","tags":["status","plugin:logstash@6.6.1","info"],"pid":1044,"state":"green","message":"Status changed from red to green - Ready","prevState":"red","prevMsg":"Request Timeout after 30000ms"}
syslog.1:Mar 12 10:55:29 ip-172-30-0-162 kibana[1023]: {"type":"log","@timestamp":"2019-03-12T10:55:29Z","tags":["status","plugin:logstash@6.6.1","info"],"pid":1023,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
syslog.1:Mar 12 10:55:30 ip-172-30-0-162 kibana[1023]: {"type":"log","@timestamp":"2019-03-12T10:55:30Z","tags":["status","plugin:logstash@6.6.1","error"],"pid":1023,"state":"red","message":"Status changed from yellow to red - Unable to connect to Elasticsearch.","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
syslog.1:Mar 12 10:55:30 ip-172-30-0-162 kibana[1023]: {"type":"log","@timestamp":"2019-03-12T10:55:30Z","tags":["status","plugin:logstash@6.6.1","error"],"pid":1023,"state":"red","message":"Status changed from red to red - No Living connections","prevState":"red","prevMsg":"Unable to connect to Elasticsearch."}
syslog.1:Mar 12 10:55:49 ip-172-30-0-162 kibana[1023]: {"type":"log","@timestamp":"2019-03-12T10:55:49Z","tags":["status","plugin:logstash@6.6.1","info"],"pid":1023,"state":"green","message":"Status changed from red to green - Ready","prevState":"red","prevMsg":"No Living connections"}

All I know is that until I start Logstash the server runs perfectly, and then it goes to nearly 100% (according to top). But this is weird: top doesn't show Logstash itself consuming a lot of resources. I see some strange "shim" processes and other random things, but not Logstash. Yet the high utilization only occurs when I start Logstash.
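
For reference, a standard way to see which processes are actually responsible (nothing Logstash-specific here) is something like:

  # List the top CPU consumers with their full command lines
  ps aux --sort=-%cpu | head -n 15

Pressing 'c' inside top also toggles full command lines, which should reveal what those "shim" entries really are.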

Also, this might be relevant to the syslog messages:
ubuntu@ip-172-30-0-162:~$ curl http://127.0.0.1:9200
{
  "name" : "3sHDhp8",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "_RRhXzMkQRi2cl9ree9S5Q",
  "version" : {
    "number" : "6.6.1",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "1fd8f69",
    "build_date" : "2019-02-13T17:10:04.160291Z",
    "build_snapshot" : false,
    "lucene_version" : "7.6.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

This tells me Elasticsearch is actually listening.

I think that Logstash is exiting with a syntax error or something similar, and the service manager then tries to restart it continually.
At startup, Logstash uses a lot of CPU while it (JRuby) initialises the JVM and itself.
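
A quick way to see that error directly is to stop the service and run Logstash once in the foreground. This assumes the default deb package paths; adjust them if your install differs:

  sudo systemctl stop logstash
  # Run the same pipeline in the foreground so startup errors print to the terminal
  sudo -u logstash /usr/share/logstash/bin/logstash \
    --path.settings /etc/logstash \
    -f /etc/logstash/conf.d/47-osquery-output.conf \
    --log.level=debug

If the config is broken, the error should be printed immediately instead of being hidden behind the restart loop.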

Thanks Guyboertje,
I'm coming to the conclusion that you are right. The only problem is that I'm struggling to find any clear documentation on how to configure Logstash to correctly pick up osquery log files stored on the same machine it is running on. That configuration file was the closest I came to finding one that made any sense, so I have no real way to confirm whether the configuration file is the cause of the problem, since Logstash doesn't actually output any logs about it.
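
For what it's worth, since systemd manages the service, anything Logstash prints before it dies should end up in the journal even though /var/log/logstash stays empty. That can be pulled out with standard systemd tooling:

  # Show the last few hundred journal lines for the logstash unit
  sudo journalctl -u logstash --no-pager -n 200
  # And check whether systemd is restarting it in a loop
  systemctl status logstash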
Regards,
Stephen

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.