Logstash constantly restarting

(Matt) #1

Hi, very new to the ELK stack and just trying to set everything up on a fresh install of Ubuntu Server 16.04 with OpenJDK 8.

I have a C# application using NLog, which is posting logs to the Elasticsearch endpoint on the Ubuntu server.

Everything is actually functioning as expected, and the logs are visible in Kibana.

My only issue is that Logstash constantly restarts itself and eats 100% CPU.

Logstash is installed as a systemd service via apt-get.

I am unsure what to put into my config file here: /etc/logstash/bin/default.conf

Currently it looks like:

input {}
output { elasticsearch { hosts => ["localhost:9200"] } }

When I start the service via systemctl the logs show this (time is ordered bottom to top):

 18:28 Stopped logstash. systemd
 18:28 logstash.service: Service hold-off time over, scheduling restart. systemd
 18:28 [2018-06-14T18:28:38,141][INFO ][logstash.pipeline ] Pipeline has terminated {:pipeline_id=>"main", :thread=>"#<Thread:0x48c12b58 run>"} logstash
 18:28 [2018-06-14T18:28:37,732][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600} logstash
 18:28 [2018-06-14T18:28:37,343][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]} logstash
 18:28 [2018-06-14T18:28:37,257][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x48c12b58 run>"} logstash
 18:28 [2018-06-14T18:28:37,194][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]} logstash
 18:28 [2018-06-14T18:28:37,135][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}} logstash
 18:28 [2018-06-14T18:28:37,111][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil} logstash
 18:28 [2018-06-14T18:28:37,093][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6} logstash
 18:28 [2018-06-14T18:28:37,088][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6} logstash
 18:28 [2018-06-14T18:28:37,013][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"} logstash
 18:28 [2018-06-14T18:28:36,761][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"} logstash
 18:28 [2018-06-14T18:28:36,744][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}} logstash
 18:28 [2018-06-14T18:28:36,130][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50} logstash
 18:28 [2018-06-14T18:28:33,615][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.3.0"} logstash
 18:28 Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties logstash
 18:28 Started logstash. systemd

These logs repeat constantly and CPU is 100%, used entirely by java/logstash

Apologies if this is a basic question, but I've spent a few hours reading docs and forum posts and can't work out what to try next.

Any help would be greatly appreciated

(Ry Biesemeyer) #2

When there are no longer any inputs, the Logstash process automatically shuts down; this is what enables us to support one-off pipelines where Logstash can shut down after its single input (such as generator or stdin) has closed. It is also what allows a pipeline running on persistent queues with queue.drain=true to drain the queue without starting any inputs.
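For reference, that queue-draining behavior is controlled by settings in logstash.yml rather than the pipeline config; a minimal sketch (assuming the default settings file location):

```
# logstash.yml -- use a persistent queue and drain it fully on shutdown
queue.type: persisted
queue.drain: true
```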

Since you have configured your pipeline with no inputs, Logstash begins the shutdown sequence immediately after starting up. systemd then sees the process exit and, once the hold-off time is over, schedules a restart — which is exactly the loop visible in your logs.
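In other words, a pipeline needs at least one long-running input to stay alive. As a sketch only (the beats input and port 5044 here are illustrative, not something your setup requires):

```
input {
  # A long-running input keeps the pipeline (and the service) alive
  beats {
    port => 5044   # illustrative port
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
```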

(Matt) #3

Thanks for the explanation, I kind of guessed that's what was happening.

What input(s) should I configure for my scenario?

I would like the service to remain active and not eat the CPU. Will this happen naturally once Logstash has work to do?

(Dan Hermann) #4

@jintawk, if your application is posting logs to Elasticsearch directly via NLog, you don't need to run Logstash unless you have another event source besides your application logs from which you want Logstash to pull events.
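For completeness, the direct path usually goes through the NLog.Targets.ElasticSearch package configured in nlog.config; a hedged sketch, where the target name, URI, and index pattern are illustrative and not taken from this thread:

```xml
<?xml version="1.0" encoding="utf-8"?>
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <extensions>
    <!-- from the NLog.Targets.ElasticSearch NuGet package -->
    <add assembly="NLog.Targets.ElasticSearch"/>
  </extensions>
  <targets>
    <!-- posts log events straight to Elasticsearch; no Logstash involved -->
    <target name="elastic" xsi:type="ElasticSearch"
            uri="http://localhost:9200"
            index="logstash-${date:format=yyyy.MM.dd}"/>
  </targets>
  <rules>
    <logger name="*" minlevel="Info" writeTo="elastic"/>
  </rules>
</nlog>
```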

(Matt) #5

Thanks for this. It makes perfect sense now.

It's always collectively referred to as the ELK stack, which led me down the wrong path.

More like EK stack in my case

Thanks again

(system) #6

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.