Logstash stopping and starting, is this normal behaviour?

I have set up Logstash to take input from one ES index based on a query and then output to another ES index. It is working, but it is going very slowly, and from the logs it seems to be stopping and starting every few seconds. Is this normal behaviour, or is something going wrong that I should look into?
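For reference, the config has the usual elasticsearch-input-to-elasticsearch-output shape; the sketch below uses placeholder hosts, index names, and query rather than my real ones (the output host matches the localhost:9201 seen in the logs):

input {
  elasticsearch {
    hosts   => ["localhost:9200"]
    index   => "source-index"
    query   => '{ "query": { "match_all": {} } }'
    docinfo => true
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9201"]
    index => "destination-index"
  }
}
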
Here is an example of the logs I'm seeing:

[2019-03-22T10:27:58,168][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x5e40ab63 run>"}
[2019-03-22T10:27:58,220][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-03-22T10:27:58,458][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2019-03-22T10:28:02,760][INFO ][logstash.pipeline        ] Pipeline has terminated {:pipeline_id=>"main", :thread=>"#<Thread:0x5e40ab63 run>"}
[2019-03-22T10:28:18,657][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.6.2"}
[2019-03-22T10:28:23,536][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>12, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2019-03-22T10:28:23,922][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9201/]}}
[2019-03-22T10:28:24,094][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9201/"}
[2019-03-22T10:28:24,165][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2019-03-22T10:28:24,168][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2019-03-22T10:28:24,207][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9201"]}
[2019-03-22T10:28:24,237][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2019-03-22T10:28:24,268][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2019-03-22T10:28:24,601][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x54e60578 sleep>"}
[2019-03-22T10:28:24,658][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-03-22T10:28:24,912][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2019-03-22T10:28:29,221][INFO ][logstash.pipeline        ] Pipeline has terminated {:pipeline_id=>"main", :thread=>"#<Thread:0x54e60578 run>"}
[2019-03-22T10:28:45,143][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.6.2"}
[2019-03-22T10:28:49,927][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>12, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2019-03-22T10:28:50,319][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9201/]}}
[2019-03-22T10:28:50,509][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9201/"}
[2019-03-22T10:28:50,569][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2019-03-22T10:28:50,573][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2019-03-22T10:28:50,613][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9201"]}
[2019-03-22T10:28:50,632][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2019-03-22T10:28:50,672][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2019-03-22T10:28:51,009][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x193b2ab8 run>"}
[2019-03-22T10:28:51,056][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-03-22T10:28:51,309][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2019-03-22T10:28:55,612][INFO ][logstash.pipeline        ] Pipeline has terminated {:pipeline_id=>"main", :thread=>"#<Thread:0x193b2ab8 run>"}

I would suggest making sure there are enough resources to run both Logstash and Elasticsearch. I've found at least two settings that can show up as error messages in the logs, for example:

max virtual memory areas vm.max_map_count [65530] likely too low, increase to at least [262144]

Run this command:
grep vm.max_map_count /etc/sysctl.conf

If it comes back with nothing, run this (or add the setting to /etc/sysctl.conf manually):

sysctl -w vm.max_map_count=262144
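For example, to make the change persist across reboots (assuming a standard Linux host where you can edit /etc/sysctl.conf):

# append the setting to /etc/sysctl.conf and reload it
echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p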

The other one is the --ulimit setting; if you're running Elasticsearch in Docker, include this in your docker run command:

--ulimit nofile=65536:65536
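
For example (the image name, version, and ports here are just illustrative):

docker run -d --name elasticsearch \
  -p 9200:9200 -p 9300:9300 \
  -e "discovery.type=single-node" \
  --ulimit nofile=65536:65536 \
  docker.elastic.co/elasticsearch/elasticsearch:6.6.2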

Elasticsearch needs at least 2 GB of RAM, and Java is memory-hungry in general.
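If memory is tight, it's also worth checking the JVM heap size; for Elasticsearch that lives in jvm.options in its config directory (the 2g below is just an example starting point, not a recommendation for every setup):

# jvm.options (Elasticsearch config directory)
-Xms2g
-Xmx2g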

I would start with that.

I tried running the command you suggested and then restarted Logstash, but it's still doing the same thing. My VM has 16 GB of RAM, so it should be able to handle it. It's not running in Docker, so I can't try your other suggestion.
Thanks.


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.