Logstash Crashes... Minimal Config, Default Pipeline!

Hi Logstash gurus,

I’ve been running Logstash in a Docker container (ver 7.4.0, yes, I know I need to upgrade) for a while now. I recently had to tear down and rebuild the container, and I decided to take the opportunity to build out a nice, clean config file from scratch. Problem is, once I restart Logstash with my minimal config applied, the container crashes after about 20 seconds.

Here’s what I’ve done:

I spun up a new instance of the Logstash container, which came up with the default configs, then waited about 10 minutes to make sure the container was stable. No issues.

Feeling confident, I then consoled into the container and replaced two files. Here’s my new logstash.yml file ( /usr/share/logstash/config/logstash.yml ):

http.host: 0.0.0.0
path.config: /usr/share/logstash/config/tmp.conf
xpack.monitoring.elasticsearch.hosts: http://192.168.3.4:9200

And my minimal config file ( /usr/share/logstash/config/tmp.conf ):

input {
}
filter {
}
output {
}

And, for giggles, here’s the pipeline file ( /usr/share/logstash/config/pipelines.yml ), which I did not change:

# This file is where you define your pipelines. You can define multiple.
# For more information on multiple pipelines, see the documentation:
# https://www.elastic.co/guide/en/logstash/current/multiple-pipelines.html
- pipeline.id: main
  path.config: "/usr/share/logstash/pipeline"

On my previous instance of Logstash, my notes say I had only edited the logstash.yml and tmp.conf files, so those were the only two files I backed up. But now I’m wondering if I missed something. The logstash.yml file basically says, “use tmp.conf as your config file,” and tmp.conf is as empty as can be. I never configured a pipeline, and the previous instance ran smoothly for almost a year.

When the container launches and then crashes, the log makes a reference to the pipeline terminating (“Pipeline terminated”), which I don’t understand. As far as I can see, my pipeline file isn’t doing anything… so what is the issue? (I’ve included the complete log of an entire lifecycle of the container, from the time I restart it to when it crashes.)

Any advice is greatly appreciated.

My logs:

ms@ubuntu:/home/me/# docker logs mylogstash
2020/06/11 16:56:54 Setting 'xpack.monitoring.elasticsearch.hosts' from environment.
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.headius.backport9.modules.Modules (file:/usr/share/logstash/logstash-core/lib/jars/jruby-complete-9.2.8.0.jar) to field java.io.FileDescriptor.fd
WARNING: Please consider reporting this to the maintainers of com.headius.backport9.modules.Modules
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Thread.exclusive is deprecated, use Thread::Mutex
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2020-06-11T16:57:11,429][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2020-06-11T16:57:11,439][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.4.0"}
[2020-06-11T16:57:11,866][WARN ][logstash.monitoringextension.pipelineregisterhook] xpack.monitoring.enabled has not been defined, but found elasticsearch configuration. Please explicitly set `xpack.monitoring.enabled: true` in logstash.yml
[2020-06-11T16:57:12,579][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://192.168.3.4:9200/]}}
[2020-06-11T16:57:12,739][WARN ][logstash.licensechecker.licensereader] Restored connection to ES instance {:url=>"http://192.168.3.4:9200/"}
[2020-06-11T16:57:12,787][INFO ][logstash.licensechecker.licensereader] ES Output version determined {:es_version=>7}
[2020-06-11T16:57:12,790][WARN ][logstash.licensechecker.licensereader] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2020-06-11T16:57:12,901][INFO ][logstash.monitoring.internalpipelinesource] Monitoring License OK
[2020-06-11T16:57:12,902][INFO ][logstash.monitoring.internalpipelinesource] Validated license for monitoring. Enabling monitoring pipeline.
[2020-06-11T16:57:14,032][INFO ][org.reflections.Reflections] Reflections took 40 ms to scan 1 urls, producing 20 keys and 40 values
[2020-06-11T16:57:14,260][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge] A gauge metric of an unknown type (org.jruby.RubyArray) has been create for key: cluster_uuids. This may result in invalid serialization.  It is recommended to log an issue to the responsible developer/development team.
[2020-06-11T16:57:14,265][INFO ][logstash.javapipeline    ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>16, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2000, :thread=>"#<Thread:0x1447ab1c run>"}
[2020-06-11T16:57:14,300][INFO ][logstash.javapipeline    ] Pipeline started {"pipeline.id"=>"main"}
[2020-06-11T16:57:14,386][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2020-06-11T16:57:14,822][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch bulk_path=>"/_monitoring/bulk?system_id=logstash&system_api_version=7&interval=1s", hosts=>[http://192.168.3.4:9200], sniffing=>false, manage_template=>false, id=>"a6a06e91fecd8b82497dd20acf1778bab84607528ea0a5b6544705fd6eff8f56", document_type=>"%{[@metadata][document_type]}", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_0f3b2b86-1972-4d7e-86c5-bbcc421094fe", enable_metric=>true, charset=>"UTF-8">, workers=>1, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, ilm_enabled=>"auto", ilm_rollover_alias=>"logstash", ilm_pattern=>"{now/d}-000001", ilm_policy=>"logstash-policy", action=>"index", ssl_certificate_verification=>true, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
[2020-06-11T16:57:14,868][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://192.168.3.4:9200/]}}
[2020-06-11T16:57:14,877][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://192.168.3.4:9200/"}
[2020-06-11T16:57:14,882][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>7}
[2020-06-11T16:57:14,883][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2020-06-11T16:57:14,896][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://192.168.3.4:9200"]}
[2020-06-11T16:57:14,914][INFO ][logstash.javapipeline    ] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2, :thread=>"#<Thread:0x6f405188 run>"}
[2020-06-11T16:57:14,939][INFO ][logstash.javapipeline    ] Pipeline started {"pipeline.id"=>".monitoring-logstash"}
[2020-06-11T16:57:14,947][INFO ][logstash.agent           ] Pipelines running {:count=>2, :running_pipelines=>[:".monitoring-logstash", :main], :non_running_pipelines=>[]}
[2020-06-11T16:57:15,139][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2020-06-11T16:57:17,112][INFO ][logstash.javapipeline    ] Pipeline terminated {"pipeline.id"=>".monitoring-logstash"}
[2020-06-11T16:57:17,940][INFO ][logstash.runner          ] Logstash shut down.
ms@ubuntu:/home/me/#

Your minimal pipeline tells Logstash to do nothing. With no inputs defined, the main pipeline can finish immediately, and once every pipeline has finished, Logstash shuts down. So maybe it just terminates because it has nothing to do? Try it with a stdin and stdout maybe?
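
Something like this, maybe (just a sketch; the rubydebug codec is only there to make the output readable):

input {
  stdin { }
}
output {
  stdout { codec => rubydebug }
}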

Hmm, thanks Jenni. Your theory makes a lot of sense. :slightly_smiling_face:

So with the Docker container, I can just start the container, and then I have about 20 seconds before it crashes again. I don't know how you'd test with stdin and stdout on a container, to be honest with you.

Here's another approach... let's assume that when I set up Logstash last year, I did something more intelligent with my pipeline file. Can you recommend an edit to the pipelines file that just tells Logstash, "use the tmp.conf file"? Maybe something like this:

# This file is where you define your pipelines. You can define multiple.
# For more information on multiple pipelines, see the documentation:
#   https://www.elastic.co/guide/en/logstash/current/multiple-pipelines.html

- pipeline.id: main
  path.config: "/usr/share/logstash/config/tmp.conf"

Actually, when I reread the log, I see this entry:

[WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified

So maybe the pipelines file isn't the issue...? If I'm reading that warning right, setting path.config in logstash.yml makes Logstash ignore pipelines.yml entirely, so my edit above would never even be read.

Hi all, wanted to post my (non)solution.

I never did find the root cause. I tried pulling a fresh Docker image; no luck. Only when I restored the original logstash.yml and config file from my backups did my container become stable. I guess maybe there were some weird, unseen characters in the new versions of the files?
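
If anyone wants to rule that out on their own files, something like this should expose any stray characters (assuming GNU coreutils is available in the container):

# tabs show up as ^I, carriage returns as ^M, and line ends as $
cat -A /usr/share/logstash/config/tmp.conf

# or dump the raw bytes if you want to be thorough
hexdump -C /usr/share/logstash/config/tmp.conf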

Anyway, when in doubt... always go to your backups.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.