Docker logstash terminated during first run


(Stephen Im) #1

Hi,
I am installing Logstash on Docker.
I installed Elasticsearch and Kibana, but failed to run Logstash.
Below is the message I am getting.
I created a logstash.conf file under "/usr/share/logstash/pipeline", but the log below shows that no config files were found. I probably need to add the path in the settings, but I don't know how to do that.
Can anyone help please? Thanks

root@ubuntu-VirtualBox:~/Documents/logstash/logs# docker run --name logstash --link elasticsearch:elasticsearch --rm -it -v ~/pipeline/:/usr/share/logstash/pipeline/ docker.elastic.co/logstash/logstash:6.1.3
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Sending Logstash's logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2018-02-07T20:10:48,482][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
[2018-02-07T20:10:48,515][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
[2018-02-07T20:10:49,986][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"arcsight", :directory=>"/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/x-pack-6.1.3-java/modules/arcsight/configuration"}
[2018-02-07T20:10:50,270][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[2018-02-07T20:10:50,283][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[2018-02-07T20:10:51,289][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-02-07T20:10:51,371][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"b596752c-483e-4106-9059-59de932940ca", :path=>"/usr/share/logstash/data/uuid"}
[2018-02-07T20:10:53,109][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.1.3"}
[2018-02-07T20:10:53,356][INFO ][logstash.config.source.local.configpathloader] No config files found in path {:path=>"/usr/share/logstash/pipeline/*"}
[2018-02-07T20:10:53,915][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2018-02-07T20:10:57,894][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch hosts=>[http://elasticsearch:9200], bulk_path=>"/_xpack/monitoring/_bulk?system_id=logstash&system_api_version=2&interval=1s", manage_template=>false, document_type=>"%{[@metadata][document_type]}", sniffing=>false, user=>"logstash_system", password=>, id=>"a8534760ec12a086fe293ee32232f724b17c660fec5c5bee2bbb376965e5bb43", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_0249c15a-b1c3-4ecb-b598-2a6d9fadea5f", enable_metric=>true, charset=>"UTF-8">, workers=>1, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, action=>"index", ssl_certificate_verification=>true, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
[2018-02-07T20:10:59,100][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://logstash_system:xxxxxx@elasticsearch:9200/]}}
[2018-02-07T20:10:59,143][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://logstash_system:xxxxxx@elasticsearch:9200/, :path=>"/"}
[2018-02-07T20:10:59,692][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://logstash_system:xxxxxx@elasticsearch:9200/"}
[2018-02-07T20:10:59,835][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>nil}
[2018-02-07T20:10:59,839][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>6}
[2018-02-07T20:10:59,878][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://elasticsearch:9200"]}
[2018-02-07T20:10:59,916][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>2, :thread=>"#<Thread:0x2e346c5b run>"}
[2018-02-07T20:11:00,299][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://logstash_system:xxxxxx@elasticsearch:9200/]}}
[2018-02-07T20:11:00,304][INFO ][logstash.licensechecker.licensereader] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://logstash_system:xxxxxx@elasticsearch:9200/, :path=>"/"}
[2018-02-07T20:11:00,331][WARN ][logstash.licensechecker.licensereader] Restored connection to ES instance {:url=>"http://logstash_system:xxxxxx@elasticsearch:9200/"}
[2018-02-07T20:11:00,358][INFO ][logstash.licensechecker.licensereader] ES Output version determined {:es_version=>nil}
[2018-02-07T20:11:00,362][WARN ][logstash.licensechecker.licensereader] Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>6}
[2018-02-07T20:11:00,605][INFO ][logstash.pipeline ] Pipeline started {"pipeline.id"=>".monitoring-logstash"}
[2018-02-07T20:11:00,897][INFO ][logstash.agent ] Pipelines running {:count=>1, :pipelines=>[".monitoring-logstash"]}
[2018-02-07T20:11:00,922][INFO ][logstash.inputs.metrics ] Monitoring License OK
[2018-02-07T20:11:02,129][INFO ][logstash.pipeline ] Pipeline terminated {"pipeline.id"=>".monitoring-logstash"}
root@ubuntu-VirtualBox:~/Documents/logstash/logs#
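For reference, my understanding is that the -v ~/pipeline/:/usr/share/logstash/pipeline/ option bind-mounts the host's ~/pipeline directory over /usr/share/logstash/pipeline inside the container, so the container only sees config files that exist in ~/pipeline on the host. A minimal host-side sketch (the input and output below are just placeholders, not my actual pipeline) would be something like:

    # on the host, before running the container
    mkdir -p ~/pipeline
    cat > ~/pipeline/logstash.conf <<'EOF'
    input {
      beats { port => 5044 }
    }
    output {
      elasticsearch { hosts => ["http://elasticsearch:9200"] }
    }
    EOF

    # same run command as above; the mounted directory now contains logstash.conf
    docker run --name logstash --link elasticsearch:elasticsearch --rm -it \
      -v ~/pipeline/:/usr/share/logstash/pipeline/ \
      docker.elastic.co/logstash/logstash:6.1.3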


(Jherez Taylor) #2

I am having this exact issue but with a different outcome on two separate machines.

Versions: The official 6.1.3 docker images (Elasticsearch and Logstash)
Host OS 1: CentOS Linux release 7.3.1611 (Core)
Host OS 2: Red Hat Enterprise Linux Server release 7.2

Elasticsearch runs fine on both machines; however, on Host OS 2 the Logstash container starts, transfers data for a few seconds (this is sporadic: it doesn't happen every time, and some data does appear in ES), and then receives a SIGTERM.

I can't tell from the logs what is killing the Logstash container, and the exit codes differ: I have seen both 143 (128 + SIGTERM) and 0.

logstash1_1 | 2018/02/06 18:58:17 Setting 'config.reload.automatic' from environment.
logstash1_1 | Sending Logstash's logs to /usr/share/logstash/logs which is now configured via log4j2.properties
logstash1_1 | [2018-02-06T18:58:39,121][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
logstash1_1 | [2018-02-06T18:58:39,133][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
script_logstash1_1 exited with code 143

The Logstash config is the same on both machines, but Host OS 1 works without issue.

I eventually found a workaround for the problem. I'm using Docker Compose, and the following works (see the sketch after this list):

  • Set command: "sleep infinity" under the logstash container
  • Enter the container
  • Start the Logstash process with nohup bin/logstash -f pipeline/logstash.conf & and then exit the container

The container has been up for over 48 hours now with no restart, but this isn't an ideal situation and it can't be deployed automatically.
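For reference, a trimmed sketch of what this looks like in compose (the service name matches the logs above; the image tag, volume path and the rest of the service definition are illustrative, not my full file):

    # docker-compose.yml (sketch)
    version: '2'
    services:
      logstash1:
        image: docker.elastic.co/logstash/logstash:6.1.3
        command: "sleep infinity"   # keeps the container alive without starting Logstash
        volumes:
          - ./pipeline:/usr/share/logstash/pipeline

After the container is up, I exec in and start Logstash by hand:

    docker exec -it script_logstash1_1 bash
    nohup bin/logstash -f pipeline/logstash.conf &
    exit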


(system) #3

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.