Logstash shutting down without noticeable error

For some reason, Logstash shuts down right after it starts the API endpoint.

[2020-05-05T20:22:54,694][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2020-05-05T20:22:54,776][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.6.2"}
[2020-05-05T20:22:55,019][INFO ][org.reflections.Reflections] Reflections took 27 ms to scan 1 urls, producing 20 keys and 40 values
[2020-05-05T20:22:55,282][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://logstash_writer:xxxxxx@127.0.0.1:9200/]}}
[2020-05-05T20:22:55,378][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://logstash_writer:xxxxxx@127.0.0.1:9200/"}
[2020-05-05T20:22:55,407][INFO ][logstash.outputs.elasticsearch][main] ES Output version determined {:es_version=>7}
[2020-05-05T20:22:55,409][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>7}
[2020-05-05T20:22:55,440][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::Elasticsearch", :hosts=>["//127.0.0.1:9200"]}
[2020-05-05T20:22:55,476][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["/etc/logstash/conf.d/basic_ls_config"], :thread=>"#<Thread:0x42951ff1@/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:105 run>"}
[2020-05-05T20:22:56,079][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
[2020-05-05T20:22:56,103][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2020-05-05T20:22:56,141][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2020-05-05T20:22:56,633][INFO ][logstash.runner ] Logstash shut down.

I am able to pull from 9200 fine:

curl http://127.0.0.1:9200/
{
  "name" : "elk-prod01-elk-prod01",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "pHLpTCY5Q4C4vXjphy9SMg",
  "version" : {
    "number" : "7.6.2",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "ef48eb35cf30adf4db14086e8aabd07ef6fb113f",
    "build_date" : "2020-03-26T06:34:37.794943Z",
    "build_snapshot" : false,
    "lucene_version" : "8.4.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

input {
  elasticsearch {
    hosts => "localhost"
    user => xxxxxxxxxx
    password => "xxxxxxxxx"
  }
}
filter {}
output {
  elasticsearch {
    hosts => [ "127.0.0.1:9200" ]
    index => "%{[some_field][sub_field]}-%{+YYYY.MM.dd}"
    user => xxxxxxx
    password => "xxxxxx"
    manage_template => "false"
    template_name => "logstash"
    ilm_enabled => true
  }
}

You may need query and index parameters in your input; otherwise it doesn't know what data you want to retrieve.
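A rough sketch of what that could look like (the index pattern and query here are only placeholders, and the credentials are assumed to be the same ones you already use):

input {
  elasticsearch {
    hosts => "localhost"
    user => xxxxxxxxxx
    password => "xxxxxxxxx"
    # index (or index pattern) to read from - placeholder value
    index => "logstash-*"
    # query selecting which documents to pull - match_all is just an example
    query => '{ "query": { "match_all": {} }, "sort": [ "_doc" ] }'
  }
}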

I added:
query => '{ "query": { "match": { "statuscode": 200 } }, "sort": [ "_doc" ] }'

Now I am getting basically the same output:

[2020-05-05T21:37:41,991][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2020-05-05T21:37:42,071][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.6.2"}
[2020-05-05T21:37:42,352][INFO ][org.reflections.Reflections] Reflections took 27 ms to scan 1 urls, producing 20 keys and 40 values
[2020-05-05T21:37:42,618][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://logstash_writer:xxxxxx@127.0.0.1:9200/]}}
[2020-05-05T21:37:42,735][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://logstash_writer:xxxxxx@127.0.0.1:9200/"}
[2020-05-05T21:37:42,768][INFO ][logstash.outputs.elasticsearch][main] ES Output version determined {:es_version=>7}
[2020-05-05T21:37:42,770][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>7}
[2020-05-05T21:37:42,803][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::Elasticsearch", :hosts=>["//127.0.0.1:9200"]}
[2020-05-05T21:37:42,827][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["/etc/logstash/conf.d/basic_ls_config"], :thread=>"#<Thread:0x5cb02d6c@/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:105 run>"}
[2020-05-05T21:37:43,449][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
[2020-05-05T21:37:43,476][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2020-05-05T21:37:43,513][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2020-05-05T21:37:44,002][INFO ][logstash.runner ] Logstash shut down.

The index parameter has a default value of logstash-*.
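If the input is falling back to that default, it may be worth confirming that logstash-* actually matches indices that contain documents. Assuming the same local node as the curl above (add -u with your credentials if your cluster enforces them), something like:

curl 'http://127.0.0.1:9200/_cat/indices/logstash-*?v'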

Though I am a newb and am assuredly missing something.
