Persistent Queues: Logstash could not start after forced shutdown

Hi All,

The Logstash process was stopped with ^C and afterwards could not be started again until the queue files had been removed. The following errors were thrown:

C:\programms\logstash\bin>logstash -f C:\projects\logstash.conf --path.settings C:/etc/logstash
Could not find log4j2 configuration at path /etc/logstash/log4j2.properties. Using default config which logs to console
18:01:10.755 [LogStash::Runner] ERROR logstash.agent - Cannot load an invalid configuration {:reason=>""}
18:01:11.148 [[.monitoring-logstash]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elastic:xxxxxx@localhost:9200/_xpack/monitoring/?system_id=logstash&system_api_version=2&interval=1s]}}
18:01:11.175 [[.monitoring-logstash]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elastic:xxxxxx@localhost:9200/, :path=>"/"}
18:01:11.448 [[.monitoring-logstash]-pipeline-manager] WARN  logstash.outputs.elasticsearch - Restored connection to ES instance {:url=>#<URI::HTTP:0x362fd46a URL:http://elastic:xxxxxx@localhost:9200/>}
18:01:11.462 [[.monitoring-logstash]-pipeline-manager] INFO  logstash.outputs.elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<URI::HTTP:0x42c1d56 URL:http://localhost:9200>]}
18:01:11.465 [[.monitoring-logstash]-pipeline-manager] INFO  logstash.pipeline - Starting pipeline {"id"=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>1000, "pipeline.max_inflight"=>2}
18:01:11.476 [[.monitoring-logstash]-pipeline-manager] INFO  logstash.pipeline - Pipeline .monitoring-logstash started
18:01:11.612 [Api Webserver] INFO  logstash.agent - Successfully started Logstash API endpoint {:port=>9600}
18:01:12.502 [pool-2-thread-1] ERROR logstash.inputs.metrics - Failed to create monitoring event {:message=>"For path: events", :error=>"LogStash::Instrument::MetricStore::MetricNotFound"}
18:01:13.510 [pool-2-thread-3] ERROR logstash.inputs.metrics - Failed to create monitoring event {:message=>"For path: events", :error=>"LogStash::Instrument::MetricStore::MetricNotFound"}
18:01:14.519 [pool-2-thread-4] ERROR logstash.inputs.metrics - Failed to create monitoring event {:message=>"For path: events", :error=>"LogStash::Instrument::MetricStore::MetricNotFound"}
18:01:15.526 [pool-2-thread-1] ERROR logstash.inputs.metrics - Failed to create monitoring event {:message=>"For path: events", :error=>"LogStash::Instrument::MetricStore::MetricNotFound"}
...
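
To get Logstash starting again I ended up deleting the queue files under path.queue. Roughly what I ran (from memory, so the exact commands are approximate):

C:\>rem remove the (presumably corrupted) queue files, path per my path.queue setting below
C:\>rmdir /s /q c:\data\logstash\queue
C:\>mkdir c:\data\logstash\queue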

Unfortunately, I cannot reproduce the error anymore.
It creates a kind of uncertainty that a forced shutdown could corrupt the persistent queue. Is there a way to make it more resilient somehow?
I fully understand that it'd be hard to give advice without analysing the broken queue structure, but I'll ask anyway :slight_smile:

The non-standard settings:

queue.type: persisted
path.queue: c:/data/logstash/queue
queue.max_bytes: 1024mb
xpack.monitoring.elasticsearch.url: "http://localhost:9200"
xpack.monitoring.collection.interval: 1s
xpack.monitoring.elasticsearch.username: "elastic"
xpack.monitoring.elasticsearch.password: "changeme"
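
For completeness, the queue checkpoint settings are untouched, i.e. (what I believe are the defaults):

queue.checkpoint.acks: 1024
queue.checkpoint.writes: 1024
queue.checkpoint.interval: 1000

I'm wondering whether tuning these (e.g. checkpointing more often) would make the queue survive a forced shutdown better, at the cost of some throughput, or whether that's unrelated to what happened here.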

Thank you!
