Logstash sometimes ignoring data stream configuration in Elasticsearch output

Hello,
we are running Logstash 8.4.0 with multiple pipelines, all outputting to Elasticsearch.
Most of the time this works fine, but occasionally Logstash will ignore the data stream configuration on startup and instead tries to write to the ecs-logstash index.
We cannot reproduce this behaviour consistently, but it happens roughly once every five restarts.

Logstash will then continuously log the following error message:

[2022-10-20T11:12:23,719][ERROR][logstash.outputs.elasticsearch][logs-fsecure-prod][a0f441f48a90bca64a9e011a7f67a0e99c01a321f9792cbf83dc7ed9e81f80f8] Elasticsearch setup did not complete normally, please review previously logged errors {
:message=>"Got response code '403' contacting Elasticsearch at URL 'https://elastic01p.XXXXX:9200/ecs-logstash'", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError}

The 403 presumably means that our Logstash user has no write privileges on ecs-logstash, only on the intended data stream.

The error can be fixed by restarting the Logstash service, although sometimes it takes multiple restarts.
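For reference, we restart via systemd (assuming the standard package install, where the unit is named logstash):

	# restart the Logstash service; on a bad start, repeat until the
	# data stream outputs initialize correctly
	sudo systemctl restart logstash.service

	# follow the logs to see whether the 403 errors reappear
	sudo journalctl -u logstash.service -f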
The problem shows up on random pipelines; they all have a configuration similar to the following:

output {
	elasticsearch {
		hosts => ["https://elastic01p.XXXXX:9200", "https://elastic02p.XXXXX:9200"]
		cacert => "/etc/logstash/Root-CA.pem"
		user => "${ES_USER}"
		password => "${ES_PASSWORD}"
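		# the data stream settings below route events to logs-fsecure-prod
		# (the name is composed as <type>-<dataset>-<namespace>)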
		data_stream => "true"
		data_stream_type => "logs"
		data_stream_dataset => "fsecure"
		data_stream_namespace => "prod"
	}
}
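
To rule out a problem on the Elasticsearch side, one check we do (host, CA and credentials taken from the output config above) is to confirm that the target data stream exists:

	curl -u "$ES_USER:$ES_PASSWORD" --cacert /etc/logstash/Root-CA.pem \
		"https://elastic01p.XXXXX:9200/_data_stream/logs-fsecure-prod"

Since writes succeed after a clean start, the data stream itself is fine; the fallback to ecs-logstash happens entirely on the Logstash side.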

I have not found a matching bug report so far, but a similar error message was reported by a different user in Logstash input configuration - #3 by ttyser.
It seems like a race condition of some sort.

Best regards,
hti

The same behaviour has been reported in Possible race condition causing Elasticsearch output plugin to overwrite supplied index with rollover alias.
