Logstash pipeline DLQ issue

Hi,
I am upgrading Logstash from 7.x.x to 8.2.0. The problem is that Logstash is not able to run the pipeline, and shows the following error:

[2022-07-18T11:08:52,606][ERROR][org.logstash.common.io.DeadLetterQueueWriter][main][447b305b91484edbed17421a6a1732821599fce21ed1ba05c87e58275ef6e875] cannot write event to DLQ(path: /app/logstash/failed/queue/main): reached maxQueueSize of 2147483648 {"index"=>{"_index"=>"index_name", "_id"=>nil, "status"=>400, "error"=>{"type"=>"illegal_argument_exception", "reason"=>"only write ops with an op_type of create are allowed in data streams"}}}

Logstash filters the data from Cloud Foundry, sends the events to Elasticsearch, and creates an index.

This was working fine in version 7.x.x, but after the upgrade it is failing.

I tried adding the following line to logstash.yml to make the 7.x.x pipeline compatible with 8.x.x, but got the same result:

ecs_compatibility => disabled
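(Note: ecs_compatibility => disabled is pipeline-config syntax. If this is meant to go in logstash.yml, the equivalent would presumably be the YAML form shown below.)

 pipeline.ecs_compatibility: disabled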

It seems Elasticsearch 8.x.x is not able to process the _doc op type.

Does anyone know a workaround? It is really frustrating.

Thanks!

In 8.0 the default value of the data_stream option on the elasticsearch output changed from "false" to "auto", which I think means Logstash will check whether the ES instance it is talking to supports data streams, and if it does it will use them. You could try setting that option on your elasticsearch output:

 data_stream => false

Thanks, I tried your suggestion, but I get the same error.


[2022-07-18T13:17:24,228][ERROR][org.logstash.common.io.DeadLetterQueueWriter][main][8af87a010e4f7ef4ee89a5594dca85e0d43e11832c10e9e2655609ced3c0844b] cannot write event to DLQ(path: /app/logstash/failed/queue/main): reached maxQueueSize of 2147483648

Here is my output.conf file:

filter {
  mutate {
    add_field => { "[@metadata][index_name]" => "%{[@metadata][type]}" }
  }

  if [appCode] {
    mutate {
      lowercase => [ "appCode" ]
    }
    mutate {
      replace => { "[@metadata][index_name]" => "%{[@metadata][type]}-%{[appCode]}" }
    }
  }
}

output {
  elasticsearch {
    user => "username"
    password => "password"
    hosts => ['https://host1:9200','https://host2:9200','https://host3:9200']
    ssl_certificate_verification => false
    data_stream => false
    manage_template => false
    index => "%{[@metadata][index_name]}-8-%{+YYYY.MM.dd}"
  }
}

That is a completely different error! My guess is that you ran 2 GB of messages through that got the data stream error, and doing that filled up the DLQ. You now need to empty the DLQ before you can send any more data through your main pipeline.

If the only error that you had for the messages in the DLQ is the data stream error, then you can create a pipeline with a DLQ input that sends them to the elasticsearch output. If they were getting other errors then you will need to write a pipeline that modifies the events in the DLQ so that they no longer get those errors.
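For example, a minimal replay pipeline might look like the sketch below (assuming, from the path in your error, that path.dead_letter_queue is /app/logstash/failed/queue; the hosts, credentials, and index name are placeholders):

 input {
   dead_letter_queue {
     # path points at the DLQ base directory (path.dead_letter_queue);
     # pipeline_id selects the per-pipeline subdirectory, "main" here
     path => "/app/logstash/failed/queue"
     pipeline_id => "main"
     commit_offsets => true
   }
 }

 output {
   elasticsearch {
     hosts => ['https://host1:9200']
     user => "username"
     password => "password"
     ssl_certificate_verification => false
     data_stream => false
     manage_template => false
     # placeholder; adjust to your own naming scheme
     index => "dlq-replay-%{+YYYY.MM.dd}"
   }
 }

With commit_offsets => true the input records how far it has read, so restarting the replay pipeline does not resend events it has already processed.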

Alternatively, if you don't care about the data in the DLQ (which would be odd, because why use a DLQ if you don't care about the contents?) you could stop Logstash and delete the DLQ files.

Earlier the DLQ size was 1 GB; I increased it to 2 GB.
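(Presumably via the dead_letter_queue.max_bytes setting in logstash.yml, shown below; its default is 1024mb, and the 2147483648 in the error is exactly 2 GB.)

 dead_letter_queue.max_bytes: 2gb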

I have tried deleting the DLQ messages many times, but it fills up again and shows the error.

The same pipeline works fine with version 7.16.0.

Then you need to read and address the error messages that cause the events to be sent to the DLQ.
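A quick way to see why events are landing in the DLQ is a throwaway pipeline that reads the queue and prints each event together with the failure reason the DLQ records (again a sketch assuming the /app/logstash/failed/queue path; the [@metadata][dead_letter_queue] fields are added by the dead_letter_queue input):

 input {
   dead_letter_queue {
     path => "/app/logstash/failed/queue"
     pipeline_id => "main"
     # peek without advancing the stored offset
     commit_offsets => false
   }
 }

 output {
   stdout {
     # metadata => true makes rubydebug print [@metadata][dead_letter_queue][reason] etc.
     codec => rubydebug { metadata => true }
   }
 }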

Yeah, right. I found the problem, but I'm not sure about the fix.

response: {"index"=>{"_index"=>"Index-name-abe", "_id"=>nil, "status"=>400, "error"=>{"type"=>"illegal_argument_exception", "reason"=>"only write ops with an op_type of create are allowed in data streams"}}}

As I said, set data_stream => false on the output. You may need to delete the existing index for this to work.

I tried the following and deleted the index as well:

data_stream => false on the output

but I get the same DLQ issue, and a data stream is getting created instead of an index. That's strange!

@Badger I found the problem: the index template in Kibana had been created with the data stream option enabled. I just deleted it and recreated the template in legacy mode, and it worked.

Thanks for your input.
