Dead Letter Queue and its usage

There have already been a lot of discussions about dead_letter_queue, but FWIW I could not find answers to all my questions, so I am posting them here.

My setup (local machine):
logstash - 6.8.1
ES - Version: 6.8.1, Build: default/deb/1fad4e1/2019-06-18T13:16:52.517138Z, JVM: 1.8.0_201
DLQ - logstash-input-dead_letter_queue (1.1.5)

conf:
input {
  tcp {
    port => 5000
    codec => json
  }
}
# some filters
output {
  # For debugging, uncomment below lines.
  # Not recommended for production config
  stdout {
    codec => rubydebug { metadata => true }
  }
  elasticsearch { has config to push to ES here }
}

logstash.yml:  
      config.reload.automatic: true

      log.level: debug
      path.logs: "/var/log/logstash"

      pipeline:
        id: main
        workers: 2

      path:
        data: "/var/lib/logstash"
        config: "/etc/logstash/conf.d/*.conf"
        logs: "/var/log/logstash"
        dead_letter_queue: "/var/lib/logstash/dead_letter_queue"

      queue:
        type: persisted

      dead_letter_queue.enable: true

What I want to ask/clarify:

  1. To test the DLQ functionality I stopped the Elasticsearch service and pushed events to Logstash, but they did not appear in the DLQ.
  • After starting ES again, some events got processed into ES; the rest did not go to the DLQ (all of them got printed in the logs, though).
  • I then started the ES service and restarted it; while it was restarting, I pushed events to Logstash, and those were pushed to the DLQ.

  2. Does the line below count as a 400 or 404, and do all events that come after it in the log therefore qualify for being pushed to the DLQ?
    [2019-07-10T19:07:29,670][WARN ][logstash.outputs.elasticsearch] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused) {:url=>http://localhost:9200/, :error_message=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}

  3. Does the output below count as a 400 or 404, and do all events that come after these lines in the log therefore qualify for being pushed to the DLQ?
    [2019-07-10T19:07:35,680][ERROR][logstash.outputs.elasticsearch] Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>8}
    [2019-07-10T19:07:37,812][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
    [2019-07-10T19:07:42,878][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
    {
        "@version" => 1562765849452,
        "preferred_language" => nil,
        "email" => "estest@yopmail.com",
        "_table" => "some_table",
        "last_modified_on" => "2019-07-10T19:07:29.445911+05:30",
        "@metadata" => {
            "dead_letter_queue" => {
                "plugin_id" => "elasticsearch",
                "reason" => "Could not index event to Elasticsearch. status: 404, action: ["update", {:_id=>"74", :_index=>"audience_member_1010", :_type=>"_doc", :routing=>nil, :_retry_on_conflict=>1}, #<LogStash::Event:0xbb18f2b>], response: {"update"=>{"_index"=>"audience_member_1010", "_type"=>"_doc", "_id"=>"74", "status"=>404, "error"=>{"type"=>"document_missing_exception", "reason"=>"[_doc][74]: document missing", "index_uuid"=>"sFfI0NvxQH6LXvUWdxboJQ", "shard"=>"2", "index"=>"a_m_1010"}}}",
                "entry_time" => 2019-07-10T13:37:43.748Z,
                "plugin_type" => "1df853fc7924f0e50a2d305c50130fdbd431b88976e576aeeaca3890c7269d99"
            }
        },
        "last_name" => nil,
        "deleted" => false,
        "@timestamp" => 2019-07-10T13:37:29.452Z,
        "created_on" => "2019-07-10T19:07:29.445911+05:30",
        "custom_data" => {}
    }

There were more fields, but they have been removed. (A small filter sketch for pulling this DLQ metadata onto the event follows after this list.)

  4. My understanding of the 404 response code is:
  • Basically, it is returned when ES is not available (maybe it was stopped, its URL changed, or the machine crashed).
  • But this does not seem to be how the DLQ is populated.
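
As an aside on the event dump under question 3: those @metadata.dead_letter_queue fields only show up in stdout because the rubydebug codec has metadata => true set. If the failure reason needs to end up on the event itself when reprocessing (for example, to index it somewhere for inspection), a mutate filter along these lines could copy it over. This is only a sketch; the failure_reason field name is made up for illustration:

filter {
  mutate {
    # Copy the DLQ failure reason out of @metadata so it survives into the output
    add_field => { "failure_reason" => "%{[@metadata][dead_letter_queue][reason]}" }
  }
}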

For the response code to be either 400 or 404, Elasticsearch has to respond, which it cannot do if it is stopped.

Okay, that makes sense.
So essentially, the dead letter queue will never be populated if my ES instance goes down, crashes, or its URL changes?

Correct.

Thanks for the swift response.

Will Logstash drop those messages/events, keep them in the queue until it is able to push them to some output, or keep them forever?

I believe it will queue them, eventually resulting in back-pressure stopping the inputs.
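
For reference, the disk space the persisted queue can use is capped in logstash.yml; once the cap is reached, back-pressure kicks in and the inputs stop accepting events. A minimal sketch of the settings involved (the 2gb value is purely illustrative, not a recommendation):

queue:
  type: persisted        # buffer events on disk until the output accepts them
  max_bytes: 2gb         # once the queue is full, back-pressure stops the inputs
# the queue files live under <path.data>/queue by default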

I was able to write to the dead letter queue successfully, but despite putting in configuration for re-processing those events, I am unable to see the processed objects in Elasticsearch.

Is there a setting that explicitly tells Logstash to send them back to Elasticsearch again?

I have a file called dlq.conf in the /etc/logstash/conf.d/ directory that looks like this:

input {
  dead_letter_queue {
    path => "/var/lib/logstash/dead_letter_queue"
    pipeline_id => "main"
    commit_offsets => true
  }
}
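
For what it's worth, the dead_letter_queue input only reads the failed events back into a pipeline; nothing gets re-sent to Elasticsearch unless that pipeline also has an elasticsearch output (and, ideally, a filter that fixes whatever caused the original failure). A sketch of what a complete dlq.conf could look like; the hosts value and index name are assumptions for illustration:

input {
  dead_letter_queue {
    path => "/var/lib/logstash/dead_letter_queue"
    pipeline_id => "main"
    commit_offsets => true
  }
}

filter {
  # Repair the event here before retrying. In the 404 case above the failure was a
  # document_missing_exception on an update action, so retrying the identical update
  # would just fail again; the document needs to be indexed/upserted instead.
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]        # assumed; point this at your ES instance
    index => "dlq-reprocessed-%{+YYYY.MM.dd}" # assumed index name, for illustration only
  }
}

Whether to re-index into a separate index like this or retry the original update (for example with doc_as_upsert on the elasticsearch output) depends on the use case.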

Also, my logstash.yml looks like this:

/etc/logstash/logstash.yml file
config.reload.automatic: true

log.level: info

pipeline:
  id: main
  workers: 2

path:
  config: "/etc/logstash/conf.d/*.conf"
  data: "/var/lib/logstash"
  logs: /var/log/logstash

queue:
  type: persisted

dead_letter_queue.enable: true
path.dead_letter_queue: "/var/lib/logstash/dead_letter_queue"
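
One detail worth flagging about this layout (an observation based only on the config shown here): because path.config points at the /etc/logstash/conf.d/*.conf glob, dlq.conf and the original pipeline config are concatenated into the single main pipeline, so events read from the DLQ pass through the same filters and outputs as live events. If the reprocessing flow should run on its own, one option is to define separate pipelines in pipelines.yml. A sketch, assuming the configs are split into the file names shown and path.config is moved out of logstash.yml:

# /etc/logstash/pipelines.yml (sketch; file names are assumptions)
- pipeline.id: main
  path.config: "/etc/logstash/conf.d/main.conf"
- pipeline.id: dlq
  path.config: "/etc/logstash/conf.d/dlq.conf"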
