An error in Logstash with Beats

I'm getting an error in Logstash, but judging from the logs I guess it comes from Beats.

[[main]<beats] rejectedExecution - Failed to submit a listener notification task. Event loop shut down?

Can someone please help me to fix this issue?

Can you provide the full stack trace, as well as the versions of Logstash and the Logstash Beats input plugin (bin/logstash-plugin list --verbose beats)?

Hello,
I managed to fix the issue; it was quite funny. When I removed the "hosts" setting from the Logstash config file, it started working. But I have another issue now. The Beats output says this:

2018-04-18T12:54:30.468Z ERROR logstash/async.go:235 Failed to publish events caused by: write tcp 10.85.7.194:29868->10.85.7.207:5044: write: connection reset by peer
2018-04-18T12:54:31.468Z ERROR pipeline/output.go:92 Failed to publish events: write tcp 10.85.7.194:29868->10.85.7.207:5044: write: connection reset by peer

The Logstash version is 6.2.3 and the Filebeat version is also 6.2.3. Please let me know if you need any more information.

  1. Can you provide your current input section of the Logstash configuration?
  2. Are there any errors on the Logstash side?
  3. How many Beats are sending to Logstash? Do you have an idea of the events/second rate?

Hi Joao,

  1. The input section in Logstash:

    input {
      beats {
        port => 5044
        type => "direct"
        codec => plain {
          charset => "ISO-8859-1"
        }
      }
    }
  2. I don't see any errors, but there are warnings in the logs.
  3. I have no idea about that, but the rate of data is very low. Is there any configuration for that?

Let me know if you need more information.

Thanks

What is the output section like? Are any events at all reaching the Logstash outputs?
Can you show the warnings?

Please see the output section.

  1. output {
       if "_grokparsefailure" in [tags] {
         elasticsearch {
           hosts => "localhost:9200"
           index => "cmdc2-error-%{cmdcLogId}"
           document_type => "error_logs"
           codec => "json"
         }
       } else {
         elasticsearch {
           hosts => "localhost:9200"
           index => "cmdc2-log-%{+YYYY.MM.dd}"
           document_type => "%{target}_logs"
           codec => "json"
         }
       }
     }

  2. Warning logs look like this:
    [WARN ] 2018-04-18 13:23:11.647 [Ruby-0-Thread-35@[main]>worker1: /usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:384] elasticsearch - Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"cmdc2-error-%{cmdcLogId}", :_type=>"error_logs", :_routing=>nil}, #<LogStash::Event:0x6b58e99c>], :response=>{"index"=>{"_index"=>"cmdc2-error-%{cmdcLogId}", "_type"=>"error_logs", "_id"=>nil, "status"=>400, "error"=>{"type"=>"invalid_index_name_exception", "reason"=>"Invalid index name [cmdc2-error-%{cmdcLogId}], must be lowercase", "index_uuid"=>"_na_", "index"=>"cmdc2-error-%{cmdcLogId}"}}}}
    [WARN ] 2018-04-18 13:23:11.641 [Ruby-0-Thread-34@[main]>worker0: /usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:384] elasticsearch - Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"cmdc2-error-%{cmdcLogId}", :_type=>"error_logs", :_routing=>nil}, #<LogStash::Event:0x6d71ce87>], :response=>{"index"=>{"_index"=>"cmdc2-error-%{cmdcLogId}", "_type"=>"error_logs", "_id"=>nil, "status"=>400, "error"=>{"type"=>"invalid_index_name_exception", "reason"=>"Invalid index name [cmdc2-error-%{cmdcLogId}], must be lowercase", "index_uuid"=>"_na_", "index"=>"cmdc2-error-%{cmdcLogId}"}}}}
    [WARN ] 2018-04-18 13:23:11.648 [Ruby-0-Thread-34@[main]>worker0: /usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:384] elasticsearch - Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"cmdc2-error-%{cmdcLogId}", :_type=>"error_logs", :_routing=>nil}, #<LogStash::Event:0x21a3abd7>], :response=>{"index"=>{"_index"=>"cmdc2-error-%{cmdcLogId}", "_type"=>"error_logs", "_id"=>nil, "status"=>400, "error"=>{"type"=>"invalid_index_name_exception", "reason"=>"Invalid index name [cmdc2-error-%{cmdcLogId}], must be lowercase", "index_uuid"=>"_na_", "index"=>"cmdc2-error-%{cmdcLogId}"}}}}
    [WARN ] 2018-04-18 13:23:11.648 [Ruby-0-Thread-35@[main]>worker1: /usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:384] elasticsearch - Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"cmdc2-error-%{cmdcLogId}", :_type=>"error_logs", :_routing=>nil}, #<LogStash::Event:0x37deac09>], :response=>{"index"=>{"_index"=>"cmdc2-error-%{cmdcLogId}", "_type"=>"error_logs", "_id"=>nil, "status"=>400, "error"=>{"type"=>"invalid_index_name_exception", "reason"=>"Invalid index name [cmdc2-error-%{cmdcLogId}], must be lowercase", "index_uuid"=>"_na_", "index"=>"cmdc2-error-%{cmdcLogId}"}}}}

Also, I can't see the data transmitted from Beats to Elasticsearch; I mean in the Elasticsearch logs.

It seems that events with the _grokparsefailure tag don't have the cmdcLogId field, so Logstash is trying to write to an index that is literally called "cmdc2-error-%{cmdcLogId}".

Because Elasticsearch doesn't allow such an index name (your warnings show an invalid_index_name_exception: the unresolved %{cmdcLogId} makes the name invalid, since index names must be lowercase), it's rejecting the data, and Logstash is continuously retrying. Because Logstash isn't able to move forward with this data, it also stops consuming data from Beats and eventually rejects the connections from Beats, causing the errors you're seeing in Filebeat.

Does that make sense? As long as the events are either sent correctly to one of the outputs or dropped, you should start seeing data flow without errors.
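
As a rough sketch of one way to guard against this (the fallback index name cmdc2-error-unparsed-%{+YYYY.MM.dd} is just an illustrative placeholder, not something from your config), you could route on whether the field actually has a value:

  output {
    if "_grokparsefailure" in [tags] {
      if [cmdcLogId] {
        # the field has a value, so the %{cmdcLogId} reference will resolve
        elasticsearch {
          hosts => "localhost:9200"
          index => "cmdc2-error-%{cmdcLogId}"
          document_type => "error_logs"
        }
      } else {
        # field missing or nil: fall back to a static, date-based index
        # name that always resolves (placeholder name, adjust as needed)
        elasticsearch {
          hosts => "localhost:9200"
          index => "cmdc2-error-unparsed-%{+YYYY.MM.dd}"
          document_type => "error_logs"
        }
      }
    }
  }

In a Logstash conditional, if [cmdcLogId] should evaluate to false when the field is absent or nil, which matches the events in your case.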

To be frank, the same setup works with Heka --> Logstash. This is the first time I am trying it with Filebeat.

So, do you want me to remove that index and try?

No, just to confirm that the data that reaches Logstash and, more specifically, reaches the output section either:
a) contains the _grokparsefailure tag plus a cmdcLogId field: this goes to the first conditional block
b) doesn't contain the _grokparsefailure tag and has a target field: this goes to the else block

For debugging purposes you can put a stdout { codec => rubydebug } before the if (in your output section) to see each event before it is sent to Elasticsearch.

So the output section looks like this:

output {
  stdout { codec => rubydebug }
  if "_grokparsefailure" in [tags] {
    elasticsearch {
      hosts => "localhost:9200"
      index => "cmdc2-error-%{cmdcLogId}"
      document_type => "error_logs"
      codec => "json"
    }
  } else {
    elasticsearch {
      hosts => "localhost:9200"
      index => "cmdc2-log-%{+YYYY.MM.dd}"
      document_type => "%{target}_logs"
      codec => "json"
    }
  }
}


Yes, I can see the lines reaching Logstash.
{
  "@timestamp" => 2018-04-18T13:41:34.522Z,
  "cmdcInstanceId" => nil,
  "type" => "direct",
  "fileId" => "0",
  "host" => "appin1a",
  "offset" => 366882,
  "source" => "/var/log/nds/cmdc/cmdc.audit",
  "@version" => "1",
  "message" => "2018/04/18 13:41:25.127 [TWCAsyncProcessor] [TWC-pool-3-thread-1]: INFO: [98291:105377] TWC request=MercurySortRequest ",
  "instanceId" => nil,
  "prospector" => {
    "type" => "log"
  },
  "beat" => {
    "name" => "appin1a",
    "hostname" => "appin1a",
    "version" => "6.2.3"
  },
  "cmdcLogId" => nil,
  "tags" => [
    [0] "beats_input_codec_plain_applied",
    [1] "_grokparsefailure"
  ]
}
{
  "@timestamp" => 2018-04-18T13:41:34.522Z,
  "cmdcInstanceId" => nil,
  "type" => "direct",
  "fileId" => "0",
  "host" => "appin1a",
  "offset" => 367199,
  "source" => "/var/log/nds/cmdc/cmdc.audit",
  "@version" => "1",
  "message" => "2018/04/18 13:41:25.135 [BaseAsyncApi] [CMDC-pool-2-thread-11]: INFO: [98291] CMDC response status=200 CMDC=9ms TWC=7ms #TWC=1 ",
  "instanceId" => nil,
  "prospector" => {
    "type" => "log"
  },
  "beat" => {
    "name" => "appin1a",
    "hostname" => "appin1a",
    "version" => "6.2.3"
  },
  "cmdcLogId" => nil,
  "tags" => [
    [0] "beats_input_codec_plain_applied",
    [1] "_grokparsefailure"
  ]
}
{
  "@timestamp" => 2018-04-18T13:41:34.523Z,
  "cmdcInstanceId" => nil,
  "type" => "direct",
  "fileId" => "0",
  "host" => "appin1a",
  "offset" => 367833,
  "source" => "/var/log/nds/cmdc/cmdc.audit",
  "@version" => "1",
  "message" => "2018/04/18 13:41:26.482 [TWCAsyncProcessor] [TWC-pool-3-thread-2]: INFO: [98292:105378] TWC request=MercurySortRequest ",
  "instanceId" => nil,
  "prospector" => {
    "type" => "log"
  },
  "beat" => {
    "name" => "appin1a",
    "hostname" => "appin1a",
    "version" => "6.2.3"
  },
  "cmdcLogId" => nil,
  "tags" => [
    [0] "beats_input_codec_plain_applied",
    [1] "_grokparsefailure"
  ]
}
{
  "@timestamp" => 2018-04-18T13:41:34.523Z,
  "cmdcInstanceId" => nil,
  "type" => "direct",
  "fileId" => "0",
  "host" => "appin1a",
  "offset" => 368148,
  "source" => "/var/log/nds/cmdc/cmdc.audit",
  "@version" => "1",
  "message" => "2018/04/18 13:41:26.485 [BaseAsyncApi] [CMDC-pool-2-thread-6]: INFO: [98292] CMDC response status=200 CMDC=3ms TWC=3ms #TWC=1 ",
  "instanceId" => nil,
  "prospector" => {
    "type" => "log"
  },
  "beat" => {
    "name" => "appin1a",
    "hostname" => "appin1a",
    "version" => "6.2.3"
  },
  "cmdcLogId" => nil,
  "tags" => [
    [0] "beats_input_codec_plain_applied",
    [1] "_grokparsefailure"
  ]
}
{
  "@timestamp" => 2018-04-18T13:41:35.523Z,
  "cmdcInstanceId" => nil,
  "type" => "direct",
  "fileId" => "0",
  "host" => "appin1a",
  "offset" => 368785,
  "source" => "/var/log/nds/cmdc/cmdc.audit",
  "@version" => "1",
  "message" => "2018/04/18 13:41:34.838 [TWCAsyncProcessor] [TWC-pool-3-thread-1]: INFO: [98293:105379] TWC request=MercurySortRequest ",
  "instanceId" => nil,
  "prospector" => {
    "type" => "log"
  },
  "beat" => {
    "name" => "appin1a",
    "hostname" => "appin1a",
    "version" => "6.2.3"
  },
  "cmdcLogId" => nil,
  "tags" => [
    [0] "beats_input_codec_plain_applied",
    [1] "_grokparsefailure"
  ]
}
{
  "@timestamp" => 2018-04-18T13:41:35.523Z,
  "cmdcInstanceId" => nil,
  "type" => "direct",
  "fileId" => "0",
  "host" => "appin1a",
  "offset" => 369100,
  "source" => "/var/log/nds/cmdc/cmdc.audit",
  "@version" => "1",
  "message" => "2018/04/18 13:41:34.841 [BaseAsyncApi] [CMDC-pool-2-thread-4]: INFO: [98293] CMDC response status=200 CMDC=4ms TWC=3ms #TWC=1 ",
  "instanceId" => nil,
  "prospector" => {
    "type" => "log"
  },
  "beat" => {
    "name" => "appin1a",
    "hostname" => "appin1a",
    "version" => "6.2.3"
  },
  "cmdcLogId" => nil,
  "tags" => [
    [0] "beats_input_codec_plain_applied",
    [1] "_grokparsefailure"
  ]
}

This seems to be the issue: there's a _grokparsefailure tag, so the event is sent to:

if "_grokparsefailure" in [tags] {
  elasticsearch {
    hosts => "localhost:9200"
    index => "cmdc2-error-%{cmdcLogId}"
    document_type => "error_logs"
    codec => "json"
  }
}

But there's no value for cmdcLogId, so index => "cmdc2-error-%{cmdcLogId}" won't be resolved correctly.

That's great. I removed that section from the output and no more warnings are displayed. How can I make sure that the logs were processed and sent to Elasticsearch?

Your if-conditional strategy makes sense, but I suggest not having an index name that depends on a field that may not exist (in this case, cmdcLogId).
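
For instance, if you want to keep cmdcLogId in the index name, you could give it a default value in the filter section so that the %{cmdcLogId} reference always resolves. A minimal sketch (the default value "unknown" is my placeholder; it must stay lowercase, since index names must be lowercase):

  filter {
    if ![cmdcLogId] {
      # field missing or nil: set a lowercase default so that
      # "cmdc2-error-%{cmdcLogId}" always becomes a valid index name
      mutate {
        replace => { "cmdcLogId" => "unknown" }
      }
    }
  }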

OK, I got you. I'll remove that index and try again. To make sure that logs were sent from Logstash to Elasticsearch, should I now monitor on the Elasticsearch side?

Yes, but on the Logstash side you can also query the Logstash API to find out how many events each output is processing.

Typically you'll want to see that the elasticsearch output in the first "if" clause processes no events (which means no errors).
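
For example, something like this (assuming the default Logstash API port 9600, run on the Logstash host) returns per-plugin statistics, including the in/out event counters for each elasticsearch output:

  curl -s 'localhost:9600/_node/stats/pipelines?pretty'

If the counters for the first elasticsearch output stay at zero while the second one keeps growing, events are flowing through the else branch only, as expected.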
