Getting Failed Action error in logstash-plain.log

(Anil) #1

I am sending logs to an ELK server from a different system using Filebeat, but only one log entry appears in Kibana; the rest are missing, even though I am sending more than 100 lines. When I checked the logstash-plain.log file, I found errors like this:

[2017-09-28T06:49:24,469][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>500}
[2017-09-28T06:49:25,169][WARN ][    ] Beats input: SSL Certificate will not be used
[2017-09-28T06:49:25,169][WARN ][    ] Beats input: SSL Key will not be used
[2017-09-28T06:49:25,169][INFO ][    ] Beats inputs: Starting input listener {:address=>""}
[2017-09-28T06:49:25,212][INFO ][logstash.pipeline        ] Pipeline main started
[2017-09-28T06:49:25,278][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2017-09-28T07:04:10,232][WARN ][logstash.outputs.elasticsearch] Failed action. {:status=>409, :action=>["create", {:_id=>"%{[@metadata][computed_id]}", :_index=>"logstash-2017.09.28", :_type=>"log", :_routing=>nil}, 2017-09-28T06:03:54.530Z MyHostName[15/Mar/2017:07:27:40 +0000] [Module1=14%|Module2=07%|Module3=11%|Module4=15%|Module5=27%|Module6=27%|Module7=33%|Module8=51%]], :response=>{"create"=>{"_index"=>"logstash-2017.09.28", "_type"=>"log", "_id"=>"%{[@metadata][computed_id]}", "status"=>409, "error"=>{"type"=>"version_conflict_engine_exception", "reason"=>"[log][%{[@metadata][computed_id]}]: version conflict, document already exists (current version [1])", "index_uuid"=>"sGN1VTXOSFKSQ5ZvTlloDg", "shard"=>"4", "index"=>"logstash-2017.09.28"}}}}

The logs I am sending to ELK using Filebeat have the following format:

[15/Mar/2017:07:27:40 +0000] [Module1=14%|Module2=07%|Module3=11%|Module4=15%|Module5=27%|Module6=27%|Module7=33%|Module8=51%]
[15/Mar/2017:07:27:40 +0000] [Module1=14%|Module2=07%|Module3=11%|Module4=15%|Module5=27%|Module6=27%|Module7=33%|Module8=51%]

In Kibana I want to see the percentage of code coverage for each module individually. My Logstash filter config file, 10-filter.conf, looks like this:

filter {
    if [type] == "performance" {
        grok {
            patterns_dir => ["/etc/logstash/patterns"]
            #match => { "message" => "\[%{HTTPDATE:timestamp}\] \[%{DATA:params1}\] \[%{DATA:params2}\] \[%{DATA:tweet}\]" }
            match => { "message" => "\[%{HTTPDATE:timestamp}\] \[%{DATA:params1}\]" }
        }
        kv {
            source => "params1"
            field_split => "|"
        }
        date {
            match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z", "ISO8601" ]
            target => "@timestamp"
        }
        mutate {
            convert => [ "Module1", "integer" ]
            convert => [ "Module2", "integer" ]
            convert => [ "Module3", "integer" ]
            convert => [ "Module4", "integer" ]
            convert => [ "Module5", "integer" ]
            convert => [ "Module6", "integer" ]
            convert => [ "Module7", "integer" ]
            convert => [ "Module8", "integer" ]
            remove_field => ["timestamp", "params1"]
        }
    }
}
I am facing the following problems:

  1. I am getting only one log entry in Kibana instead of all of them.
  2. The modules are not separated into individual fields. Maybe something is wrong in my filter config file.

(Magnus Bäck) #2

Here's the interesting part of the error message:

"error"=>{"type"=>"version_conflict_engine_exception", "reason"=>"[log][%{[@metadata][computed_id]}]: version conflict, document already exists (current version [1])

What does your elasticsearch output plugin configuration look like?

(Anil) #3

Here is the content of 30-elasticsearch-output.conf file:

output {
    elasticsearch {
        hosts => ["localhost:9200"]
        document_id => "%{[@metadata][computed_id]}"
        action => "create"
    }
    stdout { codec => rubydebug }
}

(Magnus Bäck) #4

Okay. The error message indicates that there is no [@metadata][computed_id] field in the events. If you reconfigure your stdout output to stdout {codec => rubydebug { metadata => true } } it'll dump the complete event, including the metadata, so you can inspect it.
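Applied to a 30-elasticsearch-output.conf like yours, it would look something like this:

output {
    elasticsearch {
        hosts => ["localhost:9200"]
        document_id => "%{[@metadata][computed_id]}"
        action => "create"
    }
    stdout { codec => rubydebug { metadata => true } }
}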

(Anil) #5

Hi Magnus,
Thanks for finding the issue. But I have one question: is the computed_id field a default in Logstash? Does it have a default value?

Also, which file does Logstash dump the event data to? I only know about the /var/log/logstash/ directory for Logstash logs. I checked the log files there but couldn't find the event dumps.

(Magnus Bäck) #6

Is computed_id field default in logstash?

No. I assumed you were attempting to create it yourself.

Which file does the logstash dump the data to?

It should be in one of the files in /var/log/logstash (if that's where you've configured Logstash to store the logs).
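If you do want to compute such an id yourself, one common approach is the fingerprint filter. This is only a sketch, not your actual config; the source field and key are assumptions:

filter {
    fingerprint {
        # Hash the message so each distinct line gets a deterministic id.
        source => ["message"]
        method => "SHA1"
        key => "any-static-key"
        target => "[@metadata][computed_id]"
    }
}

Note that with action => "create", two identical log lines hash to the same id and the second is rejected with a 409 version conflict, so include a timestamp or offset field in source if your lines can legitimately repeat.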

(Anil) #7

Thanks Magnus.
You helped me a lot.
My problem related to computed_id is solved now.

But why are the modules still not separated into fields? I am also using the kv plugin, and as far as I know my filter config file is correct. Could the issue be with patterns_dir, since the patterns are not in /etc/logstash/patterns but in /usr/share/some_path/patterns? I thought that could be the cause, so I added that path to patterns_dir as well, but the issue remains.

(Magnus Bäck) #8

Please show your current configuration and an example of an event (as stored in Elasticsearch) that wasn't processed correctly.

(Anil) #9

Magnus, I found the mistake that was causing the field-separation problem. Filebeat was sending the document_type as log, while the Logstash filter was checking for a document_type of performance. It was my mistake. Thanks for the instant help.
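For anyone else hitting this: the two sides have to match. In Filebeat 5.x the prospector's document_type setting (which defaults to "log") becomes the [type] field that the Logstash conditional checks, so a sketch of the fix looks like this (the path here is hypothetical):

filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/myapp/performance.log
  document_type: performance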

(system) #10

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.