Getting Failed Action error in logstash-plain.log

I am sending logs to the ELK server from a different system using Filebeat, but only one log line shows up in Kibana. The other logs are missing, even though I am sending more than 100 lines. When I checked the logstash-plain.log file, I found something wrong, like this:

[2017-09-28T06:49:24,469][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>500}
[2017-09-28T06:49:25,169][WARN ][logstash.inputs.beats    ] Beats input: SSL Certificate will not be used
[2017-09-28T06:49:25,169][WARN ][logstash.inputs.beats    ] Beats input: SSL Key will not be used
[2017-09-28T06:49:25,169][INFO ][logstash.inputs.beats    ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2017-09-28T06:49:25,212][INFO ][logstash.pipeline        ] Pipeline main started
[2017-09-28T06:49:25,278][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2017-09-28T07:04:10,232][WARN ][logstash.outputs.elasticsearch] Failed action. {:status=>409, :action=>["create", {:_id=>"%{[@metadata][computed_id]}", :_index=>"logstash-2017.09.28", :_type=>"log", :_routing=>nil}, 2017-09-28T06:03:54.530Z MyHostName[15/Mar/2017:07:27:40 +0000] [Module1=14%|Module2=07%|Module3=11%|Module4=15%|Module5=27%|Module6=27%|Module7=33%|Module8=51%]], :response=>{"create"=>{"_index"=>"logstash-2017.09.28", "_type"=>"log", "_id"=>"%{[@metadata][computed_id]}", "status"=>409, "error"=>{"type"=>"version_conflict_engine_exception", "reason"=>"[log][%{[@metadata][computed_id]}]: version conflict, document already exists (current version [1])", "index_uuid"=>"sGN1VTXOSFKSQ5ZvTlloDg", "shard"=>"4", "index"=>"logstash-2017.09.28"}}}}

The logs which I am sending to ELK using Filebeat have the following format:

[15/Mar/2017:07:27:40 +0000] [Module1=14%|Module2=07%|Module3=11%|Module4=15%|Module5=27%|Module6=27%|Module7=33%|Module8=51%]
[15/Mar/2017:07:27:40 +0000] [Module1=14%|Module2=07%|Module3=11%|Module4=15%|Module5=27%|Module6=27%|Module7=33%|Module8=51%]

In Kibana I need to see the percentage of code coverage for each module individually. My Logstash filter config file, 10-filter.conf, looks like this:

filter {
  if [type] == "performance" {
    grok {
      patterns_dir => ["/etc/logstash/patterns"]
      #match => { "message" => "\[%{HTTPDATE:timestamp}\] \[%{DATA:params1}\] \[%{DATA:params2}\] \[%{DATA:tweet}\]" }
      match => { "message" => "\[%{HTTPDATE:timestamp}\] \[%{DATA:params1}\]" }
    }

    kv {
      source      => "params1"
      field_split => "|"
    }

    date {
      match  => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z", "ISO8601" ]
      target => "@timestamp"
    }

    mutate {
      convert => [ "Module1", "integer" ]
      convert => [ "Module2", "integer" ]
      convert => [ "Module3", "integer" ]
      convert => [ "Module4", "integer" ]
      convert => [ "Module5", "integer" ]
      convert => [ "Module6", "integer" ]
      convert => [ "Module7", "integer" ]
      convert => [ "Module8", "integer" ]
    }

    mutate {
      remove_field => ["timestamp", "params1"]
    }
  }
}
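A quick way to test the filter in isolation, before involving Filebeat, is to pipe a sample line through Logstash with a stdin input. A minimal sketch (the file name test.conf is my own choice); the type is forced so the conditional matches:

```
# test.conf -- feed sample lines through the same filter
input {
  stdin { type => "performance" }
}

# ... paste the filter block from 10-filter.conf here ...

output {
  stdout { codec => rubydebug }
}
```

Then run something like `echo '[15/Mar/2017:07:27:40 +0000] [Module1=14%|Module2=07%]' | bin/logstash -f test.conf` and check whether the ModuleN fields come out as expected.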

I am facing the following problems:

  1. I am getting only one log entry in Kibana, not all of them.
  2. The modules are not separated into individual fields. Maybe something is wrong in the filter config file.

Here's the interesting part of the error message:

"error"=>{"type"=>"version_conflict_engine_exception", "reason"=>"[log][%{[@metadata][computed_id]}]: version conflict, document already exists (current version [1])

What does your elasticsearch output plugin configuration look like?

Here is the content of 30-elasticsearch-output.conf file:

output {
    elasticsearch {
        hosts => ["localhost:9200"]
        document_id => "%{[@metadata][computed_id]}"
        action => "create"
    }
    stdout { codec => rubydebug }
}

Okay. The error message indicates that there is no [@metadata][computed_id] field in the events. If you reconfigure your stdout output to stdout { codec => rubydebug { metadata => true } } it'll dump the complete event, including the metadata, so you can inspect it.
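For reference, that change applied to the output file shown above would look something like this:

```
output {
    elasticsearch {
        hosts => ["localhost:9200"]
        document_id => "%{[@metadata][computed_id]}"
        action => "create"
    }
    # metadata => true makes rubydebug print [@metadata] fields as well
    stdout { codec => rubydebug { metadata => true } }
}
```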

Hi Magnus,
Thanks for finding the issue. But I have one question: is the computed_id field a default in Logstash? Does it have a default value?

Also, which file does Logstash dump the data to? I only know of the /var/log/logstash/ directory for Logstash logs. I checked the log files there but couldn't find the event dumps.

Is the computed_id field a default in Logstash?

No. I assumed you were attempting to create it yourself.
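If you do want to create such an id yourself, the fingerprint filter is a common way to do it. A minimal sketch (the choice of source field, method, and key here are assumptions):

```
filter {
  fingerprint {
    # hash the raw message so identical lines get the same id
    # (useful when deduplication is the goal)
    source => "message"
    method => "SHA1"
    key    => "some-arbitrary-key"
    target => "[@metadata][computed_id]"
  }
}
```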

Which file does Logstash dump the data to?

It should be in one of the files in /var/log/logstash (if that's where you've configured Logstash to store the logs).
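If you're unsure where that is, the log directory is controlled by path.logs in logstash.yml; something like the following (the path shown is the usual package default, not verified for your install):

```
# /etc/logstash/logstash.yml
path.logs: /var/log/logstash
```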

Thanks Magnus.
You helped me a lot.
My problem related to computed_id is solved now.

But why are the modules not being separated into fields? I am using the kv plugin as well, and as far as I can tell my filter config file is correct. Could this be because of patterns_dir, since the patterns are not in /etc/logstash/patterns but in /usr/share/some_path/patterns? I thought that could be the issue, so I added the other path to patterns_dir as well, but the problem remains.

Please show your current configuration and an example of an event (as stored in Elasticsearch) that wasn't processed correctly.

Magnus, I found the mistake that was causing the field-separation problem. Filebeat was sending the document_type as log, while the Logstash filter was checking for a document_type of performance. It was my mistake. Thanks for the prompt help.
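For anyone else hitting the same mismatch: in Filebeat 5.x the type is set per prospector in filebeat.yml. A sketch (the log path is a placeholder):

```
filebeat.prospectors:
  - input_type: log
    paths:
      - /path/to/performance/logs/*.log
    # must match the [type] == "performance" check in 10-filter.conf
    document_type: performance
```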

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.