Error 400 when sending logs from Logstash

Hello Team,

I recently upgraded Elasticsearch from 6.x to 7.x and hit an issue in one of my pipelines. It worked fine on ES 6.x, but on ES 7.x Logstash is getting 400 errors.

Logstash is flooded with this error:

[2020-01-09T06:19:16,088][ERROR][logstash.outputs.elasticsearch] Encountered a retryable error. Will Retry with exponential backoff  {:code=>400, :url=>"http://es-client.es.svc.cluster.local:9200/_bulk"}
[2020-01-09T06:19:16,089][ERROR][logstash.outputs.elasticsearch] Encountered a retryable error. Will Retry with exponential backoff  {:code=>400, :url=>"http://es-client.es.svc.cluster.local:9200/_bulk"}
[2020-01-09T06:19:16,089][ERROR][logstash.outputs.elasticsearch] Encountered a retryable error. Will Retry with exponential backoff  {:code=>400, :url=>"http://es-client.es.svc.cluster.local:9200/_bulk"}

Here are my output and filters:
Filter:

filter {
  mutate {
    add_field => { "processedInFrankfurt" => "%{[@timestamp]}" }
  }
  date {
    match => [ "date", "YYYY-MM-dd" ]
  }
  fingerprint {
    source => [ "source", "account-id", "geo", "node-id", "product-id", "product-version" ]
    target => "[@metadata][fingerprint]"
    method => "SHA256"
    key => "afterallweareonlyordinarymen"
    concatenate_sources => true
  }
}
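For reference, with `key` set the fingerprint filter computes an HMAC-SHA256 over the concatenated source fields, which is what makes the `document_id` stable across re-ingests. A rough Python sketch of the idea (the exact concatenation format is plugin-internal, and the event values here are made up):

```python
import hashlib
import hmac

def fingerprint(event, sources, key):
    """Approximate the Logstash fingerprint filter with
    concatenate_sources => true: join the source fields into one
    string, then HMAC-SHA256 it with the configured key."""
    # NOTE: the real plugin uses its own internal join format;
    # this only illustrates the hashing step.
    concatenated = "".join("|%s|%s" % (f, event.get(f, "")) for f in sources)
    return hmac.new(key.encode(), concatenated.encode(), hashlib.sha256).hexdigest()

event = {"source": "app", "account-id": "42", "geo": "eu"}  # made-up values
doc_id = fingerprint(event, ["source", "account-id", "geo"],
                     "afterallweareonlyordinarymen")
print(doc_id)  # 64 hex characters, identical for identical field values
```

Because the same field values always hash to the same ID, the `update` + `doc_as_upsert` output below deduplicates events rather than indexing them twice.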

Output:

output {
  elasticsearch {
    retry_on_conflict => 5
    action => "update"
    doc_as_upsert => true
    hosts => ["elasticsearch_url:9200"]
    index => "logstash_index"
    user => "user"
    password => "password"
    document_type => "_doc"
    document_id => "%{[@metadata][fingerprint]}"
  }
}

Thanks in advance!!!


Is there a more informative error message in the elasticsearch logs?

Hello Badger,

Unfortunately there is not much in the logs that provides more information about this error.

[2020-01-14T07:57:41,130][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2020-01-14T07:57:41,146][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"_doc"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2020-01-14T07:57:41,159][WARN ][logstash.pipeline        ] CAUTION: Recommended inflight events max exceeded! Logstash will run with up to 22000 events in memory in your current configuration. If your message sizes are large this may cause instability with the default heap size. Please consider setting a non-standard heap size, changing the batch size (currently 250), or changing the number of pipeline workers (currently 88) {:pipeline_id=>"tau-cloudepo", :thread=>"#<Thread:0x78aac5c4 run>"}
[2020-01-14T07:57:41,190][INFO ][logstash.outputs.elasticsearch] Installing elasticsearch template to _template/logstash
[2020-01-14T07:57:41,243][ERROR][logstash.outputs.elasticsearch] Failed to install template. {:message=>"Got response code '400' contacting Elasticsearch at URL 'http://elasticsearch_url:9200/_template/logstash'", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError", :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.0-java/lib/logstash/outputs/elasticsearch/http_client/manticore_adapter.rb:80:in `perform_request'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:291:in `perform_request_to_url'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:278:in `block in perform_request'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:373:in `with_connection'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:277:in `perform_request'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:285:in `block in Pool'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.0-java/lib/logstash/outputs/elasticsearch/http_client.rb:348:in `template_put'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.0-java/lib/logstash/outputs/elasticsearch/http_client.rb:86:in `template_install'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.0-java/lib/logstash/outputs/elasticsearch/template_manager.rb:21:in `install'", 
"/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.0-java/lib/logstash/outputs/elasticsearch/template_manager.rb:9:in `install_template'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.0-java/lib/logstash/outputs/elasticsearch/common.rb:118:in `install_template'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.0-java/lib/logstash/outputs/elasticsearch/common.rb:49:in `block in install_template_after_successful_connection'"]}
[2020-01-14T07:57:41,333][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"tau-cloudepo", :thread=>"#<Thread:0x78aac5c4 run>"}
[2020-01-14T07:57:41,407][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:"tau-cloudepo"], :non_running_pipelines=>[]}

However, there are a few log entries from this pipeline that seem suspicious to me.

I asked about the elasticsearch logs, not the logstash logs.

Sorry for the confusion; there are no logs on the Elasticsearch side.

Which Logstash version are you using?

The output plugin is version 9.2.0, dated June 2018, while Elasticsearch 7 came out in April 2019.

It could be that your Logstash, or the Logstash plugin, is too old for the Elasticsearch server you are using.
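One thing worth checking alongside the upgrade: mapping types were removed in Elasticsearch 7, so the `document_type => "_doc"` setting and the bundled 6.x template (which nests its mappings under `_doc`, as in your "Attempting to install template" log line) can both trigger 400 responses. A sketch of the output block with that setting dropped, assuming a current logstash-output-elasticsearch plugin (hosts and credentials kept as in your original):

```
output {
  elasticsearch {
    retry_on_conflict => 5
    action => "update"
    doc_as_upsert => true
    hosts => ["elasticsearch_url:9200"]
    index => "logstash_index"
    user => "user"
    password => "password"
    # document_type removed: Elasticsearch 7 no longer accepts mapping types
    document_id => "%{[@metadata][fingerprint]}"
  }
}
```

A newer plugin also ships a 7.x-compatible default template, which should make the `_template/logstash` install stop failing.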

I'm using Logstash 6.4.1; let me try a newer version of Logstash.

Thanks in advance!


You're welcome. Give a shout about how it went :slight_smile:

By the way, you may want to have a look at some open source parsers, such as https://github.com/empow/logstash-parsers/.

It can get very tricky to nail down the real meaning of the data. These kinds of parsers plug into Logstash as .conf files and can greatly help you consolidate and normalize numerous log dumps into information that can be used effectively (using the Elastic Common Schema, MITRE rationale, etc.). See https://blog.empow.co/loganalysis.