I have the exact same error. I cannot tell from the error message which pipeline is causing it, or what exactly fails to parse:
06/12/2017 00:03:39 Error: [400] {"error":{"root_cause":[{"type":"parsing_exception","reason":"Unknown key for a START_OBJECT in [bool].","line":3,"col":13}],"type":"parsing_exception","reason":"Unknown key for a START_OBJECT in [bool].","line":3,"col":13},"status":400}
The ES logs are flooded with this error, and it must have a big impact on my report. Ideally the error would spit out the subject of the parse error, so I could reproduce it and confirm what the problem is. I don't know how to do that yet, so I'll follow this post for the moment until someone answers.
OK, after enabling LOG_LEVEL=debug, the previous error now mentions a specific pipeline of mine:
[2017-12-08T23:37:00,092][ERROR][logstash.pipeline ] A plugin had an unrecoverable error. Will restart this plugin.
Pipeline_id:main
Plugin: <LogStash::Inputs::Elasticsearch hosts=>["elasticsearch:9200"], index=>"myindex", query=>"\n\t {\n\t\t \"bool\": {\n\t\t\t\"must\": [\n\t\t\t { \"match\": { \"message\": \"query\": \"WebHookHelper deleting AppID instanceId\", \"type\": \"phrase\" }}\n\t\t\t]\n\t\t }\n\n\t }", id=>"53eb1887ad75b4c5e6d069205c78d27900d377eb0b52438be93fb7a3297bdefa", enable_metric=>true, codec=><LogStash::Codecs::JSON id=>"json_1511329a-1c81-4e9d-b3f1-ebe1bdef607f", enable_metric=>true, charset=>"UTF-8">, size=>1000, scroll=>"1m", docinfo=>false, docinfo_target=>"@metadata", docinfo_fields=>["_index", "_type", "_id"], ssl=>false>
Error: [400] {"error":{"root_cause":[{"type":"parsing_exception","reason":"Unknown key for a START_OBJECT in [bool].","line":3,"col":13}],"type":"parsing_exception","reason":"Unknown key for a START_OBJECT in [bool].","line":3,"col":13},"status":400}
Exception: Elasticsearch::Transport::Transport::Errors::BadRequest
Stack: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/elasticsearch-transport-5.0.4/lib/elasticsearch/transport/transport/base.rb:202:in `__raise_transport_error'
/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/elasticsearch-transport-5.0.4/lib/elasticsearch/transport/transport/base.rb:319:in `perform_request'
/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/elasticsearch-transport-5.0.4/lib/elasticsearch/transport/transport/http/faraday.rb:20:in `perform_request'
/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/elasticsearch-transport-5.0.4/lib/elasticsearch/transport/client.rb:131:in `perform_request'
/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/elasticsearch-api-5.0.4/lib/elasticsearch/api/actions/search.rb:183:in `search'
/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-elasticsearch-4.1.0/lib/logstash/inputs/elasticsearch.rb:152:in `run'
/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:574:in `inputworker'
/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:567:in `block in start_input'
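Now that the debug log shows the exact query the plugin sends, I should be able to reproduce the 400 outside of Logstash by POSTing the same body straight to the search endpoint (untested sketch; host and index taken from my pipeline below):

curl -s -H 'Content-Type: application/json' \
  'http://elasticsearch:9200/myindex/_search' -d '
{
  "bool": {
    "must": [
      { "match": { "message": "query": "WebHookHelper deleting AppID instanceId", "type": "phrase" }}
    ]
  }
}'

If that returns the same "Unknown key for a START_OBJECT in [bool]" parsing_exception, then Elasticsearch is rejecting the query body itself and Logstash is only relaying the error.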
Is this a misconfiguration of LogStash::Codecs::JSON?
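I don't think the codec is the problem, actually: running the raw query string through a standalone JSON parser (a quick check, assuming jq is installed) shows it is not even valid JSON:

echo '{
  "bool": {
    "must": [
      { "match": { "message": "query": "WebHookHelper deleting AppID instanceId", "type": "phrase" }}
    ]
  }
}' | jq .

jq rejects the "message": "query": part; a JSON object can't have a value followed by another key like that, so the match options presumably need to be nested in an object under the field name.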
Here is the query as it appears inside the pipeline:
input {
  elasticsearch {
    hosts => "elasticsearch:9200"
    index => "myindex"
    query => '
    {
      "bool": {
        "must": [
          { "match": { "message": "query": "WebHookHelper deleting AppID instanceId", "type": "phrase" }}
        ]
      }
    }'
  }
}
output {
  csv {
    # These are the fields to output in CSV format. Each field needs to be one
    # of the fields shown in the output when you run the Elasticsearch query.
    fields => ["timestamp", "message"]
    # This is where the output is stored. Several files can be used by
    # including a timestamp in the filename.
    path => "/tmp/output/deletedApps-%{+YYYY-MM-dd}.csv"
    csv_options => {"col_sep" => "\t" "row_sep" => "\n"}
  }
}
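Update: I believe I see it now. The Elasticsearch input sends the query string as the full search request body, so the bool clause has to sit under a top-level "query" key (that is what "Unknown key for a START_OBJECT in [bool]" is complaining about), and the match options have to be nested in an object under the field name. A sketch of what I think the corrected input should look like (same hosts and index; untested):

input {
  elasticsearch {
    hosts => "elasticsearch:9200"
    index => "myindex"
    query => '
    {
      "query": {
        "bool": {
          "must": [
            { "match": { "message": { "query": "WebHookHelper deleting AppID instanceId", "type": "phrase" } } }
          ]
        }
      }
    }'
  }
}

The same phrase match could presumably also be written as a match_phrase query on the message field.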