[2017-01-17T09:55:06,783][WARN ][o.e.l.LicenseService ] [testing.test]
#
# License [will expire] on [Thursday, January 19, 2017]. If you have a new license, please update it.
# Otherwise, please reach out to your support contact.
#
# Commercial plugins operate with reduced functionality on license expiration:
# - security
# - Cluster health, cluster stats and indices stats operations are blocked
# - All data operations (read and write) continue to work
# - watcher
# - PUT / GET watch APIs are disabled, DELETE watch API continues to work
# - Watches execute and write to the history
# - The actions of the watches don't execute
# - monitoring
# - The agent will stop collecting cluster and indices metrics
# - The agent will stop automatically cleaning indices older than [xpack.monitoring.history.duration]
# - graph
# - Graph explore APIs are disabled
But nothing relevant to the logstash error:
[2017-01-17T09:59:36,187][ERROR][logstash.outputs.elasticsearch] Encountered an unexpected error submitting a bulk request! Will retry. {:error_message=>"undefined method `response' for #&lt;LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError:0x7d8581c0&gt;", :class=>"NoMethodError", :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-5.4.0-java/lib/logstash/outputs/elasticsearch/common.rb:223:in `safe_bulk'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-5.4.0-java/lib/logstash/outputs/elasticsearch/common.rb:187:in `safe_bulk'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-5.4.0-java/lib/logstash/outputs/elasticsearch/common.rb:109:in `submit'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-5.4.0-java/lib/logstash/outputs/elasticsearch/common.rb:76:in `retrying_submit'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-5.4.0-java/lib/logstash/outputs/elasticsearch/common.rb:27:in `multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator_strategies/shared.rb:12:in `multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator.rb:42:in `multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:331:in `output_batch'", "org/jruby/RubyHash.java:1342:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:330:in `output_batch'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:288:in `worker_loop'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:258:in `start_workers'"]}
Hi, I was having the same problem as you.
I was parsing huge log files (150 GB in total), and after 30 minutes the process just stopped with the same kind of errors you got.
After a day of hard debugging, I concluded that the problem was with the logstash-output-elasticsearch plugin.
The version bundled with Logstash 5.1.2 is version 5.4 of the plugin. Upgrade it to the latest version (6.2.4 at the time of writing) and the errors are gone; Logstash continues to forward events to Elasticsearch.
To update the plugin, do the following (on Debian):
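A minimal sketch of the upgrade, assuming a standard Debian package install of Logstash under /usr/share/logstash (paths may differ on other setups):

```shell
# Check which version of the output plugin is currently installed
/usr/share/logstash/bin/logstash-plugin list --verbose logstash-output-elasticsearch

# Upgrade it to the latest published version
sudo /usr/share/logstash/bin/logstash-plugin update logstash-output-elasticsearch

# Restart Logstash so the new plugin version is loaded
sudo systemctl restart logstash
```

After the restart, verify with `logstash-plugin list --verbose` that the reported version is newer than 5.4.0.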