Logstash throwing 406 when running as Service on Linux

I set up ELK on a Linux box with Filebeat as the data shipper, and I am seeing a very odd issue. When I run Logstash as a service with `service logstash start`, it receives the data from Filebeat but throws the exception below while pushing to ES.

```
{:timestamp=>"2019-04-02T06:08:36.447000-0400",
 :message=>"Attempted to send a bulk request to Elasticsearch configured at '[\"http://localhost:9200/\"]', but an error occurred and it failed! Are you sure you can reach elasticsearch from this machine using the configuration provided?",
 :error_message=>"[406] {\"error\":\"Content-Type header [text/plain; charset=ISO-8859-1] is not supported\",\"status\":406}",
 :error_class=>"Elasticsearch::Transport::Transport::Errors::NotAcceptable",
 :backtrace=>[
   "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/base.rb:146:in `__raise_transport_error'",
   "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/base.rb:256:in `perform_request'",
   "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/http/manticore.rb:54:in `perform_request'",
   "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/client.rb:125:in `perform_request'",
   "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-api-1.0.15/lib/elasticsearch/api/actions/bulk.rb:87:in `bulk'",
   "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/http_client.rb:53:in `non_threadsafe_bulk'",
   "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/http_client.rb:38:in `bulk'",
   "org/jruby/ext/thread/Mutex.java:149:in `synchronize'",
   "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/http_client.rb:38:in `bulk'",
   "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/common.rb:163:in `safe_bulk'",
   "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/common.rb:101:in `submit'",
   "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/common.rb:86:in `retrying_submit'",
   "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/common.rb:29:in `multi_receive'",
   "org/jruby/RubyArray.java:1653:in `each_slice'",
   "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/common.rb:28:in `multi_receive'",
   "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/output_delegator.rb:130:in `worker_multi_receive'",
   "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/output_delegator.rb:114:in `multi_receive'",
   "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/pipeline.rb:293:in `output_batch'",
   "org/jruby/RubyHash.java:1342:in `each'",
   "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/pipeline.rb:293:in `output_batch'",
   "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/pipeline.rb:224:in `worker_loop'",
   "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/pipeline.rb:193:in `start_workers'"],
 :client_config=>{:hosts=>["http://localhost:9200/"], :ssl=>nil, :transport_options=>{:socket_timeout=>0, :request_timeout=>0, :proxy=>nil, :ssl=>{}}, :transport_class=>Elasticsearch::Transport::Transport::HTTP::Manticore, :logger=>nil, :tracer=>nil, :reload_connections=>false, :retry_on_failure=>false, :reload_on_failure=>false, :randomize_hosts=>false}, :level=>:error}
```
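(For context: the 406 is Elasticsearch 6.x rejecting the request because of its `Content-Type` header. Since 6.0, ES only accepts `application/json` or `application/x-ndjson` on the REST APIs, while the very old `logstash-output-elasticsearch-2.5.5` plugin visible in the backtrace sends `text/plain`. Roughly, the two requests differ like this; this is a sketch, not captured traffic:

```
POST /_bulk HTTP/1.1
Host: localhost:9200
Content-Type: text/plain; charset=ISO-8859-1   <-- rejected by ES 6.x with 406

POST /_bulk HTTP/1.1
Host: localhost:9200
Content-Type: application/json                 <-- accepted
```
)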

But when I run it in the foreground with `/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf`, it is able to write data to ES.

ELK versions are as follows:

Elasticsearch: 6.4.0
Kibana : 6.4.0
Logstash: 6.4.0
Filebeat: 6.6.1

What does the output configuration look like?

Here is my output configuration:

```
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "%{[fields][application]}-%{+YYYY.MM.dd}"
  }
}
```
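For what it's worth, the `%{[fields][application]}` in that index name relies on Filebeat attaching a `fields.application` value on its side. A minimal Filebeat-side sketch (the field value and log path are examples, not from my setup):

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /var/log/myapp/*.log      # example path
    fields:
      application: myapp          # becomes %{[fields][application]} in Logstash
output.logstash:
  hosts: ["localhost:5044"]       # default Beats port
```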

It also works without any issue when I run it in the foreground from /usr/share/logstash/bin.

Hi, I'm experiencing exactly the same behaviour on CentOS 7. Logstash and Filebeat are the latest versions, freshly installed. I tested by configuring Filebeat to monitor a simple log file and send to Logstash. Logstash errors with error_message [406] as described by kumarvivek633.

What user is the daemon starting as? And what user are you running the foreground test as? Do you have SELinux enabled?
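A quick way to check both (rough sketch; `getenforce` only exists where the SELinux tools are installed):

```shell
# Which user owns the Logstash JVM, if one is running? (Logstash runs on the JVM,
# so we look for a java process whose arguments mention logstash.)
ps -o user=,args= -C java 2>/dev/null | grep -i logstash || echo "no logstash JVM found"

# Is SELinux enforcing? Prints Enforcing/Permissive/Disabled where available.
getenforce 2>/dev/null || echo "SELinux tooling not installed"
```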

On my system, the Logstash daemon is running under the logstash userid. The input test is from Filebeat, currently installed on the same machine, running as root. SELinux is disabled.

I fixed the issue. It was actually a very basic problem: there were two instances of Logstash installed on the server, and the service was pointing to the older version. So I fixed that.
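In other words (an illustrative shell sketch, not read from any binary: `logstash-core-2.2.4` shows up in the backtrace above while 6.4.0 was the intended install; on a real box you would fill each variable from that install's `bin/logstash --version` and find the service's path via `systemctl cat logstash`):

```shell
# Hypothetical version strings for the two installed copies of Logstash.
ver_service="2.2.4"     # what the copy the service script points at reports
ver_foreground="6.4.0"  # what /usr/share/logstash/bin/logstash reports

# If the two installs disagree, the service is running a stale copy.
if [ "$ver_service" != "$ver_foreground" ]; then
  echo "MISMATCH: service binary reports $ver_service, foreground binary reports $ver_foreground"
fi
```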

Have you tried running it in the foreground as logstash? Give logstash a shell so you can su to it.

I did, but I've found the problem: I had an older version of Logstash communicating with a newer version of Elasticsearch. Reinstalled and it's now working.
Thanks for your replies and assistance 🙂

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.