Output to elasticsearch gives errors but works with stdout

I think the error could be related to the ruby code. How do I fix this?

Config:

input {
  beats {
    port => 5044
    ssl => false
  }
}

filter {
  if [type] == "apache" {
    ruby {
      code => "
        if event['message']
          event['message'] = event['message'].gsub('\x','Xx')
          event['message'] = event['message'].gsub('\x','XXx')
        end
      "
    }

    json {
      source => "message"
    }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
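A side note on the ruby filter above: in a single-quoted Ruby string, '\x' is just the two characters backslash and x, so the first gsub already replaces every occurrence and the second gsub never matches anything. A quick sketch of that behavior (the sample string is made up for illustration):

```ruby
# In single quotes, '\x' is a literal backslash followed by x (two characters).
msg = 'prefix \x41 suffix'

# The first pass replaces every backslash-x sequence...
step1 = msg.gsub('\x', 'Xx')
# ...so the second pass finds nothing left to match and is a no-op.
step2 = step1.gsub('\x', 'XXx')

puts step1  # prefix Xx41 suffix
puts step2  # identical to step1
```

So the second gsub line can be dropped without changing the result.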

Logstash log:

{:timestamp=>"2016-07-01T13:23:30.475000+0100", :message=>"Connection refused", :class=>"Manticore::SocketException", :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.6.0-java/lib/manticore/response.rb:37:in `initialize'", "org/jruby/RubyProc.java:281:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.6.0-java/lib/manticore/response.rb:79:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.6.0-java/lib/manticore/response.rb:256:in `call_once'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.6.0-java/lib/manticore/response.rb:153:in `code'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.18/lib/elasticsearch/transport/transport/http/manticore.rb:84:in `perform_request'", "org/jruby/RubyProc.java:281:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.18/lib/elasticsearch/transport/transport/base.rb:257:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.18/lib/elasticsearch/transport/transport/http/manticore.rb:67:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.18/lib/elasticsearch/transport/transport/sniffer.rb:32:in `hosts'", "org/jruby/ext/timeout/Timeout.java:147:in `timeout'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.18/lib/elasticsearch/transport/transport/sniffer.rb:31:in `hosts'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.18/lib/elasticsearch/transport/transport/base.rb:79:in `reload_connections!'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.0-java/lib/logstash/outputs/elasticsearch/http_client.rb:72:in `sniff!'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.0-java/lib/logstash/outputs/elasticsearch/http_client.rb:60:in `start_sniffing!'", "org/jruby/ext/thread/Mutex.java:149:in `synchronize'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.0-java/lib/logstash/outputs/elasticsearch/http_client.rb:60:in `start_sniffing!'", "org/jruby/RubyKernel.java:1479:in `loop'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.0-java/lib/logstash/outputs/elasticsearch/http_client.rb:59:in `start_sniffing!'"], :level=>:error}
{:timestamp=>"2016-07-01T13:23:52.470000+0100", :message=>"CircuitBreaker::rescuing exceptions", :name=>"Beats input", :exception=>LogStash::Inputs::Beats::InsertingToQueueTakeTooLong, :level=>:warn}
{:timestamp=>"2016-07-01T13:23:52.470000+0100", :message=>"Beats input: The circuit breaker has detected a slowdown or stall in the pipeline, the input is closing the current connection and rejecting new connection until the pipeline recover.", :exception=>LogStash::Inputs::BeatsSupport::CircuitBreaker::HalfOpenBreaker, :level=>:warn}
{:timestamp=>"2016-07-01T13:23:52.471000+0100", :message=>"CircuitBreaker::rescuing exceptions", :name=>"Beats input", :exception=>LogStash::Inputs::Beats::InsertingToQueueTakeTooLong, :level=>:warn}
{:timestamp=>"2016-07-01T13:23:52.471000+0100", :message=>"Beats input: The circuit breaker has detected a slowdown or stall in the pipeline, the input is closing the current connection and rejecting new connection until the pipeline recover.", :exception=>LogStash::Inputs::BeatsSupport::CircuitBreaker::HalfOpenBreaker, :level=>:warn}
{:timestamp=>"2016-07-01T13:23:53.471000+0100", :message=>"Beats input: the pipeline is blocked, temporary refusing new connection.", :reconnect_backoff_sleep=>0.5, :level=>:warn}
{:timestamp=>"2016-07-01T13:23:53.472000+0100", :message=>"CircuitBreaker::Open", :name=>"Beats input", :level=>:warn}
{:timestamp=>"2016-07-01T13:23:53.473000+0100", :message=>"Beats input: The circuit breaker has detected a slowdown or stall in the pipeline, the input is closing the current connection and rejecting new connection until the pipeline recover.", :exception=>LogStash::Inputs::BeatsSupport::CircuitBreaker::OpenBreaker, :level=>:warn}
{:timestamp=>"2016-07-01T13:23:53.972000+0100", :message=>"Beats input: the pipeline is blocked, temporary refusing new connection.", :reconnect_backoff_sleep=>0.5, :level=>:warn}
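For context on these warnings: the Beats input wraps queue inserts in a circuit breaker. When inserts repeatedly take too long, the breaker trips open and the input refuses new connections until the pipeline recovers. A minimal sketch of the general pattern (not Logstash's actual implementation; all names here are illustrative):

```ruby
# Toy circuit breaker: opens after N consecutive failures,
# then rejects further calls until reset.
class ToyBreaker
  attr_reader :state

  def initialize(threshold)
    @threshold = threshold  # consecutive failures before the breaker opens
    @failures  = 0
    @state     = :closed
  end

  # Run the block through the breaker; return :rejected when open.
  def call
    return :rejected if @state == :open
    begin
      result = yield
      @failures = 0         # a success resets the failure streak
      result
    rescue StandardError
      @failures += 1
      @state = :open if @failures >= @threshold
      :failed
    end
  end
end

breaker = ToyBreaker.new(2)
breaker.call { raise 'queue insert too slow' }  # :failed
breaker.call { raise 'queue insert too slow' }  # :failed, breaker opens
puts breaker.state                              # open
puts breaker.call { 'event' }                   # rejected
```

The key takeaway is that these warnings are a symptom of backpressure: something downstream (the filter or the output) is stalling, and the input shields itself by rejecting connections.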

What's the connection between ElastiCache and this?

Oh, sorry, I meant Elasticsearch, not ElastiCache!

:message=>"Connection refused"

This should be quite clear; there's nothing answering on localhost:9200. Is Elasticsearch running? If yes, is it really listening on that port, or could a firewall be blocking the connection (unlikely)?

lsof -i :9200
COMMAND   PID          USER   FD   TYPE    DEVICE SIZE/OFF NODE NAME
java    32619 elasticsearch  489u  IPv6 341712524      0t0  TCP localhost:9200 (LISTEN)

As soon as I remove the ruby code, the output does go to Elasticsearch.

OK, so I've established that there is a break in communication with Elasticsearch:

{:timestamp=>"2016-07-04T10:09:50.361000+0100", :message=>"Beats input: the pipeline is blocked, temporary refusing new connection.", :reconnect_backoff_sleep=>0.5, :level=>:warn}

I have no idea why this is suddenly happening. I tried sending it a minimal amount of logs to rule out that it's overloaded, but I am still having this issue.

Do you have any tips on what to check or how to fix this, please?

The connection does get established, but it breaks somewhere.

 lsof -i :9200
    COMMAND   PID          USER   FD   TYPE    DEVICE SIZE/OFF NODE NAME
    java    17582 elasticsearch  506u  IPv6 396166915      0t0  TCP localhost:9200 (LISTEN)
    java    17582 elasticsearch 6953u  IPv6 396167077      0t0  TCP localhost:9200->localhost:42094 (ESTABLISHED)
    java    17582 elasticsearch 6954u  IPv6 396168016      0t0  TCP localhost:9200->localhost:42105 (ESTABLISHED)
    java    17582 elasticsearch 6955u  IPv6 396167082      0t0  TCP localhost:9200->localhost:42101 (ESTABLISHED)
    java    17582 elasticsearch 6956u  IPv6 396167085      0t0  TCP localhost:9200->localhost:42103 (ESTABLISHED)
    java    17582 elasticsearch 6972u  IPv6 396167099      0t0  TCP localhost:9200->localhost:42120 (ESTABLISHED)
    java    17582 elasticsearch 6973u  IPv6 396167101      0t0  TCP localhost:9200->localhost:42121 (ESTABLISHED)
    java    17582 elasticsearch 6974u  IPv6 396169012      0t0  TCP localhost:9200->localhost:42122 (ESTABLISHED)
    java    17582 elasticsearch 6975u  IPv6 396169014      0t0  TCP localhost:9200->localhost:42123 (ESTABLISHED)
    java    17582 elasticsearch 6976u  IPv6 396169016      0t0  TCP localhost:9200->localhost:42124 (ESTABLISHED)
    java    17918      logstash   43u  IPv6 396168013      0t0  TCP localhost:42094->localhost:9200 (ESTABLISHED)
    java    17918      logstash   44u  IPv6 396060623      0t0  TCP localhost:42101->localhost:9200 (ESTABLISHED)
    java    17918      logstash   45u  IPv6 396178530      0t0  TCP localhost:42103->localhost:9200 (ESTABLISHED)
    java    17918      logstash   46u  IPv6 396165809      0t0  TCP localhost:42107->localhost:9200 (ESTABLISHED)
    java    17918      logstash   47u  IPv6 396155511      0t0  TCP localhost:42104->localhost:9200 (ESTABLISHED)
    java    17918      logstash   51u  IPv6 396118384      0t0  TCP localhost:42112->localhost:9200 (ESTABLISHED)
    java    17918      logstash   52u  IPv6 396171847      0t0  TCP localhost:42109->localhost:9200 (ESTABLISHED)
    java    17918      logstash   53u  IPv6 396137975      0t0  TCP localhost:42110->localhost:9200 (ESTABLISHED)
    java    17918      logstash   54u  IPv6 396177333      0t0  TCP localhost:42111->localhost:9200 (ESTABLISHED)
    java    17918      logstash   55u  IPv6 396118385      0t0  TCP localhost:42113->localhost:9200 (ESTABLISHED)
    java    17918      logstash   59u  IPv6 396162483      0t0  TCP localhost:42117->localhost:9200 (ESTABLISHED)
    java    17918      logstash   60u  IPv6 396107470      0t0  TCP localhost:42120->localhost:9200 (ESTABLISHED)

I think the problem could be here.
Elasticsearch log:

[2016-07-04 12:22:02,389][WARN ][cluster.routing.allocation.decider] [Ultra-Marine] high disk watermark [90%] exceeded on [YMlUds73Q9WSvS2bk7WKAA][Ultra-Marine][/var/lib/elasticsearch/elasticsearch/nodes/0] free: 23.6gb[8.7%], shards will be relocated away from this node
[2016-07-04 12:22:02,389][INFO ][cluster.routing.allocation.decider] [Ultra-Marine] rerouting shards: [high disk watermark exceeded on one or more nodes]
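This is very likely the cause: with the default high disk watermark of 90% used, Elasticsearch starts relocating shards away from a node once its free space drops below 10%, and the log shows only 8.7% free. The arithmetic, as a sketch using the numbers from the log line above:

```ruby
# Numbers from the log line: 23.6 GB free, which is 8.7% of the disk.
free_gb  = 23.6
free_pct = 8.7

# Default cluster.routing.allocation.disk.watermark.high is 90% used,
# i.e. the node trips the watermark when less than 10% is free.
high_watermark_used_pct = 90.0
min_free_pct = 100.0 - high_watermark_used_pct

watermark_exceeded = free_pct < min_free_pct
puts watermark_exceeded   # true: 8.7% free is below the 10% required

# Approximate total disk size implied by the log line: ~271 GB.
total_gb = free_gb / (free_pct / 100.0)
puts total_gb.round       # 271
```

Freeing disk space (for example by deleting old indexes) clears the condition; the thresholds themselves can also be adjusted through the cluster settings API (`cluster.routing.allocation.disk.watermark.*`).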


I deleted all my Elasticsearch indexes to fix this:
curl -XDELETE 'http://localhost:9200/filebeat-*'
