Hi Guys
I'm trying to set up an ELK cluster using the latest 2.0 releases. After about 15-20 minutes, Logstash seems unable to connect to Elasticsearch any longer and the Logstash logs get flooded with the messages below. Initially I thought the issue was with the cluster, so I started over with a fresh single-server install on Ubuntu 14.04, but the problem quickly recurred -
{:timestamp=>"2015-10-31T23:20:37.468000+0000", :message=>"Attempted to send a bulk request to Elasticsearch configured at '[\"http://localhost:9200/\"]', but an error occurred and it failed! Are you sure you can reach elasticsearch from this machine using the configuration provided?", :client_config=>{:hosts=>["http://localhost:9200/"], :ssl=>nil, :transport_options=>{:socket_timeout=>0, :request_timeout=>0, :proxy=>nil, :ssl=>{}}, :transport_class=>Elasticsearch::Transport::Transport::HTTP::Manticore, :logger=>nil, :tracer=>nil, :reload_connections=>false, :retry_on_failure=>false, :reload_on_failure=>false, :randomize_hosts=>false}, :error_message=>"Cannot find Serializer for class: org.jruby.RubyObject", :error_class=>"JrJackson::ParseError", :backtrace=>["com/jrjackson/JrJacksonBase.java:83:in `generate'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/jrjackson-0.3.6/lib/jrjackson/jrjackson.rb:59:in `dump'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/multi_json-1.11.2/lib/multi_json/adapters/jr_jackson.rb:20:in `dump'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/multi_json-1.11.2/lib/multi_json/adapter.rb:25:in `dump'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/multi_json-1.11.2/lib/multi_json.rb:136:in `dump'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-api-1.0.14/lib/elasticsearch/api/utils.rb:102:in `__bulkify'", <SNIP>
{:timestamp=>"2015-10-31T23:20:37.469000+0000", :message=>"Failed to flush outgoing items", :outgoing_count=>1, :exception=>"JrJackson::ParseError", :backtrace=>["com/jrjackson/JrJacksonBase.java:83:in `generate'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/jrjackson-0.3.6/lib/jrjackson/jrjackson.rb:59:in `dump'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/multi_json-1.11.2/lib/multi_json/adapters/jr_jackson.rb:20:in `dump'", <SNIP>
There are no errors in the Elasticsearch logs, and Kibana shows a green status. The only visible sign of a problem is that logs stop appearing in Kibana.
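To double-check that Elasticsearch is reachable from the same machine while the errors are occurring, it can be queried directly, e.g.:

# cluster health and index list, run from the Logstash box
curl -s 'localhost:9200/_cluster/health?pretty'
curl -s 'localhost:9200/_cat/indices?v'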
My setup is quite basic: Logstash receives syslog on ports 1514 and 1515. I've done it this way because I want a self-contained config file for each device (with its own input, filter and output). I'm only testing this with two config files at the moment, shown below, with a note after them on how they're loaded (there is probably a better way of doing this, but I'm still learning):
Fortigate.conf -
input {
  syslog {
    type => "syslog"
    port => 1514
    # tag events from this input so the matching filter/output sections apply
    add_field => ["parser", "fortigate"]
  }
}

filter {
  if [parser] == "fortigate" {
    # strip the syslog priority (<nnn>) off the front and keep the rest in "msg"
    grok { match => [ "message", "<(?<ruleID>.*)>(?<msg>.*)" ] }
    # FortiGate logs are key=value pairs
    kv { source => "msg" }
    geoip {
      source => "srcip"
      target => "src_geoip"
      database => "/etc/logstash/GeoLiteCity.dat"
    }
    geoip {
      source => "dstip"
      target => "dst_geoip"
      database => "/etc/logstash/GeoLiteCity.dat"
    }
  }
}

output {
  if [parser] == "fortigate" {
    elasticsearch {
      # daily index, "dd" = day of month
      index => "logstash-%{+YYYY.MM.dd}"
    }
  }
}
Fortimail.conf -
input {
  syslog {
    type => "syslog"
    port => 1515
    # tag events from this input so the matching filter/output sections apply
    add_field => ["parser", "fortimail"]
  }
}

filter {
  if [parser] == "fortimail" {
    # strip the syslog priority (<nnn>) off the front and keep the rest in "msg"
    grok { match => [ "message", "<(?<ruleID>.*)>(?<msg>.*)" ] }
    # FortiMail logs are key=value pairs
    kv { source => "msg" }
    # pull the client IP out of the quoted client_name="host [1.2.3.4]" value
    grok { match => [ "msg", "client_name=\".*\[(?<srcip>.*?)\].*?\"" ] }
    # use the same field name as the FortiGate config
    mutate { rename => { "dst_ip" => "dstip" } }
    geoip {
      source => "srcip"
      target => "src_geoip"
      database => "/etc/logstash/GeoLiteCity.dat"
    }
    geoip {
      source => "dstip"
      target => "dst_geoip"
      database => "/etc/logstash/GeoLiteCity.dat"
    }
  }
}

output {
  if [parser] == "fortimail" {
    elasticsearch {
      # daily index, "dd" = day of month
      index => "logstash-%{+YYYY.MM.dd}"
    }
  }
}
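For completeness, both files live in Logstash's config directory and get loaded together; this is roughly how I check the syntax and restart (paths assume the standard Debian package install under /etc/logstash/conf.d and /opt/logstash):

ls /etc/logstash/conf.d/
# Fortigate.conf  Fortimail.conf

# syntax check across the whole directory, then restart the service
/opt/logstash/bin/logstash --configtest -f /etc/logstash/conf.d/
sudo service logstash restart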
Could anybody please help shed some light on this? I've spent the best part of 3 days on it and it's driving me crazy.