Hi,
I am trying to configure an ELK stack that handles multiline log entries. The layout is:
Logstash Forwarder --> Logstash --> Elasticsearch --> Kibana
Some of our logs will be multiline, so I want ELK to merge those entries into a single event. This is what my filter section looks like:
filter {
  if [type] == "InputLog" {
    fingerprint {
      method => "UUID"
    }
    multiline {
      pattern => "^%{TIMESTAMP_ISO8601}"
      negate => true
      what => "previous"
      periodic_flush => true
    }
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:loglevel} \[%{DATA:thread}\] \(%{JAVACLASS:javaclass}.%{NUMBER:line}\) \- %{GREEDYDATA:ebi_message}" }
    }
    syslog_pri { }
    kv {
      source => "@message"
    }
  }
}
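To illustrate what I expect the multiline filter to do: any line that does not start with an ISO8601 timestamp should be appended to the previous event. So a log like the following (made-up content, but the same shape as our real logs) should become one single event:

2015-08-07 18:59:01,123 ERROR [main] (com.example.Foo.42) - Something failed
java.lang.NullPointerException
    at com.example.Foo.bar(Foo.java:42)
    at com.example.Foo.main(Foo.java:17)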
The problem is that with this configuration a lot of logs are lost: they show up on stdout (I have both stdout and Elasticsearch as outputs), but they never make it to Elasticsearch. In fact, the whole stack often seems to crash and I need to restart the services. The events look fine inside Logstash (at least from what I can see in logstash.stdout); for some reason they just never reach Elasticsearch.
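For completeness, my output section is essentially the following (host and index values are placeholders, not my real settings):

output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    host => "localhost"
    protocol => "http"
  }
}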
The issue happens every time, but I could not find a pattern in which event is the last one to reach Elasticsearch; it is not simply the first multiline event that triggers the problem. Of course, if I remove the multiline filter, all logs reach Elasticsearch as expected.
Not sure if this is helpful, but the following error appears repeatedly in logstash.log:
{:timestamp=>"2015-08-07T19:02:32.306000+0000", :message=>"Got error to send bulk of actions: no method 'type' for arguments (org.jruby.RubyArray) on Java::OrgElasticsearchActionIndex::IndexRequest", :level=>:error}
{:timestamp=>"2015-08-07T19:02:32.307000+0000", :message=>"Failed to flush outgoing items", :outgoing_count=>60, :exception=>#<NameError: no method 'type' for arguments (org.jruby.RubyArray) on Java::OrgElasticsearchActionIndex::IndexRequest>, :backtrace=>[
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-1.0.5-java/lib/logstash/outputs/elasticsearch/protocol.rb:262:in `build_request'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-1.0.5-java/lib/logstash/outputs/elasticsearch/protocol.rb:223:in `bulk'",
  "org/jruby/RubyArray.java:1613:in `each'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-1.0.5-java/lib/logstash/outputs/elasticsearch/protocol.rb:222:in `bulk'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-1.0.5-java/lib/logstash/outputs/elasticsearch.rb:519:in `submit'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-1.0.5-java/lib/logstash/outputs/elasticsearch.rb:518:in `submit'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-1.0.5-java/lib/logstash/outputs/elasticsearch.rb:543:in `flush'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-1.0.5-java/lib/logstash/outputs/elasticsearch.rb:542:in `flush'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.20/lib/stud/buffer.rb:219:in `buffer_flush'",
  "org/jruby/RubyHash.java:1341:in `each'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.20/lib/stud/buffer.rb:216:in `buffer_flush'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.20/lib/stud/buffer.rb:193:in `buffer_flush'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.20/lib/stud/buffer.rb:112:in `buffer_initialize'",
  "org/jruby/RubyKernel.java:1511:in `loop'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.20/lib/stud/buffer.rb:110:in `buffer_initialize'"
], :level=>:warn}
Any ideas on the possible cause of this issue?
Thanks!