Using Logstash version 6.2.4.
I have a rather complex pipeline that crashes when I use the elasticsearch {} output, but works fine when I use stdout { codec => rubydebug }.
Around 115k documents are being output, and they're quite simple: almost no free text, mostly attribute values.
I watched `/_node/stats/events` and saw out stay at 0, then all of a sudden it crashed. It didn't ramp up or anything; it just crashed immediately.
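For reference, I was polling the stats endpoint like this (assuming the default monitoring API port of 9600):

curl -s http://localhost:9600/_node/stats/events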
I tried running with -b 250, -b 125, and -b 75.
I tried setting -Xms and -Xmx to 3g and it didn't crash, but I'm not sure this amount of data should need that much heap. Can I throttle this plugin? Maybe change the batch size, or the number of workers? I'm lost.
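In case it helps, this is roughly how I've been adjusting those settings. As far as I understand, the logstash.yml keys below are what the -b and -w flags map to, and the heap is set in jvm.options (the worker count here is just an example value):

# config/logstash.yml -- equivalent of the -b / -w command-line flags
pipeline.batch.size: 125
pipeline.workers: 2

# config/jvm.options -- where the heap size is set
-Xms3g
-Xmx3g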
The output configuration:
elasticsearch {
  hosts => "localhost:9200"
  user => "<USER>"
  password => "<PASSWORD>"
  action => "update"
  index => "<index>"
  document_id => "%{id}"
  scripted_upsert => true
  script => '
    ctx._source["dummy"] = "dummy" // Just so the script will be compiled
  '
}
I don't get many logs, but here's what I have:
java.lang.OutOfMemoryError: Java heap space
Dumping heap to java_pid30485.hprof ...
Heap dump file created [1499118783 bytes in 9.915 secs]
2018-05-16 16:21:34 +0300: Listen loop error: #<IOError: closed stream>
org/jruby/RubyIO.java:3405:in `select'
/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/puma-2.16.0-java/lib/puma/server.rb:322:in `handle_servers'
/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/puma-2.16.0-java/lib/puma/server.rb:296:in `block in run'
2018-05-16 16:21:34 +0300: Listen loop error: #<IOError: closed stream>
org/jruby/RubyIO.java:3405:in `select'
/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/puma-2.16.0-java/lib/puma/server.rb:322:in `handle_servers'
/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/puma-2.16.0-java/lib/puma/server.rb:296:in `block in run'
2018-05-16 16:21:34 +0300: Listen loop error: #<IOError: closed stream>
org/jruby/RubyIO.java:3405:in `select'