Logstash exception in pipeline worker

I'm getting this error when Filebeat publishes events:

2017/05/24 15:20:00.468812 sync.go:85: ERR Failed to publish events caused by: read tcp 10.0.3.150:56617->167.114.251.27:19130: i/o timeout
2017/05/24 15:20:00.468844 single.go:91: INFO Error publishing events (retrying): read tcp 10.0.3.150:56617->167.114.251.27:19130: i/o timeout
2017/05/24 15:20:17.127813 metrics.go:39: INFO Non-zero metrics in the last 30s: libbeat.logstash.publish.read_errors=1 libbeat.logstash.publish.write_errors=1 libbeat.logstash.published_but_not_acked_events=4

As a result, the logs never make it to Kibana. Here is the error message from Logstash:

[[main]>worker11] ERROR logstash.pipeline - Exception in pipelineworker, the pipeline stopped processing new events, please check your filter configuration and restart Logstash. {"exception"=>"-1", "backtrace"=>["java.util.ArrayList.elementData(ArrayList.java:418)", "java.util.ArrayList.remove(ArrayList.java:495)", "org.logstash.FieldReference.parse(FieldReference.java:37)", "org.logstash.PathCache.cache(PathCache.java:37)", "org.logstash.PathCache.isTimestamp(PathCache.java:30)", "org.logstash.ext.JrubyEventExtLibrary$RubyEvent.ruby_set_field(JrubyEventExtLibrary.java:122)",

We just upgraded to the new version of the ELK stack, and the pipeline never broke in the old version. Is this a known bug, or is there something I need to change in my filter to make it work with ELK 5?

Logstash filter (as you can see, it's really simple, so I don't understand why it breaks):

filter {
  kv { 
    trim_key => "<>\[\],`\."
    remove_field => ["\\%{some_field}", "{%{some_field}"]
    include_brackets => false
  }

  #prune {
  #  blacklist_names => ["%{[^}]+}."]
  #}

  #ruby {
  #  code => "
  #    hashes_to_remove = []
  #    event_hash = event.to_hash
  #    event_hash.keys.each { |k| event_hash[k.gsub('.', '_')] = event_hash[k]; hashes_to_remove << k if k.include?('.') }
  #    hashes_to_remove.each { |field| event_hash.delete(field) }
  #    event.overwrite(LogStash::Event.new(event_hash))
  #  "
  #}
}
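For reference, the dot-to-underscore renaming that the commented-out ruby filter performs can be sketched on a plain Ruby Hash, outside Logstash, so it's easy to test in isolation. `sanitize_keys` is a hypothetical helper name; the renaming convention follows the commented code above:

```ruby
# Rename every key containing '.' to use '_' instead, since dotted field
# names are problematic in Logstash/Elasticsearch 5.x. Works on a plain
# Hash, mirroring what the ruby filter does with event.to_hash.
def sanitize_keys(event_hash)
  renamed = {}
  event_hash.each do |k, v|
    # gsub replaces all occurrences of '.' in the key with '_'
    renamed[k.gsub('.', '_')] = v
  end
  renamed
end
```

Inside an actual ruby filter you would then rebuild the event from the sanitized hash, roughly as the commented code does with `event.overwrite(LogStash::Event.new(...))`.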

Assuming you have multiple configs, can you run them one by one in debug mode, look at the outputs, and paste them here?

P.S. My guess is you are missing a plugin (you have to reinstall plugins after an upgrade).

Which configs? Logstash or Filebeat?

Let me explain a bit more. To make debugging and updating easier (in my opinion), I use multiple Logstash configs (windows, syslog, firewall, ...).

If you are also using multiple Logstash configs, run them one by one in debug mode to find the broken one :slight_smile: then share the error and the config.
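Something like this (a sketch only; the config file names are examples, so substitute your own, and `bin/logstash` assumes you run it from the Logstash install directory):

```shell
# Check each pipeline config on its own; stop at the first one that fails.
for cfg in windows.conf syslog.conf firewall.conf; do
  echo "Checking $cfg"
  # --config.test_and_exit parses the config and exits without running it;
  # --log.level=debug prints detailed output for tracking down the problem
  bin/logstash -f "$cfg" --config.test_and_exit --log.level=debug || break
done
```

That should point you at the config (and filter) that triggers the exception.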

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.