I'm getting this error when Filebeat tries to publish events:
2017/05/24 15:20:00.468812 sync.go:85: ERR Failed to publish events caused by: read tcp 10.0.3.150:56617->167.114.251.27:19130: i/o timeout
2017/05/24 15:20:00.468844 single.go:91: INFO Error publishing events (retrying): read tcp 10.0.3.150:56617->167.114.251.27:19130: i/o timeout
2017/05/24 15:20:17.127813 metrics.go:39: INFO Non-zero metrics in the last 30s: libbeat.logstash.publish.read_errors=1 libbeat.logstash.publish.write_errors=1 libbeat.logstash.published_but_not_acked_events=4
As a result, Kibana isn't receiving any logs. Here is the error message from Logstash:
[[main]>worker11] ERROR logstash.pipeline - Exception in pipelineworker, the pipeline stopped processing new events, please check your filter configuration and restart Logstash. {"exception"=>"-1", "backtrace"=>["java.util.ArrayList.elementData(ArrayList.java:418)", "java.util.ArrayList.remove(ArrayList.java:495)", "org.logstash.FieldReference.parse(FieldReference.java:37)", "org.logstash.PathCache.cache(PathCache.java:37)", "org.logstash.PathCache.isTimestamp(PathCache.java:30)", "org.logstash.ext.JrubyEventExtLibrary$RubyEvent.ruby_set_field(JrubyEventExtLibrary.java:122)",
We just upgraded to the new version of the ELK stack, and the pipeline never broke on the old version. Is this a known bug, or is there something I need to change in my filter to make it work with ELK 5?
Here is my Logstash filter (as you can see, it's really simple, so I don't understand why it breaks):
filter {
  kv {
    trim_key => "<>\[\],`\."
    remove_field => ["\\%{some_field}", "{%{some_field}"]
    include_brackets => false
  }
  #prune {
  #  blacklist_names => ["%{[^}]+}."]
  #}
  #ruby {
  #  code => "
  #    hashes_to_remove = []
  #    event_hash = event.to_hash
  #    event_hash.keys.each { |k| event_hash[k.gsub('.', '_')] = event_hash[k]; hashes_to_remove << k if k.include?('.') }
  #    hashes_to_remove.each { |field| event_hash.delete(field) }
  #    event.overwrite(LogStash::Event.new(event_hash))
  #  "
  #}
}
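In case it helps with reproducing, this is roughly how I've been trying to test the filter in isolation. It's a minimal sketch: my real pipeline uses a beats input and an Elasticsearch output, so the stdin/stdout plugins and the test.conf file name here are just for testing, not my actual config:

test.conf:

input {
  stdin { }
}

filter {
  # same kv filter as in my real pipeline
  kv {
    trim_key => "<>\[\],`\."
    remove_field => ["\\%{some_field}", "{%{some_field}"]
    include_brackets => false
  }
}

output {
  # print the parsed event so I can see what fields the kv filter produced
  stdout { codec => rubydebug }
}

I then run bin/logstash -f test.conf and paste a sample key=value log line on stdin to see what the filter does with it.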