Hi,
I just migrated from 2.4.x to 5.1.1 (Elasticsearch, Kibana and Logstash). I finally got everything up and running again, except for one Logstash process that uses the log4j input plugin.
Note that:
- my ELK processes are all running on 64-bit Linux
- the processes that generate events through log4j haven't changed
- my ELK infrastructure hasn't changed
- the actual Logstash xxx.conf file with the log4j input, filter and output hasn't changed
When I start this Logstash process now, it logs pretty standard startup information, but that's it: no output while it's running, and no errors.
I also tried starting Logstash with the same input section but no filter and a plain stdout rubydebug output, and it behaves in exactly the same way: nothing seems to be processed.
In other words, since moving to 5.1.1 the following configuration no longer produces any output, even though I'm sure events are being sent:
input {
  log4j {
    mode => "server"
    port => 23140
  }
}
output {
  stdout { codec => rubydebug }
}
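For completeness, the sending applications use a standard log4j 1.x SocketAppender pointed at this port, configured along these lines (the appender name and host below are placeholders, not my actual values):

```properties
# log4j 1.x SocketAppender feeding the Logstash log4j input
# (appender name and RemoteHost are placeholders)
log4j.rootLogger=INFO, logstash
log4j.appender.logstash=org.apache.log4j.net.SocketAppender
log4j.appender.logstash.RemoteHost=logstash.example.com
log4j.appender.logstash.Port=23140
log4j.appender.logstash.ReconnectionDelay=10000
```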
I then started Logstash at debug log level and can see that connections are established (which I had already verified with netstat), after which the following logging is repeated:
[2017-01-12T13:50:20,181][DEBUG][logstash.instrument.collector] Collector: Sending snapshot to observers {:created_at=>2017-01-12 13:50:20 +0100}
[2017-01-12T13:50:21,184][DEBUG][logstash.instrument.collector] Collector: Sending snapshot to observers {:created_at=>2017-01-12 13:50:21 +0100}
[2017-01-12T13:50:21,400][DEBUG][logstash.pipeline ] Pushing flush onto pipeline
Any help would be hugely appreciated, since my queue monitoring is now dead in the water.