I am using Logstash 1.5.4. The input plugin is JMX (to monitor JVM stats), and the output plugin is Elasticsearch; I index the logs directly.
My application runs on Tomcat. Whenever I restart Tomcat, I get a "Failed to flush outgoing items" error as soon as Tomcat is up, but after that it works normally.
Can someone shed some light on why this happens?
How can I avoid this error during a Tomcat restart?
Tomcat is the application server. I am just trying to understand how Logstash behaves here, e.g. what it does with the unprocessed events in its input and output queues.
Let me rephrase to avoid confusion about my earlier statement.
Tomcat is my web application server. The Tomcat web applications expose JMX on ports 1099/1199/1299 to publish JVM stats.
Logstash receives input from the JMX input plugin and from catalina.out/other log files, and ships everything via the elasticsearch output plugin.
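For reference, here is a minimal sketch of that pipeline. The paths, port, and ES host below are illustrative placeholders, not my exact settings:

```
input {
  # logstash-input-jmx polls MBeans described by JSON files in `path`.
  # Each JSON file points at a JMX endpoint, e.g.:
  #   { "host": "localhost", "port": 1099,
  #     "queries": [ { "object_name": "java.lang:type=Memory",
  #                    "object_alias": "Memory" } ] }
  jmx {
    path              => "/etc/logstash/jmx"   # directory of JMX query files (illustrative)
    polling_frequency => 15
    type              => "jmx"
  }
  file {
    path => "/opt/tomcat/logs/catalina.out"    # illustrative path
    type => "tomcat"
  }
}

output {
  elasticsearch {
    host => "localhost"   # `host` is the 1.5.x option name
  }
}
```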
Scenario 1: whenever Tomcat starts up, a burst of events is sent through the Logstash input and subsequently indexed by the ES output.
Error scenario: at times, when I restart Tomcat, an "unable to send bulk items" error from the ES cluster is thrown; this error shows up in the Logstash log.
My assumption is that Logstash is holding the data in its pipeline (in 1.5.x the stages are connected by small fixed-size in-memory queues), so when the output cannot flush fast enough, the whole pipeline backs up and cannot process data any faster.
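If the startup burst is what overwhelms the output, the bulk/retry knobs of the 1.5.x elasticsearch output may be worth experimenting with. A hedged sketch; the values are starting points to tune, not recommendations:

```
output {
  elasticsearch {
    host               => "localhost"
    flush_size         => 500   # send smaller bulk requests per flush
    idle_flush_time    => 1     # seconds before a partial batch is flushed
    max_retries        => 5     # re-submit failed bulk requests a few more times
    retry_max_interval => 5     # seconds to wait between retries
  }
}
```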
If Tomcat is stopped, will Logstash continue to process the already-queued items through to the output?
I have provided the log trace in my earlier comment.
I did not mean to say Tomcat is responsible for this error; it was an issue in the Logstash 1.5.4 release. But I wanted to understand this:
The Logstash input plugin depends on events that arise only while Tomcat is up.
Will a Tomcat restart affect Logstash processing in any way?
P.S.: I was describing the scenario where a Tomcat restart sends a burst of events to the Logstash input, which brings the Logstash elasticsearch output plugin to a standstill.