How does Logstash work during a Tomcat restart?

I am using Logstash 1.5.4. The input plugin is jmx, which I use to monitor JVM stats.

The output plugin is elasticsearch, and I index the logs directly.

My application runs on a Tomcat server. Whenever I restart Tomcat, as soon as Tomcat is back up I get a "Failed to flush outgoing items" error. After that, everything works normally.

Could you shed some light on why this happens?
How can I avoid this error during a Tomcat server restart?

Please show the full error message.

What role does Tomcat serve here?

{:timestamp=>"2016-03-18T04:13:19.574000+0000", :message=>"Got error to send bulk of actions: <<My-ES-URL:80 failed to respond", :level=>:error}
{:timestamp=>"2016-03-18T04:13:19.574000+0000", :message=>"Failed to flush outgoing items", :outgoing_count=>1, :exception=>"Manticore::ClientProtocolException", :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.4.4-java/lib/manticore/response.rb:35:in `initialize'", "org/jruby/RubyProc.java:271:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.4.4-java/lib/manticore/response.rb:70:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.4.4-java/lib/manticore/response.rb:245:in `call_once'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.4.4-java/lib/manticore/response.rb:148:in `code'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.12/lib/elasticsearch/transport/transport/http/manticore.rb:71:in `perform_request'", "org/jruby/RubyProc.java:271:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.12/lib/elasticsearch/transport/transport/base.rb:190:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.12/lib/elasticsearch/transport/transport/http/manticore.rb:54:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.12/lib/elasticsearch/transport/client.rb:119:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-api-1.0.12/lib/elasticsearch/api/actions/bulk.rb:80:in `bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-1.0.7-java/lib/logstash/outputs/elasticsearch/protocol.rb:104:in `bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-1.0.7-java/lib/logstash/outputs/elasticsearch.rb:542:in `submit'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-1.0.7-java/lib/logstash/outputs/elasticsearch.rb:541:in `submit'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-1.0.7-java/lib/logstash/outputs/elasticsearch.rb:566:in `flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-1.0.7-java/lib/logstash/outputs/elasticsearch.rb:565:in `flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.21/lib/stud/buffer.rb:219:in `buffer_flush'", "org/jruby/RubyHash.java:1341:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.21/lib/stud/buffer.rb:216:in `buffer_flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.21/lib/stud/buffer.rb:193:in `buffer_flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.21/lib/stud/buffer.rb:159:in `buffer_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-1.0.7-java/lib/logstash/outputs/elasticsearch.rb:531:in `receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.4-java/lib/logstash/outputs/base.rb:88:in `handle'", "(eval):441:in `output_func'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.4-java/lib/logstash/pipeline.rb:244:in `outputworker'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.4-java/lib/logstash/pipeline.rb:166:in `start_outputs'"], :level=>:warn}

Tomcat is the application server. I am just trying to understand how Logstash behaves, i.e. what it does with the unprocessed events in the input queue and output queue, etc.

I'm not getting the full picture. Is Logstash sending data to Tomcat? If not, I don't understand why Logstash would hiccup when Tomcat is bounced.

Let me rephrase to avoid confusion about my setup.

  1. Tomcat is my web application server. The Tomcat web applications expose JMX on ports 1099/1199/1299 to provide JVM stats.

  2. Logstash receives input via the jmx input plugin and from catalina.out/other log files, and sends everything out via the elasticsearch output plugin (roughly as in the sketch below).
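
A minimal sketch of the pipeline config I am describing, assuming the standard jmx, file, and elasticsearch plugins for Logstash 1.5.x; the ES host, ports, and file paths are placeholders rather than my exact values:

    input {
      jmx {
        # Directory containing the JSON files that describe the Tomcat JMX
        # endpoints (the 1099/1199/1299 ports mentioned above).
        path => "/etc/logstash/jmx"
        polling_frequency => 15
        type => "jmx"
      }
      file {
        # catalina.out and other Tomcat log files.
        path => "/var/log/tomcat/catalina.out"
        type => "tomcat"
      }
    }

    output {
      elasticsearch {
        protocol => "http"
        host => "My-ES-URL"
        port => "80"
      }
    }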

Scenario 1: whenever Tomcat starts up, a bulk of events is sent through the Logstash input and subsequently indexed by the Elasticsearch output.

Error scenario: at times, when I restart Tomcat, the ES cluster throws a "Got error to send bulk of actions" error, which shows up in the Logstash log.

My assumption is that Logstash is holding the data in its pipeline and therefore cannot process it fast enough.
If Tomcat is stopped, will Logstash continue to push the items already in its pipeline to the output?
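
From the backtrace (stud/buffer.rb, buffer_flush) I gather that the elasticsearch output buffers events and sends them to ES in bulk requests, so the events from the startup burst sit in that buffer until a flush succeeds. If my assumption is right, these are the buffering/retry options on the elasticsearch output I would look at; this is a sketch only, with guessed values rather than my actual settings:

    output {
      elasticsearch {
        protocol => "http"
        host => "My-ES-URL"
        port => "80"
        flush_size => 500        # events buffered before a bulk request is sent
        idle_flush_time => 1     # seconds of inactivity before a partial buffer is flushed
        max_retries => 3         # retries for failed bulk actions before giving up
      }
    }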

It's very odd that restarting Tomcat causes ES to stop indexing; they aren't linked in any way, after all.

Can you show us the error you receive?

I have provided the log trace in my earlier comment.
I did not mean to say Tomcat is responsible for this error; it was an issue in the Logstash 1.5.4 version. But I wanted to understand this:
The Logstash input plugin depends on events that arise only while Tomcat is up.
Will a Tomcat restart affect Logstash processing in any way?

P.S.: I was trying to describe the scenario in which a Tomcat restart sends a bulk of events to the Logstash input, which brings the Logstash elasticsearch output plugin to a standstill.