spuder (Spuder)
May 20, 2015, 6:29pm
#4
I've seen the same issue, and it appears that others have encountered it as well.
Try reinstalling your plugins.
The frequency has dropped since I upgraded to 1.5rc4.
Related GitHub issue (opened 13 Apr 2015, closed 11 May 2015, label: bug):
I've encountered this problem 3 times now over the course of a month.
A brand new install of logstash 1.5rc2 on Ubuntu 14.04 will get into a state where I cannot stop or restart the process (installed with Chef using community cookbooks).
When attempting to restart logstash, it times out and says 'got TERM':
```
service logstash_server restart
timeout: run: logstash_server: (pid 11797) 839785s, got TERM
```
I've also asked on the Unix & Linux Stack Exchange for help identifying the problem:
http://unix.stackexchange.com/questions/195998/how-to-identify-why-a-process-wont-die
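The situation in that question can be reproduced with a throwaway process. This is a minimal sketch (using a `sh` child that ignores TERM, not logstash itself) showing why a "got TERM" message doesn't guarantee the process actually exits:

```shell
# Start a child that ignores TERM, mimicking a process that "got TERM"
# but never shuts down. (Illustration only; not logstash itself.)
sh -c 'trap "" TERM; sleep 5' >/dev/null 2>&1 &
pid=$!

kill -TERM "$pid"        # delivered, but the trap discards it
sleep 1

# kill -0 sends no signal; it only checks that the process still exists
if kill -0 "$pid" 2>/dev/null; then
  echo "still alive after TERM"
fi

kill -KILL "$pid"        # KILL cannot be caught or ignored
wait "$pid" 2>/dev/null || true
echo "gone after KILL"
```

In a real hang, `kill -0 <pid>` similarly confirms the PID still exists, and `ps -o pid,stat,wchan -p <pid>` shows whether the process is stuck in uninterruptible sleep (state `D`), where even SIGKILL is deferred until the blocking kernel call returns.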
I've been encountering the same problem with logstash 1.5rc4 and kafka 0.8.2. I installed logstash from the Chef cookbook.
For two days, logstash would not read any data from Kafka when started as a service, but when I started it from the command line, the data streamed in correctly.
I rebooted my Kafka servers, and the problem appears to have mostly gone away. I suspect it was a ZooKeeper-related problem.
http://stackoverflow.com/questions/29276912/kafka-suddenly-reset-the-consumer-offset#…
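When offsets do jump like that, Kafka 0.8.x ships a tool for inspecting the consumer offsets stored in ZooKeeper. A hedged sketch; the group and topic names below are guesses (substitute the consumer group your logstash kafka input actually uses):

```
# Run from the Kafka installation directory on a broker host.
# ConsumerOffsetChecker is part of Kafka 0.8.x; group/topic are assumptions.
bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker \
  --zookeeper swat-zoo05.example.com:2181/kafka \
  --group logstash \
  --topic foobar
```

Comparing the reported offset and lag per partition before and after a restart would confirm (or rule out) an offset reset as the cause.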
GitHub issue (opened 20 Apr 2015, closed 18 May 2015, label: bug):
This issue is related to #2992.
If logstash 1.5rc2 encounters a problem writing to elasticsearch, it will cause logstash to crash.
Full logs are shown here:
http://pastebin.com/tdy8KWay
The interesting lines are:
```
{:timestamp=>"2015-04-17T11:34:24.192000-0600", :message=>"Got error to send bulk of actions to elasticsearch server at swat-elasticsearchpool.ndlab.local : Read timed out", :level=>:error}
{:timestamp=>"2015-04-17T11:34:24.193000-0600", :message=>"Failed to flush outgoing items", :outgoing_count=>5000, :exception=>#<Manticore::Timeout: Read timed out>, :backtrace=>["/opt/logstash/server/vendor/bundle/jruby/1.9/gems/manticore-0.3.5-java/lib/manticore/response.rb:35:in `initialize'", "org/jruby/RubyProc.java:271:in `call'", "/opt/logstash/server/vendor/bundle/jruby/1.9/gems/manticore-0.3.5-java/lib/manticore/response.rb:61:in `call'", "/opt/logstash/server/vendor/bundle/jruby/1.9/gems/manticore-0.3.5-java/lib/manticore/response.rb:224:in `call_once'", "/opt/logstash/server/vendor/bundle/jruby/1.9/gems/manticore-0.3.5-java/lib/manticore/response.rb:127:in `code'", "/opt/logstash/server/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.7/lib/elasticsearch/transport/transport/http/manticore.rb:50:in `perform_request'", "org/jruby/RubyProc.java:271:in `call'", "/opt/logstash/server/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.7/lib/elasticsearch/transport/transport/base.rb:187:in `perform_request'", "/opt/logstash/server/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.7/lib/elasticsearch/transport/transport/http/manticore.rb:33:in `perform_request'", "/opt/logstash/server/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.7/lib/elasticsearch/transport/client.rb:115:in `perform_request'", "/opt/logstash/server/vendor/bundle/jruby/1.9/gems/elasticsearch-api-1.0.7/lib/elasticsearch/api/actions/bulk.rb:80:in `bulk'", "/opt/logstash/server/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.1.18-java/lib/logstash/outputs/elasticsearch/protocol.rb:82:in `bulk'", "/opt/logstash/server/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.1.18-java/lib/logstash/outputs/elasticsearch.rb:413:in `submit'", "/opt/logstash/server/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.1.18-java/lib/logstash/outputs/elasticsearch.rb:412:in `submit'", 
"/opt/logstash/server/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.1.18-java/lib/logstash/outputs/elasticsearch.rb:438:in `flush'", "/opt/logstash/server/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.1.18-java/lib/logstash/outputs/elasticsearch.rb:436:in `flush'", "/opt/logstash/server/vendor/bundle/jruby/1.9/gems/stud-0.0.19/lib/stud/buffer.rb:219:in `buffer_flush'", "org/jruby/RubyHash.java:1341:in `each'", "/opt/logstash/server/vendor/bundle/jruby/1.9/gems/stud-0.0.19/lib/stud/buffer.rb:216:in `buffer_flush'", "/opt/logstash/server/vendor/bundle/jruby/1.9/gems/stud-0.0.19/lib/stud/buffer.rb:193:in `buffer_flush'", "/opt/logstash/server/vendor/bundle/jruby/1.9/gems/stud-0.0.19/lib/stud/buffer.rb:159:in `buffer_receive'", "/opt/logstash/server/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.1.18-java/lib/logstash/outputs/elasticsearch.rb:402:in `receive'", "/opt/logstash/server/lib/logstash/outputs/base.rb:88:in `handle'", "(eval):233:in `initialize'", "org/jruby/RubyProc.java:271:in `call'", "/opt/logstash/server/lib/logstash/pipeline.rb:279:in `output'", "/opt/logstash/server/lib/logstash/pipeline.rb:235:in `outputworker'", "/opt/logstash/server/lib/logstash/pipeline.rb:163:in `start_outputs'"], :level=>:warn}
{:timestamp=>"2015-04-17T14:09:04.890000-0600", :message=>"SIGTERM received. Shutting down the pipeline.", :level=>:warn}
```
Because of issue #2992, the logstash service will fail to be restarted by the upstart watchdog on Ubuntu 14.04.
The config files are:
```
input {
  kafka {
    zk_connect => 'swat-zoo05.example.com:2181/kafka'
    consumer_threads => 3
    topic_id => 'foobar'
  }
}

filter {
  mutate {
    gsub => [ "[id][batchID]", "\D", "" ]
  }
  mutate {
    convert => [ "[id][batchID]", "integer" ]
  }
}

filter {
  mutate {
    gsub => [ "[id][docID]", "\D", "" ]
  }
  mutate {
    convert => [ "[id][docID]", "integer" ]
  }
}

output {
  elasticsearch {
    host => 'swat-elasticsearchpool.example.com'
    cluster => 'foobar-elastic'
    embedded => false
    protocol => 'http'
  }
}
```
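The `:outgoing_count=>5000` in the failure log matches the elasticsearch output's default bulk size (`flush_size`, default 5000). One mitigation to try is smaller, more frequent bulks, so each HTTP request finishes before the read timeout fires. A sketch only; the values are starting points to tune, not recommendations:

```
output {
  elasticsearch {
    host => 'swat-elasticsearchpool.example.com'
    protocol => 'http'
    flush_size => 500      # default 5000; smaller bulks complete faster
    idle_flush_time => 5   # seconds before a partial batch is flushed (default 1)
  }
}
```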