Updated to 1.6.1 and seeing an issue. Time to jump to 1.7?


(Brian Dunbar) #1

I have four ES nodes, one of which is a client node that streams archived log data into the cluster.

After upgrading all nodes to 1.6.1 [1], I restarted my intake script. While it appears to be adding data, it's also feeding this back to stdout. Is this a problem?

Should I make the lightspeed jump to 1.7? Or move everyone back to 1.6.0?

Jul 21 11:53:44 ris-webstats01 muo_month_3_cache.sh: /stud/buffer.rb:216:in `buffer_flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.19/lib/stud/buffer.rb:193:in `buffer_flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.19/lib/stud/buffer.rb:159:in `buffer_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.2.4-java/lib/logstash/outputs/elasticsearch.rb:426:in `receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.0-java/lib/logstash/outputs/base.rb:88:in `handle'", "(eval):159:in `output_func'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.0-java/lib/logstash/pipeline.rb:244:in `outputworker'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.0-java/lib/logstash/pipeline.rb:166:in `start_outputs'"], :level=>:warn}

[1] I added a third data node, which yum helpfully installed at 1.6.1; this caused issues with shards. The previous three nodes had been installed, weeks before, at 1.6.0.
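For anyone hitting the same shard issues after a mixed-version install, the _cat APIs give a quick view of what's unassigned (a sketch; `localhost:9200` is an assumption, point it at any node in your cluster):

```shell
# Overall cluster health (status, node count, unassigned shard count)
curl -s 'http://localhost:9200/_cat/health?v'

# List only the shards that are not in the STARTED state
curl -s 'http://localhost:9200/_cat/shards?v' | grep -v STARTED
```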


(Mark Walkom) #2

That's a Logstash problem, not an ES one.

What version of LS are you on?


(Brian Dunbar) #3
# /opt/logstash/bin/logstash --version
logstash 1.5.0

(Mark Walkom) #4

Upgrade LS to >1.5.2; there are some fixes there that will help.


(Brian Dunbar) #5

Yum offered to update me to 1.5.3. After I did so, this was thrown into /var/log/logstash. It seems to be feeding the bits into Elasticsearch okay...

 cat logstash.err
 [DEPRECATED] use `require 'concurrent'` instead of `require 'concurrent_ruby'`
