Hi,
We finally managed to reproduce the error in our environment again.
Searching our logs, we found traces like this one:
```
[2017-02-07T08:18:29,862][WARN ][logstash.outputs.redis ] DEBUG [2017-02-06 17:21:02,288] - LlamadaWidPadre.callHost[36] Parametres crida - Entorno:VW cia:ALZ modHost:UL03CO00, :identity=>"default", :exception=>#<Redis::TimeoutError: Connection timed out>, :backtrace=>[
  "base_path/tools/logstash/vendor/bundle/jruby/1.9/gems/redis-3.3.2/lib/redis/connection/ruby.rb:111:in `_write_to_socket'",
  "base_path/tools/logstash/vendor/bundle/jruby/1.9/gems/redis-3.3.2/lib/redis/connection/ruby.rb:105:in `_write_to_socket'",
  "base_path/tools/logstash/vendor/bundle/jruby/1.9/gems/redis-3.3.2/lib/redis/connection/ruby.rb:131:in `write'",
  "org/jruby/RubyKernel.java:1479:in `loop'",
  "base_path/tools/logstash/vendor/bundle/jruby/1.9/gems/redis-3.3.2/lib/redis/connection/ruby.rb:130:in `write'",
  "base_path/tools/logstash/vendor/bundle/jruby/1.9/gems/redis-3.3.2/lib/redis/connection/ruby.rb:374:in `write'",
  "base_path/tools/logstash/vendor/bundle/jruby/1.9/gems/redis-3.3.2/lib/redis/client.rb:271:in `write'",
  "base_path/tools/logstash/vendor/bundle/jruby/1.9/gems/redis-3.3.2/lib/redis/client.rb:250:in `io'",
  "base_path/tools/logstash/vendor/bundle/jruby/1.9/gems/redis-3.3.2/lib/redis/client.rb:269:in `write'",
  "base_path/tools/logstash/vendor/bundle/jruby/1.9/gems/redis-3.3.2/lib/redis/client.rb:228:in `process'",
  "org/jruby/RubyArray.java:1613:in `each'",
  "base_path/tools/logstash/vendor/bundle/jruby/1.9/gems/redis-3.3.2/lib/redis/client.rb:222:in `process'",
  "base_path/tools/logstash/vendor/bundle/jruby/1.9/gems/redis-3.3.2/lib/redis/client.rb:367:in `ensure_connected'",
  "base_path/tools/logstash/vendor/bundle/jruby/1.9/gems/redis-3.3.2/lib/redis/client.rb:221:in `process'",
  "base_path/tools/logstash/vendor/bundle/jruby/1.9/gems/redis-3.3.2/lib/redis/client.rb:306:in `logging'",
  "base_path/tools/logstash/vendor/bundle/jruby/1.9/gems/redis-3.3.2/lib/redis/client.rb:220:in `process'",
  "base_path/tools/logstash/vendor/bundle/jruby/1.9/gems/redis-3.3.2/lib/redis/client.rb:120:in `call'",
  "base_path/tools/logstash/vendor/bundle/jruby/1.9/gems/redis-3.3.2/lib/redis.rb:1070:in `rpush'",
  "base_path/tools/logstash/vendor/bundle/jruby/1.9/gems/redis-3.3.2/lib/redis.rb:58:in `synchronize'",
  "base_path/tools/logstash/vendor/jruby/lib/ruby/1.9/monitor.rb:211:in `mon_synchronize'",
  "base_path/tools/logstash/vendor/jruby/lib/ruby/1.9/monitor.rb:210:in `mon_synchronize'",
  "base_path/tools/logstash/vendor/bundle/jruby/1.9/gems/redis-3.3.2/lib/redis.rb:58:in `synchronize'",
  "base_path/tools/logstash/vendor/bundle/jruby/1.9/gems/redis-3.3.2/lib/redis.rb:1069:in `rpush'",
  "base_path/tools/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-redis-3.0.3/lib/logstash/outputs/redis.rb:244:in `send_to_redis'",
  "org/jruby/RubyProc.java:281:in `call'",
  "base_path/tools/logstash/vendor/bundle/jruby/1.9/gems/logstash-codec-json-3.0.2/lib/logstash/codecs/json.rb:42:in `encode'",
  "base_path/tools/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-redis-3.0.3/lib/logstash/outputs/redis.rb:150:in `receive'",
  "base_path/tools/logstash/logstash-core/lib/logstash/outputs/base.rb:92:in `multi_receive'",
  "org/jruby/RubyArray.java:1613:in `each'",
  "base_path/tools/logstash/logstash-core/lib/logstash/outputs/base.rb:92:in `multi_receive'",
  "base_path/tools/logstash/logstash-core/lib/logstash/output_delegator_strategies/legacy.rb:19:in `multi_receive'",
  "base_path/tools/logstash/logstash-core/lib/logstash/output_delegator.rb:42:in `multi_receive'",
  "base_path/tools/logstash/logstash-core/lib/logstash/pipeline.rb:331:in `output_batch'",
  "org/jruby/RubyHash.java:1342:in `each'",
  "base_path/tools/logstash/logstash-core/lib/logstash/pipeline.rb:330:in `output_batch'",
  "base_path/tools/logstash/logstash-core/lib/logstash/pipeline.rb:288:in `worker_loop'",
  "base_path/tools/logstash/logstash-core/lib/logstash/pipeline.rb:258:in `start_workers'"]}
```
The problem isn't directly visible in the raw trace, so I'm attaching the following picture:
The real problem is that our Logstash agent gets stuck in an infinite loop whenever a trace contains runs of characters like [NULNULNULNULNULNULNULNULNULNUL.....].
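As a possible workaround on our side (just a sketch, assuming the NUL bytes arrive inside the `message` field and that `mutate`'s `gsub` accepts the `\u0000` regex escape in our Logstash version), we are thinking about stripping them before any pattern matching runs:

```
filter {
  # Sketch of a workaround: remove NUL (\u0000) bytes from the message
  # before the grok conditionals, so the regex engine never sees them.
  mutate {
    gsub => [ "message", "\u0000", "" ]
  }
}
```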
Our filter section is:
```
filter {
  if [message] =~ /(INFO|ERROR|DEBUG|WARN|FATAL|TRACE|SEVERE|NOTICE)(\s*\[20)/ {
    grok {
      patterns_dir => ["${LOGSTASH_PATH_GROK_PATTERNS}"]
      match => { "message" => "%{LOGLEVEL:loglevel}(\s*)\[?%{TIMESTAMP_ISO8601:mytimestamp}" }
    }
  } else if [message] =~ /(INFO|ERROR|DEBUG|WARN|FATAL|TRACE|SEVERE|NOTICE)(\s*\[+[a-zA-Z]+)/ {
    grok {
      patterns_dir => ["${LOGSTASH_PATH_GROK_PATTERNS}"]
      match => { "message" => "%{LOGLEVEL:loglevel}(\s*)(\[)?(\[(.*?)\])+(\])?(\s*)\[%{TIMESTAMP_ISO8601:mytimestamp}" }
    }
  } else if [message] =~ /\d{4}-\d{2}-\d{2}/ {
    grok {
      patterns_dir => ["${LOGSTASH_PATH_GROK_PATTERNS}"]
      match => { "message" => "%{TIMESTAMP_ISO8601:mytimestamp}" }
    }
  }
  date {
    match => [ "mytimestamp", "YYYY-MM-dd HH:mm:ss,SSS" ]
    locale => "en"
  }
  ruby {
    code => "event.set('lag_seconds', Time.now.to_f - event.get('@timestamp').to_f)"
  }
  if [lag_seconds] > 1296000 {
    drop { }
  }
  mutate {
    remove_field => [ "mytimestamp", "type" ]
  }
}
```
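For reference, the 1296000-second threshold in the `lag_seconds` check is 15 days (15 × 24 × 3600 = 1296000): any event whose parsed timestamp lags more than 15 days behind the current time is dropped.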
This is why the socket timeout exception appears: the socket remains open, but Logstash, trapped in the infinite loop, never writes to it.
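One idea we are already considering (assuming the grok filter version bundled with our Logstash supports the `timeout_millis` option) is to cap how long a single match may run, so that a pathological message cannot block the worker forever. A sketch, applied to the first grok of our filter:

```
grok {
  patterns_dir => ["${LOGSTASH_PATH_GROK_PATTERNS}"]
  match => { "message" => "%{LOGLEVEL:loglevel}(\s*)\[?%{TIMESTAMP_ISO8601:mytimestamp}" }
  # Abort the match attempt after 5 seconds instead of spinning forever
  # (timeout_millis exists in recent logstash-filter-grok releases).
  timeout_millis => 5000
}
```

This would not help if the loop happens in the `=~` conditionals themselves rather than inside grok, which is part of what we'd like to confirm.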
Any ideas?
Regards