LS2 'stuck'?


(Sukrit Dasgupta) #1

Hi group,

I have a LS2 setup taking input from Redis and sending output to remote ES2.

LS2 has multiple multiline filters and two outputs: a debug output and ES2. LS2 seems to get 'stuck'/'frozen' after processing several entries. Is there any way to figure out what's going on? Is this a known issue?

If I kill and restart it, LS2 comes up fine, processes several messages from Redis, and then reaches the same stuck state.

Any thoughts or help?

Thanks!


(Christian Dahlqvist) #2

What does your LS config look like? What is the state of the ES cluster?


(Sukrit Dasgupta) #3

Thanks for your reply!

The ES2 cluster state looks like this (just one node currently, since I am trying things out before going full-blown):

{
  "cluster_name" : "live",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 5,
  "active_shards" : 5,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 5,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 50.0
}
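
(Side note: the yellow status is expected here — the five unassigned shards are replicas, which can never be placed on a single-node cluster. If this stays a one-node test setup, they can be disabled; this uses the same host as in the output config below:)

```shell
curl -XPUT 'http://10.86.205.62:9200/_settings' -d '
{ "index": { "number_of_replicas": 0 } }'
```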

The LS2 config is broken up into multiple files that get merged together. The input and output sections look like this:
input {
  redis {
    host => 'localhost'
    data_type => 'list'
    key => 'logstash'
    type => 'redis-input'
  }
}

output {
  elasticsearch {
    hosts => ["10.86.205.62:9200"]
  }
  stdout { codec => rubydebug }
}

(Magnus Bäck) #4

Are there any clues in the Logstash and/or Elasticsearch logs? In the Logstash case, cranking up the logging level with --verbose or even --debug can provide additional clues.
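
For example (assuming a tarball install; adjust the path and config location to match your setup):

```shell
bin/logstash --verbose -f /etc/logstash/conf.d/
# or, for even more detail:
bin/logstash --debug -f /etc/logstash/conf.d/
```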


(Sukrit Dasgupta) #5

Thanks for your reply!

So, I did start LS2 with --verbose and I see the logs stop at:

{:timestamp=>"2015-11-23T01:59:56.058000-0800", :message=>"Using mapping template", :template=>{"template"=>"logstash-*", "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "omit_norms"=>true}, "dynamic_templates"=>[{"message_field"=>{"match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"string", "index"=>"analyzed", "omit_norms"=>true}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"string", "index"=>"analyzed", "omit_norms"=>true, "fields"=>{"raw"=>{"type"=>"string", "index"=>"not_analyzed", "ignore_above"=>256}}}}}], "properties"=>{"@version"=>{"type"=>"string", "index"=>"not_analyzed"}, "geoip"=>{"type"=>"object", "dynamic"=>true, "properties"=>{"location"=>{"type"=>"geo_point"}}}}}}}, :level=>:info}
{:timestamp=>"2015-11-23T01:59:56.203000-0800", :message=>"New Elasticsearch output", :hosts=>["10.86.205.62:9200"], :level=>:info}

Nothing more is logged even after LS2 stops sending. I removed the rubydebug stdout output just to check whether writing everything to stdout had any effect, but the same thing still happens.

At the ES2 end, I am capturing packets to see what's coming from LS2, and I can see that packets stop arriving. The ES2 logs don't say much apart from mapping updates:

[2015-11-23 09:18:37,854][INFO ][cluster.metadata         ] [srv-9] [logstash-2015.10.29] update_mapping [lumberjack]
[2015-11-23 09:19:47,124][INFO ][cluster.metadata         ] [srv-9] [logstash-2015.11.07] update_mapping [lumberjack]
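
Another data point I can collect: whether the Redis list keeps growing while LS2 is stalled, which would confirm that Logstash has stopped consuming (using the key name from my input config):

```shell
redis-cli llen logstash
```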

Any specific debugs that can be enabled for this?


(Magnus Bäck) #6

Reduce the complexity of your system until it starts working again, e.g. by disabling the elasticsearch output and removing filters and whatever else you have. You mention that you're using multiline filters; that would be my prime suspect. Logstash will get stuck waiting for the line that should "close" an event if that line never shows up (e.g. because the multiline regexp is wrong).
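
For reference, this is roughly what such a filter looks like — the pattern here is a hypothetical example, not taken from your config:

```
filter {
  multiline {
    # Join any line that does NOT start with a timestamp onto the
    # previous line. If this pattern doesn't actually match your logs,
    # events accumulate and are never flushed, and the pipeline stalls.
    pattern => "^%{TIMESTAMP_ISO8601}"
    negate => true
    what => "previous"
  }
}
```

If you can, prefer the multiline codec on the input over the multiline filter; the filter isn't safe to use with multiple filter workers.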


(Sukrit Dasgupta) #7

Thanks @magnusbaeck.

Yeah, that's what I am suspecting as well. I'm trying to move away from multiline as much as possible. Instinctively, I feel that's been the root cause of all the issues LS2 is giving me.

Thanks!
