Logstash shipper events stuck in Redis


(Kishore) #1

Kindly verify my Redis input configuration for the Logstash indexer. The input config below worked for some time, but then it stopped working. I have upgraded from Logstash 1.4.2 to 2.1.1.

input {
  redis {
    host => "host"
    port => "7000"
    data_type => "list"
    key => "syslog"
  }
}
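
For reference, the shipper side mentioned in the title would normally feed this list with a matching redis output. A minimal sketch, assuming the shipper writes to the same host, port, and key as the indexer reads from (the shipper config is not shown anywhere in this thread):

output {
  redis {
    host => "host"          # assumed: same Redis host the indexer polls
    port => 7000            # assumed: same port
    data_type => "list"
    key => "syslog"         # must match the indexer input's key
  }
}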


(Magnus Bäck) #2

Well, what happens? Is there an error or warning message in the logs? Do you get additional clues if you crank up the log level with --verbose or --debug?
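
For example, in Logstash 2.x those flags are passed on the command line when starting the process. A minimal sketch, assuming the indexer is started from the install directory and /etc/logstash/indexer.conf is the config path (the path is an assumption for illustration):

# run with verbose logging, or with full debug output
bin/logstash -f /etc/logstash/indexer.conf --verbose
bin/logstash -f /etc/logstash/indexer.conf --debug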


(Kishore) #3

I couldn't find any log level setting. Could you please tell me how to find the current logging level?


(Magnus Bäck) #4

The log level is set via command-line options, listed above. Without those options you get the default log level.


(Kishore) #5

I am getting the message below when I use --verbose.

Flushing buffer at interval {:instance=>"#<LogStash::Outputs::ElasticSearch::Buffer:0x7b452a3e @operations_mutex=#<Mutex:0x7582db26>, @max_size=500, @operations_lock=#<Java::JavaUtilConcurrentLocks::ReentrantLock:0x71bd414e>, @submit_proc=#<Proc:0x66f9096e@/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.2.0-java/lib/logstash/outputs/elasticsearch/common.rb:55>, @logger=#<Cabin::Channel:0x541a57c @metrics=#<Cabin::Metrics:0x1eed23ab @metrics_lock=#<Mutex:0x54902e52>, @metrics={}, @channel=#<Cabin::Channel:0x541a57c ...>>, @subscriber_lock=#<Mutex:0x7a0cefd7>, @level=:info, @subscribers={12590=>#<Cabin::Outputs::IO:0x5e94a32 @io=#<IO:fd 1>, @lock=#<Mutex:0x50de1663>>}, @data={}>, @last_flush=2016-02-02 04:07:35 -0500, @flush_interval=1, @stopping=#<Concurrent::AtomicBoolean:0x4db129f8>, @buffer=[], @flush_thread=#<Thread:0x49a69456 run>>", :interval=>1, :level=>:info}


(Magnus Bäck) #6

Surely that's not the only thing you get with verbose logging. Anything in there that hints at a problem?


(Kishore) #7

Apart from the above message, I am getting this Redis warning:

Redis connection problem {:exception=>#<Redis::CommandError: CROSSSLOT Keys in request don't hash to the same slot>, :level=>:warn}
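
For context, CROSSSLOT is an error raised by Redis Cluster, not by a standalone Redis server: it means a single command or script touched keys that hash to different slots. Port 7000 is also the conventional first node port in the Redis Cluster tutorial, which suggests the input here is pointed at a cluster node. The error can be reproduced with redis-cli against any cluster node; the key names below are made up for illustration, assuming (as is overwhelmingly likely) they hash to different slots:

# A multi-key command on keys in different hash slots fails in cluster mode,
# even with -c (redirection following cannot fix a cross-slot command):
redis-cli -c -p 7000 MSET syslog a metrics b
(error) CROSSSLOT Keys in request don't hash to the same slot

# Keys sharing a hash tag land in the same slot, so the same command succeeds:
redis-cli -c -p 7000 MSET {logs}syslog a {logs}metrics b
OK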

