Problems with Redis input

Hi experts,
I have configured Logstash to use the Redis input. Here is my Logstash input:

input {
  redis {
    host => "***.***.**.***"
    type => "redis-input"
    data_type => "list"
    key => "filebeat"
  }
}
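(For context: Filebeat's Redis output pushes events onto this list with RPUSH. To push a test event by hand, a short redis-py sketch like the one below does the same thing; the host and payload are placeholders, and port 6379 / db 0 are the defaults the input uses when they are not set explicitly.)

import json
import redis

# Connect to the same Redis instance the input above reads from.
# The host is a placeholder; port 6379 and db 0 match the defaults
# the Logstash Redis input falls back to.
r = redis.Redis(host="127.0.0.1", port=6379, db=0)

# Filebeat's Redis output RPUSHes JSON events onto the configured key,
# so this pushes one hand-made event the same way.
event = {"message": "manual test event"}
r.rpush("filebeat", json.dumps(event))

print("entries waiting in the list:", r.llen("filebeat"))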

Whenever I stop Logstash I can push data into Redis, but once I start Logstash I can't do anything with the Redis database that Logstash is configured to use.
Here is my Logstash log. It all seems normal, so I don't know where I'm going wrong.
Please help me.
Thanks

[2017-08-28T17:13:11,644][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://***.***.**.***:9200/]}}
[2017-08-28T17:13:11,652][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://***.***.**.***:9200/, :path=>"/"}
[2017-08-28T17:13:11,826][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://***.***.**.***:9200/"}
[2017-08-28T17:13:11,860][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//***.***.**.***:9200"]}
[2017-08-28T17:13:11,935][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>125}
[2017-08-28T17:13:11,972][INFO ][logstash.inputs.redis    ] Registering Redis {:identity=>"redis://@***.***.**.***:6379/0 list:filebeat"}
[2017-08-28T17:13:11,973][INFO ][logstash.pipeline        ] Pipeline main started
[2017-08-28T17:13:12,072][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

when I start Logstash I can't do anything with the Redis database that Logstash is configured to use.

What, exactly, do you mean by this?

Hi @magnusbaeck,
Thanks for the reply.
I use Filebeat to send logs to Redis, and that works fine. But when I start Logstash to collect the data that Filebeat sent to Redis, I can't access Redis db0 anymore: in RedisDesktopManager the data in db0 is gone, and I can't push any data into db0.
I wrote my own API to send data to Redis, and it can't push into db0 either.
I've tried to find the cause, but as soon as I stop Logstash everything runs normally again. That's what I mean.
I don't know where the problem is :(
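For what it's worth, my push test boils down to roughly this (a redis-py sketch; the host and payload are just examples):

import time
import redis

r = redis.Redis(host="127.0.0.1", port=6379, db=0)  # placeholder host

# Push a test entry onto the list and check the length immediately...
r.rpush("filebeat", '{"message": "manual test"}')
print("length right after push:", r.llen("filebeat"))

# ...then check again a moment later. While Logstash is running, the
# entry is gone almost instantly, which looks like "can't push data".
time.sleep(1)
print("length one second later:", r.llen("filebeat"))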

I can't access Redis db0 anymore: in RedisDesktopManager the data in db0 is gone, and I can't push any data into db0.
I wrote my own API to send data to Redis, and it can't push into db0 either.

Why not? What happens? Any error message?

Hi @magnusbaeck,
I don't know why. Here is the Redis log from after I turned Logstash on and the Redis database stopped showing any data I/O:

2230:M 29 Aug 09:09:11.043 * 10 changes in 300 seconds. Saving...
2230:M 29 Aug 09:09:11.043 * Background saving started by pid 2277
2277:C 29 Aug 09:09:11.047 * DB saved on disk
2277:C 29 Aug 09:09:11.048 * RDB: 0 MB of memory used by copy-on-write
2230:M 29 Aug 09:09:11.144 * Background saving terminated with success

After a few rounds of stopping Logstash, adding new data, and starting it again to check the log, I found this line:

[2017-08-29T09:04:09,800][WARN ][logstash.inputs.redis ] Redis connection problem {:exception=>#<Redis::CannotConnectError: Error connecting to Redis on ...*:6379 (Errno::ECONNREFUSED)>}

but it only appears once across three restarts; everything else matches the Logstash log I posted above.

Here is the full Redis log for the three restarts:

932:M 29 Aug 09:04:10.391 # User requested shutdown...
932:M 29 Aug 09:04:10.392 * Saving the final RDB snapshot before exiting.
932:M 29 Aug 09:04:10.398 * DB saved on disk
932:M 29 Aug 09:04:10.398 * Removing the pid file.
932:M 29 Aug 09:04:10.399 # Redis is now ready to exit, bye bye...
2230:M 29 Aug 09:04:10.422 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
2230:M 29 Aug 09:04:10.422 # Server started, Redis version 3.2.3
2230:M 29 Aug 09:04:10.423 * DB loaded from disk: 0.000 seconds
2230:M 29 Aug 09:04:10.423 * The server is now ready to accept connections on port 6379
2230:M 29 Aug 09:09:11.043 * 10 changes in 300 seconds. Saving...
2230:M 29 Aug 09:09:11.043 * Background saving started by pid 2277
2277:C 29 Aug 09:09:11.047 * DB saved on disk
2277:C 29 Aug 09:09:11.048 * RDB: 0 MB of memory used by copy-on-write
2230:M 29 Aug 09:09:11.144 * Background saving terminated with success
2230:M 29 Aug 09:14:31.035 * 10 changes in 300 seconds. Saving...
2230:M 29 Aug 09:14:31.035 * Background saving started by pid 2301
2301:C 29 Aug 09:14:31.038 * DB saved on disk
2301:C 29 Aug 09:14:31.039 * RDB: 0 MB of memory used by copy-on-write
2230:M 29 Aug 09:14:31.136 * Background saving terminated with success

Any ideas, or does this error look familiar?
Thanks

Hi @magnusbaeck,
I changed the index and found that my data was already in ES.
So Logstash takes all the data from Redis and flushes the db, is that right?
Now whenever I send data into Redis, Logstash immediately takes it into ES and no data stays in Redis. Wow!
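That is indeed how the list input works: Redis acts as a queue, and Logstash pops each entry off the list as it processes it, so the list looks empty whenever Logstash is running. A minimal sketch of that pop-as-you-read behaviour in redis-py (an illustration of the semantics, not the plugin's actual code):

import redis

r = redis.Redis(host="127.0.0.1", port=6379, db=0)  # placeholder host

# With data_type => "list", the Redis input behaves like this consumer:
# a blocking pop removes each entry from the list as soon as it arrives,
# so events only sit in Redis while no consumer is connected.
while True:
    popped = r.blpop("filebeat", timeout=5)  # returns (key, value) or None
    if popped is None:
        continue  # nothing arrived within the timeout; keep waiting
    key, value = popped
    print("consumed:", value.decode("utf-8"))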
