I've been poking around for the past few hours trying to get this working, but no luck. I have a simple file input and a simple stdout output (alongside a redis output, but let's ignore that for now).
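For reference, the config is roughly along these lines (the path, redis host, and key name here are placeholders, not my exact values):

    input {
      file {
        path => "/var/log/myapp/*.log"    # placeholder path
      }
    }
    output {
      stdout { }
      redis {
        host      => "redis.example.com"  # placeholder host
        data_type => "list"
        key       => "logstash"           # placeholder key name
      }
    }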
I'll also mention that I've already removed all of the .sincedb files in /var/lib/logstash, which I believe is the logstash home directory set up by the repo packages during the initial install.
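In case it helps anyone reproduce, the cleanup I'm doing is essentially this (service name assumed to be the stock one from the package):

    sudo service logstash stop
    sudo rm -f /var/lib/logstash/.sincedb*
    sudo service logstash start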
If I run logstash as a service, the init.d script (again, the default one from the repos) sets LS_LOG_FILE to logstash.log, and everything seems to get redirected there (stdout just contains a comment saying logs are written to logstash.log). But nothing ever shows up in logstash.log, so there's no stdout or redis output. If I instead run /opt/logstash/bin/logstash -f manually, I do see a stream of messages flying by from the file inputs. Again, nothing lands in the log files; it's just printed to stdout (literally).
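For what it's worth, this is roughly how I've been comparing the two modes (the config path is an assumption on my part; substitute whatever your install uses):

    # as a service: output should land in the file named by LS_LOG_FILE
    sudo service logstash start
    tail -f /var/log/logstash/logstash.log

    # in the foreground: events print straight to the terminal
    sudo /opt/logstash/bin/logstash -f /etc/logstash/conf.d/shipper.conf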
I'm not trying to read from the start of the files, but I do want it to pick up new contents. My understanding is that if I delete .sincedb* and restart logstash, it should automatically start pulling new logs/lines from the files. And it does seem to when I run in the foreground and watch stdout, but nothing ever shows up in the logs or in redis when I run it as a service.
Any suggestions? I'm not sure where else to look: permissions on /var/log/logstash/* seem fine, there are no errors in the logs for me to look at, and running manually doesn't throw any errors either (but also doesn't write to redis...).
That's the reason - I was testing. I only care about redis in the end, and the logs aren't making it to redis. I've been trying to find the source (whether redis was overloaded, too many connections to the receivers, etc.). My understanding at this point is that the logs are being dropped/missed on each shipper/endpoint.
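One quick check I've been doing on the redis side, in case it's useful context (host and key name here assumed to match the redis output's settings):

    # see whether events are accumulating in the list the shippers push to
    redis-cli -h redis.example.com llen logstash

    # peek at the most recent entry without popping it
    redis-cli -h redis.example.com lrange logstash -1 -1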