Logstash Output Not Working For File Input

Been poking around for the past few hours trying to get this working, but no luck. I have a simple file input and a simple stdout output (alongside a redis output, but let's ignore that for now).

I'll toss out that I've already removed all .sincedb files in /var/lib/logstash, which I believe is the logstash home directory set up by the repo packages during the initial install.
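
For reference, this is roughly what I ran to clear them (assuming the service name and paths from the package install):

# stop the service first so logstash doesn't rewrite the offsets mid-cleanup
sudo service logstash stop

# remove the per-file read offsets; they get recreated on startup
sudo rm -f /var/lib/logstash/.sincedb*

sudo service logstash start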

If I run logstash as a service, the init.d script (again, the default init.d from the repos) sets LS_LOG_FILE to logstash.log, and everything seems to get redirected there (the stdout log just has a comment in it saying logs are written to logstash.log). However, nothing ever shows up in logstash.log, so I get no stdout or redis output. If I instead run /opt/logstash/bin/logstash -f manually, I do see a stream of messages flying by from the file inputs. Again, nothing in the log files; it's just printed to stdout (literally).
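
For the manual runs, I'm invoking it roughly like this (the conf.d path is an assumption based on the package defaults; adjust if yours differs):

# run in the foreground with the same config the service uses (example path)
sudo /opt/logstash/bin/logstash -f /etc/logstash/conf.d/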

I'm not trying to read files from the start, but I do want it to pick up new contents. My understanding is that if I delete .sincedb* and restart logstash, it should automatically start pulling new lines from the files. Again, it seems to do so when I run in the foreground and watch stdout, but nothing ever shows up in the logs or in redis when I run it as a service.
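
For reference, this is my understanding of the relevant file input options; the sincedb_path value is just an example, since by default the offset files land in the logstash user's home directory:

file {
  path => ["/var/log/syslog"]
  # "end" is the default: only lines appended after startup are read
  start_position => "end"
  # optional: pin the offset file to a known location instead of $HOME
  # sincedb_path => "/var/lib/logstash/sincedb-syslog"
}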

Any suggestions? I'm not sure where else to look: permissions on /var/log/logstash/* seem fine, there are no errors in the logs for me to look at, and running manually doesn't throw any errors either (but also doesn't write to redis...)

Providing your config as well as the LS version you are running would be helpful.

You're right - that could probably be useful. I'm currently running logstash 1.5.3 and the configuration is below.

input {
  file {
    type => "syslog"
    path => ["/var/log/auth.log", "/var/log/syslog"]
  }

  file {
    type => "webapp"
    path => ["/opt/webapp/logs/*.log"]
  }

  file {
    type => "nginx_access"
    path => ["/var/log/nginx/*access.log"]
  }

  file {
    type => "nginx_error"
    path => ["/var/log/nginx/*error.log"]
  }

  file {
    type => "supervisor"
    path => ["/var/log/supervisor/*.log"]
  }
}

output {
  redis {
    host => "<redis_host>"
    data_type => "list"
    key => "logstash"
    codec => json
    batch => true
    batch_events => 50
    batch_timeout => 5
  }
  stdout { codec => rubydebug }
}

Ahh ok, I get it now :smile:
That'd be the init script taking stdout and redirecting it somewhere, but I'm not sure where.
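
One quick way to see where a running process's stdout actually points (Linux; the pgrep pattern is just an example):

# fd 1 is stdout; the symlink target shows where it's redirected
ls -l /proc/$(pgrep -f logstash | head -n1)/fd/1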

You should really only use stdout for testing when calling LS manually, but what's the use case here that requires you to do it?
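
If you do need to see events while running as a service, one option is to swap stdout for a file output while debugging (the path here is just an example):

output {
  # stdout gets swallowed by the init script when running as a service,
  # so write events somewhere you can tail instead
  file {
    path => "/tmp/logstash-debug.log"
  }
}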

That's the reason - was testing. I only care about redis in the end, and the logs aren't making it to redis. I've been trying to find the source (if redis was overloaded, too many connections to receivers, etc.). My understanding at this point is that the logs are being dropped / missed on each shipper / endpoint.
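
One thing I've been checking is whether anything lands on the redis list at all; the host placeholder and key below come from the config above:

# LLEN returns the number of events queued on the "logstash" list
redis-cli -h <redis_host> llen logstash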

Understood.

Better to just call the logstash binary directly then :slight_smile: