JSON events not being sent via redis, non-JSON events are


(Darren) #1

Hi,

I'm in the process of getting an ELK stack set up at the request of the company I work for. So far, I'm very impressed! However, I've hit a bit of a stumbling block whilst following the guides in the Logstash book. I'm trying to get Apache access logs into Elasticsearch (chapter 5 of the book).

I've got a 3 box Elasticsearch cluster in EC2, along with a Logstash server and a box to run Kibana on. I've set the Apache CustomLog format as per the book, and events are being logged to the log file accordingly. I've already got syslog events in a shipper config and they're being forwarded to redis on the Logstash server no problem. I then added the Apache bits and restarted Logstash, but none of the events being written to the access log are forwarded on. My shipper.conf looks like:

input {
  file {
    type => "syslog"
    path => ["/var/log/messages", "/var/log/secure"]
    exclude => ["*.gz"]
  }
  file {
    type => "apache"
    path => ["/var/log/httpd/logstash_access_log"]
    codec => "json"
  }
}

output {
  stdout { }
  redis {
    host => "10.200.20.214"
    data_type => "list"
    key => "logstash"
  }
}

I've run a tcpdump on this server when generating access logs, but can't see them being sent onwards. I've checked my configs in relation to the code from the book at http://logstashbook.com/code/5/ and they're identical, yet it's not working for me, and for the life of me, I can't see why. Can anyone tell me what's wrong please? I can't see any errors in any logs and the syslog type events are all good.
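For reference, besides tcpdump, here are a couple of ways to check whether anything is actually reaching redis — the host and key below come from the shipper.conf above; port 6379 is just the Redis default:

```shell
# Is anything arriving on the Redis list at all?
redis-cli -h 10.200.20.214 llen logstash

# Peek at one queued event without removing it from the list:
redis-cli -h 10.200.20.214 lrange logstash 0 0

# Watch for traffic from the shipper towards Redis:
tcpdump -nn host 10.200.20.214 and port 6379
```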

Thanks in advance


(Magnus Bäck) #2

Does the user that Logstash runs as have read access to /var/log/httpd/logstash_access_log? Enabling more logging with --verbose or even --debug will tell you more about what's going on.
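For example, something like this (assuming the service runs as a "logstash" user and a Logstash 1.x layout — adjust paths to your install):

```shell
# Can the service user actually read the file? Exit status 0 = readable.
sudo -u logstash test -r /var/log/httpd/logstash_access_log; echo "exit: $?"

# Re-run the shipper in the foreground with more logging:
bin/logstash agent -f /etc/logstash/shipper.conf --verbose   # or --debug
```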


(Darren) #3

Hi Magnus,

Thanks for your reply. I've managed to resolve this now. I did as you suggested and ran logstash with --debug, and I noticed that all the access logs started flooding through logstash to my elasticsearch cluster. At this point I thought it was still a permissions issue, so I ran logstash from the command line as the logstash user. Lo and behold, it failed to pick anything up from the access logs. I double-checked the permissions and although the permissions on the file were OK, the permissions on the parent directory (/var/log/httpd) were restricted to root only. A cheeky chmod later and it's working fine now.
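For anyone else who hits this, here's a self-contained sketch of the failure mode — the file's own permissions look fine, but reading it still needs execute (search) permission on every parent directory. The temp-dir paths are just a stand-in for /var/log/httpd/logstash_access_log:

```shell
# Build a stand-in for /var/log/httpd/logstash_access_log in a temp dir.
tmp=$(mktemp -d)
mkdir -m 700 "$tmp/httpd"           # parent dir locked down, like root-only /var/log/httpd
echo '127.0.0.1 - - "GET / HTTP/1.1" 200' > "$tmp/httpd/access_log"
chmod 644 "$tmp/httpd/access_log"   # the file itself is world-readable

# Walk the chain: any component without +x for the reading user blocks
# access, regardless of the file's own mode bits.
ls -ld "$tmp" "$tmp/httpd" "$tmp/httpd/access_log"

chmod 755 "$tmp/httpd"              # the "cheeky chmod" from above
content=$(cat "$tmp/httpd/access_log")
echo "$content"
rm -rf "$tmp"
```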

I'm surprised this didn't get picked up by Logstash and warned about, given that when I first started monitoring /var/log/messages, errors were logged stating logstash couldn't access the file. Would it be worth raising a bug/feature request so that this would get picked up in future?

Thanks for your help


(Magnus Bäck) #4

I think this is covered by https://github.com/logstash-plugins/logstash-input-file/issues/6, the proposed solution to which will add a warning when a filename pattern evaluates to zero files.


(system) #5