Hello, I have 2 CloudFront distributions I am looking to analyze. The system is a fresh Ubuntu 16.04 VM with the latest Elasticsearch and Logstash installed via apt-get.
I have the following 2 configs:
In both cases I started with the CLOUDFRONT_ACCESS_LOG pattern from https://github.com/logstash-plugins/logstash-patterns-core/blob/master/patterns/aws and added some extra matches on the URL so I can aggregate on parts of it.
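To give a rough idea of what I mean, each file has a filter along these lines (a sketch, not my exact config; the extra URI capture shown here is illustrative):

```
filter {
  grok {
    # CLOUDFRONT_ACCESS_LOG comes from logstash-patterns-core;
    # each file adds its own extra matching on the request URL
    match => { "message" => "%{CLOUDFRONT_ACCESS_LOG}" }
  }
}
```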
When I run each of these from the command line:

$ /opt/logstash/bin/logstash -f /etc/logstash/conf.d/userlogs.conf

and

$ /opt/logstash/bin/logstash -f /etc/logstash/conf.d/companylogs.conf

there are no parse errors and everything comes out fine in Elasticsearch queries.
I then reset everything with curl -XDELETE 'localhost:9200/*' && sudo rm -f /var/lib/logstash/.sincedb*
I start the service with sudo service logstash start, wait a few moments, and try my search again, but this time I have tons of _grokparsefailure and _dateparsefailure entries in tags.
In the output above you can see the search returned a type of company_logs, but the event appears to have been grokked against the patterns for the user CDN. There is definitely no /v1/users path on the company CDN.
Additionally, while trying to debug the service, I added a raw_data field that just echoes the original message: add_field => [ "raw_data", "%{message}" ]. What is odd is that in the parse failures, raw_data contains the literal string "%{message}", while when I run these configs from the command line that doesn't happen.
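For context, that field is added with a mutate filter, roughly like this (a sketch of the debugging addition, not my full config):

```
filter {
  mutate {
    # copy the original log line into raw_data for debugging;
    # %{message} should be interpolated per event, so seeing the
    # literal string means the message field was not resolved
    add_field => [ "raw_data", "%{message}" ]
  }
}
```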
Right now I can only assume that running these configs from the command line works because each one runs in isolation, while the Logstash service loads both config files and runs them together as a single pipeline.
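If that is what is happening, I gather each file's filters would need to be guarded so they only touch their own events, something like this sketch (assuming each file's input block sets a distinct type):

```
filter {
  # only apply this file's patterns to its own events,
  # so the merged pipeline doesn't cross-grok the other CDN's logs
  if [type] == "company_logs" {
    grok {
      match => { "message" => "%{CLOUDFRONT_ACCESS_LOG}" }
    }
  }
}
```

But I would like to understand whether that is actually required, or whether the service is supposed to keep the files separate.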
What am I missing when setting up Logstash as a service? I do not see any other config files in /etc/logstash.