I'm using Auditbeat to send syslog messages to Logstash and then on to Elasticsearch. I have deployed the Auditbeat template to Elasticsearch, and the messages coming through are correctly parsed and display nicely in Kibana.
I need to filter out some messages in Logstash before they reach Elasticsearch, and my impression is that I will need to use grok in the filter to parse each event and select the right information. (The systems are separate, and it makes more sense to have Logstash do the filtering than to require the privileges etc. on the monitored server to keep adjusting Auditbeat rules.)
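To illustrate what I'm after, something like this is the kind of filter I imagine (a rough sketch only; the field names and values here are assumptions on my part, not taken from my actual events):

```conf
# Hypothetical Logstash filter: drop events I don't want indexed.
# [event][module] / [event][action] are assumed field names for illustration.
filter {
  if [event][module] == "auditd" and [event][action] == "example-action" {
    drop { }
  }
}
```

Whether I can match on fields like that directly, or whether I first need grok to extract them, is part of what I'm unsure about.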
For Elasticsearch to manage these correctly, the templates must already contain the right parsing information, so I shouldn't need to create the definitions from scratch in Logstash.
Any idea how I can effectively copy and paste them into the Logstash filter?
I'm guessing it's the auditbeat.json file in the index-pattern directory, but can a JSON definition be used in a Logstash filter, or does it have to be translated?
This is what is in it (Fields cut down as too large for post):