I am using the Filebeat AWS module to fetch CloudTrail logs. To load the dashboards, I first configured the Filebeat output to Elasticsearch and sent some test data, and it arrived with ECS field names. However, when I send the data through Logstash, the fields are not ECS compatible. As a result, I will not be able to use the default AWS dashboards if the data comes via Logstash.
I tried loading the Filebeat index template from Logstash dynamically, but it didn't work. Is there any way to get the data into ECS-compatible fields while sending it through Logstash? Without that I won't be able to use the default dashboards.
Here is a screenshot showing the difference in fields (left = non-ECS, right = ECS).
Here is what I suggest... this assumes a fairly recent Elastic Stack, the latest being 7.11.1.
Configure Filebeat with the modules and settings you like, point it at Kibana and Elasticsearch, and run setup. It sounds like you already did that... but if not, clean up and do that first.
Now, in filebeat.yml, comment out the Kibana settings and point the output to Logstash instead of Elasticsearch.
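The relevant parts of filebeat.yml would look roughly like this (a sketch; the hosts and port are placeholders for your own environment):

```
# filebeat.yml (after setup has already been run against Elasticsearch/Kibana)

#setup.kibana:
#  host: "localhost:5601"

#output.elasticsearch:
#  hosts: ["localhost:9200"]

output.logstash:
  hosts: ["localhost:5044"]
```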
Here is a Logstash pipeline that will support all the Beats functionality. This keeps the default index naming, ILM, ECS fields, ingest pipelines, etc.
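A minimal sketch, following the standard Beats-to-Logstash pattern from the Filebeat documentation (the Elasticsearch host is a placeholder):

```
input {
  beats {
    port => 5044
  }
}

output {
  if [@metadata][pipeline] {
    elasticsearch {
      hosts => "http://localhost:9200"
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}"
      pipeline => "%{[@metadata][pipeline]}"
    }
  } else {
    elasticsearch {
      hosts => "http://localhost:9200"
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}"
    }
  }
}
```

The conditional forwards each event to the module's ingest pipeline when Filebeat sets one in @metadata, which is what keeps the fields ECS compatible.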
Start Logstash, then start Filebeat... and take a look...
If you are a bit new to this... perhaps you don't really need to rename the index right now... perhaps get used to how we have set it up first...
You can filter for anything you need very easily. (KQL, DSL, Visualizations)
You can leverage the built-in dashboards and visualizations.
All that AWS data has tags applied... perhaps explore first... We have set these indices and names up for a good reason, and we often see teams migrate back to our defaults over time.
That said, Elasticsearch is built to be flexible, so we encourage users to do what fits them.
What you want to do is not a lot of work in total, but it can be a bit challenging at first.
As for a specific blog, I think you should first get acquainted with the basic topics.
Search our site; we have tons of blogs, plus some excellent free training and webinars.
Sure thing, I will go through the training and webinars. The reason we want to create an index like cloud-audit-aws* is that we are going to create an index pattern cloud-* that represents all cloud data, such as cloud-audit-azure*, cloud-audit-aws*, cloud-audit-gcp*, etc.
What folks do is make a copy of the default templates and pipelines as a base, start from there, and route the data to their own indices.
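For the template, that might look like the following (a sketch; the template names are examples, and the exact Filebeat template name depends on your version):

```
# fetch the default Filebeat template to use as a base (legacy template API in 7.x)
GET _template/filebeat-7.11.1

# put an adjusted copy back under your own name,
# with index_patterns changed to match "cloud-audit-aws*"
PUT _template/cloud-audit-aws
{
  ... adjusted copy of the Filebeat template body ...
}
```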
You will need to create your own copy of the pipeline manually and then reference that name in the Logstash elasticsearch output...
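For example, the output section could then point at your own index and pipeline (the names here are hypothetical, matching the cloud-audit-aws* scheme above):

```
output {
  elasticsearch {
    hosts => "http://localhost:9200"
    manage_template => false
    index => "cloud-audit-aws-%{+YYYY.MM.dd}"
    pipeline => "cloud-audit-aws-cloudtrail-pipeline"
  }
}
```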
You can see and create the pipelines with these APIs.
This will show all of them; find the AWS ones you need, GET the one you want, and PUT it back under the name you want.
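Something along these lines (a sketch; the pipeline names are examples, and the real ones depend on your Filebeat version and modules):

```
# list all ingest pipelines and find the Filebeat AWS ones
GET _ingest/pipeline

# fetch the definition of the pipeline you want to copy
GET _ingest/pipeline/filebeat-7.11.1-aws-cloudtrail-pipeline

# put the same definition back under your own name
PUT _ingest/pipeline/cloud-audit-aws-cloudtrail-pipeline
{
  ... paste the copied pipeline body here ...
}
```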