Filebeat events are not ECS compatible when sent through Logstash

I am using the Filebeat AWS module to fetch CloudTrail logs. To load the dashboards, I first configured the Filebeat output to Elasticsearch and sent some test data, and the fields had ECS names. However, when I sent data through Logstash, the fields were not ECS compatible. As a result, I will not be able to use the default AWS dashboards if the data comes via Logstash.

I tried loading the Filebeat index template from Logstash dynamically, but it didn't work. Is there any way to get the data into ECS-compatible fields while sending it through Logstash? Without that I won't be able to use the default dashboards.

Here is a screenshot showing the difference in the fields (left = non-ECS, right = ECS).

Hi @ankitdevnalkar

Here is what I suggest... this assumes a fairly recent Elastic Stack, the latest being 7.11.1.

  1. Configure Filebeat with the modules and settings you want, point it at Kibana and Elasticsearch, and run setup. It sounds like you already did that... but if not, clean up and do that first.

  2. Now, in filebeat.yml, comment out the Kibana section and point the output to Logstash instead of Elasticsearch (see the filebeat.yml sketch after the Logstash config below).

  3. Here is the Logstash config that will support all the Beats functionality. It keeps the default index naming, ILM, ECS fields, ingest pipelines, etc.

  4. Start Logstash, then start Filebeat... take a look...

################################################
# beats->logstash->es default config.
################################################
input {
  beats {
    port => 5044
  }
}

output {
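  # Filebeat modules set [@metadata][pipeline]; when it is present, send the
  # event through that Elasticsearch ingest pipeline so the module parsing and
  # ECS field mapping still happen on the Elasticsearch side.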
  if [@metadata][pipeline] {
    elasticsearch {
      hosts => "http://localhost:9200"
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}"
      pipeline => "%{[@metadata][pipeline]}" 
      user => "elastic"
      password => "secret"
    }
  } else {
    elasticsearch {
      hosts => "http://localhost:9200"
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}"
      user => "elastic"
      password => "secret"
    }
  }
}
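
For step 2, here is a minimal filebeat.yml sketch of the output change (the hosts and ports are just examples that match the config above; everything else stays as it was when you ran setup):

################################################
# filebeat.yml output change (sketch)
################################################
# setup.kibana:
#   host: "localhost:5601"

# output.elasticsearch:
#   hosts: ["http://localhost:9200"]

output.logstash:
  hosts: ["localhost:5044"]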

Thanks @stephenb I will try this out.

@stephenb Can I achieve this with a custom index? For example:
index => "cloud-audit-aws%{+YYYY.MM}"

Absolutely, but you will need to:

  • Create your own template with all the fields mapped to ECS
  • Use the existing pipeline to parse the data, or create your own
  • Create your own ILM policy if you want ILM
  • Update the Filebeat and Logstash configs accordingly (sketch below)
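
Here is a rough sketch of the Logstash side only, assuming you have already created the template, ingest pipeline, and ILM policy yourself; the index, pipeline, and host names below are just examples, not existing defaults:

################################################
# sketch: beats->logstash->es with a custom index
################################################
output {
  elasticsearch {
    hosts => "http://localhost:9200"
    # custom index name; your own template must match this pattern
    index => "cloud-audit-aws%{+YYYY.MM}"
    # your own copy of the Filebeat AWS ingest pipeline (example name)
    pipeline => "cloud-audit-aws-cloudtrail"
    # the template and ILM policy are managed separately, not by Logstash
    manage_template => false
    user => "elastic"
    password => "secret"
  }
}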

Thanks @stephenb. I am a bit new to this. Can you point me to a blog or article where someone has done this before?

If you are a bit new... perhaps you really don't need to rename the index right now... perhaps get used to how we have set it up...

You can filter for anything you need very easily (KQL, DSL, visualizations).
You can leverage the built-in dashboards and visualizations.

All that AWS data has tags applied... perhaps explore first... We have set these indexes and names up for a good reason, and I often see teams migrate back to our defaults over time.

That said, Elasticsearch is built to be flexible, so we encourage users to do what fits them.
What you want to do is not a lot of work in total, but it can be a bit challenging at first.

As for a specific blog, I think you should first get acquainted with the basic topics.

Search our site; we have tons of blogs, plus some excellent free training and webinars:

  1. Indexes
  2. Templates
  3. Index Lifecycle Management
  4. Pipelines / Ingest Processor

Sure thing, I will go through the training and webinars. The reason we want to create an index like cloud-audit-aws* is that we are going to create an index pattern cloud-* that represents all cloud data, such as cloud-audit-azure*, cloud-audit-aws*, cloud-audit-gcp*, etc.

Yup totally understand and that is a good plan.

What folks do is make a copy of the default templates and pipelines as a base, start from there, and route the data you want to their own indices.
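
For example, a rough sketch in Kibana Dev Tools; the template name is version dependent, and depending on the Filebeat version it may live under the legacy or the composable template API, so check both:

GET _template/filebeat-*
GET _index_template/filebeat-*

# Fetch the one that exists, change index_patterns (and any ILM settings) to
# match your new index name, and PUT the edited body back under your own name:
PUT _template/cloud-audit-aws
{
  "index_patterns": ["cloud-audit-aws*"]
}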

@stephenb is there any way I can load a pipeline file here? I figured out the template part but am unsure about loading the pipeline file.

You will need to create your pipeline manually and then reference it by name there...

You can view and create pipelines with the ingest pipeline APIs.

This will list all of them; find the AWS ones you need, copy the one you want, rename it, and PUT it back:

GET _ingest/pipeline
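
For example, a rough sketch (the pipeline names below are placeholders; use the exact AWS/CloudTrail names you see in the GET output above, which include your Filebeat version):

GET _ingest/pipeline/filebeat-7.11.1-aws-cloudtrail-pipeline

# Paste the "processors" array from the GET response into the body below, then
# reference the new name from the Logstash output's pipeline option:
PUT _ingest/pipeline/cloud-audit-aws-cloudtrail
{
  "description": "Copy of the Filebeat AWS CloudTrail pipeline",
  "processors": []
}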

Yup there is some work to be done....


Thanks a lot @stephenb! You made my day today :star_struck:
