Cisco Filebeat module loading 4286 fields

Hello community,

I’m using the latest stable Elasticsearch and Filebeat, version 7.9.1, and have enabled the cisco module to analyse the logs. I know there is an open issue about the module not showing the message fields in the Logs UI (https://github.com/elastic/kibana/issues/72069).

But my question is about creating the index pattern: when I use Logstash with specific filters for cisco devices, I get around 150 fields in the pattern, whereas with Cisco Module -> Filebeat -> Elasticsearch (no Logstash) I get close to 4,300 fields, which doesn’t seem right. (Everything else works as expected: dashboards, Discover, etc.)
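
If it helps anyone reproduce the count, something like this should give an approximate number of mapped fields straight from Elasticsearch (a rough sketch; it assumes jq is installed and the cluster is on localhost:9200, and it also counts multi-fields, so treat the result as approximate):

curl -s 'http://localhost:9200/filebeat-*/_mapping' | jq '[paths | select(.[-1] == "type")] | length'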

The most curious thing is that after stopping/starting the Filebeat service and deleting/recreating the indices and index patterns, SOMETIMES (without making any modifications to the configuration files) I get close to 170 fields in the Filebeat index pattern. I haven’t been able to reproduce this reliably.

At the moment I’m back using Logstash.

If there is anything I can provide, like screenshots or configuration files, let me know; or if this is a known issue, can you point me in the right direction?

Thanks,

Are you using the supplied Filebeat template? I think it defines all the known ECS fields from all the Filebeat modules, and it sets the total field limit to 10,000. I'm not on your exact version, however.
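
You can verify what the template defines with something like this (a sketch; it assumes the default legacy template name for 7.9.1 and a local cluster). Look for index.mapping.total_fields.limit under the settings, and the field definitions under mappings:

curl -s 'http://localhost:9200/_template/filebeat-7.9.1?pretty'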

All those fields will show in the Kibana index pattern screens but only fields that contain data should show in the Discover screen.

I infer, but can't find it documented, that mapping this large number of fields must not be much of a performance concern. However, it seems that with the out-of-the-box config you would send every Filebeat module to a single filebeat-* index, which could end up with a huge number of populated fields.

You can check this by looking at your index in Discover.
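
If a single crowded filebeat-* index ever becomes a concern, one option (a sketch I haven't tested on 7.9.1, so adjust to taste) is to route each module to its own index with conditions in filebeat.yml. Note that in 7.x ILM has to be disabled, or reconfigured, for a custom index name to take effect:

# Sketch: route cisco module events to a separate index (names are examples).
setup.ilm.enabled: false
setup.template.name: "filebeat"
setup.template.pattern: "filebeat-*"
output.elasticsearch:
  hosts: ["localhost:9200"]
  index: "filebeat-other-%{[agent.version]}-%{+yyyy.MM.dd}"
  indices:
    - index: "filebeat-cisco-%{[agent.version]}-%{+yyyy.MM.dd}"
      when.equals:
        event.module: "cisco"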

For the case where you only get ~170 fields, look at the Elasticsearch log from when the index was created and see which templates were applied. The message will look like this:

creating index, cause [rollover_index], templates [filebeat_default, filebeat-7.6.2-....

and it lists which templates were used. If the provided filebeat-7.9.1 template isn't among them, that could explain the lower field count.
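
To find those messages quickly you can grep the server log, for example (the path and file name assume a default package install with the default cluster name):

grep 'creating index' /var/log/elasticsearch/elasticsearch.log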

Also, looking at the doc at https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-cisco.html, there really may be a LOT of fields.

Thanks so much for shedding light on this. I am (or was) indeed worried about performance and on-disk index size if the number of fields is too high or the mapping somehow isn't loading right.

I will check the logs next time as suggested.

As mentioned, I’m back on Logstash, as we also need to be able to use the log view in the UI, but I will keep testing Filebeat in our test environment.
