We have Filebeat collecting logs in Kubernetes, and we view these logs through a data view created over filebeat-*.
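For context, this is roughly the kind of setup we are running (a minimal sketch only; the paths, host names, and processor options here are illustrative, not our exact config):

```yaml
# filebeat.yml (illustrative sketch, not our exact config)
filebeat.inputs:
  - type: container
    paths:
      - /var/log/containers/*.log
    processors:
      # enrich each event with kubernetes.* metadata (namespace, labels, etc.)
      - add_kubernetes_metadata:
          host: ${NODE_NAME}
          matchers:
            - logs_path:
                logs_path: "/var/log/containers/"

output.elasticsearch:
  hosts: ["https://elasticsearch:9200"]
```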
Searching our logs through the Filebeat indices has been working fine until now, but we are starting to hit the maximum total fields limit (mapping.total_fields.limit).
We want some way to automatically fan out the incoming Filebeat logs into application-specific indices, e.g.
acme-<kubernetes.label.app>-<date>
acme-app1-2022.05.23-000001
acme-app2-2022.05.23-000001
- Then we can access these logs via a new data view:
acme-*
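To make the question concrete, this is the kind of thing we are imagining in the Elasticsearch output, sketched under two assumptions: that the app name is available on each event as kubernetes.labels.app, and that overriding the index like this is still allowed alongside the data stream defaults in 8.2 (which is part of what we are asking):

```yaml
# sketch: route events to per-application indices from the Filebeat output
# (assumes kubernetes.labels.app is present; unsure how this interacts with data streams/ILM in 8.2)
output.elasticsearch:
  hosts: ["https://elasticsearch:9200"]
  index: "acme-%{[kubernetes.labels.app]}-%{+yyyy.MM.dd}"

# overriding the index also seems to require overriding the template settings
setup.template.name: "acme"
setup.template.pattern: "acme-*"
setup.ilm.enabled: false
```

We don't know if this is the right approach, or whether the routing should instead happen on the Elasticsearch side (e.g. via an ingest pipeline), which is why we are asking.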
- Currently, in the Filebeat config, we use a processor to decode the JSON in the message field into an acme field. We want to stop decoding the JSON in Filebeat and instead decode it on the way into the application-specific indices, so that the total fields limit is not exceeded in the Filebeat index.
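For reference, the processor we currently use is along these lines (a sketch; the options shown are illustrative):

```yaml
# current approach (sketch): decode the JSON message into an "acme" object in Filebeat
processors:
  - decode_json_fields:
      fields: ["message"]
      target: "acme"
      process_array: false
      max_depth: 1
```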
We are using the latest Filebeat, 8.2, which uses data streams.
Is this possible? If so, how might we do it, and how much of it can be automated?