Implications of using ECK Beats to write to logs-* and metrics-*

Hey guys!

We've been building up our Kubernetes platform using ECK and its custom Beat resources. We recently ran into an issue where we needed to apply different ILM policies to different types of data, which led us to data streams. We figured out a way to ship Filebeat and Metricbeat data into the default logs-* and metrics-* data streams with custom namespaces, which lets us apply the different lifecycle and security policies we needed.
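
For context, the per-namespace policy override on the Elasticsearch side looks roughly like this. This is just a sketch: the "payments" namespace, the template and policy names, and the priority are our own examples, and the stack-managed logs-mappings / logs-settings component templates we compose may vary by stack version:

# "payments", the template/policy names, and priority 200 are illustrative
PUT _index_template/logs-k8s-payments
{
  "index_patterns": ["logs-*-k8s.payments"],
  "priority": 200,
  "data_stream": {},
  "composed_of": ["logs-mappings", "logs-settings"],
  "template": {
    "settings": {
      "index.lifecycle.name": "logs-k8s-payments"
    }
  }
}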

But after seeing some info from @ruflin in Logstash Integration with Elasticsearch Data Streams · Issue #12178 · elastic/logstash · GitHub, we wanted to ask about the tradeoffs of this approach. As far as we understand from the documentation and blog posts, this should be fine as long as the data has the right format, which we've set up as follows:

filebeat config:

setup.template.enabled: false
setup.ilm.enabled: false
output.elasticsearch:
  index: "logs-%{[data_stream.dataset]}-%{[data_stream.namespace]}"
processors:
- add_fields:
    target: data_stream
    fields:
      type: logs
      dataset: generic
      namespace: default
- script:
    lang: javascript
    id: dataset_override
    source: |
      // Route events into per-Kubernetes-namespace data streams and
      // reuse the module's event.dataset as the data stream dataset.
      function process(event) {
        // Use the pod's namespace as the data stream namespace, e.g. "k8s.payments".
        var ns = event.Get("kubernetes.namespace");
        if (ns != null) event.Put("data_stream.namespace", "k8s." + ns);

        // Modules set event.dataset (e.g. "system.syslog"); mirror it into
        // data_stream.dataset. Otherwise make sure event.dataset exists too,
        // matching the "generic" default from add_fields above.
        var ds = event.Get("event.dataset");
        if (ds != null) event.Put("data_stream.dataset", ds);
        else event.Put("event.dataset", "generic");
      }
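
With that in place, a container log from a pod in a payments namespace whose module sets event.dataset to nginx.access would land in the logs-nginx.access-k8s.payments data stream (a made-up example). A quick sanity check of what actually got created:

# lists all log data streams following our k8s.<namespace> convention
GET _data_stream/logs-*-k8s.*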

metricbeat config:

setup.template.enabled: false
setup.ilm.enabled: false
output.elasticsearch:
  index: "metrics-%{[data_stream.dataset]}-%{[data_stream.namespace]}"
processors:
- add_fields:
    target: data_stream
    fields:
      type: metrics
      dataset: generic
      namespace: default
- script:
    lang: javascript
    id: dataset_override
    source: |
      // Route events into per-Kubernetes-namespace data streams and
      // derive the dataset from the Metricbeat module and metricset.
      function process(event) {
        // Use the pod's namespace as the data stream namespace, e.g. "k8s.payments".
        var ns = event.Get("kubernetes.namespace");
        if (ns != null) event.Put("data_stream.namespace", "k8s." + ns);

        // e.g. module "kubernetes" + metricset "pod" -> dataset "kubernetes.pod"
        var mod = event.Get("event.module");
        var ms = event.Get("metricset.name");
        if (mod != null && ms != null) event.Put("data_stream.dataset", mod + "." + ms);
      }
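
And to confirm the per-namespace policy is really being applied to the backing indices, we check with something like:

# resolves the matching data streams to their backing indices and shows each one's active ILM policy and phase
GET metrics-*-k8s.*/_ilm/explain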

With this setup we can use the Kubernetes and System integration assets, like the dashboards, without any noticeable issues.

Great to see you got this working. I assume you installed the System and Kubernetes integrations? The main thing to keep in mind here is that we don't test with this setup, but in most cases it should just work. I say "most cases" because for some integrations (or modules, in your context) we might make a breaking change when they run under Elastic Agent, in order to fix some long-standing issues.

Thanks @ruflin. Yes, I installed the integration assets and, like I said, so far so good. I suppose if we run into issues with the dashboards we can just edit the Beats dashboards to reference the new indices and use those.

Once Agent and Fleet are GA, we'll gladly make the switch properly.

On the dashboard side, I think it is more likely that one or two fields change. But as you said, you should be able to adjust these. One important thing here: upgrading a package currently overwrites all of its assets, including dashboards, so it would wipe out your edits. Make sure to work on a copy :wink:

Hope we can move you to Elastic Agent and Fleet soon so you don't have to deal with the above anymore.

