Processing fields with Filebeat via Kubernetes annotations

Hi, we've been looking into bypassing Logstash and sending logs directly from Filebeat to Elasticsearch. Since we have a Kubernetes deployment, as seen here we can add annotations to handle things like multiline parsing, excluded lines, and modules. My question is: how would I go about applying a grok pattern to a log line and extracting fields like timestamp, log level, etc.? Is "co.elastic.logs/processors" the annotation for this? If so, how would I define one processor?
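From the hints-based autodiscover docs, processors appear to be definable through annotations of the form co.elastic.logs/processors.&lt;processor&gt;.&lt;parameter&gt;, with an optional numeric index to fix ordering. A minimal sketch of what I mean, assuming hints.enabled: true is set in the Filebeat autodiscover config:

    metadata:
      annotations:
        # unordered form: prefix + processor name + parameter
        co.elastic.logs/processors.dissect.tokenizer: "%{key1} %{key2}"
        # ordered form: a numeric index between the prefix and the processor name
        co.elastic.logs/processors.1.dissect.tokenizer: "%{key1} %{key2}"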

Right now in Logstash we have (among other things) the following grok filter:

grok {
  match => [ "message",
    "(?<timestamp>%{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{TIME})\s+%{LOGLEVEL:level} %{NUMBER:pid} --- .+? :\s+(?<logmessage>.*)"
  ]
}

What is the equivalent as a Kubernetes annotation?
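For what it's worth, my understanding so far (please correct me if this is wrong): Filebeat itself has no grok processor, so the closest in-Beat option would be the dissect processor, or alternatively keeping the grok in an Elasticsearch ingest pipeline and selecting it via the pipeline hint. An untested sketch; the dissect field names mirror the grok capture names above, and the pipeline name is hypothetical:

    metadata:
      annotations:
        # Option A: rough dissect approximation of the grok filter.
        # Dissect is not a regex engine, so YEAR/LOGLEVEL/NUMBER are not
        # validated, the timestamp ends up split into date + time fields,
        # and "->" absorbs the variable whitespace that \s+ matched.
        co.elastic.logs/processors.dissect.tokenizer: "%{date} %{time->} %{level} %{pid} --- %{?logger} : %{logmessage}"
        # Option B: keep grok server-side by pointing Filebeat at an
        # Elasticsearch ingest pipeline (hypothetical name) containing
        # the grok pattern above.
        co.elastic.logs/pipeline: "my-app-grok-pipeline"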

Hi @vicmarbev, thanks for your question. We've moved your post to the Beats/Filebeat category of Discuss, as it's mostly about Filebeat configuration in Kubernetes and not specific to ECK (https://github.com/elastic/cloud-on-k8s).

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.