Filebeat not working in 7.x, and front-end app logging


I’ve been using the ELK stack to capture the logs from our Kubernetes cluster. I had Filebeat set up on the cluster and everything was working fine until I updated from 6.7 to 7.3 (I also updated the Docker image that the Filebeat DaemonSet was pulling from).
When I initially made the update, all the logs ended up being pushed into one index, whereas before they were split by day and namespace. This is the configuration I have for the output to Elasticsearch and the index template:

      output.elasticsearch:
        hosts: ['<url>']
        index: "filebeat-%{[kubernetes.namespace]:default}-%{[agent.version]}-%{+yyyy.MM.dd}"

      setup.template:
        name: "filebeat-%{[kubernetes.namespace]:default}"
        pattern: "filebeat-%{[kubernetes.namespace]:default}-*"
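For reference, my understanding is that this format string should expand to one index per namespace per day, along these lines (the namespace, version, and date values below are just examples, not real output):

```shell
# How I expect the index format string to expand (example values only)
namespace="kube-system"    # %{[kubernetes.namespace]}, "default" when unset
version="7.3.0"            # %{[agent.version]}
day="2019.08.14"           # %{+yyyy.MM.dd}

index="filebeat-${namespace:-default}-${version}-${day}"
echo "$index"
```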

However, since then the logs have stopped showing up altogether, including in the newly created log index. I've searched and haven't found any other indices that match the index pattern filebeat-*. I can’t see any errors in the DaemonSet logs on Kubernetes, so I’m wondering where the output is going, as it is clearly not following the configuration above.

On a separate note, what would be the easiest way to send logs from a React webapp into Elastic? Currently we are sending them to Sentry using POST requests. If we were to use Filebeat, I assume the only options would be to store the logs locally, set up Filebeat locally and have it read from them, or to send them over a UDP/TCP socket and have Filebeat read from there? I've seen other implementations that use Logstash as an intermediary but would rather not have to set it up for now.


Hi @ngg971,

from your description I would say there's a high probability that something about the index names and index templates doesn't match up. Could you give us an idea of which index templates are present in your cluster? (For example, `GET _cat/templates/filebeat*?v` in the Kibana Dev Tools console will list them.) Not including the agent version in the template name, for example, runs a high risk of causing version conflicts that break the update process.
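One more thing that may be worth ruling out (this is an assumption on my part, since it isn't visible from your post): in Filebeat 7.x, index lifecycle management is enabled by default when the output cluster supports it, and while ILM is active the custom `index`, `setup.template.name`, and `setup.template.pattern` settings are ignored, so events land in an ILM-managed index such as `filebeat-7.3.0-*` instead. If that turns out to be what's happening, disabling ILM should make your custom naming take effect again:

```yaml
# filebeat.yml — sketch; only relevant if ILM turns out to be
# overriding the custom index name (assumption, not confirmed)
setup.ilm.enabled: false
```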

Regarding your question about logging from the browser, two rather simple solutions come to mind.

One of them you touched on, which is storing the logs in a local file and ingesting them via Filebeat. That has the advantage of providing an inherent buffer that can compensate for expected or unexpected indexing disruptions and enable replay in case of data loss.
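To make that first option concrete, here is a minimal sketch of the Filebeat side (the log path, and the assumption that your server writes one JSON object per line, are both hypothetical):

```yaml
# filebeat.yml — minimal sketch for tailing a local webapp log file
filebeat.inputs:
  - type: log
    paths:
      - /var/log/webapp/*.log      # hypothetical path your server writes to
    json.keys_under_root: true     # lift the parsed JSON fields to the event root
    json.add_error_key: true       # mark events whose lines fail to parse
```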

The other one would be to have the server-side part of the webapp index the events it receives from the browser directly into Elasticsearch, without writing them to a file. That would cut out any intermediary beyond your own web application server, but put the burden of batching/buffering on your application.
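As a rough sketch of that second option, the server-side handler could batch the events it receives and forward them in a single bulk request. The index name, the sample messages, and the cluster URL below are all assumptions, not anything from your setup:

```shell
#!/usr/bin/env bash
# Build an NDJSON body for the Elasticsearch _bulk API from a batch of
# browser log events (the two messages stand in for real events).
ES_URL="${ES_URL:-http://localhost:9200}"   # hypothetical cluster address
INDEX="webapp-logs"                         # hypothetical index name

bulk_body=""
for msg in "user clicked checkout" "payment form validated"; do
  bulk_body+="{\"index\":{\"_index\":\"${INDEX}\"}}"$'\n'
  bulk_body+="{\"message\":\"${msg}\",\"@timestamp\":\"$(date -u +%Y-%m-%dT%H:%M:%SZ)\"}"$'\n'
done

printf '%s' "$bulk_body"

# With a reachable cluster, the whole batch ships in one request:
# printf '%s' "$bulk_body" | curl -s -XPOST "$ES_URL/_bulk" \
#     -H 'Content-Type: application/x-ndjson' --data-binary @-
```

The point of the bulk API here is that one HTTP request carries the whole batch, which is roughly the buffering work Filebeat would otherwise do for you.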

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.