Differentiate between Nginx access and error logs without a separate field from Filebeat

Hi all,

I'm running Filebeat in a container on DC/OS (a container orchestration platform similar to Kubernetes).

These containers can read the logs of their parent VM, which means they can read the logs of every other container running on that VM, at paths like:

/var/lib/mesos/slave/slaves/<uuid>/frameworks/<uuid>/executors/**<container-name>**.<uuid>/runs/latest/stdout

With one Filebeat container running on each parent VM, I receive every container's stdout and stderr logs in Logstash.

Most of our logs are from Node microservices, but we have a couple of Nginx containers too.

I can see my Nginx containers' output in Logstash but I'm having difficulty filtering/tagging the different logs correctly.

I want to be able to do...

if [log-type] == "nginx_access" {
  mutate { add_field => { "type" => "access" } }
} else if [log-type] == "nginx_error" {
  mutate { add_field => { "type" => "error" } }
}

... but because the logs arrive via stdout rather than from a specific file path (e.g. access.log), I can't configure Filebeat to attach a "log-type" field based on the file being harvested. I could key off the microservice name instead, but then the filter would need changing every time a new service is added - not ideal for an automated environment.
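One approach that avoids relying on file paths or service names is to let Logstash recognise the access-log lines by their shape. Assuming your Nginx containers use the default "combined" access log format, a grok filter with the stock COMBINEDAPACHELOG pattern (which matches that format) could tag matching lines - a sketch, not tested against your pipeline:

```
filter {
  # Assumes the default Nginx "combined" access log format.
  # COMBINEDAPACHELOG is a stock grok pattern that matches it.
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
    add_field => { "type" => "access" }
    # Don't tag non-matching lines (Node JSON, error logs, etc.)
    # with _grokparsefailure.
    tag_on_failure => []
  }
}
```

Since add_field only fires on a successful match, non-Nginx lines pass through untouched, and you also get the parsed access-log fields (clientip, verb, response, ...) for free.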

I guess I'm looking for something like a regex filter to do the job for me:

if [message] =~ "^\{.*\}[\s\S]*$" > do stuff

This is a line I use further up my filter to detect log entries that are JSON output by our Node services.

But I wouldn't really know where to start with this, and I'm hoping there's a less rubbish way of doing this than regex.
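For what it's worth, Nginx error-log lines have a very recognisable prefix (a `YYYY/MM/DD HH:MM:SS` timestamp followed by a bracketed severity), so the same conditional-regex trick could cover the error side - a sketch assuming the stock Nginx error log format:

```
filter {
  # Nginx error lines look like:
  #   2017/06/01 12:00:00 [error] 1234#1234: *1 connect() failed ...
  if [message] =~ /^\d{4}\/\d{2}\/\d{2} \d{2}:\d{2}:\d{2} \[\w+\]/ {
    mutate { add_field => { "type" => "error" } }
  }
}
```

Because no other service in the mix emits lines in that format, this stays service-name-agnostic, same as the JSON check above.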

Thanks in advance
