I am trying to configure Filebeat as a DaemonSet on our Kubernetes platform.
It's shipping system logs as expected, including the event info.
I am trying to get it to do the same for nginx, apache2 and eventually other modules.
However, no information/logs are coming in (any more?) for the apache/nginx containers.
The Filebeat Docker image is built from the following Dockerfile:
ARG FILEBEAT_VERSION
FROM docker.elastic.co/beats/filebeat:${FILEBEAT_VERSION}

# Elevated permissions are needed to add the config with the right ownership/permissions
USER 0
ADD --chown=root:filebeat config/filebeat.yml /usr/share/filebeat/
RUN chmod go-w /usr/share/filebeat/filebeat.yml

# Drop back to the unprivileged filebeat user
USER 1000

# Enable modules (renames the corresponding *.yml.disabled files under modules.d/)
RUN filebeat modules enable mysql nginx apache redis system elasticsearch haproxy iptables kibana logstash rabbitmq traefik
Any help would be appreciated, I'm a bit lost at the moment.
Also, do you see anything interesting in Filebeat's logs regarding autodiscover? Anything regarding the Apache module's configuration, for instance?
If this does not help, you should dive deeper and debug Filebeat by manually running the binary inside the DaemonSet pod at debug level, and see how autodiscover detects the containers/pods and with which configuration it starts the modules.
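If running the binary by hand is awkward, you can also raise the log level in filebeat.yml; a minimal sketch (the commented selector names are my guess at the relevant subsystems, "*" simply logs everything):

logging.level: debug
# Log everything ...
logging.selectors: ["*"]
# ... or narrow it down, e.g.:
# logging.selectors: ["autodiscover", "kubernetes"]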
Thank you for coming back to me. I've added the paths; if that doesn't help I'll reply again.
In the meantime, though: how do you enable the Filebeat logs, and where can I see them? Is it simply the kubectl logs output for the Filebeat pods? I've not been able to detect anything there that hints at errors; I can see it's looking for the apache information, but it does not report errors or anything that isn't found (as far as I can see).
I've looked at the results / what is pushed to ES right now, but it's only sending syslogs.
It's as if it's not recognising the module config at all.
It's also not really doing what we'd like it to do. If a log record matches one of the modules, it should be parsed as configured in that module.
If it doesn't match any known module/syntax, we want the log record to be sent as-is; we don't want any log records to be dropped or excluded unless we explicitly configure them to be skipped...
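From the docs, hints-based autodiscover with a default_config looks like it should give exactly that behaviour; something along these lines is what I understand (not verified on my side yet; the path is the usual container log location):

filebeat.autodiscover:
  providers:
    - type: kubernetes
      hints.enabled: true
      # Pods without co.elastic.logs/* annotations fall back to this and are shipped as-is
      hints.default_config:
        type: container
        paths:
          - /var/log/containers/*${data.kubernetes.container.id}.log

A Pod annotated with co.elastic.logs/module: nginx would then be parsed by the nginx module, while everything else would still be shipped as plain container logs.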
Additionally, I've tried other configurations taken from the docs.
There are multiple documented ways to enable modules; I've tried two of them and neither seems to work:
filebeat.modules:
  - module: nginx
  - module: mysql
  - module: system
This will not work, since the modules need to know the logs' paths, e.g. /var/log/containers/*${data.kubernetes.container.id}.log as in the autodiscover examples. The Redis module, for instance, will by default use /var/log/redis/redis-server.log*.
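In practice that means explicitly handing the module the container log paths inside an autodiscover template's config section, for example for the nginx filesets (a sketch, assuming the usual nginx image behaviour of access logs on stdout and error logs on stderr):

- module: nginx
  access:
    input:
      type: container
      stream: stdout
      paths:
        - /var/log/containers/*${data.kubernetes.container.id}.log
  error:
    input:
      type: container
      stream: stderr
      paths:
        - /var/log/containers/*${data.kubernetes.container.id}.log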
Since the setup you are trying to deploy is not simple enough to work out of the box, I would suggest starting with a simple setup: only one autodiscover condition and one service to be monitored, like the one I provided to you in my previous post.
Along with this, you can kubectl exec -it <filebeat-pod> -- /bin/bash and start a second instance of Filebeat with ./filebeat -e -d "*" so as to follow the logs, and check whether you see anything related to autodiscover and whether the modules are detected and able to start.
However, I am receiving syslogs but no logs from nginx. Debug mode shows some log records that add Kubernetes metadata, but nothing related to nginx logs (while I am reloading the frontend running on nginx).
The output of filebeat modules list also shows no modules enabled, but I am not sure what the proper way to activate them is. Specifying the --modules argument does not seem to activate the modules; building a separate image and running filebeat modules enable does.
I should note, though, that I am not sure whether the var.paths config I am currently using is correct, and I am having trouble finding out what the proper solution is (opinions seem to differ online).
That said, I do not really understand why I am receiving syslogs but no logs from nginx, even though nginx is the only module I've configured...
Any guidance in getting a first module setup would be greatly appreciated.
P.S.: as extra information, this is an excerpt from the kubectl get pod -n <NS> <podname> -o yaml output:
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: nginx
Autodiscover is responsible for identifying any matching incoming Pods and spawning the respective module to monitor them. So in this case the nginx Pod should be caught by the autodiscover watcher, and Filebeat would start an nginx module under the hood. All of this can be tracked in Filebeat's logs.
In your configuration I see hints and other settings mixed together, and I'm not sure how they all interact. I would only trust the minimal configuration example from the docs as the starting point for debugging.
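For your nginx Pod with the app: nginx label, that minimal setup would look roughly like this (not verbatim from the docs; adjust the condition and paths to your environment):

filebeat.autodiscover:
  providers:
    - type: kubernetes
      templates:
        - condition:
            equals:
              kubernetes.labels.app: "nginx"
          config:
            - module: nginx
              access:
                input:
                  type: container
                  paths:
                    - /var/log/containers/*${data.kubernetes.container.id}.log
              # the error fileset can be pointed at the same container log path in the same way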
Having said that, I would suggest trying that, and if nothing regarding autodiscover shows up in the logs (running Filebeat in debug mode with `./filebeat -e -d "*"`), then it seems something goes wrong with the conditions and the config cannot catch the Pod creation event. Please try to debug this in this structured way (feel free to share the logs with us), and if there is no success I will try it on my end for you.