Getting Tomcat logs from Kubernetes pods

What is the recommended way to get Tomcat logs from Kubernetes pods? I have the Elasticsearch, Kibana, and Filebeat 7.5.0 Helm charts installed and am getting stdout and stderr from the pods, but I also need the logs from /usr/local/tomcat/logs/ of each pod.

Hi @KenRider and welcome to discuss :slight_smile:

It is usually not recommended to write logs to the container's filesystem, so if possible, try to configure your application to log everything to stdout or stderr.

If it is unavoidable, one option is to use a streaming sidecar container: a container in the same pod that redirects the content of certain files to its own stdout, where Filebeat can then capture it.
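For reference, a minimal sketch of what such a streaming sidecar could look like (the pod name, image, and log file here are illustrative assumptions, loosely following the pattern in the Kubernetes logging documentation):

apiVersion: v1
kind: Pod
metadata:
  name: tomcat-with-log-streamer
spec:
  containers:
  # Main application container writing its logs under /usr/local/tomcat/logs
  - name: tomcat
    image: tomcat:9
    volumeMounts:
    - name: tomcat-logs
      mountPath: /usr/local/tomcat/logs
  # Streaming sidecar: tails a log file from the shared volume to its own stdout,
  # where the cluster-level Filebeat DaemonSet picks it up like any container log
  - name: catalina-log-streamer
    image: busybox
    args: [/bin/sh, -c, 'tail -n+1 -F /usr/local/tomcat/logs/catalina.out']
    volumeMounts:
    - name: tomcat-logs
      mountPath: /usr/local/tomcat/logs
      readOnly: true
  volumes:
  - name: tomcat-logs
    emptyDir: {}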

You could also deploy Filebeat as a logging agent in the pod itself, so it reads the files directly from the container and sends them to the output. This can complicate the configuration, though, as this Filebeat would need the credentials of the output it ships to.
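A rough sketch of that approach, assuming a ConfigMap holding the filebeat.yml (the pod, volume, and ConfigMap names are illustrative, not from the thread):

apiVersion: v1
kind: Pod
metadata:
  name: tomcat-with-filebeat
spec:
  containers:
  - name: tomcat
    image: tomcat:9
    volumeMounts:
    - name: tomcat-logs
      mountPath: /usr/local/tomcat/logs
  # Filebeat sidecar reading the Tomcat log files from the shared volume
  # and shipping them directly to the configured output
  - name: filebeat-sidecar
    image: docker.elastic.co/beats/filebeat:7.5.0
    args: ["-c", "/etc/filebeat/filebeat.yml", "-e"]
    volumeMounts:
    - name: tomcat-logs
      mountPath: /usr/local/tomcat/logs
      readOnly: true
    - name: filebeat-config
      mountPath: /etc/filebeat
  volumes:
  - name: tomcat-logs
    emptyDir: {}
  - name: filebeat-config
    configMap:
      name: tomcat-filebeat-config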

I would normally change it to write to stdout/stderr but that's not an option in this case (yet). I've also tried a streaming sidecar but it messes up the Tomcat stack traces in its error logs and doesn't handle log rotation well.

I implemented Filebeat with the Apache module configured in a sidecar. I see harvesters starting for each of the logs and the harvesters noting when the files become inactive. I've also verified the sidecar connects to Elasticsearch. However, I'm not seeing anything in Kibana.

I turned on debug logging in the Filebeat sidecar. I see a line that says 47 events have been published to Elasticsearch, and there happen to be exactly 47 lines in the Tomcat logs.

So, now I'm wondering why I don't see them in Kibana. Is there something I need to do in Elasticsearch and/or Kibana? I do see all the normal container logs in Kibana.

Here's my filebeat.yml, if that helps.

filebeat:
  config:
    modules:
      path: /usr/share/filebeat/modules.d/*.yml
      reload:
        enabled: true
  modules:
  - module: apache
    access:
      enabled: true
      var.paths:
      - "/usr/local/tomcat/logs/localhost_access_log.*.txt"
    error:
      enabled: true
      var.paths:
      - "/usr/local/tomcat/logs/application.log*"
      - "/usr/local/tomcat/logs/catalina.*.log"
      - "/usr/local/tomcat/logs/host-manager.*.log"
      - "/usr/local/tomcat/logs/localhost.*.log"
      - "/usr/local/tomcat/logs/manager.*.log"
output:
  elasticsearch:
    host: '${NODE_NAME}'
    hosts: '${ELASTICSEARCH_HOSTS:elasticsearch-master.elastic-system:9200}'
logging:
  level: debug
  to_stderr: true

I have a couple of questions after looking at the Filebeat Apache module documentation.

First, when I exec into my Filebeat sidecar and run filebeat modules list, it doesn't show apache as enabled, even though the module seems to be in use based on my debug output. Do I need to enable it and, if so, how?

Second, the documentation says to run filebeat setup -e to set up the indices and dashboards. Do I need to do this as well?

What are you looking at in Kibana? Is it possible that you are using different index names for the container logs?

Could you try looking for the events in the developer console? A query like the following one should show events coming from the Apache module:

GET filebeat-*/_search?q=event.module:apache

Something that sometimes happens is a timezone problem; in that case the events are stored, but with the wrong timestamp, so they don't show up in the expected time range.

filebeat modules list can be a bit misleading in this scenario. The filebeat modules subcommands are intended to manage module configuration files under the path configured in filebeat.config.modules.path. In your case, however, you are defining the module configuration directly in the main configuration file. As this is a sidecar and you are only going to configure one module, I would suggest removing the filebeat.config.modules section and not using the filebeat modules subcommands at all.
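That is, something along these lines (a sketch based on the filebeat.yml posted above, with only the config.modules block removed):

filebeat:
  modules:
  - module: apache
    access:
      enabled: true
      var.paths:
      - "/usr/local/tomcat/logs/localhost_access_log.*.txt"
    error:
      enabled: true
      var.paths:
      - "/usr/local/tomcat/logs/application.log*"
      - "/usr/local/tomcat/logs/catalina.*.log"
      - "/usr/local/tomcat/logs/host-manager.*.log"
      - "/usr/local/tomcat/logs/localhost.*.log"
      - "/usr/local/tomcat/logs/manager.*.log"
# output and logging sections stay the same as in the config above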

If you see the harvesters being started in the logs, and a consistent number of events published, then you have the apache module working.

Running filebeat setup -e is highly recommended, but it is enough if you have already run it for the Filebeats deployed with the Kubernetes DaemonSet, as long as they are on the same version.
If you don't run it, Filebeat events are still stored, but with incorrect mappings and without dashboards.

I was just about to update this thread :slight_smile: I think I was simply missing the log entries in the Kibana Discover blade, and there weren't any dashboards defined on the Dashboard blade.

I updated my daemonset Filebeat config to include the following.

    setup:
      kibana:
        host: '${KIBANA_HOST:kibana-kibana:5601}'
        protocol: "http"
      dashboards:
        enabled: true

I now have dashboards, including the Apache one, and I can find the log entries in Discover.


I took what I learned and put it all together into a new blog post, Getting Tomcat logs from Kubernetes pods.
