Filebeat unable to monitor container custom log path

Hello,

I want to monitor container logs using the Filebeat Kubernetes deployment. The logs are in JSON format. Filebeat is monitoring the logs from the containers, but not the JSON file saved inside the container.

So far I have enabled the Filebeat deployment following this link:
Run Filebeat on Kubernetes | Filebeat Reference [8.7] | Elastic

But it is not monitoring the application log path configured via ECS logging by the Spring Boot container.

Following are the log paths from the container:

root@service-consumer-5b4c5f65bd-9qhf9:/# ls /logs/
ECS-consumer.log
ECS-consumer.log.json

I need to understand how I can monitor these JSON and log files using the Filebeat Kubernetes deployment.

Regards
Pratiksha

Hello,

Any help will be appreciated.

Regards
Pratiksha

You need to share the filebeat.yml file that you are using; without it, it is impossible to know what Filebeat is reading.

Hi @leandrojmp

Sure, please find below the filebeat.yml configuration that we are trying to use for container log monitoring.

#######################

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.inputs:
    - type: container
      paths:
        - /var/log/containers/*.log
        - /var/log/containers/*.json
        - /logs/*.json
       - /logs/*.log
      processors:
        - add_kubernetes_metadata:
            host: ${NODE_NAME}
            matchers:
            - logs_path:
                logs_path: "/var/log/containers/"

    # To enable hints based autodiscover, remove `filebeat.inputs` configuration and uncomment this:
    filebeat.autodiscover:
      providers:
        - type: kubernetes
          node: ${NODE_NAME}
          hints.enabled: true
          hints.default_config:
            type: container
            paths:
              - /var/log/containers/
              - /logs/


    processors:
      - add_cloud_metadata:
      - add_host_metadata:

    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}

    output.elasticsearch:
      hosts: ['http://XX.XX.XX.XX:9201']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}


---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:8.4.3
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: XX.XX.XX.XX
        - name: ELASTICSEARCH_PORT
          value: "9201"
        - name: ELASTICSEARCH_USERNAME
          value: elastic
        - name: ELASTICSEARCH_PASSWORD
          value: elastic
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          #privileged: true
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: varlog
          mountPath: /var/log
          readOnly: true
        - name: root
          mountPath: /
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0640
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: varlog
        hostPath:
          path: /var/log
      - name: root
        hostPath:
          path: /
      # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
      - name: data
        hostPath:
          # When filebeat runs as non-root user, this directory needs to be writable by group (g+w).
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: filebeat
  namespace: kube-system
subjects:
  - kind: ServiceAccount
    name: filebeat
    namespace: kube-system
roleRef:
  kind: Role
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: filebeat-kubeadm-config
  namespace: kube-system
subjects:
  - kind: ServiceAccount
    name: filebeat
    namespace: kube-system
roleRef:
  kind: Role
  name: filebeat-kubeadm-config
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  - nodes
  verbs:
  - get
  - watch
  - list
- apiGroups: ["apps"]
  resources:
    - replicasets
  verbs: ["get", "list", "watch"]
- apiGroups: ["batch"]
  resources:
    - jobs
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: filebeat
  # should be the namespace where filebeat is running
  namespace: kube-system
  labels:
    k8s-app: filebeat
rules:
  - apiGroups:
      - coordination.k8s.io
    resources:
      - leases
    verbs: ["get", "create", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: filebeat-kubeadm-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
rules:
  - apiGroups: [""]
    resources:
      - configmaps
    resourceNames:
      - kubeadm-config
    verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
---
######################

/logs/ is the path inside the container where the application saves its logs, generated using ECS logging.

And this is how the logs are saved inside the container that I want to monitor.

The logging configuration for the Spring Boot application was done as below.

Is this a typo, or does it look like this in your yml file? These two items need to be in the same column.

Hi @leandrojmp, yes it is a typo; they are both in the same column.

I'm not sure this is correct; the container input should be used to collect container logs, not logs from applications running inside containers.

I do not use k8s or docker, but if I'm not wrong the logs would be on the host, and you would use Filebeat to collect the logs from the host with this input.

What you should use is a filestream input to collect your application logs, something like this:

    filebeat.inputs:
    - type: filestream
      paths:
        - /logs/*.json
        - /logs/*.log

Hi @leandrojmp, is it possible to configure the filestream input from the Filebeat Kubernetes deployment?

I have no idea; as I said before, I do not use k8s or docker. But if this config that you shared is mounted as the filebeat.yml for the Filebeat running inside the pod/container, then it will probably work.

You need to test it.
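
One thing to keep in mind: the filestream paths are resolved inside the Filebeat container, so /logs has to be visible to the Filebeat pod somehow. If I'm not wrong, something like this on the DaemonSet side would be needed. This is a rough, untested sketch; the host path /var/log/app-logs is an assumption, not something from your setup, and your application pod would have to write its logs to that same host directory:

    # Sketch: additional volumeMounts/volumes entries for the Filebeat DaemonSet
        volumeMounts:
        - name: applogs
          mountPath: /logs          # so the filestream paths /logs/*.json and /logs/*.log resolve
          readOnly: true
      volumes:
      - name: applogs
        hostPath:
          path: /var/log/app-logs   # hypothetical host directory shared with the application pod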

Yes, actually I tried manually installing Filebeat in the container, and then it works as expected. But then I have the issue of packaging Filebeat along with my application, so that the container runs both Filebeat and the application whenever it starts. I am not finding the right configuration to merge Filebeat with my application during Docker image creation.

Pretty sure that if you just change the logger to console, then the container logs approach will work.
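
On the Filebeat side, the ECS JSON written to stdout can then be decoded as it is collected. Just a sketch, untested, assuming Filebeat 8.x, where the filestream input with the container and ndjson parsers is the newer equivalent of the container input:

    filebeat.inputs:
    - type: filestream
      id: container-logs
      paths:
        - /var/log/containers/*.log
      parsers:
        - container: ~
        - ndjson:
            target: ""
            add_error_key: true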

Hi @stephenb, I have a Spring Boot application with JSON logs available through ECS logging; will console logging work for it?

Try it :slight_smile:

Here is my logback.xml for a sample app....

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <include resource="org/springframework/boot/logging/logback/defaults.xml"/>
    <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="co.elastic.logging.logback.EcsEncoder">
            <serviceName>cardatabase</serviceName>
        </encoder>
    </appender>
    <!--
    <include resource="org/springframework/boot/logging/logback/file-appender.xml"/>
    <appender name="json-file" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <encoder class="co.elastic.logging.logback.EcsEncoder">
            <serviceName>cardatabase</serviceName>
        </encoder>
        <file>${LOG_FILE}.json</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
            <fileNamePattern>${LOG_FILE}.json.%d{yyyy-MM-dd}.%i.gz</fileNamePattern>
            <maxFileSize>${LOG_FILE_MAX_SIZE:-10MB}</maxFileSize>
            <maxHistory>${LOG_FILE_MAX_HISTORY:-0}</maxHistory>
        </rollingPolicy>
    </appender>
    -->

    <root level="INFO">
        <appender-ref ref="console"/>
        <!-- uncomment this if you want to log in json  -->
        <!-- <appender-ref ref="json-file"/> -->
        <!-- uncomment this if you also want to log in plain text -->
        <!-- <appender-ref ref="FILE"/> --> 
       
    </root>
</configuration>

Did you look at this?

There is a sample log4j2 config as well:

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="DEBUG">
    <Appenders>
        <Console name="LogToConsole" target="SYSTEM_OUT">
            <EcsLayout serviceName="my-app" serviceVersion="my-app-version" serviceEnvironment="my-app-environment" serviceNodeName="my-app-cluster-node"/>
        </Console>
        <File name="LogToFile" fileName="logs/app.log">
            <EcsLayout serviceName="my-app" serviceVersion="my-app-version" serviceEnvironment="my-app-environment" serviceNodeName="my-app-cluster-node"/>
        </File>
    </Appenders>
    <Loggers>
        <Root level="info">
            <AppenderRef ref="LogToFile"/>
            <AppenderRef ref="LogToConsole"/>
        </Root>
    </Loggers>
</Configuration>

Notice the console appender; you could take out the file part or leave it.

Hello @stephenb, yes we tried the above configuration from the ECS logging Java reference link. We got two log files, one .log and another .json, but only the .log file content is visible and printed on the console, not the .json data, which actually contains the trace id and transaction id of the logs.

Please find below the logback-spring.xml configuration from our setup:

<?xml version="1.0" encoding="UTF-8"?> 
<configuration> 
    <include resource="org/springframework/boot/logging/logback/defaults.xml"/> 
    <include resource="org/springframework/boot/logging/logback/console-appender.xml" /> 
    <include resource="org/springframework/boot/logging/logback/file-appender.xml" /> 
    <include resource="co/elastic/logging/logback/boot/ecs-file-appender.xml" /> 
    <root level="INFO"> 
        <appender-ref ref="CONSOLE"/> 
        <appender-ref ref="ECS_JSON_FILE"/> 
        <appender-ref ref="FILE"/> 
    </root> 
</configuration>

We will revalidate this with the configuration you provided and check again. Also, please share your thoughts on the logback configuration I am sharing.

Hi @stephenb

We tried the configuration you shared above, and we are able to get the logs in JSON format, but it is not exactly the same log that is stored in the JSON log file named app.log.json, which is generated after configuring log correlation in application.properties as below.

application.properties


server.port=8013
logging.level.org.springframework.context=INFO
#admin.baseurl.path = ${EMP_ADMIN_BASE_URL}
#elastic.apm.server-url= ${APM_SERVICE_URL}

admin.baseurl.path = http://localhost:8001/admin
elastic.apm.server-url=https://34.173.47.87:8200/

spring.datasource.url=jdbc:h2:mem:testdb
spring.datasource.driverClassName=org.h2.Driver
spring.datasource.username=sa
spring.datasource.password=admin

spring.jpa.show-sql = true
spring.jpa.hibernate.ddl-auto = update
spring.jpa.database-platform=org.hibernate.dialect.H2Dialect

elastic.apm.enabled=true
elastic.apm.service-name=spring-boot-consumer
elastic.apm.environment=dev
elastic.apm.application-packages=net.javaguides.springboot.config
elastic.apm.log-level=ERROR
elastic.apm.enable_log_correlation=true
elastic.apm.verify_server_cert=false

spring.application.name=employee-management-webapp
logging.file.name=logs/ECS-consumer.log

spring.devtools.add-properties=false
logging.level.web=DEBUG

# Set the logging level for the root logger
logging.level.root=INFO

# Use Logback as the logging system
#logging.config=classpath:logback-spring.xml

## Configure the Elasticsearch client
#elasticsearch.host=34.173.47.87
#elasticsearch.port=9201
#elasticsearch.scheme=http
#elasticsearch.username=elastic
#elasticsearch.password=elastic

The logback-spring.xml configuration is below:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <property name="LOG_FILE" value="${LOG_FILE:-${LOG_PATH:-${LOG_TEMP:-${java.io.tmpdir:-/tmp}}}/spring.log}"/>
    <include resource="org/springframework/boot/logging/logback/defaults.xml"/>
    <include resource="org/springframework/boot/logging/logback/console-appender.xml" />
    <include resource="org/springframework/boot/logging/logback/file-appender.xml" />
    <include resource="co/elastic/logging/logback/boot/ecs-file-appender.xml" />
    <root level="INFO">
        <appender-ref ref="CONSOLE"/>
        <appender-ref ref="ECS_JSON_FILE"/>
        <appender-ref ref="FILE"/>
    </root>
</configuration>

After performing the above configuration, we are able to get the logs in the custom JSON file as below:

{"@timestamp":"2023-04-20T09:24:45.456Z","log.level": "WARN","message":"Creation of SecureRandom instance for session ID generation using [SHA1PRNG] took [168] milliseconds.","ecs.version": "1.2.0","service.name":"employee-management-webapp","event.dataset":"employee-management-webapp","process.thread.name":"http-nio-8013-exec-1","log.logger":"org.apache.catalina.util.SessionIdGeneratorBase","transaction.id":"c5ad4dabf07c348f","trace.id":"3314291be4ed035a74d22f53ff5e59b5"}
{"@timestamp":"2023-04-20T09:25:07.476Z","log.level": "INFO","message":"Details entered in employee form.","ecs.version": "1.2.0","service.name":"employee-management-webapp","event.dataset":"employee-management-webapp","process.thread.name":"http-nio-8013-exec-9","log.logger":"net.javaguides.springboot.controller.EmployeeController","transaction.id":"0ab254641be66797","trace.id":"d87403c0f01d91eacf75e4555568cb77","TransactionId":"34FF46F0369D4D4B97EEC40D45042A33"}

But in the console the logs are printed as below, where much information, such as the trace id and transaction id, is missing:

2023-03-30 10:03:13.655 DEBUG 7392 --- [http-nio-8013-exec-5] o.s.web.servlet.DispatcherServlet        : GET "/login", parameters={}
2023-03-30 10:03:13.656 DEBUG 7392 --- [http-nio-8013-exec-5] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped to net.javaguides.springboot.controller.MainController#login()
2023-03-30 10:03:13.658 DEBUG 7392 --- [http-nio-8013-exec-5] o.s.w.s.v.ContentNegotiatingViewResolver : Selected 'text/html' given [text/html, application/xhtml+xml, image/avif, image/webp, image/apng, application/xml;q=0.9, application/signed-exchange;v=b3;q=0.7, */*;q=0.8]
2023-03-30 10:03:13.662 DEBUG 7392 --- [http-nio-8013-exec-5] o.s.web.servlet.DispatcherServlet        : Completed 200 OK
2023-03-30 10:03:18.884 DEBUG 7392 --- [http-nio-8013-exec-7] o.s.web.servlet.DispatcherServlet        : GET "/", parameters={}
2023-03-30 10:03:18.885 DEBUG 7392 --- [http-nio-8013-exec-7] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped to net.javaguides.springboot.controller.EmployeeController#viewHomePage(Model)
2023-03-30 10:03:18.940 DEBUG 7392 --- [http-nio-8013-exec-7] o.s.w.s.v.ContentNegotiatingViewResolver : Selected 'text/html' given [text/html, application/xhtml+xml, image/avif, image/webp, image/apng, application/xml;q=0.9, application/signed-exchange;v=b3;q=0.7, */*;q=0.8]
2023-03-30 10:03:18.993 DEBUG 7392 --- [http-nio-8013-exec-7] o.s.web.servlet.DispatcherServlet        : Completed 200 OK
2023-03-30 10:03:22.656 DEBUG 7392 --- [http-nio-8013-exec-8] o.s.web.servlet.DispatcherServlet        : GET "/showNewEmployeeForm", parameters={}
2023-03-30 10:03:22.657 DEBUG 7392 --- [http-nio-8013-exec-8] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped to net.javaguides.springboot.controller.EmployeeController#showNewEmployeeForm(Model)
2023-03-30 10:03:22.659  INFO 7392 --- [http-nio-8013-exec-8] n.j.s.controller.EmployeeController      : Details entered in employee form.

Looks like you may be missing several critical lines from the reference here.

In src/main/resources/logback-spring.xml:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <property name="LOG_FILE" value="${LOG_FILE:-${LOG_PATH:-${LOG_TEMP:-${java.io.tmpdir:-/tmp}}}/spring.log}"/>
    <include resource="org/springframework/boot/logging/logback/defaults.xml"/>
    <include resource="org/springframework/boot/logging/logback/console-appender.xml" />
    <include resource="org/springframework/boot/logging/logback/file-appender.xml" />
    <!-- MISSING -->
    <include resource="co/elastic/logging/logback/boot/ecs-console-appender.xml" />
    <include resource="co/elastic/logging/logback/boot/ecs-file-appender.xml" />
    <root level="INFO">
        <!-- MISSING -->
        <appender-ref ref="ECS_JSON_CONSOLE"/> 
        <appender-ref ref="CONSOLE"/>
        <appender-ref ref="ECS_JSON_FILE"/>
        <appender-ref ref="FILE"/>
    </root>
</configuration>

Thank you @stephenb, after following the above configuration we are able to see the complete JSON logs in the console.

Thank you so much :slight_smile:

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.