Filebeat sidecar with SonarQube in Kubernetes

Hello,

I have a ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-configmap
  namespace: k8s-mxt-sonarqube
data:
    filebeat.yml: |-
      filebeat.inputs:
      - type: container
        enabled: true
        paths:
          - /opt/sonarqube/logs/*
      output.console:
        pretty: true

And the StatefulSet manifest:

kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: sonarqube-lts-sonarqube
  namespace: k8s-mxt-sonarqube
  selfLink: >-
    /apis/apps/v1/namespaces/k8s-mxt-sonarqube/statefulsets/sonarqube-lts-sonarqube
  labels:
    app: sonarqube
    app.kubernetes.io/component: sonarqube-lts-sonarqube
    app.kubernetes.io/instance: sonarqube-lts
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: sonarqube-sonarqube-lts-sonarqube
    app.kubernetes.io/part-of: sonarqube
    app.kubernetes.io/version: 8.9.1-enterprise
    chart: sonarqube-1.0.14
    heritage: Helm
    release: sonarqube-lts
  annotations:
    meta.helm.sh/release-name: sonarqube-lts
    meta.helm.sh/release-namespace: k8s-mxt-sonarqube
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sonarqube
      release: sonarqube-lts
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: sonarqube
        release: sonarqube-lts
      annotations:
        checksum/config: 73afbc2bf56a6b9a22a89710a4fed79492df750cf4254d1d6aaf23553082e1fa
        checksum/init-fs: beba02ae26e71255ebcf384988745bf0dc5c983e8562ba6f3177f0a39e3a3737
        checksum/init-sysctl: fa4b1ce5074ddb64c2272b98495c6815bb7117d338b69a8faefc73cffbe53145
        checksum/plugins: 6e6d346512eeadca2124a374ff181df3608857a062bdcfde7057162cae1e404b
        checksum/secret: 6bbef80678443e7afe578b946001f7670a4846b26348dc088c42782e1701e3c6
    spec:
      volumes:
        - name: config
          configMap:
            name: sonarqube-lts-sonarqube-config
            items:
              - key: sonar.properties
                path: sonar.properties
            defaultMode: 420
        - name: init-sysctl
          configMap:
            name: sonarqube-lts-sonarqube-init-sysctl
            items:
              - key: init_sysctl.sh
                path: init_sysctl.sh
            defaultMode: 420
        - name: init-fs
          configMap:
            name: sonarqube-lts-sonarqube-init-fs
            items:
              - key: init_fs.sh
                path: init_fs.sh
            defaultMode: 420
        - name: install-plugins
          configMap:
            name: sonarqube-lts-sonarqube-install-plugins
            items:
              - key: install_plugins.sh
                path: install_plugins.sh
            defaultMode: 420
        - name: sonarqube
          emptyDir: {}
        - name: tmp-dir
          emptyDir: {}
        - name: sonarqube-log
          emptyDir: {}
        - name: filebeat-config
          configMap:
            name: filebeat-configmap
            items:
              - key: filebeat.yml
                path: filebeat.yml
      initContainers:
        - name: init-sysctl
          image: 'busybox:1.32'
          command:
            - sh
            - '-e'
            - /tmp/scripts/init_sysctl.sh
          resources: {}
          volumeMounts:
            - name: init-sysctl
              mountPath: /tmp/scripts/
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
          securityContext:
            privileged: true
      containers:
        - name: sonarqube
          image: 'sonarqube:8.9.1-enterprise'
          ports:
            - name: http
              containerPort: 9000
              protocol: TCP
          envFrom:
            - configMapRef:
                name: sonarqube-lts-sonarqube-postgres-config
          env:
            - name: SONAR_WEB_JAVAOPTS
            - name: SONAR_CE_JAVAOPTS
            - name: SONAR_JDBC_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: sonarqube-lts-sonarqube
                  key: postgresql-password
            - name: SONAR_WEB_SYSTEMPASSCODE
              valueFrom:
                secretKeyRef:
                  name: sonarqube-lts-sonarqube-monitoring-passcode
                  key: SONAR_WEB_SYSTEMPASSCODE
          resources:
            limits:
              cpu: 800m
              memory: 4096M
            requests:
              cpu: 400m
              memory: 2Gi
          volumeMounts:
            - name: config
              mountPath: /opt/sonarqube/conf/sonar.properties
              subPath: sonar.properties
            - name: sonarqube
              mountPath: /opt/sonarqube/data
              subPath: data
            - name: sonarqube
              mountPath: /opt/sonarqube/temp
              subPath: temp
            - name: sonarqube-log
              mountPath: /opt/sonarqube/logs
              subPath: logs
            - name: tmp-dir
              mountPath: /tmp
          livenessProbe:
            exec:
              command:
                - sh
                - '-c'
                - "#!/bin/bash\n# A Sonarqube container is considered healthy if the health status is GREEN or YELLOW\nhost=\"$(hostname -i || echo '127.0.0.1')\"\nif wget --header=\"X-Sonar-Passcode: ${SONAR_WEB_SYSTEMPASSCODE}\" -qO- http://${host}:9000/api/system/health | grep -q -e '\"health\":\"GREEN\"' -e '\"health\":\"YELLOW\"'; then\n\texit 0\nfi\nexit 1\n"
            initialDelaySeconds: 260
            timeoutSeconds: 1
            periodSeconds: 30
            successThreshold: 1
            failureThreshold: 3
          readinessProbe:
            exec:
              command:
                - sh
                - '-c'
                - "#!/bin/bash\n# A Sonarqube container is considered ready if the status is UP, DB_MIGRATION_NEEDED or DB_MIGRATION_RUNNING\n# status about migration are added to prevent the node to be kill while sonarqube is upgrading the database.\nhost=\"$(hostname -i || echo '127.0.0.1')\"\nif wget -qO- http://${host}:9000/api/system/status | grep -q -e '\"status\":\"UP\"' -e '\"status\":\"DB_MIGRATION_NEEDED\"' -e '\"status\":\"DB_MIGRATION_RUNNING\"'; then\n\texit 0\nfi\nexit 1\n"
            initialDelaySeconds: 260
            timeoutSeconds: 1
            periodSeconds: 30
            successThreshold: 1
            failureThreshold: 6
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
          securityContext:
            capabilities: {}
            privileged: false
            runAsUser: 1000
            runAsNonRoot: false
            readOnlyRootFilesystem: false
            allowPrivilegeEscalation: false
        - name: filebeat-sonar
          image: "filebeat:7.14.0"
          imagePullPolicy: Always
          volumeMounts:
            - name: sonarqube-log
              mountPath: /opt/sonarqube/logs
            - name: filebeat-config
              mountPath: /etc/filebeat.yml
              subPath: filebeat.yml
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      serviceAccountName: sonarqube
      serviceAccount: sonarqube
      securityContext: {}
      schedulerName: default-scheduler
  serviceName: sonarqube-lts-sonarqube
  podManagementPolicy: OrderedReady
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 0
  revisionHistoryLimit: 10

The problem is that I never catch anything from /opt/sonarqube/logs/ ... please take a look at my Filebeat log, thanks

2021-08-24T08:10:06.890Z INFO instance/beat.go:473 filebeat start running.
2021-08-24T08:10:06.892Z INFO memlog/store.go:119 Loading data file of '/usr/share/filebeat/data/registry/filebeat' succeeded. Active transaction id=0
2021-08-24T08:10:06.892Z INFO memlog/store.go:124 Finished loading transaction log file for '/usr/share/filebeat/data/registry/filebeat'. Active transaction id=0
2021-08-24T08:10:06.893Z INFO [registrar] registrar/registrar.go:109 States Loaded from registrar: 0
2021-08-24T08:10:06.893Z INFO [crawler] beater/crawler.go:71 Loading Inputs: 0
2021-08-24T08:10:06.893Z INFO [crawler] beater/crawler.go:108 Loading and starting Inputs completed. Enabled inputs: 0
2021-08-24T08:10:06.893Z INFO cfgfile/reload.go:164 Config reloader started
2021-08-24T08:10:06.893Z INFO cfgfile/reload.go:224 Loading of config files completed.
2021-08-24T08:10:09.887Z INFO [add_cloud_metadata] add_cloud_metadata/add_cloud_metadata.go:101 add_cloud_metadata: hosting provider type not detected.
2021-08-24T08:10:36.898Z INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cgroup":{"cpu":{"cfs":{"period":{"us":100000}},"id":"3022e9f291ec133379f9992cf0b643656745f97ec14de0fa958ac4d0dd8dc480"},"cpuacct":{"id":"3022e9f291ec133379f9992cf0b643656745f97ec14de0fa958ac4d0dd8dc480","total":{"ns":263924647}},"memory":{"id":"3022e9f291ec133379f9992cf0b643656745f97ec14de0fa958ac4d0dd8dc480","mem":{"limit":{"bytes":9223372036854771712},"usage":{"bytes":44503040}}}},"cpu":{"system":{"ticks":90,"time":{"ms":100}},"total":{"ticks":230,"time":{"ms":245},"value":230},"user":{"ticks":140,"time":{"ms":145}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":9},"info":{"ephemeral_id":"2e12dcd9-1398-44c4-a4ac-d7e0de4064e4","uptime":{"ms":30089},"version":"7.14.0"},"memstats":{"gc_next":18886864,"memory_alloc":12504152,"memory_sys":75580424,"memory_total":54791792,"rss":95092736},"runtime":{"goroutines":18}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0},"reloads":1,"scans":1},"output":{"events":{"active":0},"type":"elasticsearch"},"pipeline":{"clients":0,"events":{"active":0},"queue":{"max_events":4096}}},"registrar":{"states":{"current":0}},"system":{"cpu":{"cores":6},"load":{"1":1.1,"15":0.35,"5":0.41,"norm":{"1":0.1833,"15":0.0583,"5":0.0683}}}}}}

"filebeat":{"harvester":{"open_files":0,"running":0}}

That doesn't look right.

I think you're missing the volume mount of the ConfigMap; something like filebeat-daemonset.yaml in the elastic/beats repository on GitHub.
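One thing to note: the stock Filebeat image reads its configuration from /usr/share/filebeat/filebeat.yml by default, while the StatefulSet above mounts the ConfigMap at /etc/filebeat.yml, so the custom config is probably never loaded. That would also explain "Loading Inputs: 0" and the elasticsearch output type in the monitoring metrics, i.e. the image's bundled default config is still in effect. A minimal sketch of the sidecar container, assuming the official docker.elastic.co image (adjust if you use a different image or config path):

        - name: filebeat-sonar
          image: "docker.elastic.co/beats/filebeat:7.14.0"
          imagePullPolicy: Always
          volumeMounts:
            # default config location read by the official image
            - name: filebeat-config
              mountPath: /usr/share/filebeat/filebeat.yml
              subPath: filebeat.yml
              readOnly: true
            # same emptyDir the sonarqube container writes its logs into
            - name: sonarqube-log
              mountPath: /opt/sonarqube/logs
              readOnly: true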

I changed my config and now I have an error; it's better, I'm getting close! :slight_smile:

"filebeat":{"harvester":{"open_files":5,"running":5}},

2021-08-25T09:32:51.619Z        INFO    [input.harvester]       log/harvester.go:309    Harvester started for file.     {"input_id": "bf983e17-14c2-4946-b7fa-9b4e2c7c205f", "source": "/opt/sonarqube/logs/temp/sharedmemory", "state_id": "native::5727-64775", "finished": false, "os_id": "5727-64775", "harvester_id": "bf5ae88a-dc64-4a82-a74a-b262a7fb99c0"}
2021-08-25T09:32:51.620Z        INFO    [input.harvester]       log/harvester.go:309    Harvester started for file.     {"input_id": "bf983e17-14c2-4946-b7fa-9b4e2c7c205f", "source": "/opt/sonarqube/logs/logs/es.log", "state_id": "native::8432867-64775", "finished": false, "os_id": "8432867-64775", "harvester_id": "44c5dde4-8f60-404f-9f55-34300ae15771"}
2021-08-25T09:32:51.620Z        INFO    [input.harvester]       log/harvester.go:309    Harvester started for file.     {"input_id": "bf983e17-14c2-4946-b7fa-9b4e2c7c205f", "source": "/opt/sonarqube/logs/logs/sonar.log", "state_id": "native::8432866-64775", "finished": false, "os_id": "8432866-64775", "harvester_id": "ea77e174-670f-48f8-a0b7-87343c843077"}
2021-08-25T09:32:51.620Z        ERROR   [reader_docker_json]    readjson/docker_json.go:231     Parse line error: parsing CRI timestamp: parsing time "2021.08.25" as "2006-01-02T15:04:05.999999999Z07:00": cannot parse ".08.25" as "-"

I think I have to configure my input in a better way, following Configure inputs | Filebeat Reference [7.14] | Elastic.
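The container input expects Docker/CRI-formatted JSON lines, which is why it fails on SonarQube's plain-text "2021.08.25" timestamps, and the harvester output above shows the files sitting under /opt/sonarqube/logs/logs/ plus a non-log file (temp/sharedmemory) being picked up. A rough sketch of a plain log input that should fit better; the paths are assumptions based on the harvester output, with the console output kept for testing:

    filebeat.yml: |-
      filebeat.inputs:
      - type: log
        enabled: true
        paths:
          # the harvester output shows files under logs/logs/, so cover both levels;
          # *.log skips non-log files such as temp/sharedmemory
          - /opt/sonarqube/logs/*.log
          - /opt/sonarqube/logs/logs/*.log
      output.console:
        pretty: true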

Finally made it, thanks Xeraa!
