Packetbeat not resolving and connecting to Kibana in a DaemonSet

Using this as a reference, I created a DaemonSet for Packetbeat. It seems the Kibana host is not getting resolved properly, because of which Packetbeat keeps failing.

Here is the packetbeat.yaml file for reference:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: packetbeat-config
  namespace: test-namespace
  labels:
    k8s-app: packetbeat
    kubernetes.io/cluster-service: "true"
data:
  packetbeat.yml: |-
    setup.dashboards.enabled: true
    setup.template.enabled: true
    setup.template.settings:
      index.number_of_shards: 2
    packetbeat.interfaces.device: any
    packetbeat.protocols:
    - type: dns
      ports: [53]
      include_authorities: true
      include_additionals: true
    - type: http
      ports: [9200]
    - type: redis
      ports: [6379]
    packetbeat.flows:
      timeout: 30s
      period: 10s
    processors:
      - add_cloud_metadata:
      - add_kubernetes_metadata:
          host: ${HOSTNAME}
          indexers:
          - ip_port:
          matchers:
          - field_format:
              format: '%{[ip]}:%{[port]}'
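    # Beats expand ${VAR:default} placeholders from the container environment;
    # the part after the colon is the fallback used when the variable is unset.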
    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
    setup.kibana:
      host: '${KIBANA_HOST}'
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
      ssl.enabled: false
      ssl.verification_mode: none
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: packetbeat
  namespace: test-namespace
  labels:
    k8s-app: packetbeat
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    matchLabels:
      k8s-app: packetbeat
  template:
    metadata:
      labels:
        k8s-app: packetbeat
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccountName: packetbeat
      terminationGracePeriodSeconds: 30
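      # hostNetwork is required so Packetbeat can sniff the node's network interfaces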
      hostNetwork: true
      containers:
      - name: packetbeat
        image: docker.elastic.co/beats/packetbeat:7.5.2
        imagePullPolicy: Always
        args: [
          "-c", "/etc/packetbeat.yml",
          "-e",
        ]
        securityContext:
          runAsUser: 0
          capabilities:
            add:
            - NET_ADMIN
        env:
        - name: ELASTICSEARCH_HOST
          value: elasticsearch-http
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTICSEARCH_USERNAME
          value: elasticsearchusername
        - name: ELASTICSEARCH_PASSWORD
          valueFrom:
            secretKeyRef:
              key: elasticsearchpassword  # key name is an assumption; it must match the password entry in the elasticsearch-user secret
              name: elasticsearch-user
        - name: KIBANA_HOST
          value: https://kibana-kb-http:5601
        volumeMounts:
        - name: config
          mountPath: /etc/packetbeat.yml
          readOnly: true
          subPath: packetbeat.yml
        - name: data
          mountPath: /usr/share/packetbeat/data
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: packetbeat-config
      - name: data
        emptyDir: {}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: packetbeat
subjects:
- kind: ServiceAccount
  name: packetbeat
  namespace: test-namespace
roleRef:
  kind: ClusterRole
  name: packetbeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: packetbeat
  labels:
    k8s-app: packetbeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: packetbeat
  namespace: test-namespace
  labels:
    k8s-app: packetbeat
---
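
To narrow this down, a throwaway pod along these lines can be used to check whether the Kibana service name resolves from the host network namespace. This is only a sketch: the pod name and busybox image are placeholders, it mirrors the DaemonSet's hostNetwork: true setting, and the service name is the fully qualified form of the kibana-kb-http host used in KIBANA_HOST above.

# One-off debug pod; the name and image are placeholders
apiVersion: v1
kind: Pod
metadata:
  name: dns-debug
  namespace: test-namespace
spec:
  hostNetwork: true            # same network mode as the Packetbeat DaemonSet
  restartPolicy: Never
  containers:
  - name: debug
    image: busybox:1.31
    # Fully qualified form of the kibana-kb-http service referenced in KIBANA_HOST
    command: ["nslookup", "kibana-kb-http.test-namespace.svc.cluster.local"]

If the lookup fails in this pod as well, the resolution problem lies with DNS in the host network namespace rather than with Packetbeat itself.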

Also, is there any possibility of applying Packetbeat to Traefik / the Kubernetes network? In a Kubernetes cluster, ports 80 and 443 may not be exposed via NodePorts and might instead be handled by ingress controllers; in such cases, whether network activity can be monitored is questionable.
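
Because the DaemonSet runs with hostNetwork: true and sniffs packetbeat.interfaces.device: any, I would expect it to also see traffic reaching an ingress controller on the node's interfaces. Below is a minimal sketch of what the protocols section might look like for that case; the port numbers are assumptions for a typical Traefik entrypoint setup, and since HTTPS payloads stay encrypted on the wire, the tls analyzer can only capture handshake metadata.

# Sketch: extending packetbeat.protocols for ingress traffic; ports are assumptions
packetbeat.protocols:
- type: http
  ports: [80, 8080]   # plain-HTTP entrypoints
- type: tls
  ports: [443]        # records TLS handshake metadata only; payload stays encrypted

Would something along these lines be the right approach for monitoring ingress traffic?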
