Packetbeat duplicated HTTP events in Kubernetes Cluster

Hi, we are experiencing a similar issue to the one described in this post.

Configuration:

packetbeat.ignore_outgoing: true

setup.dashboards.enabled: true
setup.template.enabled: true
setup.template.name: "packetbeat"
setup.template.pattern: "packetbeat-*"
setup.template.settings:
  index.number_of_shards: 2

setup.kibana:
  host: "${KIBANA_HOST:kibana}:${KIBANA_PORT:5601}"
packetbeat.interfaces.device: any

packetbeat.protocols:
- type: http
  ports: [5000, 3000]
  include_body_for: ["application/json", "application/x-www-form-urlencoded"]
  hide_keywords: ["pass", "password", "passwd", "pwd", "token", "client_secret", "access_token", "id_token"]
  send_headers: ["User-Agent", "Cookie", "Set-Cookie"]
  split_cookie: true
  real_ip_header: "X-Forwarded-For"
  redact_authorization: true
  redact_headers: ['Cookie', 'Set-Cookie']
  
- type: tls
  enabled: false
  ports:
    - 443   # HTTPS
    - 80    # HTTP

- type: cassandra
  enabled: false
- type: memcache
  enabled: false
- type: mysql
  enabled: false
- type: pgsql
  enabled: false
- type: thrift
  enabled: false
- type: mongodb
  enabled: false
  
processors:
  - truncate_fields:
      fields:
        - http.response.body
      max_bytes: 8388608
      fail_on_error: false
      ignore_missing: true

  - truncate_fields:
      fields:
        - http.request.body
      max_bytes: 5242880
      fail_on_error: false
      ignore_missing: true

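  # Promote the forwarded client address (network.forwarded_ip, populated from
  # the X-Forwarded-For header via real_ip_header above) to client.ip.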
  - copy_fields:
      fields:
        - from: network.forwarded_ip
          to: client.ip
      fail_on_error: false
      ignore_missing: true
        
  - add_kubernetes_metadata:
      host: ${NODE_NAME}
      default_indexers.enabled: false
      default_matchers.enabled: false
      indexers:
        - ip_port:
      matchers:
        - field_format:
            format: '%{[ip]}:%{[port]}'

The cluster has multiple nodes and Packetbeat is deployed as a DaemonSet (one agent per physical node). When two pods on the same node communicate, there are no duplicates. When they run on different nodes, three records show up:

  • A pair of duplicates: source pod -> destination pod. The only difference is the agent metadata: one event carries the source node, the other the destination node.
  • A third event: source pod -> destination service.

We couldn't find any way to remove these duplicates. Shouldn't packetbeat.ignore_outgoing: true prevent one of the duplicated events from being logged?

Any hints on how to solve this?

This situation is caused by the Service and the Pod sharing the same traffic. I fixed the issue by dropping the Service events:

    processors:
      - drop_event:
          when:
            or:
              - network:
                  destination.ip: ['172.23.0.0/20']
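
The 172.23.0.0/20 range here is the cluster's Service CIDR (typically whatever is passed to the kube-apiserver's --service-cluster-ip-range); if you reuse this, substitute your own cluster's service range.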

Darran,

Thanks for your suggestion. By adapting your configuration to our CIDR range, we could remove the third duplicated event (pod -> service, the second item in the OP). However, the duplicate pair from the first item still remains.

As you can see in the picture, every pair of records is identical except for the agent. When two pods located on different nodes communicate, it seems that the Packetbeat agent on each node (because Packetbeat is deployed as a DaemonSet) captures the same traffic. I would expect this not to happen, given that packetbeat.ignore_outgoing is set to true.

When the two pods are on the same node, this duplication does not happen.

Any ideas on how to fix this?

Could you show the "type" and "kubernetes" columns?

I found a drop_event processor in our configuration:

processors:
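  # Enrich events whose server.ip:server.port matches a pod running on this
  # node (add_kubernetes_metadata is scoped to the node by default).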
  - add_kubernetes_metadata:
      host: ${NODE_NAME}
      namespace: prod
      indexers:
      - ip_port:
      matchers:
      - field_format:
          format: '%{[server.ip]}:%{[server.port]}'
  - add_fields:
      target: node
      fields:
        name: ${NODE_NAME}
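  # Drop HTTP events that could not be matched to a local pod above; for
  # cross-node traffic, only the copy captured on the server pod's node is kept.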
  - drop_event:
      when:
        and:
          - equals:
              type: "http"
          - not:
              has_fields: ["kubernetes.namespace"]

Darran,

We have stopped using the add_kubernetes_metadata processor because the pods were often matched incorrectly. We also tested to make sure the processor wasn't the cause of the duplicates.

Instead, we modified your previous suggestion to delete the duplicates and keep the Pod -> Service traffic. Since the Kubernetes processor doesn't match services, we used reverse DNS to add fields with the origin and destination services/deployments.

It may not be optimal, but with this approach we have enough metadata to analyze inter-pod communication and have also mostly eliminated duplicates.
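
For reference, a minimal sketch of that kind of reverse-DNS enrichment with the Beats dns processor; the target field names are only an example, and whether pod IPs (as opposed to Service ClusterIPs) get PTR records depends on the cluster's DNS setup:

    processors:
      # Reverse-resolve captured IPs; Service ClusterIPs typically resolve to
      # <service>.<namespace>.svc.cluster.local via CoreDNS.
      - dns:
          type: reverse
          action: append
          fields:
            source.ip: source.domain
            destination.ip: destination.domain
          tag_on_failure: [_dns_reverse_lookup_failed]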
