[packetbeat] memory usage is growing continuously when capturing redis traffic

server: virtual machine
cpu: 4 logical CPUs / 8 logical CPUs
ram: 32GB

os: centos 7.2 x86_64
packetbeat: 7.1.1 (i used the binary release, not the rpm package)
redis: 4.0.8
kafka: 2.9.2-0.8.2.2

we have a redis cluster of 8 dedicated redis servers, each running 2 redis instances, so 16 redis instances in total.
i deployed packetbeat on each of those 8 servers to capture redis traffic and output it to a kafka cluster.
what is strange is that only one of the packetbeat processes uses more and more memory,
although it outputs content to kafka without problems.
on the other hand, the memory usage of the other 7 packetbeat processes does not grow continuously,
and all 8 share the same config file and binary release.

i uploaded the config file and also the pprof png file,
could anyone help me with this problem?
thanks in advance.
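To compare the bad instance with the other 7, it can help to log the resident memory of each packetbeat process over time and compare the growth curves. A minimal sketch (the helper name and the sampling loop are my own, and it assumes Linux's /proc interface, which CentOS 7 has):

```shell
#!/bin/sh
# Print a process's resident set size (VmRSS, in kB) from /proc.
# Linux-only; VmRSS is one line of /proc/<pid>/status.
rss_kb() {
  awk '/^VmRSS:/ {print $2}' "/proc/$1/status"
}

# Hypothetical usage: sample the packetbeat process once a minute and
# append "epoch rss_kb" lines to a log for later comparison across servers.
#   pid=$(pgrep -x packetbeat | head -n1)
#   while sleep 60; do
#     printf '%s %s\n' "$(date +%s)" "$(rss_kb "$pid")" >> /tmp/packetbeat_rss.log
#   done
```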

config file:

path.home: /data/app/packetbeat
max_procs: 1
logging:
  level: info
  to_files: true
  files:
    name: packetbeat.log
    keepfiles: 5
    rotateeverybytes: 20971520
    permissions: 0644
    interval: 168h

packetbeat.ignore_outgoing: true
packetbeat.interfaces.device: any
packetbeat.interfaces.type: af_packet
packetbeat.interfaces.buffer_size_mb: 128

packetbeat.flows:
  enabled: false

processors:
- add_locale:
    format: offset
- drop_fields:
    fields: ["host", "ecs", "agent"]

# packetbeat.interfaces.bpf_filter: "port 3306 or port 7001 or port 7002"
packetbeat.protocols:
  # mysql
  - type: mysql
    enabled: false
    ports: [3306, 3307, 3308, 3309, 3310]
    send_request: false
    send_response: false
    max_rows: 100
    max_row_length: 10485760
    processors:
    - include_fields:
        fields: ["name", "tags", "client", "server", "type", "method", "event", "query", "mysql"]
    - add_fields:
        target: ''
        fields:
          cluster_name: ${cluster_name_mysql:mysql}
    - drop_fields:
        fields: ["event.dataset", "event.kind", "event.category"]
  # redis
  - type: redis
    enabled: true
    ports: [6379, 7001, 7002]
    send_request: false
    send_response: false
    processors:
    - include_fields:
        fields: ["name", "tags", "client", "server", "type", "method", "resource", "event", "redis"]
    - add_fields:
        target: ''
        fields:
          cluster_name: ${cluster_name_redis:redis}
    - drop_fields:
        fields: ["event.dataset", "event.kind", "event.category"]

queue.mem:
  events: 4096
  flush.min_events: 256
  flush.timeout: 1s

# output configuration
# file
output:
  file:
    enabled: false
    path: "/data/app/packetbeat/data"
    filename: "packetbeat_file.out"
    number_of_files: 5
    rotate_every_kb: 20480
    #codec.json:
    #  pretty: true
    #codec.format:
    #  string: '%{[@timestamp]} %{[message]}'
# kafka
  kafka:
    enabled: true
    hosts: ["kafka1:9092", "kafka2:9092", "kafka3:9092", "kafka4:9092", "kafka5:9092"]
    topic: "packetbeat_01"
    partition.round_robin:
      reachable_only: true
    metadata:
      refresh_frequency: 5m
      full: false
    #codec.json:
    #  pretty: true
    #codec.format:
    #  string: '%{[@timestamp]} %{[message]}'
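One thing worth trying (a sketch, not a confirmed fix): since only the redis protocol is enabled in this config, uncommenting the bpf_filter shown above but restricting it to the redis ports would drop unrelated traffic in-kernel before it ever reaches the sniffer's buffer:

```yaml
# hypothetical variant of the commented-out filter above, limited to the
# redis ports actually in use on these servers
packetbeat.interfaces.bpf_filter: "tcp port 6379 or tcp port 7001 or tcp port 7002"
```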

pprof png:

i also ran a test outputting to a local file, but the problem still exists,
and i found that the cpu usage of the bad guy is low (about ),
while at the same time the other 7 good guys use 100% of a cpu (i set max_procs to 1),
so i don't know why the bad guy cannot use as much cpu as the other servers,
maybe it is caused by the virtual machine?
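Regarding the virtual-machine suspicion: on a VM, "steal" time is CPU the hypervisor withholds from the guest, and a high steal counter would explain a process that cannot reach 100% of a core. A quick check (a sketch; the field position follows the cpu line format documented in the Linux proc(5) man page):

```shell
#!/bin/sh
# Read the cumulative "steal" tick counter from /proc/stat.
# The aggregate cpu line is: cpu user nice system idle iowait irq softirq steal ...
# so with $1 == "cpu", steal is field $9. A value that keeps climbing on the
# bad server (compare with the good ones) points at hypervisor contention.
steal_ticks() {
  awk '$1 == "cpu" {print $9}' /proc/stat
}
```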

bad guy:

good guy:
