Open TCP connection count

Hello there.
I have a use case where we do have different applications running in separate containers. We want to monitor the user activity by keeping track of the open TCP connections on each application.
I am using the official metricbeat container image (docker.elastic.co/beats/metricbeat:6.5.0). It seems like the socket_summary module is what I'm looking for, especially the field
"system.socket.summary.tcp.all.count", which is documented as "All open TCP connections". However, the data I get from the module doesn't change at all: even if I open multiple connections to the application, the count always remains "1". My suspicion is that "system.socket.summary.tcp.all.count" doesn't return the connection count but just the socket count.
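For reference, I enable the metricset in metricbeat.yml with a fragment along these lines (a sketch; the period value is illustrative):

```yaml
# metricbeat.yml fragment enabling the system socket_summary metricset
metricbeat.modules:
  - module: system
    metricsets: ["socket_summary"]
    # how often to sample open sockets (illustrative value)
    period: 10s
```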

Is the documentation inaccurate, or might there be a bug in metricbeat?

Thanks in advance for your help!

Hi @rani,

Yes, the socket_summary metricset collects summarized connection metrics for the whole system. Take into account that metricbeat is not able to collect these metrics per application at the moment.

When metricbeat is running in a container, to be able to collect network metrics of other applications, it needs one of these things:

  • To be running in the same network namespace
  • To have access to the host /proc filesystem (the host /proc filesystem has to be mounted in the container, and its path inside the container has to be passed to metricbeat using the --system.hostfs flag).
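A minimal sketch of the second option in compose terms (the service name is illustrative, and only the hostfs-related parts are shown):

```yaml
metricbeat:
  image: docker.elastic.co/beats/metricbeat:6.5.0
  # tell metricbeat where the host filesystem is mounted
  command: ["-system.hostfs=/hostfs"]
  volumes:
    # host filesystems mounted read-only inside the container
    - /proc:/hostfs/proc:ro
    - /:/hostfs:ro
```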

Could you share the options you are using to run the metricbeat container?

Hi @jsoriano,
Thanks for your reply.

The docker-compose.yml for the swarm contains:

host-metricbeat:
  image: docker.elastic.co/beats/metricbeat:6.5.0
  volumes:
    - /opt/elastic/config/metricbeat/host-metricbeat.yml:/usr/share/metricbeat/metricbeat.yml
    - type: bind
      source: /proc
      target: /hostfs/proc,readonly
    - type: bind
      source: /sys/fs/cgroup
      target: /hostfs/sys/fs/cgroup,readonly
    - type: bind
      source: /
      target: /hostfs,readonly
  configs:
    - source: ssl_cert
      target: /usr/share/filebeat/config/certificate.crt
    - source: ssl_key
      target: /usr/share/filebeat/config/private.key
    - source: ssl_ca_chain
      target: /usr/share/filebeat/config/ca-chain.crt

All the containers are started from the same docker-compose.yml and thus run in the same docker network, so the first requirement should be met.

Sadly, I haven't found a way to do this in a docker-compose file.

> Sadly, I haven't found a way to do this in a docker-compose file.

Try to add the arguments to the command like this:

host-metricbeat:
  image: docker.elastic.co/beats/metricbeat:6.5.0
  command: [
    "-c", "/usr/share/metricbeat/metricbeat.yml",
    "-e",
    "-system.hostfs=/hostfs",
  ]
  volumes:
    - /opt/elastic/config/metricbeat/host-metricbeat.yml:/usr/share/metricbeat/metricbeat.yml
    - type: bind
      source: /proc
      target: /hostfs/proc,readonly
    - type: bind
      source: /sys/fs/cgroup
      target: /hostfs/sys/fs/cgroup,readonly
    - type: bind
      source: /
      target: /hostfs,readonly
  configs:
    - source: ssl_cert
      target: /usr/share/filebeat/config/certificate.crt
    - source: ssl_key
      target: /usr/share/filebeat/config/private.key
    - source: ssl_ca_chain
      target: /usr/share/filebeat/config/ca-chain.crt

I've tried it and my config now looks like this:

host-metricbeat:
  image: docker.elastic.co/beats/metricbeat:6.5.0
  command: [
    "-c", "/usr/share/metricbeat/metricbeat.yml",
    "-e",
    "-system.hostfs=/hostfs",
  ]
  volumes:
    - /opt/elastic/config/metricbeat/host-metricbeat.yml:/usr/share/metricbeat/metricbeat.yml
    - type: bind
      source: /proc
      target: /hostfs/proc
    - type: bind
      source: /sys/fs/cgroup
      target: /hostfs/sys/fs/cgroup,readonly
    - type: bind
      source: /
      target: /hostfs,readonly
  configs:
    - source: ssl_cert
      target: /usr/share/filebeat/config/certificate.crt
    - source: ssl_key
      target: /usr/share/filebeat/config/private.key
    - source: ssl_ca_chain
      target: /usr/share/filebeat/config/ca-chain.crt

However, there is no change in the metrics I get.

I just realized that the "readonly" flags don't work when specified as in my example above. I changed the config to:

host-metricbeat:
  image: docker.elastic.co/beats/metricbeat:6.5.0
  command: [
    "-c", "/usr/share/metricbeat/metricbeat.yml",
    "-e",
    "-system.hostfs=/hostfs",
  ]
  volumes:
    - /opt/elastic/config/metricbeat/host-metricbeat.yml:/usr/share/metricbeat/metricbeat.yml
    - type: bind
      source: /proc
      target: /hostfs/proc
      read_only: true
    - type: bind
      source: /sys/fs/cgroup
      target: /hostfs/sys/fs/cgroup
      read_only: true
    - type: bind
      source: /
      target: /hostfs
      read_only: true
  configs:
    - source: ssl_cert
      target: /usr/share/filebeat/config/certificate.crt
    - source: ssl_key
      target: /usr/share/filebeat/config/private.key
    - source: ssl_ca_chain
      target: /usr/share/filebeat/config/ca-chain.crt

But the metrics still don't change.

Any other ideas @jsoriano, please?

Hi @rani,

Sorry for the late reply. I can confirm the issue; it should be fixed by upgrading one of the libraries we use. I have opened an issue to keep track of this: https://github.com/elastic/beats/issues/10637

Thanks a lot for reporting, and sorry again for the late reply!

@rani as a workaround in the meantime, you can try starting metricbeat in the host network namespace (network_mode: host).
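For a plain (non-swarm) compose file, the workaround would look roughly like this (a sketch; note that it removes the container's network isolation):

```yaml
host-metricbeat:
  image: docker.elastic.co/beats/metricbeat:6.5.0
  # share the host's network namespace so metricbeat sees all host sockets
  network_mode: host
```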

Hi @jsoriano,

No problem, thanks for looking into it.
For the workaround: it seems that network_mode is ignored when deploying a stack in swarm mode (per the Docker Compose reference). Do you have any other workarounds in mind, or do I have to wait until it's fixed?

Anyway, I'm gonna keep track of the issue, thank you.

@rani take a look at these comments, they may help you in this case: https://github.com/elastic/beats/issues/8685#issuecomment-456096492

This doesn't seem to work for me. :frowning:
Guess I'll have to wait. Thanks for your help.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.