Elastic Agent SOMETIMES Opens Ports to Listen for Syslog Traffic (Buggy Behaviour)

Greetings,

I'm seeing rather inconsistent behaviour where the Elastic Agent deployed on my syslog server sometimes opens its ports for incoming traffic and sometimes does not.

The agent currently has a few integrations set up on it (and it also acts as a Fleet Server).

I can confirm the following:

  • elastic-agent inspect | grep <PORT> does show that the ports are part of the rendered config.
  • watch netstat -plunt shows that the elastic-agent is listening on the fleet-server integration ports but not on the other syslog integration ports (see the appendix below and the port-check sketch after this list).
  • cd /var/lib/elastic-agent/data/elastic-agent-da4d9c/logs && grep -i error -R * doesn't show any obviously relevant log entries.
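
In case it helps anyone reproducing this, below is a minimal Python sketch of the check I was otherwise doing by hand with netstat. The port numbers are placeholders (not my actual integration ports), and it only probes TCP listeners; UDP syslog inputs still need netstat/ss.

#!/usr/bin/env python3
"""Probe a list of TCP ports on localhost and report which ones are listening.

The ports below are placeholders -- replace them with the TCP ports your
syslog integrations are configured to listen on.
"""
import socket

EXPECTED_TCP_PORTS = [514, 5514]  # placeholder example ports


def is_listening(port: int, host: str = "127.0.0.1", timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


for port in EXPECTED_TCP_PORTS:
    state = "LISTENING" if is_listening(port) else "NOT listening"
    print(f"port {port}: {state}")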

Things I have tried:

  • Restarting the entire syslog server doesn't open the ports.
  • elastic-agent restart doesn't open the ports.
  • systemctl restart elastic-agent.service doesn't open the ports either, although occasionally, after retrying it many times, it does.

Appendix
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      539/nginx: master p
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      492/sshd: /usr/sbin
tcp        0      0 127.0.0.1:8221          0.0.0.0:*               LISTEN      9200/fleet-server
tcp        0      0 0.0.0.0:5601            0.0.0.0:*               LISTEN      524/node
tcp        0      0 127.0.0.1:6789          0.0.0.0:*               LISTEN      9179/elastic-agent
tcp6       0      0 10.0.15.14:9200         :::*                    LISTEN      523/java
tcp6       0      0 10.0.1.51:9200          :::*                    LISTEN      523/java
tcp6       0      0 127.0.0.1:9200          :::*                    LISTEN      523/java
tcp6       0      0 ::1:9200                :::*                    LISTEN      523/java
tcp6       0      0 :::80                   :::*                    LISTEN      539/nginx: master p
tcp6       0      0 127.0.0.1:9300          :::*                    LISTEN      523/java
tcp6       0      0 ::1:9300                :::*                    LISTEN      523/java
tcp6       0      0 :::22                   :::*                    LISTEN      492/sshd: /usr/sbin
tcp6       0      0 :::8220                 :::*                    LISTEN      9200/fleet-server
udp        0      0 0.0.0.0:45351           0.0.0.0:*                           428/avahi-daemon: r
udp        0      0 0.0.0.0:5353            0.0.0.0:*                           428/avahi-daemon: r
udp6       0      0 :::5353                 :::*                                428/avahi-daemon: r
udp6       0      0 :::58805                :::*                                428/avahi-daemon: r

It looks like there was a parsing issue with one of the YML files due to invalid syntax.
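
If anyone wants to rule out the same problem quickly, here is a minimal Python sketch that parses a set of YML files and reports syntax errors. It assumes PyYAML is installed, and the glob path is only an example; point it at whichever policy/integration files your agent actually loads.

#!/usr/bin/env python3
"""Report YAML syntax errors in a set of config files (requires PyYAML)."""
import glob

import yaml

# Example path only; adjust to wherever your agent/integration YML files live.
for path in glob.glob("/opt/Elastic/Agent/*.yml"):
    try:
        with open(path, "r", encoding="utf-8") as fh:
            yaml.safe_load(fh)
        print(f"OK       {path}")
    except yaml.YAMLError as exc:
        print(f"INVALID  {path}: {exc}")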

For future readers, a not-so-quick way to troubleshoot is to work through the elastic-agent logs (elastic-agent status reported everything as healthy even though it wasn't):

  1. cd /opt/Elastic/Agent/data/elastic-agent-<SOME_ID>/logs
  2. grep -i error -R *
  3. Try to make sense of the log entries to pinpoint the issue (a small sketch that automates steps 1 and 2 follows below).
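
And a rough Python sketch of step 2, in case grepping the raw files gets unwieldy. It assumes the agent writes NDJSON log files with log.level and message fields (that's what my install produces); adjust LOG_DIR to your agent's data path.

#!/usr/bin/env python3
"""Print error/warning entries from the Elastic Agent NDJSON logs."""
import glob
import json
import os

LOG_DIR = "/opt/Elastic/Agent/data"  # adjust to your install (e.g. /var/lib/elastic-agent/data)

pattern = os.path.join(LOG_DIR, "elastic-agent-*", "logs", "**", "*.ndjson")
for path in glob.glob(pattern, recursive=True):
    with open(path, "r", encoding="utf-8") as fh:
        for line in fh:
            try:
                entry = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip lines that aren't JSON
            if entry.get("log.level") in ("error", "warn"):
                print(f"{path}: [{entry['log.level']}] {entry.get('message')}")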
