Failed to publish events caused by: read tcp

I am new to ELK, and I noticed these ERROR logs coming from the Filebeat Docker container:

filebeat         | 2020-01-31T14:27:27.422Z     ERROR   logstash/async.go:256   Failed to publish events caused by: read tcp 172.22.0.3:59058->172.22.0.4:5044: i/o timeout
filebeat         | 2020-01-31T14:27:27.451Z     ERROR   logstash/async.go:256   Failed to publish events caused by: client is not connected
filebeat         | 2020-01-31T14:27:28.761Z     ERROR   pipeline/output.go:121  Failed to publish events: client is not connected

I am logging from a local dummy log file (see https://github.com/moryachok/elasticstack-lab)

Everything seems to be fine in Kibana after creating the index pattern, though, so I was just wondering whether this is something serious or not.
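In case anyone wants to experiment while this is being looked into: the timeout is on the Filebeat-to-Logstash connection, and the Filebeat-side knob for that socket is the timeout option of the Logstash output. A minimal sketch of the relevant part of filebeat.yml (the host name and the 120-second value are only illustrative, not taken from the linked repo):

  output.logstash:
    # The Logstash host(s) Filebeat publishes to
    hosts: ["logstash:5044"]
    # Seconds to wait for a response from Logstash before the read times out
    # (default is 30; raising it may hide rather than fix the underlying cause)
    timeout: 120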

For what it's worth:
I get the same annoying

  logstash/async.go:256   Failed to publish events
    caused by: read tcp 192.0.2.3:59058->192.0.2.4:5044: i/o timeout

error.

The same error is also reported in another topic (which was closed after 28 days):

  ERROR logstash/async.go:256 Failed to publish events caused by: write tcp 10.XXX.XX.XX:43522->10.XXX.XX.XX:5044: i/o timeout
  2019-08-06T16:04:30.967+0800 ERROR pipeline/output.go:121 Failed to publish events: IO timeout

Pointers on what to check (on the journalbeat side or the logstash side) are welcome.
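One pointer for the logstash side: the beats input drops idle client connections after client_inactivity_timeout (60 seconds by default), which is one common source of exactly these i/o timeout / "client is not connected" messages when a client stays quiet for longer than that. A sketch of where that setting lives in a pipeline config (port 5044 as in the logs above; the 300 is only an example value):

  input {
    beats {
      port => 5044
      # keep idle Beats connections open longer than the clients' publish interval
      client_inactivity_timeout => 300   # seconds, default 60
    }
  }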

Allocating more memory to the Logstash JVM does not help.

With

logger.elasticsearchoutput.name = logstash.outputs.elasticsearch
logger.elasticsearchoutput.level = debug

added to /etc/logstash/log4j2.properties, I did not get any clues about what is causing this.
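Since the failing connection is the Beats listener rather than the Elasticsearch output, the input logger is probably the more useful one to turn up; assuming it follows the same naming convention, that would be something like:

  # same idea as above, but for the beats input
  logger.beatsinput.name = logstash.inputs.beats
  logger.beatsinput.level = debug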

So please share your thoughts about this issue.

Hi,
Which version of the stack are you using?
Are you running the stack on Docker?

7.5.2

Without Docker (the original poster is/was using Docker).

$ dpkg -l logstash | grep ^ii
ii  logstash       1:7.5.2-1    all          An extensible logging pipeline
$ dpkg -l elasticsearch | grep ^ii
ii  elasticsearch  7.5.2        amd64        Distributed RESTful search engine built for the cloud
$ dpkg -l kibana | grep ^ii
ii  kibana         7.5.2        amd64        Explore and visualize your Elasticsearch data

and at client side

$ dpkg -l journalbeat | grep ^ii
ii  journalbeat    7.5.2        amd64        Journalbeat ships systemd journal entries to Elasticsearch or Logstash.

Okay thanks.
I had the stack running on Docker perfectly fine with version 7.5.1.
This specific error started to appear when I switched to 7.5.2.
I had not changed anything in the configuration of the stack.
I suggest you try rolling back to 7.5.1 and see what happens.

Good advice. The problem indeed emerged roughly around the time of the switch from 7.5.1 to 7.5.2.

It still works, including the problem ...

Yes, I'm also on 7.5.2, and I am running Docker containers.

I am only experimenting at the moment, so I'll change everything in the Docker Compose file to 7.5.1 at some point.
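In case it helps anyone else rolling back, pinning the compose file to 7.5.1 is just a matter of the image tags. A trimmed sketch (the service names and the use of the official docker.elastic.co images are assumptions about a typical compose file, not copied from anyone's setup):

  services:
    elasticsearch:
      image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
    logstash:
      image: docker.elastic.co/logstash/logstash:7.5.1
    kibana:
      image: docker.elastic.co/kibana/kibana:7.5.1
    filebeat:
      image: docker.elastic.co/beats/filebeat:7.5.1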


I have this as well; I also posted a thread. Logs reach Elasticsearch fine through Logstash, but I get these errors every 1-2 minutes.

Yes, same here.

Changed the journalbeat side. Sharing that modification and my thoughts about it here.
Your review is appreciated.

commit 8add1b95454f19ca87eda22849a36054e01d39d6
Author: Geert Stappers <stappers@hendrikx-itc.nl>
Date:   Fri Feb 7 11:46:33 2020 +0100

    Patience to journalbeat
    
    Set the journalbeat time-out to 90*60 seconds.
    On the logstash side the "client idle time-out" is set to 75*60 seconds.
    That seventy-five-minute idle time-out is one hour plus a quarter.
    The hour is based on the assumption that clients talk to logstash at least once an hour.
    (the extra quarters are indeed a fat margin)
    
    What the effect of the 90*60 (5400) seconds of patience is
    when there is no logstash server is UNKNOWN.
    
    This change prevents (suppresses?)
      ERROR logstash/async.go:256   Failed to publish events caused by: read tcp journalbeatclient:43108->logstashserver:5044: i/o timeout
      ERROR logstash/async.go:256   Failed to publish events caused by: client is not connected
      ERROR pipeline/output.go:121  Failed to publish events: client is not connected
    as detected (triggered?) by the journalbeat client.
    
    Why this time-out setting is needed now, after several months of running without it,
    is UNKNOWN.

diff --git a/salt/srv/salt/core/etc/journalbeat/journalbeat.yml b/salt/srv/salt/core/etc/journalbeat/journalbeat.yml
index 5164f0d0..19eb5f3e 100644
--- a/salt/srv/salt/core/etc/journalbeat/journalbeat.yml
+++ b/salt/srv/salt/core/etc/journalbeat/journalbeat.yml
@@ -127,6 +127,9 @@ setup.kibana:
 output.logstash:
   # The Logstash hosts
   hosts: ["minerva-logserver:5044"]
+  # and have patience with those hosts
+  timeout: 5400 # seconds
+  # (see also the commit message from 2020-02-07)
 
   # Optional SSL. By default is off.
   # List of root certificates for HTTPS server verifications
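After rolling out a change like this, the client side can be sanity-checked without waiting days for the errors to (not) return; a rough sketch, assuming the default Debian package layout and systemd unit name:

  $ journalbeat test config -c /etc/journalbeat/journalbeat.yml
  $ journalbeat test output -c /etc/journalbeat/journalbeat.yml
  $ sudo systemctl restart journalbeat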

Where? And more importantly: does that location (URL) have a pointer back to here?

Here you go, I'll mention this thread now.

Did you see the fixes I applied in my thread?

Experienced the same issue with 7.5.2 (Docker), so I'm using 7.5.1.

I actually found that increasing the timeouts in Logstash and Filebeat did not fix it for me. After spending a few days on this, I found it is due to a bug in the Beats input of Logstash: https://github.com/elastic/logstash/issues/11540

I resolved it by upgrading my ELK stack to 7.6.0, but you can also just upgrade the plugin in Logstash by following the command in the GitHub issue above.
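For anyone who would rather stay on 7.5.x, the per-plugin route mentioned there boils down to updating the Beats input plugin in place and restarting Logstash; roughly (the path assumes a deb/rpm install, and the exact plugin version to target is given in the linked issue):

  $ sudo /usr/share/logstash/bin/logstash-plugin update logstash-input-beats
  $ sudo systemctl restart logstash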

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.