ERR Failed to publish events caused by: read tcp X.XXXX:55860->XXXXXXX:5044: i/o timeout

Filebeat is not able to send data and is getting a timeout error with Logstash.
What could be the reason?

Please show the logs and config. Without additional information it is very hard for anyone to help...

Sorry for the missing log.
Please find the Filebeat log below:

2018-07-27T16:59:36-05:00 ERR Failed to publish events caused by: read tcp (FilebeatIP):46926->(LogSTashIP):5044: i/o timeout
2018-07-27T16:59:36-05:00 INFO Error publishing events (retrying): read tcp (FilebeatIP):46926->(LogSTashIP):5044: i/o timeout
2018-07-27T17:00:03-05:00 INFO Non-zero metrics in the last 30s: libbeat.logstash.publish.read_errors=1 libbeat.logstash.published_but_not_acked_events=2047
2018-07-27T17:00:33-05:00 INFO No non-zero metrics in the last 30s
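The i/o timeout above means Filebeat's TCP read from the Logstash beats port stalled. One quick sanity check is to verify basic reachability of the port from the Filebeat host, for example with a small Python probe (the host and port in the commented example are placeholders, not values from this thread):

```python
import socket

def can_connect(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: probe the Logstash beats port (replace host with your Logstash host).
# print(can_connect("logstash.example.com", 5044))
```

If this returns False, the problem is network-level (firewall, routing, Logstash not listening) rather than anything in the Beats protocol itself.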

Hi,

Please share your Filebeat configuration file, Logstash configuration file, and pipeline so we can troubleshoot this issue.

Regards,

===============================
filebeat.yml file

filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/messages
  fields:
    appid: XXXX
    environment: XXX
    hosting_env: XXXX
    log_type: varlog

output.logstash:
  hosts: [":5044"]
  ssl.certificate_authorities: ["XXXX_XX.pem"]

logging.level: info
logging.to_files: true
logging.to_syslog: false
logging.files.path: /var/log/filebeat
logging.files.name: filebeat.log
logging.files.keepfiles: 2
registry_file: /etc/filebeat/.filebeat.registry

===============================
logstash.yml file

path.data: /data/logstash

queue.type: persisted

path.logs: /var/log/logstash
xpack.monitoring.enabled: false
xpack.monitoring.elasticsearch.url: [ ":9200" ]
xpack.monitoring.elasticsearch.username: XXXXX_XXXX
xpack.monitoring.elasticsearch.password: XXXXX
xpack.monitoring.elasticsearch.ssl.ca: "XXXX_XX.pem"

===============================
input.beat.conf file

input {
  beats {
    port => 5044
    ssl => false
    ssl_certificate => ".crt"
    ssl_key => ".pkcs8"
  }
}

Hi,

Please check your SSL settings. If those look OK, then one more possible solution is below.

# If using queue.type: persisted, the total capacity of the queue in number of bytes.
# If you would like more unacked events to be buffered in Logstash, you can increase the
# capacity using this setting. Please make sure your disk drive has capacity greater than
# the size specified here. If both max_bytes and max_events are specified, Logstash will pick
# whichever criteria is reached first
# Default is 1024mb or 1gb
#
#queue.max_bytes: 2gb 

For more details, please read:
https://www.elastic.co/guide/en/logstash/current/persistent-queues.html
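For example, the commented defaults quoted above could be enabled in logstash.yml like this (the 2gb value is only illustrative; make sure the disk under path.data has at least that much free space):

```
# logstash.yml -- sketch only, sizes are illustrative
queue.type: persisted
queue.max_bytes: 2gb
```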

Regards,
Harsh

Thanks, Harsh, for the info.

I was able to fix the connection issue,
but I can see the error below in the Logstash log file. Please advise.

[2018-08-01T13:30:45,136][INFO ][org.logstash.beats.BeatsHandler] Exception: not an SSL/TLS record:

Hi @rajkumar.m,

Please read this thread; it will help you get this resolved.
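As a general note, the "not an SSL/TLS record" exception usually means one side is speaking TLS while the other is not. In the configs posted above, the Filebeat output sets ssl.certificate_authorities, but the Logstash beats input has ssl => false. A sketch of matching settings, assuming TLS is wanted on both sides (all hostnames and paths below are placeholders):

```
# filebeat.yml (Filebeat side)
output.logstash:
  hosts: ["logstash-host:5044"]
  ssl.certificate_authorities: ["/path/to/ca.pem"]

# input.beat.conf (Logstash side)
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/path/to/server.crt"
    ssl_key => "/path/to/server.pkcs8"
  }
}
```

Alternatively, disable TLS on both sides by keeping ssl => false in the beats input and removing the ssl.* settings from the Filebeat output; either way, the two sides must agree.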

Regards,
Harsh

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.