URGENT: ERR Failed to publish events caused by: write tcp [::1]:60383->[::1]:5044: wsasend: An established connection was aborted by the software in your host machine

Hi team,
Please help urgently as we are doing a production deployment this week. I am getting errors in the Filebeat logs and data is not being read and sent to Logstash ... my data flow is Filebeat -> Logstash -> Elasticsearch.

ERR Failed to publish events caused by: write tcp [::1]:60383->[::1]:5044: wsasend: An established connection was aborted by the software in your host machine.

Here is my filebeat conf -
filebeat.prospectors:

# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.

# Below are the prospector specific configurations.

- input_type: log
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - D:/Debashree/TechOffice/Software/Logs/test*.logs
  document_type: testing
  fields:
    server: localhost
  ignore_older: 10m
  harvester_buffer_size: 16384
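
The output section of filebeat.yml is not pasted above, but given the error on port 5044 it should be pointing Filebeat at Logstash, something like this minimal sketch (hosts value assumed from the error message):

output.logstash:
  hosts: ["localhost:5044"]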

Logstash conf file
input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    index => "filebeattest"
  }
}

So when multiple log files are updated, below is what I see:
2017/06/15 19:33:16.834650 sync.go:85: ERR Failed to publish events caused by: read tcp 127.0.0.1:61443->127.0.0.1:5044: wsarecv: An established connection was aborted by the software in your host machine.
2017/06/15 19:33:16.835653 single.go:91: INFO Error publishing events (retrying): read tcp 127.0.0.1:61443->127.0.0.1:5044: wsarecv: An established connection was aborted by the software in your host machine.
2017/06/15 19:33:36.565050 metrics.go:39: INFO Non-zero metrics in the last 30s: libbeat.logstash.call_count.PublishEvents=2 libbeat.logstash.publish.read_bytes=6 libbeat.logstash.publish.read_errors=1 libbeat.logstash.publish.write_bytes=490 libbeat.logstash.published_and_acked_events=1 libbeat.logstash.published_but_not_acked_events=1 libbeat.publisher.published_events=1 publish.events=1 registrar.states.update=1 registrar.writes=1

What am I missing here?

It looks like Logstash (the host) is closing the connection while Beats is waiting for the ACK. Which Logstash version are you using? With this minimal Logstash configuration, why don't you have Filebeat ship logs directly to Elasticsearch?
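
For example, a minimal sketch of a direct Elasticsearch output in filebeat.yml, assuming the same host and index as in your Logstash output (comment out the output.logstash section while you test this):

output.elasticsearch:
  hosts: ["127.0.0.1:9200"]
  index: "filebeattest"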

Try disabling any firewall or antivirus tools on the Filebeat host to see if either is the cause of the problem.

Hi,
Now this issue is not appearing, but strangely, when we onboarded new logs (the log format is the same) read from multiple servers and started Logstash, it is able to read them, but the CPU usage of the box keeps increasing continuously. Then, when we start Kibana, it reaches 100% and the server crashes.

What could be the issue, and what can we do? Please help urgently because a client deliverable is stuck because of this.

Do you know what processes are consuming the CPU and how much each one is using?
