Filebeat Couldn't Send Logs to Logstash

Hi,

I get this error every time:

2019-08-22T16:41:22.957+0300	ERROR	logstash/async.go:256	Failed to publish events caused by: write tcp 172.30.10.5:49924->172.30.10.112:5046: wsasend: An existing connection was forcibly closed by the remote host.

Other Logs:

2019-08-22T16:39:43.944+0300	INFO	log/harvester.go:255	Harvester started for file: c:\windows\system32\dhcp\DhcpSrvLog-Thu.log
2019-08-22T16:39:44.944+0300	INFO	pipeline/output.go:95	Connecting to backoff(async(tcp://172.30.10.112:5046))
2019-08-22T16:39:44.947+0300	INFO	pipeline/output.go:105	Connection to backoff(async(tcp://172.30.10.112:5046)) established
2019-08-22T16:40:03.942+0300	INFO	[monitoring]	log/log.go:144	Non-zero metrics in the last 30s	{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":93,"time":{"ms":15}},"total":{"ticks":93,"time":{"ms":15},"value":93},"user":{"ticks":0}},"handles":{"open":175},"info":{"ephemeral_id":"c2dc7f46-829d-40ec-b7f3-46742969d89f","uptime":{"ms":63039}},"memstats":{"gc_next":6658096,"memory_alloc":4888016,"memory_total":7773544,"rss":3416064}},"filebeat":{"events":{"added":4,"done":4},"harvester":{"open_files":1,"running":1,"started":1}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":3,"batches":2,"total":3},"read":{"bytes":12},"write":{"bytes":955}},"pipeline":{"clients":1,"events":{"active":0,"filtered":1,"published":3,"retry":1,"total":4},"queue":{"acked":3}}},"registrar":{"states":{"current":1,"update":4},"writes":{"success":3,"total":3}}}}}
2019-08-22T16:40:33.940+0300	INFO	[monitoring]	log/log.go:144	Non-zero metrics in the last 30s	{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":93},"total":{"ticks":93,"value":93},"user":{"ticks":0}},"handles":{"open":172},"info":{"ephemeral_id":"c2dc7f46-829d-40ec-b7f3-46742969d89f","uptime":{"ms":93038}},"memstats":{"gc_next":6658096,"memory_alloc":4971512,"memory_total":7857040,"rss":4096}},"filebeat":{"harvester":{"open_files":1,"running":1}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":1}}}}}
2019-08-22T16:41:03.941+0300	INFO	[monitoring]	log/log.go:144	Non-zero metrics in the last 30s	{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":93},"total":{"ticks":93,"value":93},"user":{"ticks":0}},"handles":{"open":170},"info":{"ephemeral_id":"c2dc7f46-829d-40ec-b7f3-46742969d89f","uptime":{"ms":123039}},"memstats":{"gc_next":6658096,"memory_alloc":5051520,"memory_total":7937048,"rss":-4096}},"filebeat":{"harvester":{"open_files":1,"running":1}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":1}}}}}
2019-08-22T16:41:22.957+0300	ERROR	logstash/async.go:256	Failed to publish events caused by: write tcp 172.30.10.5:49924->172.30.10.112:5046: wsasend: An existing connection was forcibly closed by the remote host.
2019-08-22T16:41:24.710+0300	ERROR	pipeline/output.go:121	Failed to publish events: write tcp 172.30.10.5:49924->172.30.10.112:5046: wsasend: An existing connection was forcibly closed by the remote host.
2019-08-22T16:41:24.710+0300	INFO	pipeline/output.go:95	Connecting to backoff(async(tcp://172.30.10.112:5046))
2019-08-22T16:41:24.710+0300	INFO	pipeline/output.go:105	Connection to backoff(async(tcp://172.30.10.112:5046)) established

My Filebeat config is:

filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - c:\windows\system32\dhcp\DhcpSrvLog-*.log
    include_lines: ["^[0-9]"]
    document_type: dhcp
    close_removed: false
    clean_removed: false
    ignore_older: 47h
    clean_inactive: 48h
    fields:
      type: dhcp
    fields_under_root: true

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

output.logstash:
  hosts: ["172.30.10.112:5046"]

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~

Logstash Config:

input {
  beats {
    client_inactivity_timeout => 1200
    port => 5046
  }
}

filter {
  somefilter...
}

output {
  elasticsearch {
    hosts => ["http://192.168.2.21:9200"]
    index => "dhcp-%{+YYYY.MM.dd}"
  }
}

How can I solve this error?

Hi,

This error is most often caused by an underlying network issue. Can you ping the Logstash machine from the Beats machine? Is there any kind of load balancer or firewall between them? Can you confirm the target IP address from the Logstash machine?

@kolten: try telnet 172.30.10.112 5046

This will expose any network issues. I too faced a similar issue where I could ping the server but couldn't telnet to it.
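
Since the wsasend error shows Filebeat is running on Windows, where the telnet client is often not installed, a PowerShell check covers the same ground. A minimal sketch, assuming the same host and port as in your config:

# Run on the Windows host where Filebeat is installed.
# TcpTestSucceeded : True means a TCP connection to Logstash can be opened.
Test-NetConnection -ComputerName 172.30.10.112 -Port 5046

If the TCP test succeeds but the error still appears, the connection is being established and then dropped later, which points at a timeout or idle-connection policy rather than basic reachability.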

Everything is fine. I can ping the Logstash server, the IP addresses are correct, and there is no firewall or load balancer between them, @faec. So I can't understand the problem.

@kolten: Please remove this line: client_inactivity_timeout => 1200

input {
  beats {
    client_inactivity_timeout => 1200
    port => 5046
  }
}

With that setting, you are forcibly closing the connection whenever there is no activity for 1200 seconds.
Otherwise, the logs look healthy: the harvesters start, and Filebeat's connection to Logstash is established successfully.
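
If idle connections do turn out to be the cause, another option (an assumption about your setup, not something confirmed in this thread) is to have Filebeat recycle its own connections with the ttl setting. Note that the Filebeat docs state ttl is not honored by the async (pipelined) client, so pipelining has to be disabled for it to take effect. A sketch:

output.logstash:
  hosts: ["172.30.10.112:5046"]
  # Illustrative value: re-dial Logstash every 5 minutes so the connection
  # never sits idle long enough for the other side to drop it.
  ttl: 5m
  # ttl is not supported by the async (pipelined) client, so disable pipelining.
  pipelining: 0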

Hi @kumarabhi

My first config was without it, but it didn't work.

That doesn't sound right.

input {
  beats {
    port => 5044
  }
}

output {
  stdout {
    codec => rubydebug { }
  }
}

The above is the most basic config that works. I have tried it multiple times.

Test the config:

/usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/logstash.conf

Run Logstash:

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf
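
With this minimal pipeline, every event Filebeat ships is printed to the console in rubydebug form, so you can confirm delivery end to end before re-adding your filter and the Elasticsearch output. If the forcibly-closed error still appears against this bare config, the problem is on the network path rather than in the pipeline definition.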
