Hi,
I get this error every time Filebeat runs:
2019-08-22T16:41:22.957+0300 ERROR logstash/async.go:256 Failed to publish events caused by: write tcp 172.30.10.5:49924->172.30.10.112:5046: wsasend: An existing connection was forcibly closed by the remote host.
Other Logs:
2019-08-22T16:39:43.944+0300 INFO log/harvester.go:255 Harvester started for file: c:\windows\system32\dhcp\DhcpSrvLog-Thu.log
2019-08-22T16:39:44.944+0300 INFO pipeline/output.go:95 Connecting to backoff(async(tcp://172.30.10.112:5046))
2019-08-22T16:39:44.947+0300 INFO pipeline/output.go:105 Connection to backoff(async(tcp://172.30.10.112:5046)) established
2019-08-22T16:40:03.942+0300 INFO [monitoring] log/log.go:144 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":93,"time":{"ms":15}},"total":{"ticks":93,"time":{"ms":15},"value":93},"user":{"ticks":0}},"handles":{"open":175},"info":{"ephemeral_id":"c2dc7f46-829d-40ec-b7f3-46742969d89f","uptime":{"ms":63039}},"memstats":{"gc_next":6658096,"memory_alloc":4888016,"memory_total":7773544,"rss":3416064}},"filebeat":{"events":{"added":4,"done":4},"harvester":{"open_files":1,"running":1,"started":1}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":3,"batches":2,"total":3},"read":{"bytes":12},"write":{"bytes":955}},"pipeline":{"clients":1,"events":{"active":0,"filtered":1,"published":3,"retry":1,"total":4},"queue":{"acked":3}}},"registrar":{"states":{"current":1,"update":4},"writes":{"success":3,"total":3}}}}}
2019-08-22T16:40:33.940+0300 INFO [monitoring] log/log.go:144 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":93},"total":{"ticks":93,"value":93},"user":{"ticks":0}},"handles":{"open":172},"info":{"ephemeral_id":"c2dc7f46-829d-40ec-b7f3-46742969d89f","uptime":{"ms":93038}},"memstats":{"gc_next":6658096,"memory_alloc":4971512,"memory_total":7857040,"rss":4096}},"filebeat":{"harvester":{"open_files":1,"running":1}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":1}}}}}
2019-08-22T16:41:03.941+0300 INFO [monitoring] log/log.go:144 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":93},"total":{"ticks":93,"value":93},"user":{"ticks":0}},"handles":{"open":170},"info":{"ephemeral_id":"c2dc7f46-829d-40ec-b7f3-46742969d89f","uptime":{"ms":123039}},"memstats":{"gc_next":6658096,"memory_alloc":5051520,"memory_total":7937048,"rss":-4096}},"filebeat":{"harvester":{"open_files":1,"running":1}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":1}}}}}
2019-08-22T16:41:22.957+0300 ERROR logstash/async.go:256 Failed to publish events caused by: write tcp 172.30.10.5:49924->172.30.10.112:5046: wsasend: An existing connection was forcibly closed by the remote host.
2019-08-22T16:41:24.710+0300 ERROR pipeline/output.go:121 Failed to publish events: write tcp 172.30.10.5:49924->172.30.10.112:5046: wsasend: An existing connection was forcibly closed by the remote host.
2019-08-22T16:41:24.710+0300 INFO pipeline/output.go:95 Connecting to backoff(async(tcp://172.30.10.112:5046))
2019-08-22T16:41:24.710+0300 INFO pipeline/output.go:105 Connection to backoff(async(tcp://172.30.10.112:5046)) established
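For what it's worth, the logs show the connection being re-established right after each failure, so basic reachability seems fine. A minimal check like the following (a sketch on my part; the host and port are taken from the logs above) is what I would use to rule out the port being blocked entirely:

```python
import socket

def port_open(host, port, timeout=5):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# hypothetical usage -- host/port from the logs above:
# port_open("172.30.10.112", 5046)
```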
My Filebeat config is:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - c:\windows\system32\dhcp\DhcpSrvLog-*.log
  include_lines: ["^[0-9]"]
  document_type: dhcp
  close_removed: false
  clean_removed: false
  ignore_older: 47h
  clean_inactive: 48h
  fields:
    type: dhcp
  fields_under_root: true

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

output.logstash:
  hosts: ["172.30.10.112:5046"]

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
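One thing I have seen suggested for "connection forcibly closed" errors is that an idle connection is being dropped by something between the hosts (a firewall or load balancer), and that the `ttl` option on the Logstash output can work around it. This is an assumption on my part, not something I have confirmed; a sketch of what I would try:

```yaml
output.logstash:
  hosts: ["172.30.10.112:5046"]
  # assumption: periodically re-establish the connection so an idle
  # connection is not silently dropped by an intermediate device
  ttl: 60s
  # per the Filebeat docs, ttl only takes effect when pipelining is disabled
  pipelining: 0
```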
My Logstash config is:
input {
  beats {
    client_inactivity_timeout => 1200
    port => 5046
  }
}

filter {
  somefilter...
}

output {
  elasticsearch {
    hosts => ["http://192.168.2.21:9200"]
    index => "dchp-%{+YYYY.MM.dd}"
  }
}
How can I solve this error?