Filebeat configuration assistance

I have the ELK stack installed on a CentOS 7 Linux machine and can ingest logs from another Linux machine, but I'm unable to ingest from a Windows Server 2012 machine. Here is my configuration:

On Linux, 02-beats-input.conf:

input {
  beats {
    port => 5044
    ssl => false
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
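
(As far as I understand, with ssl => false the ssl_certificate and ssl_key settings are ignored, so this input is effectively a plain TCP listener. A minimal equivalent would be something like:)

input {
  beats {
    # plain TCP listener on 5044; the ssl_* options only take effect when ssl => true
    port => 5044
    ssl => false
  }
}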

On Windows, filebeat.yml:

filebeat.prospectors:
 - type: log
   paths:
     - c:\temp\messages
output.logstash:
  hosts: ["169.229.7.147:5044"]
  bulk_max_size: 1024
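
As a sanity check that this file parses, Filebeat can be run with its built-in config test from the install directory (a sketch, assuming the default install under C:\Program Files\Filebeat shown in the log below):

PS C:\Program Files\Filebeat> .\filebeat test config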

On Windows, the Filebeat log file shows:

2018-05-25T10:46:35.604-0700	INFO	instance/beat.go:468	Home path: [C:\Program Files\Filebeat] Config path: [C:\Program Files\Filebeat] Data path: [C:\ProgramData\filebeat] Logs path: [C:\ProgramData\filebeat\logs]
2018-05-25T10:46:35.605-0700	INFO	instance/beat.go:475	Beat UUID: 40df7e2e-48e9-4000-9d16-af254728ea06
2018-05-25T10:46:35.605-0700	INFO	instance/beat.go:213	Setup Beat: filebeat; Version: 6.2.4
2018-05-25T10:46:35.605-0700	INFO	pipeline/module.go:76	Beat name: maxdev
2018-05-25T10:46:35.606-0700	INFO	[monitoring]	log/log.go:97	Starting metrics logging every 30s
2018-05-25T10:46:35.606-0700	INFO	instance/beat.go:301	filebeat start running.
2018-05-25T10:46:35.606-0700	INFO	registrar/registrar.go:110	Loading registrar data from C:\ProgramData\filebeat\registry
2018-05-25T10:46:35.606-0700	INFO	registrar/registrar.go:121	States Loaded from registrar: 4
2018-05-25T10:46:35.606-0700	WARN	beater/filebeat.go:261	Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2018-05-25T10:46:35.606-0700	INFO	crawler/crawler.go:48	Loading Prospectors: 1
2018-05-25T10:46:35.607-0700	INFO	log/prospector.go:111	Configured paths: [c:\temp\messages]
2018-05-25T10:46:35.607-0700	INFO	crawler/crawler.go:82	Loading and starting Prospectors completed. Enabled prospectors: 1
2018-05-25T10:46:35.607-0700	INFO	log/harvester.go:216	Harvester started for file: c:\temp\messages
2018-05-25T10:46:35.945-0700	ERROR	logstash/async.go:235	Failed to publish events caused by: read tcp 128.32.154.165:65161->169.229.7.147:5044: wsarecv: An existing connection was forcibly closed by the remote host.

Please let me know what the issue might be. Thanks!

That usually means the connection was reset with an RST from the server. Can you look through the Logstash logs for errors? (If you start Logstash with --debug, you'll get more verbose logging.)
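
For example, something like this should print any pipeline errors to the console (just a sketch; a Logstash 2.x package install under /opt/logstash with pipeline configs in /etc/logstash/conf.d is an assumption here, so adjust the paths to your setup):

sudo systemctl stop logstash
sudo /opt/logstash/bin/logstash agent -f /etc/logstash/conf.d --debug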

You should be able to recreate the issue by running this command from the Windows host: .\filebeat test output -e -d "*"
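
If that also fails immediately, it may be worth ruling out a basic firewall/connectivity problem first, e.g. (assuming PowerShell 4 or later on the Windows host):

Test-NetConnection -ComputerName 169.229.7.147 -Port 5044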

Thank you for helping.

When I run .\filebeat test output -e -d "*", the Logstash debug log shows the following:

{:timestamp=>"2018-05-29T11:21:58.036000-0700", :message=>"Beats inputs: accepting a new connection", :peer=>"128.32.154.165:32067", :level=>:debug, :file=>"logstash/inputs/beats.rb", :line=>"202", :method=>"handle_new_connection"}
{:timestamp=>"2018-05-29T11:21:58.040000-0700", :message=>"Beats input: waiting from new events from remote host", :peer=>"128.32.154.165:32067", :level=>:debug, :file=>"logstash/inputs/beats_support/connection_handler.rb", :line=>"30", :method=>"accept"}
{:timestamp=>"2018-05-29T11:21:58.050000-0700", :message=>"Beats Input: Remote connection closed", :peer=>"128.32.154.165:32067", :exception=>#<Lumberjack::Beats::Connection::ConnectionClosed: Lumberjack::Beats::Connection::ConnectionClosed wrapping: Errno::ECONNRESET, Connection reset by peer - Connection reset by peer>, :level=>:debug, :file=>"logstash/inputs/beats.rb", :line=>"215", :method=>"handle_new_connection"}
{:timestamp=>"2018-05-29T11:21:58.050000-0700", :message=>"Beats input, out of band call for flushing the content of this connection", :peer=>"128.32.154.165:32067", :level=>:debug, :file=>"logstash/inputs/beats_support/connection_handler.rb", :line=>"73", :method=>"flush"}
{:timestamp=>"2018-05-29T11:21:58.050000-0700", :message=>"Beats input: clearing the connection from the known clients", :peer=>"128.32.154.165:32067", :level=>:debug,

When I run the normal Filebeat process from the Windows box, the Logstash debug log shows the following:

{:timestamp=>"2018-05-29T11:19:03.091000-0700", :message=>"Beats inputs: accepting a new connection", :peer=>"128.32.154.165:32037", :level=>:debug, :file=>"logstash/inputs/beats.rb", :line=>"202", :method=>"handle_new_connection"}
{:timestamp=>"2018-05-29T11:19:03.094000-0700", :message=>"Beats input: waiting from new events from remote host", :peer=>"128.32.154.165:32037", :level=>:debug, :file=>"logstash/inputs/beats_support/connection_handler.rb", :line=>"30", :method=>"accept"}
{:timestamp=>"2018-05-29T11:19:03.377000-0700", :message=>"Flushing buffer at interval", :instance=>"#<LogStash::Outputs::ElasticSearch::Buffer:0x379d3eca @operations_mutex=#Mutex:0x395ad649, @max_size=500, @operations_lock=#Java::JavaUtilConcurrentLocks::ReentrantLock:0x3d6555ba, @submit_proc=#Proc:0x2652d6da@/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/common.rb:57, @logger=#<Cabin::Channel:0x4955a4c7 @metrics=#<Cabin::Metrics:0x5e8af0d2 @metrics_lock=#Mutex:0x1848233d, @metrics={}, @channel=#<Cabin::Channel:0x4955a4c7 ...>>, @subscriber_lock=#Mutex:0x281cd2e8, @level=:debug, @subscribers={12322=>#<Cabin::Outputs::IO:0x458e5d83 @io=#<File:/var/log/logstash/logstash.log>, @lock=#Mutex:0x1c06dfc4>}, @data={}>, @last_flush=2018-05-29 11:19:02 -0700, @flush_interval=1, @stopping=#Concurrent::AtomicBoolean:0x5243c53e, @buffer=[], @flush_thread=#<Thread:0x4798a97e run>>", :interval=>1, :level=>:debug, :file=>"logstash/outputs/elasticsearch/buffer.rb", :line=>"90", :method=>"interval_flush"}
{:timestamp=>"2018-05-29T11:19:03.498000-0700", :message=>"Beats input: unhandled exception", :exception=>#<Zlib::BufError: buffer error>, :backtrace=>["org/jruby/ext/zlib/ZStream.java:134:in finish'", "org/jruby/ext/zlib/JZlibInflate.java:72:ininflate'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-beats-2.2.7/lib/lumberjack/beats/server.rb:380:in compressed_payload'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-beats-2.2.7/lib/lumberjack/beats/server.rb:251:infeed'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-beats-2.2.7/lib/lumberjack/beats/server.rb:384:in compressed_payload'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-beats-2.2.7/lib/lumberjack/beats/server.rb:251:infeed'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-beats-2.2.7/lib/lumberjack/beats/server.rb:462:in read_socket'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-beats-2.2.7/lib/lumberjack/beats/server.rb:442:inrun'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-beats-2.2.7/lib/logstash/inputs/beats_support/connection_handler.rb:33:in accept'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-beats-2.2.7/lib/logstash/inputs/beats.rb:211:inhandle_new_connection'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-beats-2.2.7/lib/logstash/inputs/beats_support/circuit_breaker.rb:42:in execute'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-beats-2.2.7/lib/logstash/inputs/beats.rb:211:inhandle_new_connection'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-beats-2.2.7/lib/logstash/inputs/beats.rb:167:in `run'"], :level=>:error, :file=>"logstash/inputs/beats.rb", :line=>"223", :method=>"handle_new_connection"}
{:timestamp=>"2018-05-29T11:19:03.499000-0700", :message=>"Beats input, out of band call for flushing the content of this connection", :peer=>"128.32.154.165:32037", :level=>:debug, :file=>"logstash/inputs/beats_support/connection_handler.rb", :line=>"73", :method=>"flush"}
{:timestamp=>"2018-05-29T11:19:03.499000-0700", :message=>"Beats input: clearing the connection from the known clients", :peer=>"128.32.154.165:32037", :level=>:debug, :file=>"logstash/inputs/beats.rb", :line=>"238", :method=>"handle_new_connection"}
{:timestamp=>"2018-05-29T11:19:04.379000-0700", :message=>"Flushing buffer at interval", :instance=>"#<LogStash::Outputs::ElasticSearch::Buffer:0x379d3eca @operations_mutex=#Mutex:0x395ad649, @max_size=500, @operations_lock=#Java::JavaUtilConcurrentLocks::ReentrantLock:0x3d6555ba, @submit_proc=#Proc:0x2652d6da@/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/common.rb:57, @logger=#<Cabin::Channel:0x4955a4c7 @metrics=#<Cabin::Metrics:0x5e8af0d2 @metrics_lock=#Mutex:0x1848233d, @metrics={}, @channel=#<Cabin::Channel:0x4955a4c7 ...>>, @subscriber_lock=#Mutex:0x281cd2e8, @level=:debug, @subscribers={12322=>#<Cabin::Outputs::IO:0x458e5d83 @io=#<File:/var/log/logstash/logstash.log>, @lock=#Mutex:0x1c06dfc4>}, @data={}>, @last_flush=2018-05-29 11:19:03 -0700, @flush_interval=1, @stopping=#Concurrent::AtomicBoolean:0x5243c53e, @buffer=[], @flush_thread=#<Thread:0x4798a97e run>>", :interval=>1, :level=>:debug, :file=>"logstash/outputs/elasticsearch/buffer.rb", :line=>"90", :method=>"interval_flush"}

Please advise. Thanks.
