Hi,
Need help!
All of a sudden I started getting these errors in my Logstash log, a LOT of them: roughly two every millisecond.
I see that events are coming in from my Beats agents, but not at the pace I usually see.
I have not changed any configuration on either the Beats or the Logstash side.
I am running Logstash and Elasticsearch on the same server.
Logstash log:
[2018-05-25T09:55:43,871][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"D:/Bin/Logstash/modules/fb_apache/configuration"}
[2018-05-25T09:55:43,875][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"D:/Bin/Logstash/modules/netflow/configuration"}
[2018-05-25T09:55:43,916][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-05-25T09:55:44,577][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2018-05-25T09:55:47,511][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elkserver01:9200/]}}
[2018-05-25T09:55:47,514][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elkserver01:9200/, :path=>"/"}
[2018-05-25T09:55:47,609][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://elkserver01:9200/"}
[2018-05-25T09:55:47,644][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//elkserver01:9200"]}
[2018-05-25T09:55:47,650][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elkserver01:9200/]}}
[2018-05-25T09:55:47,650][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elkserver01:9200/, :path=>"/"}
[2018-05-25T09:55:47,654][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://elkserver01:9200/"}
[2018-05-25T09:55:47,664][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//elkserver01:9200"]}
[2018-05-25T09:55:47,668][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elkserver01:9200/]}}
[2018-05-25T09:55:47,670][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elkserver01:9200/, :path=>"/"}
[2018-05-25T09:55:47,673][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://elkserver01:9200/"}
[2018-05-25T09:55:47,680][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//elkserver01:9200"]}
[2018-05-25T09:55:47,688][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>250, :thread=>"#<Thread:0x49754912@D:/Bin/Logstash/logstash-core/lib/logstash/pipeline.rb:290 run>"}
[2018-05-25T09:55:48,469][INFO ][logstash.inputs.beats ] Beats inputs: Starting input listener {:address=>"10.50.121.50:5044"}
[2018-05-25T09:55:48,534][INFO ][logstash.pipeline ] Pipeline started {"pipeline.id"=>"main"}
[2018-05-25T09:55:48,549][INFO ][org.logstash.beats.Server] Starting server on port: 5044
[2018-05-25T09:55:48,624][INFO ][logstash.agent ] Pipelines running {:count=>1, :pipelines=>["main"]}
[2018-05-25T09:57:03,821][INFO ][org.logstash.beats.BeatsHandler] [local: 0.0.0.0:5044, remote: 10.50.150.7:54638] Exception: An established connection was aborted by the software in your host machine
[2018-05-25T09:57:03,899][WARN ][logstash.inputs.beats ] Could not retrieve remote IP address for beats input.
.....
One of my Beats logs (Winlogbeat):
2018-05-25T10:01:38+02:00 INFO Home path: [C:\Elastic\winlogbeat] Config path: [C:\Elastic\winlogbeat] Data path: [C:\ProgramData\winlogbeat] Logs path: [C:\Elastic\winlogbeat\logs]
2018-05-25T10:01:38+02:00 INFO Metrics logging every 30s
2018-05-25T10:01:38+02:00 INFO Beat UUID: 97cab043-2f46-43b9-95a0-ccc6adfa661f
2018-05-25T10:01:38+02:00 INFO Setup Beat: winlogbeat; Version: 6.0.0
2018-05-25T10:01:38+02:00 INFO Beat name: SERVERX
2018-05-25T10:01:38+02:00 INFO State will be read from and persisted to C:\ProgramData\winlogbeat.winlogbeat.yml
2018-05-25T10:01:38+02:00 INFO winlogbeat start running.
2018-05-25T10:02:08+02:00 INFO Non-zero metrics in the last 30s: beat.memstats.gc_next=44020208 beat.memstats.memory_alloc=41479008 beat.memstats.memory_total=145583704 libbeat.config.module.running=0 libbeat.output.events.active=4096 libbeat.output.events.batches=3 libbeat.output.events.total=4096 libbeat.output.type=logstash libbeat.output.write.bytes=1136319 libbeat.pipeline.clients=4 libbeat.pipeline.events.active=4118 libbeat.pipeline.events.published=4116 libbeat.pipeline.events.retry=1600 libbeat.pipeline.events.total=4118 msg_file_cache.Microsoft-Windows-NTLM/OperationalHits=2224 msg_file_cache.Microsoft-Windows-NTLM/OperationalMisses=1 msg_file_cache.Microsoft-Windows-NTLM/OperationalSize=1 msg_file_cache.Microsoft-Windows-PowerShell/OperationalHits=1506 msg_file_cache.Microsoft-Windows-PowerShell/OperationalMisses=1 msg_file_cache.Microsoft-Windows-PowerShell/OperationalSize=1 msg_file_cache.SystemHits=423 msg_file_cache.SystemMisses=6 msg_file_cache.SystemSize=6 uptime={"server_time":"2018-05-25T08:02:08.6577244Z","start_time":"2018-05-25T08:01:38.6289945Z","uptime":"30.0287299s","uptime_ms":"30028729"}
2018-05-25T10:02:09+02:00 ERR Failed to publish events caused by: read tcp 10.50.150.7:52935->10.50.121.50:5044: i/o timeout
2018-05-25T10:02:09+02:00 ERR Failed to publish events caused by: read tcp 10.50.150.7:52935->10.50.121.50:5044: i/o timeout
2018-05-25T10:02:09+02:00 ERR Failed to publish events caused by: read tcp 10.50.150.7:52935->10.50.121.50:5044: i/o timeout
2018-05-25T10:02:10+02:00 ERR Failed to publish events caused by: client is not connected
2018-05-25T10:02:10+02:00 INFO retryer: send wait signal to consumer
2018-05-25T10:02:10+02:00 INFO done
2018-05-25T10:02:11+02:00 ERR Failed to publish events: client is not connected
2018-05-25T10:02:11+02:00 INFO retryer: send unwait-signal to consumer
2018-05-25T10:02:11+02:00 INFO done
2018-05-25T10:02:11+02:00 INFO retryer: send wait signal to consumer
I don't understand why.
If I run netstat -a, I see a lot of established remote connections to Logstash (port 5044).
And as I said, I do see data coming in. I saw this once before, and it somehow resolved itself, but now it has come back. My cluster indices are green.