Hello everyone,
I previously installed Logstash on an Ubuntu 16.04 LTS VM running on Hyper-V. We want to collect all Windows events with Winlogbeat.
I tested it for a few days with only a few servers. It works, but after some time the server slows down until the cluster reaches RED state, and only a reboot helps. I did not change the elasticsearch.yml config; I only set up the configs to send logs through Logstash with certificates.
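For reference, the relevant part of my winlogbeat.yml looks roughly like this (the hostname and certificate paths are placeholders, not my real values):

```yaml
winlogbeat.event_logs:
  - name: Application
  - name: Security
  - name: System

output.logstash:
  hosts: ["logstash.example.local:5044"]
  # CA that signed the Logstash server certificate
  ssl.certificate_authorities: ["C:/winlogbeat/ca.crt"]
```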
On Friday our Windows servers suddenly stopped sending logs.

Winlogbeat log:
2017-08-14T10:02:28+02:00 INFO EventLog[Application] Successfully published 1 events
2017-08-14T10:02:33+02:00 INFO Stopping Winlogbeat
2017-08-14T10:02:33+02:00 INFO Non-zero metrics in the last 30s: libbeat.logstash.call_count.PublishEvents=7 libbeat.logstash.publish.read_bytes=371 libbeat.logstash.publish.write_bytes=7395 libbeat.logstash.published_and_acked_events=20 libbeat.publisher.published_events=20 msg_file_cache.ApplicationMisses=4 msg_file_cache.ApplicationSize=4 msg_file_cache.SecurityHits=8 msg_file_cache.SystemHits=5 msg_file_cache.SystemMisses=3 msg_file_cache.SystemSize=3 published_events.Application=4 published_events.Security=8 published_events.System=8 published_events.total=20 uptime={"server_time":"2017-08-14T08:02:33.0551393Z","start_time":"2017-08-10T13:39:26.401834Z","uptime":"90h23m6.6533053s","uptime_ms":"325386653305"}
2017-08-14T10:02:33+02:00 INFO EventLog[System] Stop processing.
2017-08-14T10:02:33+02:00 INFO EventLog[Security] Stop processing.
2017-08-14T10:02:33+02:00 INFO EventLog[Application] Stop processing.
2017-08-14T10:02:33+02:00 INFO Total non-zero values: libbeat.logstash.call_count.PublishEvents=17769 libbeat.logstash.publish.read_bytes=1936401 libbeat.logstash.publish.read_errors=111 libbeat.logstash.publish.write_bytes=37270974 libbeat.logstash.publish.write_errors=81 libbeat.logstash.published_and_acked_events=133596 libbeat.logstash.published_but_not_acked_events=240 libbeat.publisher.published_events=133596 msg_file_cache.ApplicationHits=420 msg_file_cache.ApplicationMisses=65 msg_file_cache.ApplicationSize=4 msg_file_cache.SecurityHits=74052 msg_file_cache.SecurityMisses=13 msg_file_cache.SecuritySize=1 msg_file_cache.SystemHits=57794 msg_file_cache.SystemMisses=1261 msg_file_cache.SystemSize=4 published_events.Application=482 published_events.Security=74059 published_events.System=59055 published_events.total=133596 uptime={"server_time":"2017-08-14T08:02:33.9375624Z","start_time":"2017-08-10T13:39:26.401834Z","uptime":"90h23m7.5357284s","uptime_ms":"325387535728"}
2017-08-14T10:02:33+02:00 INFO Uptime: 90h23m7.545728s
2017-08-14T10:02:33+02:00 INFO winlogbeat stopped.
Elasticsearch server log:
[2017-08-14 08:14:32,672][INFO ][monitor.jvm ] [Zaladane] [gc][young][247803][36436] duration [752ms], collections [1]/[1s], total [752ms]/[1.6h], memory [580.7mb]->[588.5mb]/[1015.6mb], all_pools {[young] [1.9mb]->[17.7mb]$
[2017-08-14 08:32:39,722][INFO ][cluster.metadata ] [Zaladane] [winlogbeat-2017.08.14] update_mapping [wineventlog]
[2017-08-14 10:02:27,920][INFO ][cluster.metadata ] [Zaladane] [winlogbeat-2017.08.14] update_mapping [wineventlog]
[2017-08-14 10:05:03,233][INFO ][cluster.metadata ] [Zaladane] [winlogbeat-2017.08.14] update_mapping [wineventlog]
[2017-08-14 10:05:03,458][INFO ][cluster.metadata ] [Zaladane] [winlogbeat-2017.08.14] update_mapping [wineventlog]
[2017-08-14 10:05:03,497][INFO ][cluster.metadata ] [Zaladane] [winlogbeat-2017.08.14] update_mapping [wineventlog]
About five hours later, the server starts spamming these messages:
[2017-08-14 15:45:58,997][DEBUG][action.bulk ] [Zaladane] [winlogbeat-2017.08.14][3] failed to execute bulk item (index) index {[winlogbeat-2017.08.14][wineventlog][AV3g_nVqLna5gr6L84O_], source[{"message":"Die Systemzeit w$
MapperParsingException[failed to parse [event_data.PreviousTime]]; nested: IllegalArgumentException[Invalid format: "11:43:51" is malformed at ":43:51"];
at org.elasticsearch.index.mapper.FieldMapper.parse(FieldMapper.java:329)
at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrField(DocumentParser.java:311)
at org.elasticsearch.index.mapper.DocumentParser.parseValue(DocumentParser.java:438)
at org.elasticsearch.index.mapper.DocumentParser.parseObject(DocumentParser.java:264)
at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrField(DocumentParser.java:308)
at org.elasticsearch.index.mapper.DocumentParser.parseObject(DocumentParser.java:328)
at org.elasticsearch.index.mapper.DocumentParser.parseObject(DocumentParser.java:254)
at org.elasticsearch.index.mapper.DocumentParser.parseDocument(DocumentParser.java:124)
at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:309)
at org.elasticsearch.index.shard.IndexShard.prepareCreate(IndexShard.java:533)
at org.elasticsearch.index.shard.IndexShard.prepareCreateOnPrimary(IndexShard.java:510)
at org.elasticsearch.action.index.TransportIndexAction.prepareIndexOperationOnPrimary(TransportIndexAction.java:214)
at org.elasticsearch.action.index.TransportIndexAction.executeIndexRequestOnPrimary(TransportIndexAction.java:223)
at org.elasticsearch.action.bulk.TransportShardBulkAction.shardIndexOperation(TransportShardBulkAction.java:327)
at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:120)
at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:68)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase.doRun(TransportReplicationAction.java:657)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:287)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:279)
... (stack trace truncated here due to the character limit)
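From the stack trace it looks like `event_data.PreviousTime` was dynamically mapped as a date in today's index, and the value `"11:43:51"` no longer fits that mapping, so every bulk item with that field fails. Would an index template like the following be the right way to force the field to a plain string? This is just a sketch for ES 2.x (the template name and `not_analyzed` choice are my guesses), meant to be PUT to `_template/winlogbeat-previoustime` before the next daily index is created:

```json
{
  "template": "winlogbeat-*",
  "mappings": {
    "wineventlog": {
      "properties": {
        "event_data": {
          "properties": {
            "PreviousTime": { "type": "string", "index": "not_analyzed" }
          }
        }
      }
    }
  }
}
```

I assume the already-created winlogbeat-2017.08.14 index would keep its bad mapping and would need to be reindexed or deleted for this to take effect there.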