My Logstash output is creating hundreds of shards

Hi all,

I am setting up a new ELK system to centralize Windows event logs and, in the future, syslogs etc. Shipping the logs with winlogbeat is working fine; however, when I look at my logstash-yyyy.mm.dd indices I find hundreds of shards being created, and the performance of the system makes it unusable. I would expect 5 shards per index and a new index each day.

I cleared all of my indexes this morning to start fresh and currently have about 30 Windows servers sending logs into the system. It has been running for 15 minutes and my cluster health is:

{
  "cluster_name" : "elasticsearch",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 1111,
  "active_shards" : 1111,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 1111,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 50.0
}

My Logstash configuration for shipping to Elasticsearch is:

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

# Filter syslog events from filebeat

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "logstash-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

I am clearly making a mistake and would be grateful if someone could help me out.

Please show the output of curl localhost:9200/_cat/indices.

Hi Magnus,

I started everything fresh just now and this is what has happened. It looks like logstash is receiving data from a long time ago instead of the past couple of days. Would you agree?

Output:
green open logstash-2016.03.18 5 1 9203 0 12.5mb 5.5mb
green open logstash-2015.05.28 5 1 26 0 119.1kb 59.5kb
green open logstash-2016.03.19 5 1 17090 0 14.6mb 6.6mb
green open logstash-2015.11.26 5 1 87 0 207.3kb 98.8kb
green open logstash-2015.11.27 5 1 126 0 228kb 99.4kb
green open logstash-2015.11.28 5 1 13 0 105.4kb 52.7kb
green open logstash-2015.11.29 5 1 40 0 165kb 68.2kb
green open logstash-2015.11.30 5 1 87 0 198.3kb 89.8kb
green open logstash-2015.09.30 5 1 940 0 930.2kb 414.7kb
green open logstash-2015.12.01 5 1 68 0 194.5kb 82.9kb
green open logstash-2015.12.02 5 1 82 0 186.7kb 93.3kb
green open logstash-2016.03.21 5 1 8899 0 15.4mb 8mb
green open logstash-2016.03.20 5 1 18150 0 15.1mb 6.3mb
green open logstash-2016.01.01 5 1 40 0 131.2kb 65.6kb
green open logstash-2016.01.04 5 1 57 0 136.7kb 68.3kb
green open logstash-2016.01.05 5 1 43 0 137.3kb 63.9kb
green open logstash-2016.01.02 5 1 19 0 110.5kb 55.2kb
green open logstash-2016.01.03 5 1 17 0 109kb 54.5kb
green open logstash-2016.01.08 5 1 103 0 217.9kb 108.9kb
green open logstash-2016.01.09 5 1 63 0 138.8kb 69.4kb
green open logstash-2016.01.06 5 1 99 0 193.7kb 96.8kb
green open logstash-2016.01.07 5 1 116 0 218.9kb 95kb
yellow open logstash-2015.12.27 5 1 71 0 136.9kb 126.5kb
green open logstash-2015.05.30 5 1 193 0 95.2kb 23.3kb
green open logstash-2015.10.03 5 1 667 0 661.8kb 315kb
green open logstash-2015.12.28 5 1 23 0 113.1kb 56.5kb
green open logstash-2015.10.04 5 1 655 0 607.9kb 304.3kb
yellow open logstash-2015.10.01 5 1 766 0 635.2kb 347.7kb
yellow open logstash-2015.11.13 5 1 0 0 780b 260b
green open logstash-2015.12.25 5 1 37 0 141.5kb 84kb
green open logstash-2015.10.02 5 1 661 0 655kb 310.4kb
green open logstash-2015.12.26 5 1 68 0 198kb 140.7kb
green open logstash-2015.10.05 5 1 490 0 91.9kb 40.8kb
green open logstash-2015.12.29 5 1 25 0 115kb 57.5kb
green open logstash-2016.12.22 5 1 1 0 28.9kb 14.4kb
green open logstash-2015.12.30 5 1 28 0 117.3kb 58.6kb
green open logstash-2015.12.31 5 1 52 0 155.6kb 77.8kb
green open logstash-2015.09.20 5 1 0 0 34.2kb 260b
green open logstash-2016.12.15 5 1 1 0 29.1kb 14.5kb
yellow open logstash-2015.09.21 5 1 0 0 10.3kb 260b
green open logstash-2015.09.25 5 1 694 0 856.9kb 404.5kb
green open logstash-2015.09.26 5 1 858 0 844.7kb 394kb
green open logstash-2015.09.27 5 1 851 0 823kb 397.2kb
green open logstash-2015.09.28 5 1 881 0 686.2kb 343.1kb
green open logstash-2015.09.29 5 1 860 0 835.3kb 358.8kb
green open logstash-2016.01.11 5 1 97 0 197.5kb 84kb
green open logstash-2016.01.12 5 1 88 0 175.5kb 78.1kb
green open logstash-2015.05.29 5 1 481 0 649.7kb 324.8kb
green open logstash-2016.01.10 5 1 61 0 137.5kb 68.7kb
green open logstash-2016.01.15 5 1 104 0 45.4kb 17.7kb
green open logstash-2016.02.27 5 1 1 0 29.2kb 14.6kb
green open logstash-2016.01.16 5 1 74 0 59.7kb 14.1kb
green open logstash-2016.01.13 5 1 125 0 71.5kb 27.6kb
green open logstash-2016.01.14 5 1 94 0 80.4kb 15.7kb
green open logstash-2016.01.19 5 1 0 0 1kb 260b
green open logstash-2016.01.17 5 1 0 0 12.2kb 260b
green open logstash-2016.01.18 5 1 0 0 24.2kb 260b
yellow open logstash-2015.12.16 5 1 422 0 650.8kb 400.1kb
yellow open logstash-2015.12.17 5 1 706 0 656.6kb 416kb
yellow open logstash-2015.12.14 5 1 60 0 231.8kb 180.3kb
green open logstash-2015.12.15 5 1 124 0 339.6kb 169.8kb
green open logstash-2015.11.09 5 1 23 0 120.3kb 60.1kb
green open logstash-2015.12.18 5 1 82 0 318.8kb 148.4kb
green open logstash-2015.12.19 5 1 60 0 271.2kb 135.6kb
green open logstash-2015.12.20 5 1 56 0 233.6kb 116.8kb
yellow open logstash-2015.12.23 5 1 208 0 251.7kb 208.7kb
green open .kibana 1 1 2 0 21.6kb 10.8kb
green open logstash-2015.11.11 5 1 416 0 552.6kb 275.1kb
yellow open logstash-2015.12.24 5 1 71 0 176.6kb 125.6kb
green open logstash-2015.09.01 5 1 236 0 294.2kb 122.8kb
green open logstash-2015.12.10 5 1 103 0 304.7kb 152.3kb
green open logstash-2016.11.29 5 1 1 0 29kb 14.5kb
green open logstash-2015.12.11 5 1 85 0 378.4kb 186.7kb
green open logstash-2015.09.02 5 1 578 0 611.9kb 276.6kb
green open logstash-2015.09.03 5 1 335 0 83.7kb 260b

Yes. Indexes are created based on the @timestamp field, so either you are indeed reading old log data, in which case the number of indexes is to be expected, or your date filter is parsing timestamps incorrectly.
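A quick way to check which of the two it is, assuming you have the elasticsearch-py client handy, is to pull one document out of a suspiciously old index and compare its parsed @timestamp with the raw message. The index name below is just one of the old ones from your listing:

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Fetch a single document from an old daily index and print its parsed
# @timestamp next to the raw message. If the two agree, the events really
# are that old; if they disagree, the date filter (or the shipper) is
# putting the wrong time on them.
hit = es.search(index="logstash-2015.05.28", size=1)["hits"]["hits"][0]
print(hit["_source"].get("@timestamp"))
print(hit["_source"].get("message"))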

Either way you should probably rethink having five shards per index. Unless you're logging 100 GB/day you don't need five shards.
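If you do drop the shard count, an index template with a higher order than the default Logstash template will apply to every new daily index (existing indices keep their current shard count). Here is a rough sketch with the Python client; the template name is arbitrary, the "template" pattern key is what 2.x-era clusters use ("index_patterns" on 6+), and replicas are set to 0 only because this is a single-node cluster:

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# One primary shard, and no replicas on a single-node cluster (which also
# clears the yellow status), for every logstash-* index created from now on.
es.indices.put_template(
    name="logstash-shards",        # arbitrary template name
    body={
        "template": "logstash-*",  # pattern of indices this applies to
        "order": 1,                # win over the default Logstash template
        "settings": {
            "number_of_shards": 1,
            "number_of_replicas": 0,
        },
    },
)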

Thanks Magnus,

Nowhere near 100GB/day so I will drop the 5 shards down to... 2? I will have to understand that better before I make a decision. Happy to listen to any suggestion you may have.

Thanks for all your help.

In my experience, winlogbeat reads the whole Windows event log, which can reach back months. I had this issue as well, and I tried setting the parameter to ignore old data, but it wasn't working for me. I eventually accepted it and consolidated the old events (they get sparser as they get older) into month-based indices using the Python helper library (its reindex helper).
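Roughly the shape of it, in case it helps. The monthly index name and the list of daily source indices below are just examples, and you should verify each copy (e.g. compare document counts) before deleting the source:

from elasticsearch import Elasticsearch
from elasticsearch.helpers import reindex

es = Elasticsearch("http://localhost:9200")

# Copy each sparse daily index into one monthly index, then drop the daily
# one. The monthly index is created on the first write; run this per month.
daily_indices = ["logstash-2015.09.25", "logstash-2015.09.26", "logstash-2015.09.27"]
for source in daily_indices:
    reindex(es, source_index=source, target_index="logstash-2015.09")
    es.indices.delete(index=source)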

Hi Stefan,

I'm seeing exactly what you described. I need to sort out the indexing as suggested by Magnus and then possibly use Curator to deal with old information that's coming in. Like you, I tried the parameter in the winlogbeat.yml file; however, it had no effect.
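Until Curator is in place, I am thinking of clearing out the stray historical indices with the same Python client once they are consolidated or confirmed unwanted. The wildcard below is only an example (it removes everything from 2015), so I will list the matches first:

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Show exactly which daily indices the pattern matches, then delete them
# in one call. Adjust the pattern before running.
print(es.cat.indices(index="logstash-2015.*"))
es.indices.delete(index="logstash-2015.*")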

Nowhere near 100GB/day so I will drop the 5 shards down to... 2? I will have to understand that better before I make a decision. Happy to listen to any suggestion you may have.

There are some good chapters in Elasticsearch: The Definitive Guide that talk about scaling, shard sizes, etc. It regularly comes up here too. I don't have the whole picture, but I suspect a single shard will be fine in your case.

Thanks Magnus. I am working through each chapter so will certainly learn all this in the time to come. In the meantime I really appreciate your assistance.