We recently updated our stack to 7.2. Before the update, our Logstash indices were named like "logstash-2019.03.07" by default; as far as I'm aware, nothing in our configuration defines this name, so it was the default Logstash/Elasticsearch behaviour. After updating everything (including Logstash) to 7.2, newly indexed documents all end up in a single index named "logstash", without the date. This broke our index pattern ("logstash-*"), and I suspect that having all of that data in one index, instead of one index per day, is also slowing searches down considerably or causing time-outs, though I may be wrong.

My question is: has anything changed in the default index naming behaviour in 7.x? Any idea why this happened?

Here is our current index list (from `_cat/indices`):

```
green open .monitoring-es-7-2019.07.04 G98NljY4TumAyQs4lYboFw 1 1 260483 246081 735.6mb 368.6mb
green open filebeat-6.7.1-2019.07.01 dgsMbOSRS5GATlJ2ADAfRg 3 2 90789 0 113.9mb 37.8mb
green open .kibana_2 PQnWyVCgSE671qwOLCitRw 1 1 788 1 1.9mb 979.3kb
green open packetbeat-6.6.0-2019.07.01 iAD32Sk8R2qw0L5YOF0fiw 3 2 74627254 0 62.9gb 20.9gb
green open metricbeat-6.6.0-2019.07.02 YrSRDk5UTGG3pjhGDN6emA 1 2 201787 0 124.2mb 41.4mb
green open auditbeat-6.6.2-2019.07.03 _SwfcHSuSW63UDxaXjW0KA 3 2 7702 0 6.8mb 2.3mb
green open filebeat-6.7.1-2019.07.04 sHzfjjB-Tvehz6VctVvaiQ 3 2 91691 0 116.9mb 38.9mb
green open filebeat-6.6.1-2019.07.01 NG94W3VlT_Kx6AVcjbdxOQ 3 2 1454 0 2mb 770.3kb
green open metricbeat-6.6.0-2019.07.03 ry8byiIvSvyAqLc7BQG8ow 1 2 201943 0 124.7mb 41.5mb
green open auditbeat-6.6.2-2019.07.02 0XRbjrXnSAe6ocd72t_GsQ 3 2 7699 0 6.8mb 2.2mb
green open .monitoring-es-7-2019.07.02 B1Xjk4eUSqq2Bt0f_l3maA 1 1 932996 503750 1.2gb 624.7mb
green open filebeat-7.2.0 UZlk0P9YTV6nMpqyIg8lsA 1 2 26804606 0 26.3gb 8.8gb
green open metricbeat-6.6.0-2019.07.05 zjG91WWgS4GexGz7sizN4Q 1 2 15442 0 11.2mb 3.7mb
green open filebeat-6.7.1-2019.07.03 Pi3azZflRDKaug43c7-jlA 3 2 91606 0 116.2mb 38.6mb
green open metricbeat-6.6.1-2019.07.02 KOnDQCaGQ7m5_BDiukvV6g 1 2 200441 0 130.6mb 43.5mb
green open filebeat-6.6.1-2019.07.02 BM7XkERKRRSs7XbgTp0DVA 3 2 1148 0 1.7mb 653.5kb
green open logstash N7vpcE9zS8e3XabvQhv7AA 1 2 331529277 0 478.9gb 163.6gb
green open auditbeat-6.6.0-2019.07.02 hVpIIyclTJq0TC5hDhf93A 3 2 60257 0 49.2mb 16.2mb
green open auditbeat-6.6.0-2019.07.01 eaDusX0oRUaByPfzYrrmDg 3 2 84696 0 65.7mb 21.9mb
green open .monitoring-kibana-7-2019.07.02 LM5y-njcRCyTrZNNdvNWjA 1 1 8639 0 4.1mb 2mb
green open metricbeat-6.6.1-2019.07.01 A1MZ-vybR4KiOmOA0_yPRw 1 2 279175 0 176.9mb 58.9mb
green open metricbeat-6.6.0-2019.07.04 61micKlnTk2nlTBln_uF3w 1 2 201857 0 124.6mb 41.5mb
green open metricbeat-7.2.0 mCzviR0ZQpmyf959VxEGXg 1 2 27029615 0 25.8gb 11.5gb
green open .kibana_1 Nj789jqSTGOM8GEzHS3A-Q 1 1 785 0 1.6mb 860.7kb
green open auditbeat-7.2.0 pP8JTtzhTs6CPCVoqHnMtw 1 2 914324 0 1.4gb 506.6mb
green open filebeat-6.6.1-2019.07.03 IYMU7PeHRom7orzvYCWxWw 3 2 260 0 909.7kb 303kb
green open filebeat-6.6.1-2019.07.04 EcYm949TSlyZjxcFn8JVgg 3 2 187 0 1mb 317.9kb
green open packetbeat-6.6.0-2019.07.04 WIxZ159gQkyJw7nPRGTRsg 3 2 84035821 0 70.7gb 23.6gb
green open auditbeat-6.6.2-2019.07.01 29vp6ClXQPG6-82kKRYWFA 3 2 7971 0 7.2mb 2.4mb
green open metricbeat-6.6.0-2019.07.01 QDek5kuJSTqzNvQAYgE0Kg 1 2 201287 0 124.2mb 41.4mb
green open .monitoring-kibana-7-2019.07.03 Nx7eh7owQXKG542pVqaXlQ 1 1 8640 0 4.2mb 2.1mb
green open .monitoring-es-7-2019.07.03 nYTPUZfwQBSsbMBoU-Ik0A 1 1 825334 316973 1gb 533mb
green open .monitoring-kibana-7-2019.07.01 W0R25hW2TfqptcJRfeCFAg 1 1 8639 0 4.3mb 2.2mb
green open filebeat-6.7.1-2019.07.05 g-uaiLmZRequijGSYLJSWQ 3 2 7068 0 13.1mb 4.2mb
green open auditbeat-6.6.2-2019.07.04 6onxl2_fTqO_e8ChDQ5w-Q 3 2 5389 0 6.9mb 2.3mb
green open .monitoring-kibana-7-2019.07.04 9nRlVpQZT4GEzHJK2PibGA 1 1 3 0 1.2mb 644.5kb
green open packetbeat-6.6.0-2019.07.03 bF6PDOr3RlqK0hK1P6ZO-w 3 2 82333285 0 69.2gb 23.1gb
green open packetbeat-6.6.0-2019.07.02 GMAUAjf1QfCvgsvBDo6Gbw 3 2 77782357 0 65.6gb 21.8gb
green open .kibana_3 YDpRGZpHTFK37ape9w8ctw 1 1 1902 60 2.6mb 1.2mb
green open .monitoring-es-7-2019.07.01 Jb2766IATtWY0hYyjIsqOg 1 1 2113672 1089323 2.2gb 1.1gb
green open filebeat-6.7.1-2019.07.02 1i7hR1CLSXykmISpaKukoQ 3 2 91646 0 116.5mb 38.8mb
green open packetbeat-6.6.0-2019.07.05 H95S_dPsSqCz-7gA9ian4A 3 2 6766235 0 6.6gb 2.2gb
green open .kibana_task_manager W6A72dyBSkWqDOCGZ3f0xg 1 1 2 0 27.5kb 13.7kb
```
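I'm also wondering whether that flat "logstash" index is really a plain index or some kind of rollover/write alias, since I gather 7.x introduced index lifecycle management defaults. Would something like the following confirm that? (These are the standard alias and ILM explain APIs; "logstash" is just the index name from the listing above.)

```
GET _alias/logstash
GET logstash/_ilm/explain
```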
Maybe this is also helpful or related: a 6.6 Logstash instance we didn't update no longer indexes any data at all. Below is a snippet from its logs:
```
[2019-07-04T19:36:54,772][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"logstash-2019.07.04", :_type=>"_doc", :routing=>nil}, #<LogStash::Event:0x2c077020>], :response=>{"index"=>{"_index"=>"logstash-2019.07.04", "_type"=>"_doc", "_id"=>nil, "status"=>400, "error"=>{"type"=>"illegal_argument_exception", "reason"=>"The [default] mapping cannot be updated on index [logstash-2019.07.04]: defaults mappings are not useful anymore now that indices can have at most one type."}}}}
```
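In case it matters for the answer: would explicitly pinning the index name in the elasticsearch output be the right workaround to get the old daily naming back? I'm guessing at the relevant option here (`ilm_enabled`, and the `hosts` value is just a placeholder for our setup), so please correct me if this is wrong:

```conf
output {
  elasticsearch {
    hosts => ["localhost:9200"]   # placeholder for our actual cluster
    # force the pre-7.x daily index naming
    index => "logstash-%{+YYYY.MM.dd}"
    # my assumption: disabling ILM is what stops everything being
    # collected under a single "logstash" index in 7.x
    ilm_enabled => false
  }
}
```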