GET _cat/indices?s=index&v
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open .apm-agent-configuration SzbE_fa8QbC4gMPZVDE8tA 1 1 0 0 566b 283b
green open .kibana_1 UCzOfktnRYq60gQFJHuY6w 1 1 82 22 763.4kb 394.7kb
green open .kibana_task_manager_1 fqg2_pSaTt2f_MUXoixTZQ 1 1 2 2 41.9kb 21kb
green open .monitoring-es-7-2020.04.16 EwLofvWxSbGJAj1wOlGPOg 1 1 162520 94158 183.6mb 91.7mb
green open .monitoring-es-7-2020.04.17 82DdY4yYSqiy3bigzahG8w 1 1 173883 275586 213.8mb 106.9mb
green open .monitoring-es-7-2020.04.18 8PrQTEUDQ7GGhCqepX7MmQ 1 1 168733 267464 206.6mb 103.3mb
green open .monitoring-es-7-2020.04.19 rOzlRcetT1Cthg8u02-HGA 1 1 176696 280048 214.1mb 107mb
green open .monitoring-es-7-2020.04.20 80JBwxqKQHmRgTctAHS_mA 1 1 176965 0 182.4mb 91.2mb
green open .monitoring-es-7-2020.04.21 4KXWLJvGTvuo8LLnx63Yqw 1 1 195144 295283 249.6mb 124.6mb
green open .monitoring-es-7-2020.04.22 itsoLTKhRlmkToOuV1BVQw 1 1 255922 73102 523.5mb 261.3mb
green open .monitoring-kibana-7-2020.04.16 95pKz_EyTryfhrlPNnnANw 1 1 5871 0 2.8mb 1.4mb
green open .monitoring-kibana-7-2020.04.17 aiQNHvXqSUm4FAHHyTL1pQ 1 1 5992 0 2.5mb 1.3mb
green open .monitoring-kibana-7-2020.04.18 SNdIWScuT1-Dlg9laY8RQw 1 1 5798 0 2.4mb 1.2mb
green open .monitoring-kibana-7-2020.04.19 waQvq35ySMm0J2lQeiUb3w 1 1 6090 0 2.4mb 1.2mb
green open .monitoring-kibana-7-2020.04.20 NH6kocfATLWdsre0g6pJFA 1 1 5465 0 2.3mb 1.1mb
green open .monitoring-kibana-7-2020.04.21 OyZbgy7CQF-7OladEyNfRw 1 1 5998 0 2.6mb 1.3mb
green open .monitoring-kibana-7-2020.04.22 oaE07g5fQmCg6wzjbWKmcQ 1 1 6680 0 3mb 1.4mb
green open count-test-2020.04.21-000001 7LrOIkvgTIC07DV4QdBngw 1 0 5 0 13.6kb 13.6kb
green open count-test-2020.04.21-000002 KCV69x8UQOKiZKf3-QOW8w 1 0 6 0 20.5kb 20.5kb
green open count-test-2020.04.21-000003 eN6gfjRwSc-MMKV1_AfLBQ 1 0 7 0 21.3kb 21.3kb
green open count-test-2020.04.21-000004 0VEgWFLcQBOA69ikV3inSw 1 0 5 0 11.3kb 11.3kb
green open count-test-2020.04.21-000005 _BHnmp5YSPqszKpT-qwLNw 1 0 27 0 22.2kb 22.2kb
green open count-test-2020.04.21-000006 M74fkyTRTvSsMEYDV3PZfQ 1 0 0 0 230b 230b
green open file_path bxFDIsDZT0WK4qo3Yse4Sw 1 1 1079172 0 640.9mb 320.4mb
green open file_path_timeseries njKvoThHQzm7_mZagBSlKw 1 1 13 0 23.2kb 11.6kb
green open filebeat-7.6.0-2020.04.10-000001 woUZY8W0TYO_B7UULxWoSw 1 1 97801 0 49.3mb 24.7mb
green open ilm-history-1-000001 5fnOgHAbRPObfqgJ2q0d_g 1 1 3305 0 1.2mb 655.2kb
green open rdbms_sync_idx eWlE2mqlS7qkribdbGcXHg 1 1 1491000 2 508.1mb 254mb
green open rdbms_url_sync_idx AHLvIEJ_STuZoWK9_OftDQ 1 1 1 0 8.7kb 4.3kb
My policy only rolls over to a new index; there is no warm, cold, or delete phase. I believe this is the full policy:
PUT _ilm/policy/hot-warm-delete
{
  "policy": {
    "phases": {
      "hot": {
        "min_age": "0ms",
        "actions": {
          "rollover": {
            "max_size": "50gb",
            "max_docs": 5
          },
          "set_priority": {
            "priority": 50
          }
        }
      }
    }
  }
}
PUT _template/hot-warm-delete-temp
{
  "index_patterns": ["count-test-*"],
  "settings": {
    "index.lifecycle.name": "hot-warm-delete",
    "index.lifecycle.rollover_alias": "count-test-alias",
    "index.routing.allocation.require.box_type": "hot",
    "number_of_replicas": 0
  }
}
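The template can be read back the same way to verify that the lifecycle name, rollover alias, and allocation settings were stored as expected (verification only):
GET _template/hot-warm-delete-temp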
Initializing the process by creating the first index with a PUT:
PUT count-test-2020.04.21-000001
{
  "aliases": {
    "count-test-alias": {
      "is_write_index": true
    }
  }
}
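To confirm the alias is attached to the bootstrap index and marked as the write index, the alias can be inspected (again just a check, not part of the setup itself):
GET _alias/count-test-alias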
Adding some sample docs (showing just one as an example):
POST count-test-alias/_doc
{
  "name": "count-test-alias test 1"
}
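Since max_docs is 5, a single _bulk request is a quick way to push the write index past the rollover threshold; the document bodies below are made up purely for illustration:
POST count-test-alias/_bulk
{ "index": {} }
{ "name": "count-test-alias test 2" }
{ "index": {} }
{ "name": "count-test-alias test 3" }
{ "index": {} }
{ "name": "count-test-alias test 4" }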
Something I just noticed: when I execute GET _cat/indices?s=index&v
after the first run, I can see that everything worked and the indices each contain ~5 docs. When I come back and try to add more documents, that is when the rollover seems to freeze. Everything is indexed into the newest index, and when I run GET _cat/indices?s=index&v
again, the index where all the documents were stored shows 0 documents. If I run:
POST count-test-alias/_doc?refresh
{
  "name": "count-test-alias test 1"
}
and then GET _cat/indices?s=index&v
again, it shows the correct number of documents and that there is a new index ready for indexing.
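For completeness, the ILM explain API shows, per index, which phase and action the lifecycle is currently in and when the rollover condition was last evaluated; ILM only checks these conditions periodically (the cluster setting indices.lifecycle.poll_interval, 10 minutes by default), which may be relevant to the delay described above:
GET count-test-*/_ilm/explain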