Elasticsearch can't handle index data after 10 to 15 days

Dear Team,

I am happy to be using the ELK stack, but after 7 to 10 days Elasticsearch stops working properly and can't handle more indices. To work around the issue I have to restart Elasticsearch and delete indices. Kindly guide me on what I need to do to fix that.

There are many things you can do: buy more machines, delete data you don't need, make sure you are not overloading the system...
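For example, an old time-based index that is no longer needed can be dropped with a single delete request (the index name below is only an illustration, substitute your own):

DELETE /filebeat-6.4.2-2019.06.01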

It's hard to tell without any idea of your configuration, the number of indices and shards, and the logs from when the problem is happening...

Let's start with:

GET /
GET /_cat/nodes?v
GET /_cat/health?v
GET /_cat/indices?v

If some outputs are too big, please share them on gist.github.com and link them here.

Thanks for the reply, dadoonet. I am using the default configuration of Elasticsearch version 6.4.1. Please suggest what I need to change in the Elasticsearch configuration.

I will, once you share the outputs I asked for.

Your earlier reply was: "It's hard to tell without any idea of your configuration, the number of indices and shards, and the logs from when the problem is happening..."

If you have a solution, please provide it to us.

To provide any guidance that is not general in nature, we need to see the output of the APIs @dadoonet requested.

In general, one of the most common causes of bad performance is having too many small shards, which is discussed in this blog post. Sometimes it can also be that inappropriate hardware and/or very slow storage is used. Another cause is that the heap is too small, so you experience long and/or frequent GC; if that is the case, it should show up in the logs.
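If you want to check those things on your own cluster, a few read-only calls can help; this is only a sketch, assuming you run it from Kibana Dev Tools against the same node:

GET /_cat/shards?v&s=store
GET /_cat/allocation?v
GET /_nodes/stats/jvm

The first call lists every shard with its size, which makes a large number of tiny shards easy to spot; the allocation call shows how many shards each node holds; the node stats call reports current heap usage and GC counts/times. The heap size itself is set with the -Xms/-Xmx options in jvm.options.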


{
  "name": "_sQ8z2G",
  "cluster_name": "elasticsearch",
  "cluster_uuid": "_WE-QkQ7RsGtlQLpgJLIsA",
  "version": {
    "number": "6.4.2",
    "build_flavor": "default",
    "build_type": "deb",
    "build_hash": "04711c2",
    "build_date": "2018-09-26T13:34:09.098244Z",
    "build_snapshot": false,
    "lucene_version": "7.4.0",
    "minimum_wire_compatibility_version": "5.6.0",
    "minimum_index_compatibility_version": "5.0.0"
  },
  "tagline": "You Know, for Search"
}


ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
127.0.0.1 63 96 9 0.33 0.35 0.40 mdi * _sQ8z2G


epoch timestamp cluster status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1564128411 13:36:51 elasticsearch yellow 1 1 39 39 0 0 18 0 - 68.4%


health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open .reporting-2019.04.21 WceQnwfKTZO-Pk2xAFgeYQ 1 0 1 0 56kb 56kb
yellow open filebeat-6.4.2-2019.07.25 ILlCH2ElTN2nb6UoUsYR_g 3 1 8923486 0 1.8gb 1.8gb
green open .monitoring-kibana-6-2019.07.21 HP3ZpFkgTGWXuXrnBwe-xA 1 0 8639 0 2mb 2mb
green open .monitoring-es-6-2019.07.19 MPuGrEYHQY67w8lVllfBug 1 0 407355 1824 233.1mb 233.1mb
green open .monitoring-es-6-2019.07.20 yYeLeox9RPyhI1ihjyz7gw 1 0 424626 1224 237.4mb 237.4mb
green open .monitoring-kibana-6-2019.07.24 QBuk73ZNRgy95LBLx5gMtw 1 0 8639 0 2mb 2mb
green open .monitoring-kibana-6-2019.07.19 Tg53lB2xR5WEWBvxF9D3cA 1 0 8639 0 2.1mb 2.1mb
green open .monitoring-kibana-6-2019.07.22 rAPpKtDpRD6rJYpcl4ORmQ 1 0 8640 0 2.1mb 2.1mb
green open .reporting-2019.06.02 K_7sc0_gRaS9OCFWBwp7ug 1 0 3 0 2.1mb 2.1mb
green open .reporting-2019.06.30 kx_wGmHwSWS5SC4gFJ-u4A 1 0 1 0 78.9kb 78.9kb
yellow open metricbeat-6.4.2-2019.07.26 r6YoAGMCSBKMeriMmvATnw 1 1 1074526 0 399.7mb 399.7mb
green open .kibana BPYRb5iuTI6Yf02wNB4fdQ 1 0 313 2 477.6kb 477.6kb
green open .monitoring-es-6-2019.07.21 sB-DgB5BTVCTnuNe_zCV4Q 1 0 442125 1828 244.5mb 244.5mb
yellow open filebeat-6.4.2-2019.07.24 KSFyTih4SeWw0uvgFV7SEw 3 1 6419600 0 1gb 1gb
green open .monitoring-kibana-6-2019.07.25 fr60Q44-Ro2qRaRTdUm__A 1 0 8639 0 2.1mb 2.1mb
green open .monitoring-es-6-2019.07.23 EF8BQ7kAQLKqrurNfKTSgQ 1 0 470586 2749 264.9mb 264.9mb
green open .monitoring-kibana-6-2019.07.26 StIDQoIyTxy6zGXJNJEEyQ 1 0 2923 0 823.7kb 823.7kb
green open .reporting-2019.06.09 efzb6qyzQQG-g_nPSBmHxg 1 0 5 0 3.9mb 3.9mb
yellow open filebeat-6.4.2-2019.07.26 lhlGrWaiTX2dPkD75ChBfQ 3 1 5183069 0 1gb 1gb
yellow open filebeat-6.4.2-2019.07.21 jNEnDQLNSgKTsdGhNl-q_Q 3 1 10045460 0 2.3gb 2.3gb
yellow open filebeat-6.4.2-2019.07.23 ob8raFBgTK6xsbPdYtmlCg 3 1 9942961 0 1.8gb 1.8gb
green open .monitoring-es-6-2019.07.26 SHVV8vkcRbaB8ae04UBk6Q 1 0 175665 978 102.9mb 102.9mb
green open .monitoring-kibana-6-2019.07.20 YxjA-W8uQOSVwRWVYFG54g 1 0 8639 0 2.1mb 2.1mb
yellow open metricbeat-6.4.2-2019.07.24 1jgsfStgQfq7dtvkTGXd6g 1 1 1919673 0 650.2mb 650.2mb
green open .monitoring-es-6-2019.07.25 tW5c641wR_Sqg9Msvd4tJQ 1 0 503261 2719 286.9mb 286.9mb
green open .monitoring-kibana-6-2019.07.23 wPlmPSZ4RNuJzOGaZZZyMw 1 0 8639 0 2.1mb 2.1mb
yellow open metricbeat-6.4.2-2019.07.25 mDEsHBTzQ9aAfOD1Ri35kg 1 1 2054919 0 681.9mb 681.9mb
green open .monitoring-es-6-2019.07.24 CunRjw8WS7C0aNvNZ6u21A 1 0 485403 2273 282.2mb 282.2mb
green open .monitoring-es-6-2019.07.22 efqQloihSk-hONaN5jZh3g 1 0 459458 1106 252.6mb 252.6mb


Dear Dadoonet,

Please provide a solution for this.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.