We have an index which is around 120 GB, and we want to split it into multiple indices. Is there any way to do that? I believe reindex supports 1-to-1, but I need 1-to-many. It doesn't matter which document ends up in which index; we can use an alias for that.
Why multiple indices and not a single index with a larger number of primary shards?
My index has 6 shards and our cluster has 3 nodes. If I increase it to 12 or 24 shards, will that solve my problems, which are slowness and timeouts?
That would depend on what is causing the slowness and timeouts. Querying one index with 12 shards is the same as querying 4 indices with 3 shards each - it is the shard count that matters, so I do not see any point in splitting to multiple indices.
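(For completeness: if you did still want a 1-to-many split, the reindex API accepts a script that rewrites ctx._index, so documents can be fanned out across several destination indices. A rough sketch only — the index names are made up, and the modulo-on-hash routing is just one arbitrary way to spread documents evenly:

POST _reindex
{
  "source": { "index": "my-120gb-index" },
  "dest": { "index": "my-split-0" },
  "script": {
    "source": "ctx._index = 'my-split-' + Math.floorMod(ctx._id.hashCode(), 4)"
  }
}

The dest index is only used for documents whose script does not set ctx._index. But again, this buys you nothing over a single index with the same total shard count.)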
What is the nature of the slowness and timeouts you are experiencing? When does it happen? What type of queries are you using? What is the load on the cluster?
What is the specification of the cluster with respect to node count, CPU, RAM and type of storage?
Actually I thought we could use 4 indices with 12 shards each. Deleting and searching documents in particular take a long time, and we sometimes get 503 errors.
In the log files, there are lots of messages about nodes being removed and then added again, over and over.
I checked the memory with the nodes/stats API; it says free memory is 18%.
Maybe this is the reason. I will increase memory and CPU.
That would be equivalent to a single index with 48 shards.
Before making any changes like this I would recommend identifying what the issue likely is. It would help if you answered the questions I asked. It would also be useful to know the following:
- Which version of Elasticsearch are you using?
- Do you have monitoring enabled?
- What is the query and indexing load on the cluster?
- What is the size of the indexed data on disk?
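On the memory check: the OS-level free memory figure matters less than JVM heap pressure on each node. Per-node heap usage can be pulled with something like this (the filter_path is optional and just trims the response):

GET _nodes/stats/jvm?filter_path=nodes.*.name,nodes.*.jvm.mem.heap_used_percent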
I see, thanks for the advice.
Elasticsearch version 6.2.1
We use Kibana.
GET /_cluster/allocation/explain
{
  "index": ".kibana",
  "shard": 0,
  "primary": false,
  "current_state": "unassigned",
  "unassigned_info": {
    "reason": "NODE_LEFT",
    "at": "2023-05-18T11:26:24.016Z",
    "details": "node_left[W-g1eBunTpSumcDmn_ZOcg]",
    "last_allocation_status": "no_attempt"
  },
  "can_allocate": "throttled",
  "allocate_explanation": "allocation temporarily throttled",
  "node_allocation_decisions": [
    {
      "node_id": "OKHUeN1GTPyA_3RZN3XGGg",
      "node_name": "isvitelkwx03",
      "transport_address": "ip:9300",
      "node_decision": "throttled",
      "deciders": [
        {
          "decider": "throttling",
          "decision": "THROTTLE",
          "explanation": "reached the limit of outgoing shard recoveries [2] on the node [OKHUeN1GTPyA_3RZN3XGGg] which holds the primary, cluster setting [cluster.routing.allocation.node_concurrent_outgoing_recoveries=2] (can also be set via [cluster.routing.allocation.node_concurrent_recoveries])"
        }
      ]
    },
Generally I see this NODE_LEFT and unassigned shard error.
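The "throttled" decision in the output above means recoveries are queued behind the node_concurrent_recoveries limit of 2, so those shards should assign on their own once earlier recoveries finish. If you want them to recover faster, that limit can be raised temporarily via the cluster settings API — a sketch only; the value 4 is just an example, and a higher limit adds recovery load to already busy nodes:

PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.node_concurrent_recoveries": 4
  }
}

The repeated node-left/node-join messages are the more important symptom, though, as each departure triggers a new round of recoveries.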
GET _cluster/health
{
  "cluster_name": "clustername",
  "status": "yellow",
  "timed_out": false,
  "number_of_nodes": 3,
  "number_of_data_nodes": 3,
  "active_primary_shards": 179,
  "active_shards": 287,
  "relocating_shards": 0,
  "initializing_shards": 2,
  "unassigned_shards": 69,
  "delayed_unassigned_shards": 57,
  "number_of_pending_tasks": 1,
  "number_of_in_flight_fetch": 0,
  "task_max_waiting_in_queue_millis": 0,
  "active_shards_percent_as_number": 80.16759776536313
}
GET _cat/shards?h=index,shard,prirep,state,unassigned.reason
index_name 2 r INITIALIZING NODE_LEFT
index_name 2 p STARTED
index_name 5 p STARTED
index_name 5 r UNASSIGNED NODE_LEFT
Elasticsearch version 6.2.1 is EOL and no longer supported. Please upgrade ASAP.