Do I have too many shards?

My ELK stack went red this morning because the disk ran out of space. After freeing some space, the cluster was able to reassign an unassigned shard, came back to yellow, and is working again.

I checked the health endpoint and would like your opinion on it.

{
  "cluster_name": "docker-cluster",
  "status": "yellow",
  "timed_out": false,
  "number_of_nodes": 1,
  "number_of_data_nodes": 1,
  "active_primary_shards": 57,
  "active_shards": 57,
  "relocating_shards": 0,
  "initializing_shards": 0,
  "unassigned_shards": 4,
  "delayed_unassigned_shards": 0,
  "number_of_pending_tasks": 0,
  "number_of_in_flight_fetch": 0,
  "task_max_waiting_in_queue_millis": 0,
  "active_shards_percent_as_number": 93.44262295081968
}

Here is the disk usage from this curl command:

curl -X GET "localhost:9200/_cat/allocation?v=true&h=node,shards,disk.*&pretty"

node          shards disk.indices disk.used disk.avail disk.total disk.percent
elasticsearch     57       71.8mb    78.9gb     12.3gb     91.2gb           86
UNASSIGNED         4 

I have 4,000 documents, each with one dense_vector field. Do you think this setup is reasonable, or do I have way too many shards?
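For reference, the reason each shard is unassigned can be retrieved with the cluster allocation explain API (a standard Elasticsearch endpoint; the exact output depends on your cluster):

```shell
# Ask the cluster why a shard is unassigned.
# Without a request body, it explains the first
# unassigned shard it finds.
curl -X GET "localhost:9200/_cluster/allocation/explain?pretty"
```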

Thank you!

Two things to consider when choosing the number of shards are shard size and memory: each shard carries a fixed overhead in heap, so many tiny shards waste resources. For 4,000 documents, a single primary shard per index is typically plenty; it's worth experimenting with fewer, larger shards to see if that works better for your use case. Also note that on a single-node cluster, replica shards can never be assigned (a replica cannot live on the same node as its primary), which is why the status stays yellow.
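As a sketch, a new index with a single primary shard and no replicas can be created like this (the index name `my-index` is a placeholder):

```shell
# Create an index with one primary shard and no replicas.
# On a single-node cluster, replicas can never be assigned
# anyway, so number_of_replicas: 0 also avoids yellow status.
curl -X PUT "localhost:9200/my-index?pretty" \
  -H 'Content-Type: application/json' \
  -d '{
    "settings": {
      "number_of_shards": 1,
      "number_of_replicas": 0
    }
  }'
```

Existing indices can't change their primary shard count in place, but they can be reindexed into a new index created with these settings.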

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.