Changing shard number per index due to EsRejectedExecutionException

Hi @warkolm, I am using ES 2.3.2 and Kibana 4.5. I have 1 ES node with 3GB allocated to the JVM. I created my index pattern 'logstash-*' as per the Kibana tutorials. But with only a single node, after a few months I started seeing "Failed Shard" messages in my dashboards when searching over the previous month.

My elasticsearch.log had lots of exceptions:
Failed to execute [org.elasticsearch.action.search.SearchRequest...

Caused by: EsRejectedExecutionException[rejected execution of org.elasticsearch.transport.TransportService$4@2c7c54cc on EsThreadPoolExecutor[search, queue capacity = 1000, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@7aa6ad6[Running, pool size = 2, active threads = 2, queued tasks = 1000, completed tasks = 11673]]]

After reading the Kibana tutorials I realised that each daily index has 5 shards by default. I am processing ~50MB of data per day, so the recommended shard count per index is 1.
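For anyone wanting to confirm this on their own node, the _cat API shows how many primary shards each daily index has (the index names here match my setup, so adjust to yours):

GET _cat/indices/logstash-*?v&h=index,pri,rep,docs.count,store.size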

So I edited elasticsearch-template.json to set 'number_of_shards' to 1.
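For reference, this is roughly the relevant fragment of elasticsearch-template.json after the change (everything else in the template is omitted here):

{
  "template": "logstash-*",
  "settings": {
    "number_of_shards": 1
  }
}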

What steps do I now need to take to re-index all the data in my existing 'logstash-*' indices? As I understand it, after a restart future daily indices will be created with 1 shard each, but existing indices will remain at 5 shards each, is that right?

Many thanks for your time.

You will need to create a new index and move the data over to it.

To reindex all of the documents from the old index efficiently, use scroll to retrieve batches of documents from the old index, and the bulk API to push them into the new index.

https://www.elastic.co/guide/en/elasticsearch/guide/current/reindex.html
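In Sense, the scroll half of that looks roughly like this (the batch size and timeout are just examples):

GET logstash-2016.07.26/_search?scroll=1m
{
  "size": 1000,
  "sort": ["_doc"]
}

POST _search/scroll
{
  "scroll": "1m",
  "scroll_id": "<_scroll_id from the previous response>"
}

Each page of hits is then written to the new index with the bulk API; sorting on _doc is the cheapest order for a full pass over an index.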

To me, that error looks like you're exceeding the capabilities of the box, which is causing the search queue to back up. Jason has a good writeup here: https://github.com/elastic/elasticsearch/issues/16224#issuecomment-174976727
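If you want to confirm that, the _cat API will show the search pool backing up:

GET _cat/thread_pool?v&h=host,search.active,search.queue,search.rejected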

Thanks so much for your reply. Yes - I also discovered that 2 months of daily indices of 10-15MB each, with 1GB RAM allocated to the JVM heap, was killing my ES. It had the default 5 shards per index! Changing it to 2 shards (so I can migrate one to a new machine as we grow) fixed the issue.

To re-index, I created a new index for each day (e.g. 'tuned-logstash-2016.07.26') and copied the corresponding original index into it using the Sense app via:

POST _reindex
{
  "source": {
    "index": "logstash-2016.07.26"
  },
  "dest": {
    "index": "tuned-logstash-2016.07.26"
  }
}
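One thing worth noting: _reindex copies documents, not index settings, and if the destination index doesn't already exist it is auto-created (in my case 'tuned-logstash-*' doesn't match the 'logstash-*' template, so it would come up with the cluster default of 5 shards). So it's worth creating each destination index first with the shard count you want:

PUT tuned-logstash-2016.07.26
{
  "settings": {
    "number_of_shards": 1
  }
}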

I checked the document counts and they match. I was looking for an automated way to run this over each daily index, but I didn't find anything quick, so I did it by hand; it only took a few minutes!
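For anyone who wants to automate it, a small bash loop with curl along these lines should work (the host, date range, and index names are placeholders from my setup, so adjust to taste):

for d in 2016.07.{01..26}; do
  # create the destination index with the new shard count
  curl -XPUT "localhost:9200/tuned-logstash-$d" -d '{
    "settings": { "number_of_shards": 1 }
  }'
  # copy the documents across
  curl -XPOST "localhost:9200/_reindex" -d '{
    "source": { "index": "logstash-'"$d"'" },
    "dest":   { "index": "tuned-logstash-'"$d"'" }
  }'
  # compare document counts between old and new
  curl "localhost:9200/logstash-$d/_count?pretty"
  curl "localhost:9200/tuned-logstash-$d/_count?pretty"
done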

Many thanks!