Reindexing a large index

Today I was faced with a reindex: I needed to delete a field, plus make a change to the mapping of the datetime field that I had already planned.

I did it with PHP.

42 million docs went through, and the process took about 35 minutes.

During that window, about 2,600 docs went missing, which I estimate come from the data my system sends every 15 minutes (command panels that control lighting systems), that is, one complete cycle plus part of another.

Locally I had tried running the same command again so that the docs added in the meantime would be reindexed, and it worked.

In the first test I did (without changing the alias to point to the new index) I hit this problem:

	"index": "analyzers-2024032501",
	"id": "P0boPIcBtSrsmCHcKSl6",
	"cause": {
		"type": "cluster_block_exception",
		"reason": "index [analyzers-2024032501] blocked by: [TOO_MANY_REQUESTS/12/disk usage exceeded flood-stage watermark, index has read-only-allow-delete block];"
	"status": 429

In addition, disk usage grew beyond what it should have. It is as if, instead of updating the docs that were not yet indexed, duplicates were added.
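Once disk space has actually been freed, the `read_only_allow_delete` block from the error above can be removed by resetting the index setting. A minimal sketch, assuming an elasticsearch-php client in `$client` (the index name is taken from the error message; if the disk is still over the flood-stage watermark, Elasticsearch will simply re-apply the block):

```php
<?php
// Sketch: lift the read-only-allow-delete block that Elasticsearch applies
// when the flood-stage disk watermark is exceeded. This only helps after
// disk space has been freed; otherwise the block comes right back.

function buildUnblockParams(string $index): array
{
    return [
        'index' => $index,
        'body'  => [
            // Setting it to null removes the block entirely,
            // instead of pinning it to false.
            'index.blocks.read_only_allow_delete' => null,
        ],
    ];
}

$params = buildUnblockParams('analyzers-2024032501');
// With an elasticsearch-php client instance:
// $client->indices()->putSettings($params);
```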

        $params = [
            'body' => [
                'source' => [
                    'index' => $old,
                ],
                'dest' => [
                    'index' => $newIndexName,
                ],
            ],
        ];

        $this->info("Reindexing from $old to $mapping");
        try {
            $response = $this->client->reindex($params);
            $this->info("Reindex end from $old to $mapping");
        } catch (ClientResponseException|ServerResponseException $e) {
            $this->error("Error reindexing: {$e->getMessage()}");

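For a 42-million-doc index it may also help not to hold the HTTP connection open for the full ~35 minutes. The reindex API accepts `wait_for_completion=false`, in which case it returns immediately with a task id that can be polled through the tasks API. A sketch with the same request body as above (index names are placeholders for illustration):

```php
<?php
// Sketch: run the same reindex asynchronously. With wait_for_completion=false
// Elasticsearch returns a task id right away instead of blocking until the
// copy finishes, and the task can then be polled via the tasks API.

function buildAsyncReindexParams(string $old, string $new): array
{
    return [
        'wait_for_completion' => false, // return a task id immediately
        'body' => [
            'source' => ['index' => $old],
            'dest'   => ['index' => $new],
        ],
    ];
}

// Hypothetical index names for illustration:
$params = buildAsyncReindexParams('analyzers-old', 'analyzers-2024032501');
// With the same elasticsearch-php client as in the command above:
// $response = $this->client->reindex($params);
// $taskId = $response['task'];
// ...later, check progress:
// $this->client->tasks()->get(['task_id' => $taskId]);
```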

  • Can I re-run the same process? (The alias work-analyzers now points to newIndexName, and this is production.)

  • Can I simply re-run the same method there?