How to get the best performance when updating an index

I have an index larger than 1 TB and I want to update a field, but update by query is very slow, even slower than deleting everything and inserting it all again.
What is the best alternative for a mass update?
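
For context, the update is roughly of this shape. This is a minimal sketch using the elasticsearch-py v8 client; the index name, field, and value are hypothetical placeholders, not the actual mapping:

```python
from elasticsearch import Elasticsearch

# Assumed connection details; adjust host and auth for your cluster.
es = Elasticsearch("http://localhost:9200")

# Update one field on every matching document.
# On a 1 TB index this rewrites each document in place, which is slow.
es.update_by_query(
    index="my-index",                  # hypothetical index name
    query={"match_all": {}},
    script={
        "source": "ctx._source.my_field = params.value",
        "lang": "painless",
        "params": {"value": "new-value"},
    },
    conflicts="proceed",               # skip version conflicts instead of aborting
    wait_for_completion=False,         # run as a background task
)
```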

The best option is to delete the index and index everything again.
Using aliases can help you do it without any interruption.
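
A minimal sketch of the alias swap, assuming the elasticsearch-py v8 client; the index and alias names are hypothetical. You index everything into a fresh index behind the scenes, then move the alias atomically so readers never see a gap:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Reads go through the alias, so you can rebuild behind it.
OLD, NEW, ALIAS = "my-index-v1", "my-index-v2", "my-index"  # hypothetical names

# 1. Create the new index and load all documents into it (not shown here).
# 2. Atomically repoint the alias; both actions apply in a single
#    cluster-state update, so clients see no interruption.
es.indices.update_aliases(actions=[
    {"remove": {"index": OLD, "alias": ALIAS}},
    {"add": {"index": NEW, "alias": ALIAS}},
])

# 3. Once the alias points at the new index, the old one can be dropped.
es.indices.delete(index=OLD)
```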

This is the current solution, but some one-to-many updates are proving problematic for our PostgreSQL mirroring.
I was thinking of a Bulk API based solution.
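
A sketch of what that could look like with the bulk helper in elasticsearch-py (v8 assumed); the index name, field, and source rows are hypothetical:

```python
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("http://localhost:9200")

def update_actions(rows):
    # Each action is a partial "update" against an existing document,
    # so only the changed field is sent over the wire.
    for doc_id, new_value in rows:
        yield {
            "_op_type": "update",
            "_index": "my-index",      # hypothetical index name
            "_id": doc_id,
            "doc": {"my_field": new_value},
        }

# rows would come from the source of truth, e.g. a PostgreSQL query.
rows = [("1", "a"), ("2", "b")]
ok, errors = helpers.bulk(es, update_actions(rows), raise_on_error=False)
print(f"updated {ok} docs, {len(errors)} errors")
```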

Using bulk is always a good idea IMO. But not really related to the question IMHO.

Hi @dadoonet @Henrique_Brasileiro,

How about re-indexing?
Configuring more shards would distribute the data, and performing the update would then be the easier option, right?

Yeah. Reading the source documents from Elasticsearch with the Reindex API is probably the way to go.
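
A minimal sketch of that with the Reindex API through the v8 Python client; the index names and the script are placeholders:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Copy every document into a fresh index, rewriting the field in flight.
es.reindex(
    source={"index": "my-index-v1"},   # hypothetical source index
    dest={"index": "my-index-v2"},     # hypothetical destination index
    script={
        "source": "ctx._source.my_field = params.value",
        "lang": "painless",
        "params": {"value": "new-value"},
    },
    wait_for_completion=False,         # run as a background task on a 1 TB index
)
```

Once the reindex finishes, the alias swap shown earlier makes the new index live without interruption.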

My first response was just about update by query vs. the index API.

@dadoonet

Agree with you.

Thanks

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.