Currently, we are running a 3-node cluster with over 83 indices, 2,893 shards, and more than 500 million documents in total. We need to migrate all of this data to a different cluster. During the migration, we also need to clean up some property values and remove certain fields based on conditions. We are considering migrating with Logstash. We would like clarification from the Elasticsearch team: is Logstash the best way to migrate 5 TB of data from one cluster to another?
It is. You can also consider the reindex-from-remote feature, which runs entirely within Elasticsearch.
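For reference, a reindex-from-remote request looks roughly like this (a sketch: the host, credentials, and index names are placeholders, and the source host must be listed in `reindex.remote.whitelist` in the destination cluster's `elasticsearch.yml`):

```
POST _reindex
{
  "source": {
    "remote": {
      "host": "http://old-cluster:9200",
      "username": "user",
      "password": "pass"
    },
    "index": "my-index"
  },
  "dest": {
    "index": "my-index"
  }
}
```

You would run one such request per index (or script it over all 83 indices), and the data is pulled directly by the destination cluster without any intermediate tooling.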
If you can do your cleanup with an ingest pipeline, I'd probably use that.
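As an illustration of the ingest-pipeline approach, a pipeline like the one below could trim a property value and conditionally drop a field during the reindex (a sketch: `some_property`, `obsolete_field`, and the condition are hypothetical names standing in for your actual cleanup rules):

```
PUT _ingest/pipeline/migration-cleanup
{
  "description": "Trim a property value and conditionally remove a field",
  "processors": [
    {
      "trim": {
        "field": "some_property",
        "ignore_missing": true
      }
    },
    {
      "remove": {
        "field": "obsolete_field",
        "if": "ctx.obsolete_field == 'legacy'",
        "ignore_missing": true
      }
    }
  ]
}
```

You can then apply it during the migration by adding `"pipeline": "migration-cleanup"` to the `dest` section of the reindex request, so cleanup and transfer happen in a single pass.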
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.