I'm re-indexing some data from our old cluster into a new one. I pre-created my index (logstash-2023.10.02) and changed the total-fields mapping limit (index.mapping.total_fields.limit) to 4000, the same as the old index on the old host.
If I look at the new index's settings, I see the 4000 value there.
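For reference, this is roughly what I ran to set and verify the limit (Kibana Dev Tools syntax; index name as above):

```
# bump the field limit on the pre-created index
PUT logstash-2023.10.02/_settings
{
  "index.mapping.total_fields.limit": 4000
}

# confirm the setting is live
GET logstash-2023.10.02/_settings
```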
But when I try to re-index from the old cluster to the new one, it keeps throwing the 1000-field limit error. Do I need to change something else somewhere, or restart Elasticsearch?
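For context, the reindex call is essentially this (the old-cluster host is a placeholder, and it has to be whitelisted via reindex.remote.whitelist in elasticsearch.yml):

```
POST _reindex
{
  "source": {
    "remote": { "host": "http://old-cluster:9200" },
    "index": "logstash-2023.10.02"
  },
  "dest": { "index": "logstash-2023.10.02" }
}
```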
Thanks!
On the source cluster that particular index is about 31 GB.
Maybe what happened is that I ran Logstash on the new cluster and it created some logstash-* indices, and now I'm trying to move over older logstash indices that have different mappings? Could that be it?
I'll try this (rough commands sketched after the list):
1. Stop Logstash on the new cluster.
2. Delete all the existing logstash-* indexes.
3. Import the old indexes from the old cluster.
4. Start Logstash on the new cluster.
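A minimal sketch of the delete step (Dev Tools syntax again; note that wildcard deletes can be blocked by action.destructive_requires_name depending on the cluster's settings):

```
# remove the Logstash-created indexes on the new cluster
DELETE logstash-*
```

The import step is the same remote _reindex call as above, run once per old index.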
Hey Stephen! I think I've solved this issue. I created indexes on the new cluster named legacy_data and I'm re-indexing the old cluster's logstash-* indexes into legacy_data, so they don't conflict with the existing logstash indexes.
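So the reindex calls now just point dest at legacy_data, roughly like this (host is a placeholder; legacy_data was pre-created with the 4000 field limit as above):

```
POST _reindex
{
  "source": {
    "remote": { "host": "http://old-cluster:9200" },
    "index": "logstash-2023.10.02"
  },
  "dest": { "index": "legacy_data" }
}
```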
The 502 Bad Gateway messages were coming from the dev console timing out. I switched to a shell script and have re-indexed my Oct 2023 logstash data from the old cluster into the new one. Just a couple more months to re-index and I can throw that old cluster away.
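In case it helps anyone else, the script is basically a curl loop; submitting each reindex with wait_for_completion=false returns a task id immediately, so nothing stays open long enough to hit a gateway timeout. Hosts here are placeholders:

```bash
#!/bin/bash
# Sketch only -- adjust hosts/credentials for your clusters.
NEW="http://new-cluster:9200"
OLD="http://old-cluster:9200"

for DAY in $(seq -w 1 31); do
  INDEX="logstash-2023.10.$DAY"
  # fire off the reindex as a background task on the new cluster
  curl -s -X POST "$NEW/_reindex?wait_for_completion=false" \
       -H 'Content-Type: application/json' \
       -d "{
             \"source\": {
               \"remote\": { \"host\": \"$OLD\" },
               \"index\": \"$INDEX\"
             },
             \"dest\": { \"index\": \"legacy_data\" }
           }"
  echo   # newline between the task-id responses
done

# check progress with:
#   curl "$NEW/_tasks?actions=*reindex&detailed"
```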