Error: Limit of total fields [1000] has been exceeded but index limit is higher

I'm re-indexing some data from our old cluster into a new one. I pre-created my index (logstash-2023.10.02) and raised the total fields mapping limit to 4000, the same as the old index on the old host.
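
For reference, the limit can also be set in the same request that creates the index; this is just a minimal sketch, assuming an otherwise default index:

PUT logstash-2023.10.02
{
  "settings": {
    "index.mapping.total_fields.limit": 4000
  }
}
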
If I look at the new index settings I see the 4000 setting:

{
  "settings": {
    "index": {
      "routing": {
        "allocation": {
          "include": {
            "_tier_preference": "data_content"
          }
        }
      },
      "mapping": {
        "total_fields": {
          "limit": "4000"
        }
      }
    }
  }
}

But when I try to re-index from the old cluster to the new one, it keeps throwing the 1000-field limit error. Do I need to change something else somewhere, or restart Elasticsearch?
Thanks!

@MColeman

Are you sure that is the index you are trying to write to?

Hey Stephen! Here is my reindex command:

POST _reindex?pretty
{
  "source": {
    "remote": {
      "host": "http://remote_host:9200"
    },
    "index": "logstash-2023.10.02"
  },
  "dest": {
    "index": "logtash-2023.10.02"
  }
}

So my understanding was that it would query that entire index from remote_host and put it all into the destination index on the new host.

Thanks!

I ran it from the Dev Tools console on the new host, if that makes any difference.

How did you do this?

And when you run this against the destination:

GET logtash-2023.10.02

You see the correct settings?

What versions are the source and destination clusters?

Just for grins, as a test, try using a different destination name that does not start with logstash-.
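
One way to check whether something else (for example an index template that matches logstash-*) is applying its own settings to new indices is to list the templates and simulate the index name. This is only a sketch, using the index name from this thread:

# List composable index templates and the patterns they match
GET _index_template

# Preview the settings and mappings a brand-new index with this name would get
POST _index_template/_simulate_index/logstash-2023.10.02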

I created the index like this:

PUT /logstash-2023.10.02

and then

PUT logstash-2023.10.02/_settings
{
  "index.mapping.total_fields.limit":4000
}

The GET output looks correct:

GET logstash-2023.10.02
{
  "logstash-2023.10.02": {
    "aliases": {},
    "mappings": {},
    "settings": {
      "index": {
        "routing": {
          "allocation": {
            "include": {
              "_tier_preference": "data_content"
            }
          }
        },
        "mapping": {
          "total_fields": {
            "limit": "4000"
          }
        },
        "number_of_shards": "1",
        "provided_name": "logstash-2023.10.02",
        "creation_date": "1703793290300",
        "number_of_replicas": "1",
        "uuid": "vT7d31JdQsa-upBAqeDtpA",
        "version": {
          "created": "8500003"
        }
      }
    }
  }
}

The source cluster (old) is 7.16.1 on Linux and the target (new) is 8.11.3 on Windows.

Interestingly enough I did this:

PUT /test_index

PUT test_index/_settings
{
  "index.mapping.total_fields.limit":4000
}

GET test_index

POST _reindex?pretty
{
  "source": {
    "remote": {
      "host": "http://lxdev10:9200"
    },
    "index": "logstash-2023.10.02"
  },
  "dest": {
    "index": "test_index"
  }
}

and received this

{
  "statusCode": 502,
  "error": "Bad Gateway",
  "message": "Client request timeout"
}

On the source, that particular index is about 31 GB.
Maybe what happened is that I ran Logstash on the new cluster and it created some logstash-* indices, and now I'm trying to move over older logstash indices with different mappings? Could that be it?

I'll try this:
stop logstash on the new cluster
delete all the existing logstash* indexes
import the old indexes from the old cluster
start logstash on the new cluster

Hey Stephen! I think I've solved this issue. I created indexes on the new cluster named legacy_data and I'm re-indexing the old cluster's logstash* indexes into legacy_data so they don't conflict with the existing logstash indexes.
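
The destination needs the raised limit as well, since legacy_data is a brand-new index that would otherwise get the default of 1000 fields. A minimal sketch (the reindex body is the same as my earlier one, just with "dest" pointed at legacy_data):

PUT legacy_data
{
  "settings": {
    "index.mapping.total_fields.limit": 4000
  }
}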

The 502 Bad Gateway messages were coming from the Dev Tools console timing out. I switched to a shell script and have re-indexed my October 2023 logstash data into the new cluster. Just a couple more months to re-index and I can throw that old cluster away.

Thanks!
Mark
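
In case it helps anyone hitting the same console timeout: one option is to run the reindex asynchronously so the HTTP client doesn't have to wait on it at all. This is only a sketch (the host and index names are the ones from earlier in the thread, and the task id is a placeholder):

POST _reindex?wait_for_completion=false
{
  "source": {
    "remote": {
      "host": "http://remote_host:9200"
    },
    "index": "logstash-2023.10.02"
  },
  "dest": {
    "index": "legacy_data"
  }
}

# The response contains a task id; poll it to watch progress
GET _tasks/<node_id>:<task_number>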

