Limit of total fields [1000] exceeded for index with 699 fields

Elasticsearch 7.2.1

Ingesting an XLSX sheet with one title row and one data row, using the excellent tool GitHub - codingchili/excelastic (a Vert.x web and command-line application to import CSV/XLS/XLSX files into Elasticsearch).

  • Index created at ingestion time
  • Dynamic mapping used (only in the dev phase, to have an initial mapping to get started with for the final product)
  • XLSX sheet contains 347 fields, so I'm expecting around 694 mapped fields excluding system-generated fields (a text field plus a keyword sub-field per column); see the settings check after this list.
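
For reference, this is how I check the effective field limit on the generated index (stock settings API; with include_defaults=true the response also shows defaults such as index.mapping.total_fields.limit, which is 1000 unless set explicitly):

GET qtitle2/_settings?include_defaults=true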

Getting the following error message:

[2020-03-02T00:50:51,053][DEBUG][o.e.a.a.i.m.p.TransportPutMappingAction] [node1] failed to put mappings on indices [[[qtitle2/Q0vYr8G7RfGmsu8NxtT3eA]]], type [default]
java.lang.IllegalArgumentException: Limit of total fields [1000] in index [qtitle2] has been exceeded
        at org.elasticsearch.index.mapper.MapperService.checkTotalFieldsLimit(MapperService.java:605) ~[elasticsearch-7.2.1.jar:7.2.1]
        at org.elasticsearch.index.mapper.MapperService.internalMerge(MapperService.java:508) ~[elasticsearch-7.2.1.jar:7.2.1]
        at org.elasticsearch.index.mapper.MapperService.internalMerge(MapperService.java:402) ~[elasticsearch-7.2.1.jar:7.2.1]
        at org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:335) ~[elasticsearch-7.2.1.jar:7.2.1]
        at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.applyRequest(MetaDataMappingService.java:315) ~[elasticsearch-7.2.1.jar:7.2.1]
        at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.execute(MetaDataMappingService.java:238) ~[elasticsearch-7.2.1.jar:7.2.1]
        at org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:687) ~[elasticsearch-7.2.1.jar:7.2.1]
        at org.elasticsearch.cluster.service.MasterService.calculateTaskOutputs(MasterService.java:310) ~[elasticsearch-7.2.1.jar:7.2.1]
        at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:210) [elasticsearch-7.2.1.jar:7.2.1]
        at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:142) [elasticsearch-7.2.1.jar:7.2.1]
        at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-7.2.1.jar:7.2.1]
        at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-7.2.1.jar:7.2.1]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:688) [elasticsearch-7.2.1.jar:7.2.1]
        at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-7.2.1.jar:7.2.1]
        at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-7.2.1.jar:7.2.1]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_221]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_221]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_221]
[2020-03-02T00:50:51,063][INFO ][o.e.c.r.a.AllocationService] [node1] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[qtitle2][0]] ...]).

A sample of my data as CSV (which shows the same behaviour, by the way). I have a number of multi-fields, as you can see, which are automatically generated from column names such as "q1.title".

q1.title,q2.title,q3.title,q4
What is your name?,How old are you?,Where do you currently live?,Can you read and write?

And the resulting dynamic mapping:

"q1" : {
          "properties" : {
            "title" : {
              "type" : "text",
              "fields" : {
                "keyword" : {
                  "type" : "keyword",
                  "ignore_above" : 256
                }
              }
            }
          }
        },
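
As a sanity check, my rough count per column based on that mapping sample (I am not sure whether the parent object mapping counts toward the field limit too; if I read the docs correctly, it does):

q1               -> 1 object mapping
q1.title         -> 1 text field
q1.title.keyword -> 1 keyword field

347 columns x 2 leaf fields = 694, which is where my estimate comes from. If the object mappings count as well, that would be 347 x 3 = 1041.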

I increased the maximum number of fields to 2000 as follows for troubleshooting:

DELETE qtitle2
PUT qtitle2
{
  "settings":{
    "index.mapping.total_fields.limit": 2000
  }
}
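
Since the index is created by the tool at ingestion time, pre-creating it by hand before every upload is a bit awkward. An alternative I am considering (a sketch, assuming the legacy template API in 7.2 and that my indices match a qtitle* naming pattern) is an index template that applies the setting to every matching index automatically:

PUT _template/qtitle_fields
{
  "index_patterns": ["qtitle*"],
  "settings": {
    "index.mapping.total_fields.limit": 2000
  }
}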

Now ingestion works, and the number of fields reported is 699, which is what was expected as shown above.
Any idea what might go wrong when using the default setting?
(I have looked around on the forum but found nothing addressing this particular issue.)

Just a thought: maybe your index already contained an initial mapping, with other field names?

Could you reproduce the problem on a clean index?
Is there any way to debug and print what is actually sent by the tool to Elasticsearch?
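
If the tool itself has no debug flag, one option on the Elasticsearch side might be the REST request tracer (I believe the HttpTracer logger is available in 7.x; it logs every incoming HTTP request at TRACE level, so it is very noisy and should be switched off again afterwards):

PUT _cluster/settings
{
  "transient": {
    "logger.org.elasticsearch.http.HttpTracer": "TRACE"
  }
}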

Hi, and thanks for the fast reply.

There is no initial mapping; the index is created upon ingestion/upload from the tool, with all fields dynamically created (verified below).
Unless there is a known bug in Elasticsearch 7.2, my thought was also that the tool might do something during ingestion that we don't see (creating temporary mappings and the like). I will check for debug flags.
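
For completeness, this is how I verified that nothing pre-existing contributes fields: no index template matches the index name, and the mapping contains only the dynamically created fields.

GET _template
GET qtitle2/_mapping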

I may also try your own excellent FSCrawler, which I used some time ago.
The problem is temporarily solved, though, by increasing the maximum number of fields.
