Continuously getting "java.lang.IllegalArgumentException: Limit of total fields [1000] in index has been exceeded" error

I am running an ELK cluster on version 5.6.16.

I have been continuously getting the error below in the Elasticsearch logs for quite some time, for only one index. Is there any way to reduce the number of fields, or will I have to increase the limit?

I am looking for a solution that can be set in the configuration file.

[2019-06-28T05:14:26,727][DEBUG][o.e.a.b.TransportShardBulkAction] [] [www-2019.06.27][0] failed to execute bulk item (index) BulkShardRequest [[www-2019.06.27][0]] containing [27] requests
java.lang.IllegalArgumentException: Limit of total fields [1000] in index [www-2019.06.27] has been exceeded
	at org.elasticsearch.index.mapper.MapperService.checkTotalFieldsLimit( ~[elasticsearch-5.6.16.jar:5.6.16]
	at org.elasticsearch.index.mapper.MapperService.internalMerge( ~[elasticsearch-5.6.16.jar:5.6.16]
	at org.elasticsearch.index.mapper.MapperService.internalMerge( ~[elasticsearch-5.6.16.jar:5.6.16]
	at org.elasticsearch.index.mapper.MapperService.merge( ~[elasticsearch-5.6.16.jar:5.6.16]
	at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.applyRequest( ~[elasticsearch-5.6.16.jar:5.6.16]
	at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.execute( ~[elasticsearch-5.6.16.jar:5.6.16]
	at org.elasticsearch.cluster.service.ClusterService.executeTasks( ~[elasticsearch-5.6.16.jar:5.6.16]
	at org.elasticsearch.cluster.service.ClusterService.calculateTaskOutputs( ~[elasticsearch-5.6.16.jar:5.6.16]
	at org.elasticsearch.cluster.service.ClusterService.runTasks( ~[elasticsearch-5.6.16.jar:5.6.16]
	at org.elasticsearch.cluster.service.ClusterService$ ~[elasticsearch-5.6.16.jar:5.6.16]
	at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed( ~[elasticsearch-5.6.16.jar:5.6.16]
	at org.elasticsearch.cluster.service.TaskBatcher$ ~[elasticsearch-5.6.16.jar:5.6.16]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ ~[elasticsearch-5.6.16.jar:5.6.16]
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean( ~[elasticsearch-5.6.16.jar:5.6.16]
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$ ~[elasticsearch-5.6.16.jar:5.6.16]
	at java.util.concurrent.ThreadPoolExecutor.runWorker( [?:1.8.0_131]
	at java.util.concurrent.ThreadPoolExecutor$ [?:1.8.0_131]

Can anyone help me here?

You can update the limit of total fields after the index has been created:

PUT my_index/_settings
{
  "index.mapping.total_fields.limit": 2000
}
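
Since you asked about a configuration-level solution: you can also put the setting into an index template so that every new daily index picks it up automatically. A sketch for 5.x, assuming a template name of logstash-www and the index pattern www-* (adjust both to your naming):

PUT _template/logstash-www
{
  "template": "www-*",
  "settings": {
    "index.mapping.total_fields.limit": 2000
  }
}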

Note that too many fields will lead to a mapping explosion, which is not good practice.

Agreed. So maybe reducing the number of fields is the first thing you need to look at.
Why do you have so many?
Do some of them have the same meaning?

Yes, I am looking for a way to reduce the unwanted fields; I am not in favour of increasing the total fields limit.
Please see the attached screenshot. There were 2.2k fields in the index. I am talking about these unwanted values which are being detected as fields.

These names are part of the request URL, and I don't want them in the index.

Can I define selective fields for the index in the configuration? Let me know if more info is required. Please help.

Why are you sending them to Elasticsearch in the first place? If you don't need them, don't send them.
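
If dropping them before output is an option, one way is a Logstash prune filter that whitelists only the fields you actually want. A sketch, assuming the field names your grok pattern produces (the list below is an assumption, not complete; the prune filter may need the logstash-filter-prune plugin installed):

filter {
  prune {
    # keep only fields matching these patterns; everything else is removed
    whitelist_names => [ "^@timestamp$", "^message$", "^request$", "^method$", "^response$", "^clientip$", "^bytes_read$" ]
  }
}

Alternatively, mutate { remove_field => [ ... ] } works if you can enumerate the unwanted fields instead.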

I am sending the complete access log to the ELK stack and storing the URL pattern below under the request field.


Please see this access log:

,, - - [03/Jul/2019:17:03:14 +0530] "GET /webmain/autosuggest.php?cases=what&search=chem&city=Samastipur&area=&s=1&pg=index HTTP/1.0" 200 3598 "" "Mozilla/5.0 (X11; Linux i686; rv:34.0) Gecko/20100101 Firefox/34.0" "REMOTE_ADDR :" "TRUE_CLIENT :" "AKAXFF :" 1.218 0.154 IN .

Logstash pattern:

(?<x_forwarded_for>%{IP}, .*|%{IP:xforwardedfor}|-) (%{NGUSER:ident}|-) (%{NGUSER:auth}|%{USERNAME:user}|-) [%{HTTPDATE:timestamp}] "(?:%{WORD:method} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:request})" %{NOTSPACE:response} (%{NOTSPACE:bytes_read}|-) (%{DATA:request_header_referer}|-) "(%{DATA:request_header_agent}|-)" "(REMOTE_ADDR : %{DATA:clientip}|-)" "(TRUE_CLIENT : %{DATA:http_true_client_ip}|-)" "(AKAXFF : %{DATA:http_akaxff}|-)" (?:%{HOSTNAME:http_host}|%{IP}|%{HOSTNAME:http_host}:%{POSINT}|-) (?:%{BASE10NUM:request_duration}|-) (?:%{BASE10NUM:upstream_request_duration}|-) (%{WORD:Country}|-)

If I'm not mistaken, the request field just contains /webmain/autosuggest.php?cases=what&search=chem&city=Samastipur&area=&s=1&pg=index, right?

If so, and based only on the information you have given so far, which might be incomplete, you end up with a document like:

  "request": "/webmain/autosuggest.php?cases=what&search=chem&city=Samastipur&area=&s=1&pg=index"

This is not generating as many fields as you showed.

As you said, whatever data fills the request field ends up correctly in ELK. My only concern is that I want to prevent the unwanted fields below from being created. That's the reason I raised this case.


I hope you understand my concern.

You can always use a strict mapping with "dynamic": "strict" or "dynamic": false. See the documentation on dynamic mapping.
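
For example, "dynamic": false stops new fields from being added to the mapping (the data is still kept in _source, just not indexed or searchable), while "strict" rejects any document containing an unknown field. A sketch for 5.x, assuming your mapping type is called logs (adjust to your actual type name):

PUT www-2019.06.27/_mapping/logs
{
  "dynamic": false
}

You can also put the same "dynamic" setting into the mappings section of an index template so new daily indices get it automatically.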

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.