I have been continuously getting the error below in the Elasticsearch logs for quite some time, for only one index. Is there any way to reduce the number of fields, or will I have to increase the field limit?
I am looking for a way to configure this in the configuration file.
[2019-06-28T05:14:26,727][DEBUG][o.e.a.b.TransportShardBulkAction] [29-121-IDC.justdial.com] [www-2019.06.27][0] failed to execute bulk item (index) BulkShardRequest [[www-2019.06.27][0]] containing [27] requests
java.lang.IllegalArgumentException: Limit of total fields [1000] in index [www-2019.06.27] has been exceeded
at org.elasticsearch.index.mapper.MapperService.checkTotalFieldsLimit(MapperService.java:604) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.index.mapper.MapperService.internalMerge(MapperService.java:420) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.index.mapper.MapperService.internalMerge(MapperService.java:336) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:268) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.applyRequest(MetaDataMappingService.java:311) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.execute(MetaDataMappingService.java:230) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.cluster.service.ClusterService.executeTasks(ClusterService.java:634) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.cluster.service.ClusterService.calculateTaskOutputs(ClusterService.java:612) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.cluster.service.ClusterService.runTasks(ClusterService.java:571) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.cluster.service.ClusterService$ClusterServiceTaskBatcher.run(ClusterService.java:263) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:576) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:247) ~[elasticsearch-5.6.16.jar:5.6.16]
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:210) ~[elasticsearch-5.6.16.jar:5.6.16]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
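As far as I understand, the limit can be raised per index with the index.mapping.total_fields.limit setting (it does not seem to be an elasticsearch.yml setting), something like the request below, where 2000 is just an example value. But I would prefer to avoid that if possible:

PUT www-2019.06.27/_settings
{
  "index.mapping.total_fields.limit": 2000
}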
Agreed. So maybe reducing the number of fields is the first thing you should look at.
Why do you have so many?
Do some of them have the same meaning?
Yes. I am looking for a way to reduce the unwanted fields. I am not in favour of increasing the total field limit.
Please see the attached screenshot. There are 2.2k fields in the index. I am talking about these unwanted values which are being detected as fields.
These names are part of the request URL and I don't want them in the index.
Can I define a selective set of fields for the index in the configuration? Let me know if more info is required. Please help.
As you said, whatever data fills the request fields ends up correctly in ELK. My only concern is that I want to prevent these unwanted fields from being created. That's the reason I raised this case.
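From what I have read, one option might be to turn off dynamic mapping for these indices via an index template, so that new fields are no longer added to the mapping automatically. This is only a sketch based on my understanding; the template name and index pattern below are just examples:

PUT _template/www-logs
{
  "template": "www-*",
  "mappings": {
    "_default_": {
      "dynamic": false
    }
  }
}

If I understand the docs correctly, with "dynamic": false the extra values still remain in _source but are not added to the mapping or indexed, so the total field count stops growing; "strict" would reject documents containing unmapped fields instead. Would something like this be the right approach?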