We're encountering an issue with our Elasticsearch pods:
The error message we're seeing is: [Limit of total fields [1000] has been exceeded while adding new fields [644]], which is causing Fluentd to upload logs only partially.
We have multiple indices configured, and for some of them we've raised the index.mapping.total_fields.limit setting as needed. In this case, however, the default limit is being exceeded, and the index that's breaching it is not printed in the error logs.
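For context, this is roughly how we raise the limit on the indices where we expect many fields (a minimal sketch in Python; the index name and host are placeholders for our setup):

```python
import requests

ES = "http://localhost:9200"  # placeholder: our cluster endpoint
INDEX = "app-logs-2024.01"    # placeholder: one of our indices

# Raise the per-index field limit above the default of 1000.
resp = requests.put(
    f"{ES}/{INDEX}/_settings",
    json={"index.mapping.total_fields.limit": 2000},
)
resp.raise_for_status()
print(resp.json())  # expect {'acknowledged': True}
```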
From other discussions, I see that the offending index name usually gets printed in the error. In my case I can't determine which index is exceeding the limit. How can I identify the problematic index?
Elasticsearch version: 7.17.1
We have many indices configured for Fluentd to write to, and I can't identify which one is exceeding the total fields limit.
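For what it's worth, counting mapped fields per index via the _mapping API would look something like this (a minimal sketch, assuming an unauthenticated cluster at localhost:9200; the count is only an approximation of what Elasticsearch enforces, which also includes things like field aliases):

```python
import requests

ES = "http://localhost:9200"  # assumption: local, unauthenticated cluster

def count_fields(properties):
    """Recursively count mapped fields, including nested object
    properties and multi-fields declared under 'fields'."""
    total = 0
    for field in properties.values():
        total += 1
        total += count_fields(field.get("properties", {}))
        total += count_fields(field.get("fields", {}))
    return total

# Fetch mappings for all indices and print each index's field count.
mappings = requests.get(f"{ES}/_mapping").json()
for index, body in sorted(mappings.items()):
    props = body.get("mappings", {}).get("properties", {})
    print(f"{index}: {count_fields(props)} fields")
```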