We're running Elasticsearch 7.0 but are still importing data from some Filebeat 6.6.0 systems through Logstash. When running queries in Kibana we see shard failures, and Elasticsearch reports:
Caused by: java.lang.IllegalArgumentException: field expansion matches too many fields, limit: 1024, got: 1369
I can fix this manually in Kibana, but new indices keep having this problem. Is there some way to prevent it beforehand?
You'll need to update the Filebeat index template to set the `index.query.default_field` setting, as described here; the procedure is the same, you just update the index template instead of the index itself.
You can find information on how to retrieve and update index templates here.
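For illustration, the change looks roughly like the following (the template name and field list here are placeholders; pull the real values from your own filebeat-6.6.0 template and the filebeat-7.0 template's `index.query.default_field` entry):

```
GET _template/filebeat-6.6.0

PUT _template/filebeat-6.6.0
{
  ...full existing template body...,
  "settings": {
    "index.query.default_field": ["message", "fields.*"]
  }
}
```

Note that `PUT _template` replaces the whole template, so merge the setting into the full body returned by the `GET` rather than sending only the `settings` block. The template only affects newly created indices; `index.query.default_field` is a dynamic setting, so existing indices can be fixed directly with `PUT filebeat-6.6.0-*/_settings`.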
Thanks. I pulled the index.query.default_field setting from the filebeat-7.0 template and added it to the filebeat-6.6.0 one and that appears to have done the trick.