So, I'm having an odd issue.
I am getting no data when I select my logstash-* index pattern in Kibana, nor will it work in visualizations.
It does show shard failures.
If I try logstash-env-* and other broad patterns, there are no issues.
All the indices I checked that are under logstash-* have no problems.
Nothing jumps out in logs or the support diagnostics.
If I use Dev Tools to query logstash-* I get results fine.
Before you say mapping conflict: yes, I do have mapping conflicts (who doesn't in a pattern going across all indices? though I do want to fix that), but I have another cluster that is nearly identical, with the same mapping conflicts, and it is fine.
Kinda not sure where to look from here.
Anyone have ideas?
I think this is related to this error: Caused by: java.lang.IllegalArgumentException: Trying to retrieve too many docvalue_fields. Must be less than or equal to: [100] but was [142]. This limit can be set by changing the [index.max_docvalue_fields_search] index level setting.
But I am only seeing it on the logstash-* index pattern, and there is no index setting I can put on an index pattern to raise this limit.
The one index that has a huge number of fields with docvalues is able to load fine on its own.
@Asa_Zalles-Milner I think you are right on the cause of the problem, and it makes sense that it would work for logstash-env-* and not for logstash-*.
I think the best thing to do here is a solution already posted on another topic (Kibana requesting too many doc values), which I will also transcribe here:
As you suggested, you could try increasing max_docvalue_fields_search for each index, though I have concerns this might not scale. If you continue to index documents with new fields then you'll probably bump up against this limit again.
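For anyone else who hits this, a minimal sketch of that first option from Dev Tools, assuming a limit of 200 is acceptable (the value is just an example). The wildcard applies the dynamic setting to every existing index the pattern matches; newly created indices would still need it via a template:

```
PUT logstash-*/_settings
{
  "index.max_docvalue_fields_search": 200
}
```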
If you don't actually need to query/aggregate on all of those date fields, then maybe you could try mapping them as strings instead. Then they won't be requested as docvalue fields.
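And a rough sketch of that second option using the legacy index template API (the template and field names here are made up, and the exact syntax varies by Elasticsearch version): mapping a date-like field as keyword means Kibana won't request it as a docvalue field in new indices.

```
PUT _template/logstash-dates-as-keywords
{
  "index_patterns": ["logstash-*"],
  "order": 10,
  "mappings": {
    "properties": {
      "some_rarely_queried_timestamp": { "type": "keyword" }
    }
  }
}
```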
You could also be trying to fit too many different types of data into a single index pattern, i.e. your index pattern is too greedy. Would your use case allow you to define index patterns that match fewer indices, and thus include fewer date fields?
The part I don't get is WHY it aggregates all of the docvalue fields first. No single index has too many docvalue fields; each index is fine on its own. It looks like Kibana first collects the docvalue fields across all the indices that match the pattern, and THEN checks that combined list against the limit for each index it matches; only then does it come up with too many. (Upping the docvalue limit did temporarily fix the problem, but not the root of the issue.)
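One way to see what it is working from, assuming Kibana is requesting all date fields as docvalue fields: the field caps API returns the union of fields across every index the pattern matches, which can be far larger than any single index's field list.

```
GET logstash-*/_field_caps?fields=*
```

Counting the entries whose type is date in that response should roughly line up with the [142] in the error, even though each individual index stays under 100.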