First of all, this is my first "real" post here, so excuse me if something is off or not according to the rules. I've read them, but I may have missed something; I will update this post if so.
I've already found something that works around my problem, but I still have some questions about this issue. The following link contains the "procedure" to reproduce the problem I am facing.
I have daily indices which roll over according to my policies. This isn't a permanent solution, because I have to repeat the procedure every day to keep the mapping working for the Monitoring UI; otherwise I get the aforementioned [illegal_argument_exception] error.
Second, the following link to the docs. It gave me a deeper understanding of what fielddata actually is, but I couldn't get a solution out of it.
Setting fielddata to true would NOT be my first choice because of the performance issues, but I couldn't find another solution. I've also checked the mapping of the index in question. When I delete the filebeat- index, everything works as intended; as soon as the first document is indexed into that index, the Monitoring UI stops working with the [illegal_argument_exception].
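For reference, the mapping of a single field can be inspected with the field-mapping API. A minimal sketch, assuming the index pattern is `filebeat-*` and Elasticsearch listens on localhost:9200 (adjust both to your setup):

```shell
# Show how event.dataset is currently mapped in the filebeat indices.
# If it comes back as "type": "text" with no keyword sub-field, the
# Monitoring UI's aggregation on that field will fail as described.
curl -s -XGET 'http://localhost:9200/filebeat-*/_mapping/field/event.dataset?pretty'
```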
The error that appears in the bottom-right:
[illegal_argument_exception] Fielddata is disabled on text fields by default. Set fielddata=true on [event.dataset] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory. Alternatively use a keyword field instead.: Check the Elasticsearch Monitoring cluster network connection or the load level of the nodes.
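The error message itself points at the cleaner fix it suggests: map `event.dataset` as `keyword` instead of `text`, rather than enabling fielddata. A minimal sketch of an index template that does only that (hypothetical template name; the real Filebeat template covers far more fields, and this legacy `_template` API shape assumes Elasticsearch 7.x):

```shell
# Hypothetical minimal template: maps event.dataset as keyword so
# aggregations work without fielddata. Note this only affects indices
# created AFTER the template is installed; existing indices keep their
# old mapping until they roll over or are reindexed.
curl -s -XPUT 'http://localhost:9200/_template/filebeat-event-dataset' \
  -H 'Content-Type: application/json' -d'
{
  "index_patterns": ["filebeat-*"],
  "order": 1,
  "mappings": {
    "properties": {
      "event": {
        "properties": {
          "dataset": { "type": "keyword", "ignore_above": 1024 }
        }
      }
    }
  }
}'
```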
Hmm, that's odd. Filebeat normally creates the index template, if one doesn't exist, right before it indexes the first document.
Could you share your filebeat.yml file (with any sensitive information redacted)? Also, when you start up Filebeat, are there any errors or warnings in the logs?
I think I've found the solution. Your post was the missing piece. I forgot to mention an important detail in my first post: I index the documents via a Logstash instance. Because of that, I didn't have any index template at all for Filebeat, since Filebeat never connects to Elasticsearch directly and therefore never installs one. I exported the template directly from a Filebeat instance and curl'd it onto my ELK stack manually. I adapted everything to my needs in my existing templates (reorganized the "order" values, etc.), and now everything works as it should.
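For anyone with the same setup (Filebeat → Logstash → Elasticsearch), the export-and-load step above can be sketched like this, assuming Filebeat 6.x or later (where the `filebeat export template` command exists) and a cluster on localhost:9200:

```shell
# Export the index template bundled with this Filebeat version
# to a local file.
filebeat export template > filebeat.template.json

# Load it into the cluster manually, since Filebeat won't do it
# when shipping through Logstash. Adjust the host and template name;
# Filebeat normally names the template after its version.
curl -s -XPUT 'http://localhost:9200/_template/filebeat' \
  -H 'Content-Type: application/json' \
  -d @filebeat.template.json
```

This has to be repeated after upgrading Filebeat, since the exported template is tied to the Filebeat version that produced it.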
I learned my lesson and will repeat all of this after any future update, because I assume that's a must given the architecture of ELK?