I'm trying to make the leap from developing and using ELK (plus Filebeat) to deploying it. I need to deploy Kibana in such a way that the index pattern "filebeat-*" is already configured/resident in Kibana, so users won't have to learn to do that--it will just already be done.
From Googling around, I suspect what I'm asking isn't exactly how it's done, but I think the question conveys what I mean. Please explain how to deploy Kibana with an index pattern pre-installed, ready to be used on the data that I know will flow from Filebeat to Logstash to Elasticsearch and then to Kibana.
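To make concrete the kind of deploy-time step I have in mind, here is a sketch using Kibana's Saved Objects API to create the index pattern programmatically. The host, port, lack of auth, and the "@timestamp" time field are all assumptions about my setup; adjust as needed:

```shell
# Sketch: create the "filebeat-*" index pattern at deploy time via
# Kibana's Saved Objects API. Assumes Kibana is reachable on
# localhost:5601 without authentication (an assumption -- add
# credentials/TLS for a real deployment).
curl -X POST "http://localhost:5601/api/saved_objects/index-pattern/filebeat-*" \
  -H "kbn-xsrf: true" \
  -H "Content-Type: application/json" \
  -d '{
        "attributes": {
          "title": "filebeat-*",
          "timeFieldName": "@timestamp"
        }
      }'
```

A step like this could run once from a deploy script or container entrypoint after Kibana comes up, so the pattern exists before any user logs in.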
I'm using Filebeat containers on my application nodes to ship log entries to a container running the ELK stack. This all works perfectly. I'm now looking for a solution that gives users a functioning Kibana as soon as they load it in their browser, without them having to learn to create an index pattern.
I waded through all of that material long ago; it's why I'm asking the experts in this forum for help deploying the Elastic parts of my solution. I need a pre-installed index pattern (specifically, "filebeat-*"), and later I'm going to want to ship Kibana with premade visualizations. Finally, I'm going to ask how my users can save any visualizations and dashboard content they create so that it can be transferred to other instances of ELK they have running.
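For the last part, the kind of transfer I'm picturing is something like Kibana's Saved Objects export/import API. This is only a sketch; the source/target hostnames are placeholders, and I'm assuming no authentication is in play:

```shell
# Sketch: export dashboards, visualizations, and index patterns from one
# Kibana instance as NDJSON, then import them into another.
# "source-kibana" and "target-kibana" are hypothetical hostnames.
curl -X POST "http://source-kibana:5601/api/saved_objects/_export" \
  -H "kbn-xsrf: true" \
  -H "Content-Type: application/json" \
  -d '{"type": ["dashboard", "visualization", "index-pattern"]}' \
  -o export.ndjson

curl -X POST "http://target-kibana:5601/api/saved_objects/_import" \
  -H "kbn-xsrf: true" \
  --form file=@export.ndjson
```

If something like this is the intended mechanism, I'd be glad to hear how others script it between environments.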