Hmm, OK, then it might actually be better to do this in ES via a mapping and just drop anything that doesn't fit your structure. Do you know exactly what you want to keep?
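Something along these lines, for example. This is only a sketch; the index name (`my-logs`) and the fields are placeholders for whatever you actually want to keep. With `"dynamic": "strict"` Elasticsearch rejects documents that carry unmapped fields, while `"dynamic": false` would index only the mapped fields but still keep the extras in `_source`:

```
PUT my-logs
{
  "mappings": {
    "dynamic": "strict",
    "properties": {
      "@timestamp": { "type": "date" },
      "message":    { "type": "text" },
      "status":     { "type": "keyword" }
    }
  }
}
```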
Hi @warkolm
If I understand you correctly, in order to tailor the Discover part of Kibana for a particular presentation, the data should not even be permitted to enter the system; it should be removed right at the entrance via Logstash.
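In other words, something like this in the Logstash pipeline? (Just a sketch; the whitelisted field names are placeholders and it assumes the prune filter plugin is available.)

```
filter {
  # Keep only the whitelisted fields; everything else is dropped
  # before the event reaches Elasticsearch.
  prune {
    whitelist_names => ["^@timestamp$", "^message$", "^status$"]
  }
}
```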
So, in order to provide custom presentations of the same data, in terms of the fields shown in the Discover part of Kibana, do I have to run a separate (Logstash, Elasticsearch, Kibana) stack per presentation?