I am wondering how to create separate indexes for different logs fetched into Logstash (and then passed on to Elasticsearch), so that in Kibana I can view each kind of log separately.
In my case, I have a few client servers (each with Filebeat installed) and a centralized log server (ELK). Each client server has a different kind of log, so each Filebeat's events should go through a different grok filter.
I am using ELK 7.6, and I have heard that document_type was deprecated. So how can I achieve this?
Add a tag in your Filebeat configuration, and in your Logstash pipeline separate the actions to take with an if...else statement on the tags field (or whatever other field you're putting your tags into).
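A minimal sketch of what that looks like (the tag names, log paths, and grok patterns here are just examples, not something from your setup). In filebeat.yml on each client server:

```yaml
# filebeat.yml on one client server (path and tag are placeholders)
filebeat.inputs:
  - type: log
    paths:
      - /var/log/app-a/*.log
    tags: ["app-a"]

output.logstash:
  hosts: ["logstash-host:5044"]
```

Then in the Logstash pipeline, branch on the tags array:

```
# Logstash filter section: route events by tag
filter {
  if "app-a" in [tags] {
    grok { match => { "message" => "%{COMBINEDAPACHELOG}" } }
  }
  else if "app-b" in [tags] {
    grok { match => { "message" => "%{SYSLOGLINE}" } }
  }
}
```

Note the `in [tags]` membership test: tags is an array, so `==` comparisons against it won't match the way you might expect.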
Hi Fabio,
What tag do we have to add in the filebeat.yml file? I have added
if [type] == "xxx" in the Logstash filter section. So how will Filebeat read/direct logs to that particular tag (xxx) in Logstash?
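To get back to the original goal of separate indexes per log type, the same tag can also drive the Logstash output section. A sketch, assuming two example tags "app-a" and "app-b" (index names and hosts are placeholders):

```
# Logstash output section: write each tag to its own index
output {
  if "app-a" in [tags] {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "app-a-%{+YYYY.MM.dd}"
    }
  }
  else if "app-b" in [tags] {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "app-b-%{+YYYY.MM.dd}"
    }
  }
}
```

Each index then shows up separately in Kibana once you create an index pattern for it (e.g. app-a-*).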