I'm trying to set up filebeat to ingest 2 different types of logs. They're in different locations and they should output to different indexes. This is what I have so far:
I don't think I'm doing this right... In addition, I've never configured Logstash, so I'm at a loss as to whether or not using Logstash is necessary. I've always simply sent this data directly to Elasticsearch on port 9200.
Any help would be immensely appreciated! Thank you!
Choosing a good index name requires some consideration. In my example I opted for a common prefix, plus the beat version and the event acquisition date.

Having a common prefix ensures the index template is applied to both indices. Including the beat version (or some kind of versioning) ensures your setup will not break if you ever update Filebeat with a changed document type (e.g. the 5.4 and 6.2 template mappings are not fully compatible). The date at the end gives you a daily index. Having daily indices enables you to do some index lifecycle management, e.g. move old indices to cold storage or delete very old indices (retention policies).

The index setting will read fields.type when constructing the index name. If fields.type is missing, the default value of "other" will be used to construct the index name.
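A minimal sketch of what such a configuration might look like (the log paths and type values are hypothetical, and the prospectors syntax assumes Filebeat 5.x/6.x — later versions use filebeat.inputs instead):

```yaml
filebeat.prospectors:
# First log type: tag events with fields.type so the output can route them
- paths:
    - /var/log/app1/*.log        # hypothetical path to the first log set
  fields:
    type: app1
# Second log type, tagged differently
- paths:
    - /var/log/app2/*.log        # hypothetical path to the second log set
  fields:
    type: app2

output.elasticsearch:
  hosts: ["localhost:9200"]
  # common prefix + beat version + per-event type + daily date.
  # %{[fields.type]:other} falls back to "other" if fields.type is unset.
  index: "filebeat-%{[beat.version]}-%{[fields.type]:other}-%{+yyyy.MM.dd}"
```

With this, events from the first prospector land in e.g. filebeat-6.2.x-app1-2018.03.01 and events from the second in filebeat-6.2.x-app2-2018.03.01, while both still match a template pattern like filebeat-*.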