I am trying to figure out how to deal with different types of log files using Filebeat as the forwarder.
Basically I have several different log files I want to monitor, and I want to add an extra field to each entry to identify which log it came from, along with a few other small details. Everything is then forwarded on to Logstash for further processing, which is where each of those elements comes into play.
My problem is that it doesn't seem to play nicely once you add more than one file. Usually the last entry is the one that is used. The documentation is confusing as well with regard to how to achieve this, with document_type and input_type being used interchangeably.
filebeat:
  # List of prospectors to fetch data.
  prospectors:
    # Each - is a prospector. Below are the prospector-specific configurations
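For reference, here is a minimal sketch of how the prospectors section could look with one prospector per log type; the paths and type names are placeholders, not taken from the original post. Each - entry is its own prospector with its own document_type and custom fields. If several document_type settings end up under a single prospector, only the last one takes effect, which would match the "last entry wins" behaviour described above.

    filebeat:
      prospectors:
        # First prospector: the application log, with its own type and field.
        - input_type: log
          paths:
            - /var/log/app/app.log      # placeholder path
          document_type: app_log        # becomes the 'type' field on each event
          fields:
            log_source: app             # custom field, nested under 'fields' by default
        # Second prospector: the nginx access log, with a different type and field.
        - input_type: log
          paths:
            - /var/log/nginx/access.log # placeholder path
          document_type: nginx_access
          fields:
            log_source: nginx

    output:
      logstash:
        hosts: ["localhost:5044"]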
When indexing directly into Elasticsearch, all log lines are written to the same index (filebeat-<date>), but with different types. Based on 'type' you can then filter in Elasticsearch/Kibana.
The fields configuration given in your example is another solution.
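To illustrate that filtering, a query like the following (assuming the placeholder nginx_access type from the sketch above) would return only the entries from that log; this is a standard Elasticsearch query-string search, nothing Filebeat-specific:

    curl 'localhost:9200/filebeat-*/_search?q=type:nginx_access&pretty'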
"My problem is that it doesn't seem to play nicely once you add more than one file. Usually the last entry is the one that is used."

I don't understand. What exactly is the problem?
In Logstash you can filter based on type or on your custom fields. When indexing into Elasticsearch, your custom fields are indexed as well, so you can use them to filter your entries. But configuring 'document_type' should be all you need: when Logstash is set up according to the getting-started guide, the document_type configured in Filebeat determines the document type Logstash uses for indexing.
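As a sketch of that, assuming the placeholder type and field names from the Filebeat example above, a Logstash filter section could branch on either the type or the custom field like this:

    filter {
      # Branch on the 'type' set by document_type in Filebeat.
      if [type] == "nginx_access" {
        grok {
          match => { "message" => "%{COMBINEDAPACHELOG}" }
        }
      }
      # Custom fields arrive nested under 'fields' unless fields_under_root is set.
      if [fields][log_source] == "app" {
        mutate {
          add_tag => ["app"]
        }
      }
    }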