Hi,
I am using Filebeat 6.3 with the configuration below; however, multiple inputs in the Filebeat configuration with one Logstash output is not working.
Filebeat configuration:
filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log
  paths:
    - /opt/eureqa/TEST_filebeat/TE.log
  tags: ["TE"]
  fields: {log_type: te}

- type: log
  paths:
    - /opt/eureqa/TEST_filebeat/TMRS.log
  tags: ["TMRS"]
  fields: {log_type: tmrs}

  # Change to true to enable this input configuration.
  enabled: true
  reload.enabled: true
  reload.period: 10s
#============================== Kibana =====================================
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "192.168.0.159:5601"
#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["192.168.0.159:5998"]
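For reference, both inputs are shipped to the same Logstash endpoint, so the receiving side only needs a single beats input on that port, along these lines (a sketch, not my full pipeline; the filter and output sections are separate):

input {
  beats {
    # Must match the port in the Filebeat output.logstash section.
    port => 5998
  }
}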
Why do you need tags and the log_type field? These seem quite redundant to me.
The format string on the index setting doesn't look correct either. Assuming log_type is what you want, it should be index => "%{[fields][log_type]}-index".
Do you use aliases and some kind of rollover strategy? If not, consider adding a timestamp to the index name, so you can delete/archive old data after a given retention period.
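Something along these lines in the Logstash output, for example (the Elasticsearch host below is a guess based on your Kibana host; adjust to your setup):

output {
  elasticsearch {
    # Assumed Elasticsearch endpoint; point this at your node.
    hosts => ["192.168.0.159:9200"]
    # Note the closing } on the field reference; the date suffix gives
    # daily indices like "te-2018.07.25" and "tmrs-2018.07.25".
    index => "%{[fields][log_type]}-%{+YYYY.MM.dd}"
  }
}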
Hi,
I am using tags to differentiate my logs. My ELK stack is on one server and the Filebeat configuration is on another server (where my log files are generated).
I have logs from 5 components, which need to be parsed via the Logstash output configuration.
With the above configuration, the index name shows up in the Kibana dashboard literally as _index: %{[fields][log_type]-index.
Can you help me figure out where I made the mistake?
Also, my grok pattern and kv filter are not working when I use the above configuration.
Please note, index names without a timestamp are not recommended. You will have a hard time deleting old data when you are about to run out of disk space.
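As an illustration, with daily indices a tool like elasticsearch-curator can expire them automatically; a sketch of an action file, assuming a 30-day retention and your te- prefix:

actions:
  1:
    action: delete_indices
    description: "Delete te-* indices older than 30 days"
    options:
      ignore_empty_list: True
    filters:
      - filtertype: pattern
        kind: prefix
        value: te-
      - filtertype: age
        source: name
        direction: older
        timestring: '%Y.%m.%d'
        unit: days
        unit_count: 30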
Having log_type, I don't see why you need to configure tags in Filebeat, but it doesn't really hurt to do so.
No idea about the grok and kv filters. I'm no grok debugger, but have you tried a grok debugger like the one that comes with kibana?
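A common pattern is to branch on the field before applying the filters; a minimal sketch, with hypothetical patterns (your real ones aren't shown in this thread):

filter {
  if [fields][log_type] == "te" {
    grok {
      # Hypothetical pattern; replace with one matching your TE.log lines.
      match => { "message" => "%{TIMESTAMP_ISO8601:ts} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
    }
  } else if [fields][log_type] == "tmrs" {
    kv {
      # Hypothetical separators; adjust to the TMRS.log key=value format.
      field_split => " "
      value_split => "="
    }
  }
}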