Sending multiple log files from Filebeat to Logstash

I configured filebeat.yml to ship multiple log files and to add a log_type field for each file, like this:

filebeat.prospectors:
- paths:
    - /var/log/access.log
  fields: {log_type: access}
- paths:
    - /var/log/errors.log
  fields: {log_type: errors}

Then in logstash.cfg I want a filter like:

if [fields][log_type] == "errors" {
  do something
} else if [fields][log_type] == "access" {
  do something
}
but an error occurs so I can't start Filebeat; Logstash runs normally.
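For reference, a complete filter block along those lines might look like the sketch below. The grok patterns are only placeholders (an assumption on my part, not actual parsing logic for these files); by default Filebeat puts custom fields under [fields], which is why the conditional tests [fields][log_type]:

```
filter {
  if [fields][log_type] == "errors" {
    # placeholder: parse error lines here
    grok { match => { "message" => "%{GREEDYDATA:error_message}" } }
  } else if [fields][log_type] == "access" {
    # placeholder: parse access lines here
    grok { match => { "message" => "%{COMBINEDAPACHELOG}" } }
  }
}
```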

What's the error?

Dec 26 10:14:34 ubuntu systemd[1]: Stopped filebeat.
Dec 26 10:14:34 ubuntu systemd[1]: Started filebeat.
Dec 26 10:14:34 ubuntu filebeat[6300]: Exiting: error loading config file: yaml:
Dec 26 10:14:34 ubuntu systemd[1]: filebeat.service: Main process exited, code=e
Dec 26 10:14:34 ubuntu systemd[1]: filebeat.service: Unit entered failed state.
Dec 26 10:14:34 ubuntu systemd[1]: filebeat.service: Failed with result 'exit-co
Dec 26 10:14:35 ubuntu systemd[1]: filebeat.service: Service hold-off time over,
Dec 26 10:14:35 ubuntu systemd[1]: Stopped filebeat.
Dec 26 10:14:35 ubuntu systemd[1]: filebeat.service: Start request repeated too
Dec 26 10:14:35 ubuntu systemd[1]: Failed to start filebeat.

These are the error messages.

And this is my Filebeat configuration:

#=========================== Filebeat prospectors =============================

filebeat.prospectors:

# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.


  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - "/var/log/access.log"
   fields: {log_type: access}
  paths:
    -"/var/log/errors.log"
   fields: {log_type: errors}

Run ./filebeat.sh -configtest -e to test the configuration (if you installed via rpm/deb). What does that output?

Try this:

filebeat.prospectors:
- paths:
    - "/var/log/access.log"
  fields: {log_type: access}
- paths:
    - "/var/log/errors.log"
  fields: {log_type: errors}
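In case it helps, here is how that fix fits into a minimal complete filebeat.yml. The output.logstash host and port below are assumptions; point them at your own Logstash beats input:

```yaml
filebeat.prospectors:
- paths:
    - "/var/log/access.log"
  fields: {log_type: access}
- paths:
    - "/var/log/errors.log"
  fields: {log_type: errors}

# Assumed output -- adjust host:port to match your Logstash beats input:
output.logstash:
  hosts: ["localhost:5044"]
```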

Thanks, it's working, guys! Did you just add a - before each paths to fix my configuration?

In the configuration, prospectors is a YAML list so each individual prospector needs a - to begin a new list entry.

Thank you, I will read more about YAML.
Can I ask one more question? When I configure two files to be sent by one Filebeat, I check Kibana, but only one file's logs are parsed by Logstash and shown in Kibana. Do I need to configure a separate index for each file, or can I use the same index in Logstash?
Sorry if the question is too obscure.

I checked the Filebeat log and noticed it only reads the file that appears first, access.log; I don't see any activity for the error log.

Writing all of the data to the same index should not be a problem. I noticed I had a space missing from the second paths given in my example and I corrected it. Please update yours.

While you are testing you might need to delete the registry file to get Filebeat to resend data from those files.

You can increase the logging level to see more details on what Filebeat is doing while you debug the problem.
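For example, you can raise the log level in filebeat.yml. The selectors line below is optional and just a suggestion to cut down on noise:

```yaml
logging.level: debug
# Optionally limit debug output to the components relevant here:
logging.selectors: ["prospector", "harvester"]
```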


Many thanks, guys. Merry Christmas :))

Hey guys, I successfully configured Logstash and Filebeat on Ubuntu, but when I copied the configuration files to CentOS, Filebeat does not send the logs from error.log, only from access.log. This is the Filebeat debug log:
2016-12-29T14:25:24+07:00 DBG Start next scan
2016-12-29T14:25:24+07:00 DBG scan path /opt/apache2/logs/error_log
2016-12-29T14:25:24+07:00 DBG Check file for harvesting: /opt/apache2/logs/error_log
2016-12-29T14:25:24+07:00 DBG Same file as before found. Fetch the state.
2016-12-29T14:25:24+07:00 DBG Update existing file for harvesting: /opt/apache2/logs/error_log
2016-12-29T14:25:24+07:00 DBG Not harvesting, file didn't change: /opt/apache2/logs/error_log
2016-12-29T14:25:24+07:00 DBG End of file reached: /opt/apache2/logs/error_log; Backoff now.
2016-12-29T14:25:24+07:00 DBG End of file reached: /opt/apache2/logs/access_log; Backoff now.

Although I have updated error.log, Filebeat seems to think nothing changed in the file. I have searched around but can't find exactly the answer to this problem.
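For what it's worth, Filebeat identifies a file by its inode (plus device) and the offset stored in the registry file, so "Same file as before found" followed by "file didn't change" usually means the stored offset already matches the file size. Deleting the registry, as mentioned earlier, forces a full re-read. A quick plain-shell illustration (nothing Filebeat-specific) of why appending is matched as "the same file":

```shell
# Appending to a file does not change its inode, so Filebeat's registry
# still recognizes it as the same file and only harvests bytes past the
# stored offset.
f=$(mktemp)
ino_before=$(stat -c %i "$f")
echo "new error line" >> "$f"
ino_after=$(stat -c %i "$f")
[ "$ino_before" = "$ino_after" ] && echo "same inode"
rm -f "$f"
```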

This topic was automatically closed after 21 days. New replies are no longer allowed.