Hello everybody,
I am using Filebeat directly with Elasticsearch. I have installed Filebeat on my web frontend server, where multiple applications are deployed. Each application writes its own log. I want to use Filebeat to parse the log files of each application and send the data to Elasticsearch, and Elasticsearch should create an index for each application based on fields, application name, or tags. Is this possible without Logstash in the middle?
Here is my filebeat.yml configuration:
filebeat.prospectors:
- input_type: log
  paths:
    - d:/logs/tmp/Catalog/*.log
  include_lines: ['^Error']
  tags: ["catalog"]
  fields:
    app_id: catalog
    level: error
  scan_frequency: 10s
- input_type: log
  paths:
    - d:/logs/tmp/Onsurance/*.log
  include_lines: ['^Error']
  tags: ["onsurance"]
  fields:
    app_id: onsurance
    level: error
  scan_frequency: 5s
#-------------------------- Multiline options ------------------------------
multiline.pattern: '^Error;.*(?:\r?\n(?!Error;|Verbose;).*)*'
multiline.negate: false
multiline.match: after
#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["10.202.170.77:9200"]
  template.enabled: true
  template.path: "filebeat.template.json"
  template.overwrite: false
  indices:
    - index: "Catalog-%{+yyyy.MM.dd}"
      when.contains:
        tags: ["catalog"]
    - index: "Onsurance-%{+yyyy.MM.dd}"
      when.contains:
        tags: ["onsurance"]
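
Regarding the multiline error: from what I've read, Filebeat compiles multiline.pattern with Go's regexp package, which does not support negative lookaheads like `(?!...)`, so my regex101.com pattern probably cannot be expressed directly. My understanding is that the usual workaround is to match the lines that start a new event and set negate, roughly like this (a sketch I have not verified):

```yaml
# Sketch, assuming Go's regexp syntax (no lookahead support):
# any line that does NOT start with "Error;" or "Verbose;" is
# appended to the previous matching line as a continuation.
multiline.pattern: '^(Error|Verbose);'
multiline.negate: true
multiline.match: after
```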
Currently I have the following issues:
- Filebeat gives me an error on multiline.pattern. It does not accept my regular expression, although the regex works on regex101.com, so I have disabled multiline for now.
- It parses every line, although I have specified that only lines starting with Error; should be included.
- It creates a filebeat-* index. I want one index per application.
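
On the index issue, one thing I came across is that Elasticsearch index names must be lowercase, so "Catalog-%{+yyyy.MM.dd}" may be getting rejected and the events may be falling back to the default filebeat-* index. I also read that the contains condition expects a string rather than a list. A sketch of what I am considering instead (lowercase names; the string form of the condition is my assumption):

```yaml
output.elasticsearch:
  hosts: ["10.202.170.77:9200"]
  # Elasticsearch index names must be lowercase.
  indices:
    - index: "catalog-%{+yyyy.MM.dd}"
      when.contains:
        tags: "catalog"
    - index: "onsurance-%{+yyyy.MM.dd}"
      when.contains:
        tags: "onsurance"
```

Alternatively, since app_id is already set per prospector, a single `index: "%{[fields.app_id]}-%{+yyyy.MM.dd}"` might avoid the conditions entirely, assuming field references are supported in the index setting for this Filebeat version.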
Any suggestions on how I can achieve this?
Best regards