ppafford
(Phill Pafford)
October 15, 2018, 3:49pm
1
This is my setup:

Running Filebeat in its own Docker container on ECS: https://www.elastic.co/guide/en/beats/filebeat/current/running-on-docker.html
The log volume /var/log is mounted, with a filebeat directory created there so state persists across new deployments
Ran filebeat setup
My flow is: Filebeat to Redis, Redis to Logstash, Logstash to Elasticsearch

Filebeat picks up my application log files, and I see them in Kibana.
Filebeat picks up the ECS health-check pings to my running container, and I see those Apache logs in Kibana.

The issue: Filebeat does not pick up my custom VHost Apache logs, and I see no data for them in Kibana. I'm just not sure why.
Filebeat Config
{code}
## full file: https://github.com/elastic/beats/blob/master/filebeat/filebeat.reference.yml

## https://github.com/elastic/beats/blob/master/filebeat/filebeat.reference.yml#L770
filebeat.config:
  modules:
    enabled: true
    path: /usr/share/filebeat/modules.d/apache2.yml
    reload.enabled: true
    reload.period: 10s

## https://github.com/elastic/beats/blob/master/filebeat/filebeat.reference.yml#L753
filebeat.registry_file: /var/log/filebeat/registry
filebeat.registry_file_permissions: 0664

## https://github.com/elastic/beats/blob/master/filebeat/filebeat.reference.yml#L372
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/www/app/logs/app1*.log
  fields:
    app_name: app1
    environment: env
    region: region
    ecs_cluster: cluster
  fields_under_root: false
  ignore_older: 12h
  scan_frequency: 10s
  harvester_buffer_size: 16384
  max_bytes: 10485760
  symlinks: false
- type: log
  enabled: true
  paths:
    - /var/www/app/logs/app2*.log
  fields:
    app_name: app2
    environment: env
    region: region
    ecs_cluster: cluster
  fields_under_root: false
  ignore_older: 12h
  scan_frequency: 10s
  harvester_buffer_size: 16384
  max_bytes: 10485760
  symlinks: false

## https://github.com/elastic/beats/blob/master/filebeat/filebeat.reference.yml#L1548
path.data: /var/log/filebeat/data
path.logs: /var/log/filebeat/logs

## https://github.com/elastic/beats/blob/master/filebeat/filebeat.reference.yml#L1381
output.redis:
  enabled: true
  hosts: ["redis.example.com"]
  port: 6379
  key: filebeat
  datatype: list
  codec.json:
    pretty: false
    escape_html: true

## https://github.com/elastic/beats/blob/master/filebeat/filebeat.reference.yml#L1018
## https://www.elastic.co/guide/en/logstash/current/filebeat-modules.html#_set_up_and_run_filebeat
# output.elasticsearch:
#   enabled: true
#   hosts: ["https://elasticsearch.example.com:9200"]

## https://github.com/elastic/beats/blob/master/filebeat/filebeat.reference.yml#L1683
setup.kibana:
  host: "https://kibana.example.com"
  username: "user"
  password: "pass"
  ssl.enabled: true
  ssl.verification_mode: "none"

## https://github.com/elastic/beats/blob/master/filebeat/filebeat.reference.yml#L1739
logging.level: debug
logging.selectors: ["*"]
logging.to_syslog: false
logging.to_eventlog: false
logging.metrics.enabled: true
logging.metrics.period: 30s
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: beat.log
  keepfiles: 7
  rotateeverybytes: 10485760 # 10 MB
  permissions: 0664
logging.json: false
{code}
Filebeat Apache2 Module Config
{code}
- module: apache2
  access:
    enabled: true
    var.paths: ["/var/log/apache2/*ccess.log"]
  error:
    enabled: true
    var.paths: ["/var/log/apache2/*rror.log"]
{code}
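One thing worth double-checking with a setup like this is whether the module's var.paths globs actually match the custom VHost log files: a VHost's CustomLog often writes to its own file name, and anything that doesn't end in ...ccess.log / ...rror.log is never harvested. A quick sketch of how the globs behave (the file names here are hypothetical, not from this thread):

```shell
# The apache2 module only harvests files matching its var.paths globs.
# Create some hypothetical file names and see which ones each glob matches:
mkdir -p /tmp/glob-demo
touch /tmp/glob-demo/access.log \
      /tmp/glob-demo/other_vhosts_access.log \
      /tmp/glob-demo/vhost1_access.log \
      /tmp/glob-demo/vhost1.log \
      /tmp/glob-demo/error.log

ls /tmp/glob-demo/*ccess.log  # matches access.log, other_vhosts_access.log, vhost1_access.log
ls /tmp/glob-demo/*rror.log   # matches error.log
# vhost1.log matches neither glob, so Filebeat would never open it
```

Comparing the output of `apachectl -S` (which lists the active VHost configs) against these globs shows quickly whether a VHost is logging somewhere the module never looks.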
ppafford
(Phill Pafford)
October 15, 2018, 3:50pm
2
Logstash Config
{code}
input {
  redis {
    data_type => "list"
    key => "filebeat"
    host => "redis.example.com"
  }
}

filter {
  ## Beat and LSF compatibility
  ## https://discuss.elastic.co/t/problem-with-transfer-filebeat-6-1-3-logstash-6-1-3-elasticsearch-6-1-3/136264/6
  ## https://discuss.elastic.co/t/logstash-errors-after-upgrading-to-filebeat-6-3-0/135984/6
  if [beat][hostname] {
    if [source] {
      if ![file] {
        mutate {
          add_field => {
            "file" => "%{source}"
          }
        }
      }
    }
    mutate {
      remove_field => [ "[host]" ]
    }
    mutate {
      add_field => {
        "host" => "%{[beat][hostname]}"
      }
    }
  }

  ## apache2 module
  ## https://www.elastic.co/guide/en/logstash/current/logstash-config-for-filebeat-modules.html
  if [fileset][module] == "apache2" {
    if [fileset][name] == "access" {
      grok {
        match => { "message" => ["%{IPORHOST:[apache2][access][remote_ip]} - %{DATA:[apache2][access][user_name]} \[%{HTTPDATE:[apache2][access][time]}\] \"%{WORD:[apache2][access][method]} %{DATA:[apache2][access][url]} HTTP/%{NUMBER:[apache2][access][http_version]}\" %{NUMBER:[apache2][access][response_code]} %{NUMBER:[apache2][access][body_sent][bytes]}( \"%{DATA:[apache2][access][referrer]}\")?( \"%{DATA:[apache2][access][agent]}\")?",
          "%{IPORHOST:[apache2][access][remote_ip]} - %{DATA:[apache2][access][user_name]} \\[%{HTTPDATE:[apache2][access][time]}\\] \"-\" %{NUMBER:[apache2][access][response_code]} -" ] }
        remove_field => "message"
      }
      mutate {
        add_field => { "read_timestamp" => "%{@timestamp}" }
      }
      date {
        match => [ "[apache2][access][time]", "dd/MMM/YYYY:H:m:s Z" ]
        remove_field => "[apache2][access][time]"
      }
      useragent {
        source => "[apache2][access][agent]"
        target => "[apache2][access][user_agent]"
        remove_field => "[apache2][access][agent]"
      }
      geoip {
        source => "[apache2][access][remote_ip]"
        target => "[apache2][access][geoip]"
      }
    }
    else if [fileset][name] == "error" {
      grok {
        match => { "message" => ["\[%{APACHE_TIME:[apache2][error][timestamp]}\] \[%{LOGLEVEL:[apache2][error][level]}\]( \[client %{IPORHOST:[apache2][error][client]}\])? %{GREEDYDATA:[apache2][error][message]}",
          "\[%{APACHE_TIME:[apache2][error][timestamp]}\] \[%{DATA:[apache2][error][module]}:%{LOGLEVEL:[apache2][error][level]}\] \[pid %{NUMBER:[apache2][error][pid]}(:tid %{NUMBER:[apache2][error][tid]})?\]( \[client %{IPORHOST:[apache2][error][client]}\])? %{GREEDYDATA:[apache2][error][message1]}" ] }
        pattern_definitions => {
          "APACHE_TIME" => "%{DAY} %{MONTH} %{MONTHDAY} %{TIME} %{YEAR}"
        }
        remove_field => "message"
      }
      mutate {
        rename => { "[apache2][error][message1]" => "[apache2][error][message]" }
      }
      date {
        match => [ "[apache2][error][timestamp]", "EEE MMM dd H:m:s YYYY", "EEE MMM dd H:m:s.SSSSSS YYYY" ]
        remove_field => "[apache2][error][timestamp]"
      }
    }
  }

  ## syslog; there might be a module for this as well
  if [type] == "syslog" {
    ## requires "$RepeatedMsgReduction off" in /etc/rsyslog.conf
    #if [message] =~ /last message repeated [0-9]+ times/ {
    #  drop { }
    #}
    ## to enable high-precision timestamps,
    ## comment out $ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
    grok {
      match => { "message" => "(?:%{SYSLOGTIMESTAMP:syslog_timestamp}|%{TIMESTAMP_ISO8601:syslog_timestamp}) %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => {
        "syslog_received_at" => "%{@timestamp}"
        "syslog_received_from" => "%{host}"
      }
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss", "ISO8601" ]
      timezone => "America/New_York"
    }
    mutate {
      replace => { "syslog_timestamp" => "%{@timestamp}" }
    }
    ## grok captures are strings; convert the pid to an integer
    mutate {
      convert => {
        "syslog_pid" => "integer"
      }
    }
  }

  ## old apache filter
  # if [type] == "apache" {
  #   grok {
  #     match => { "message" => "%{COMBINEDAPACHELOG}" }
  #   }
  #   date {
  #     match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
  #     timezone => "America/New_York"
  #   }
  # }
}

output {
  elasticsearch {
    hosts => ["elasticsearch.example.com:9200"]
    ssl => true
    index => "logstash-%{+YYYY.MM}"
  }
}
{code}
ppafford
(Phill Pafford)
October 15, 2018, 4:19pm
3
Here is an example of what I do get from Apache. This is not from my VHost Apache logs; these entries come from the ECS health check and show up in /var/log/apache2/other_vhosts_access.log:
{code}
{
  "_index": "logstash-2018.10",
  "_type": "doc",
  "_id": "0lQ7amYBed8j2fdHuFWO",
  "_version": 1,
  "_score": null,
  "_source": {
    "input": {
      "type": "log"
    },
    "@version": "1",
    "read_timestamp": "2018-10-12T21:43:09.562Z",
    "source": "/var/log/apache2/other_vhosts_access.log",
    "beat": {
      "version": "6.4.2",
      "name": "filebeat.example.com",
      "hostname": "filebeat.example.com"
    },
    "@timestamp": "2018-10-12T21:42:55.000Z",
    "offset": 61132,
    "apache2": {
      "access": {
        "url": "*",
        "referrer": "-",
        "method": "OPTIONS",
        "remote_ip": "127.0.0.1",
        "geoip": {},
        "http_version": "1.0",
        "user_agent": {
          "device": "Other",
          "name": "Other",
          "build": "",
          "os": "Ubuntu",
          "os_name": "Ubuntu"
        },
        "user_name": "-",
        "response_code": "200",
        "body_sent": {
          "bytes": "125"
        }
      }
    },
    "fileset": {
      "module": "apache2",
      "name": "access"
    },
    "prospector": {
      "type": "log"
    },
    "tags": [
      "_geoip_lookup_failure"
    ],
    "file": "/var/log/apache2/other_vhosts_access.log",
    "host": "elasticsearch.example.com"
  },
  "fields": {
    "@timestamp": [
      "2018-10-12T21:42:55.000Z"
    ]
  },
  "sort": [
    1539380575123
  ]
}
{code}
steffens
(Steffen Siering)
October 16, 2018, 9:24pm
4
Have you checked the Filebeat logs? Filebeat logs every file it picks up at the INFO level.
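With the logging.files settings posted above, that log ends up at /var/log/filebeat/beat.log, so the relevant lines can be filtered out of it. A sketch; the log content below is a fabricated sample in the shape of Filebeat 6.x's "Configured paths" / "Harvester started" INFO messages, not real output from this setup:

```shell
# Write a few sample lines in the shape of Filebeat's INFO messages,
# then filter for the ones that reveal which files are being read
cat > /tmp/beat.log <<'EOF'
2018-10-15T15:49:00Z INFO Configured paths: [/var/www/app/logs/app1*.log]
2018-10-15T15:49:01Z INFO Harvester started for file: /var/log/apache2/other_vhosts_access.log
2018-10-15T15:49:02Z DBG  registrar state cleanup
EOF

grep -E 'Configured paths|Harvester started' /tmp/beat.log
# A log file that never appears in a "Harvester started" line is not being read;
# check its path against the input/module globs and the ignore_older window
```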
ppafford
(Phill Pafford)
October 17, 2018, 4:33pm
5
Thanks, I have checked, and I'm now able to get logs from Filebeat to Logstash, but now I'm seeing this issue:
Saved "field" parameter is now invalid. Please select a new field
Related: https://github.com/elastic/beats/issues/6206
steffens
(Steffen Siering)
October 18, 2018, 9:00pm
6
Beats is moving towards ECS (the Elastic Common Schema). As there is always a chance of incompatible mappings, we version the index templates/mappings and the index names by the Beat version.
The example in this documentation section shows how we normally recommend indexing Beats events:
index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
Also see: https://www.elastic.co/guide/en/beats/filebeat/current/config-filebeat-logstash.html
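Adapted to the output section posted earlier in this thread, that recommendation would look something like this (a sketch reusing the elasticsearch.example.com host from above; the resulting indices are named e.g. filebeat-6.4.2-2018.10.15, which keeps each Beat version's mappings separate):

```
output {
  elasticsearch {
    hosts => ["elasticsearch.example.com:9200"]
    ssl => true
    ## one index per beat name, beat version, and day
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}
```

After switching, the Kibana index pattern needs to cover the new names (e.g. filebeat-*) for the saved fields to resolve again.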
system
(system)
Closed
November 15, 2018, 9:09pm
7
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.