Hello! I'm pretty new to this ecosystem. I've learned a lot in the past few weeks, but I've hit a roadblock: I can't figure out why my log lines are not being broken out into the expected fields and populated. I have a few log types coming in, but I'd like to focus on the iptables logs for now.
For each of the ELK components I am running the official Docker image, version 7.16.3, under Docker version 20.10.11 (build dea9396e184290f638ea873c76db7c80efd5a1d2).
I may be confused about something. My impression is that Logstash takes logs in from wherever, and it is then Logstash that sends them on to Elasticsearch. The documentation for Filebeat and its iptables module talks about sending these logs directly to Elasticsearch, but also notes that one can send them to Logstash instead, which is what I chose. Have I made a mistake?
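For completeness, the Filebeat side points at the Logstash beats input along these lines (a sketch; the "logstash" hostname is a placeholder, only the port matches my config below):

```yaml
# filebeat.yml -- relevant output section (sketch)
output.logstash:
  hosts: ["logstash:5044"]
```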
The logs in this case come from Filebeat tailing a few log files. I have configured my iptables rules to log traffic with the prefix "IPTABLES: ", and rsyslog matches that string and breaks the iptables entries out into /var/log/iptables.log. I then enabled the Filebeat iptables module and pointed it at /var/log/iptables.log.
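The rsyslog split and the module configuration look roughly like this (sketches; the file names are placeholders):

```
# /etc/rsyslog.d/10-iptables.conf (sketch): property-based filter on the log prefix
:msg, startswith, "IPTABLES: " /var/log/iptables.log
& stop
```

```yaml
# modules.d/iptables.yml (sketch): point the module at the split-out file
- module: iptables
  log:
    enabled: true
    var.paths: ["/var/log/iptables.log"]
```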
This works fine: the logs make it into Elasticsearch, and I can see them in Kibana. However, the various fields the Filebeat iptables module is supposed to create and populate are not there; instead each full log line lands in the "message" field. I have been reading the documentation for a while and searching for a solution (the latter not going well, since I'm not entirely sure what to search for), so I figured it was time to ask for help.
These are the configuration files involved. I am not sure how to create a "gist" yet, and I am concerned about pastebins expiring the text, so hopefully I won't cause too much annoyance by pasting them here. If I missed something, please let me know!
FILE - LOGSTASH - logstash.yml:
http.host: "0.0.0.0"
# xpack.monitoring.elasticsearch.hosts: [ "http://elasticsearch:9200" ]
xpack.monitoring.elasticsearch.hosts: [ "http://172.17.0.2:9200" ]
FILE - LOGSTASH - logstash.conf:
input {
  beats {
    port => 5044
  }
}
# https://www.elastic.co/guide/en/logstash/current/plugins-inputs-syslog.html
input {
  syslog {
    port => 1514
    type => "syslog"
  }
}
input {
  tcp {
    port => 7000
    dns_reverse_lookup_enabled => false
    mode => "server"
    codec => "line"
    # Avoids host* field collisions, which block logs from entering Elasticsearch
    # https://discuss.elastic.co/t/problem-with-transfer-filebeat-6-1-3-logstash-6-1-3-elasticsearch-6-1-3/136264/3?u=badger
    # https://www.elastic.co/guide/en/logstash/current/plugins-inputs-tcp.html#plugins-inputs-tcp-ecs_compatibility
    # https://www.elastic.co/guide/en/ecs/current/ecs-host.html
    ecs_compatibility => "v8"
  }
}
# Previous output block to Elasticsearch, replaced by the one below in an attempt
# to get the expected iptables Filebeat module fields populated
# https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html
#output {
#  elasticsearch {
##   hosts => "172.17.0.2:9200"
#    hosts => "172.17.0.2"
#    data_stream => "true"
##   codec => "cef"
#    codec => "json_lines"
#  }
#}
# Attempt to get the expected iptables Filebeat module fields populated
# https://www.elastic.co/guide/en/logstash/7.0/use-ingest-pipelines.html
output {
  if [@metadata][pipeline] {
    elasticsearch {
      hosts => "http://172.17.0.2:9200"
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
      pipeline => "%{[@metadata][pipeline]}"
    }
  } else {
    elasticsearch {
      hosts => "http://172.17.0.2:9200"
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    }
  }
}
FILE - LOGSTASH - pipelines.yml:
- pipeline.id: main
  path.config: "/usr/share/logstash/pipeline"
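One thing I wasn't sure about from the ingest-pipelines document linked above: since the actual parsing happens in an Elasticsearch ingest pipeline, do I still need to load the module's pipeline into Elasticsearch myself? Something like the following (a sketch; I haven't confirmed whether this step is required in my setup):

```shell
# Load the iptables module's ingest pipeline into Elasticsearch (sketch)
filebeat setup --pipelines --modules iptables
```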