Hi everybody.
I have an ES + Kibana + Logstash + Filebeat environment ready for testing purposes, and after installing Filebeat so it sends its data to Logstash, I've realized that I'm receiving more documents than necessary, as some of them are completely irrelevant to me. I know this is a very common problem that has been discussed many times, but although I've tried to apply all the solutions I've found, none of them is working for me:
I'm trying to get rid of all events that are created as a consequence of the 'avahi-daemon'.
I can find those documents in Kibana's Analytics/Discover section with the following format:
@timestamp: Jun 7, 2022 @ 17:29:46.000
@version: 1
agent.ephemeral_id: 50154ac2-98bb-4362-84ff-8e5d673c836d
agent.hostname: ubuntuelk
agent.id: 4f5d3321-54c8-4858-851e-a30d531eedbd
agent.name: ubuntuelk
agent.type: filebeat
agent.version: 8.2.1
ecs.version: 1.12.0
event.dataset: system.syslog
event.ingested: Jun 7, 2022 @ 17:29:50.316
event.kind: event
event.module: system
event.original: Jun 7 17:29:46 ubuntuelk avahi-daemon[769]: Registering new address record for fe80::94ca:bd6f:70f5:3d41 on ens33.*.
event.timezone: +02:00
fileset.name: syslog
host.architecture: x86_64
host.containerized: false
host.hostname: ubuntuelk
host.id: 4293b3061fe540d9b3cdd77a03af46c3
host.ip: 192.168.0.111, fe80::6b28:900b:b61d:b756, fe80::5d73:dcbe:a241:d705,
If I run a query for them in the 'Dev Tools' console, this is what they look like:
{
  "_index" : ".ds-filebeat-8.2.1-2022.06.06-000001",
  "_id" : "rmjNPoEBevuxJxIgmcAI",
  "_score" : 1.0,
  "_source" : {
    "agent" : {
      "name" : "ubuntuelk",
      "id" : "4f5d3321-54c8-4858-851e-a30d531eedbd",
      "type" : "filebeat",
      "ephemeral_id" : "50154ac2-98bb-4362-84ff-8e5d673c836d",
      "version" : "8.2.1"
    },
    "process" : {
      "name" : "avahi-daemon",
      "pid" : 769
    },
    "log" : {
      "file" : {
        "path" : "/var/log/syslog"
      },
      "offset" : 2222249
    },
    "fileset" : {
      "name" : "syslog"
    },
    "message" : "Withdrawing address record for fe80::6b28:900b:b61d:b756 on ens33.",
    "tags" : [
      "beats_input_codec_plain_applied"
    ],
    "input" : {
      "type" : "log"
    },
    "@timestamp" : "2022-06-07T17:34:26.000+02:00",
    "system" : {
      "syslog" : { }
    },
    "ecs" : {
      "version" : "1.12.0"
    },
    "related" : {
      "hosts" : [
        "ubuntuelk"
      ]
    },
    "service" : {
      "type" : "system"
    },
    "host" : {
      "hostname" : "ubuntuelk",
      "os" : {
        "kernel" : "5.13.0-44-generic",
        "codename" : "focal",
        "name" : "Ubuntu",
        "type" : "linux",
        "family" : "debian",
        "version" : "20.04.3 LTS (Focal Fossa)",
        "platform" : "ubuntu"
      },
      "ip" : [
        "192.168.0.111",
        "fe80::6b28:900b:b61d:b756",
        "fe80::5d73:dcbe:a241:d705",
        "fe80::94ca:bd6f:70f5:3d41"
      ],
      "containerized" : false,
      "name" : "ubuntuelk",
      "id" : "4293b3061fe540d9b3cdd77a03af46c3",
      "mac" : [
        "00:0c:29:f5:4e:bf"
      ],
      "architecture" : "x86_64"
    },
    "@version" : "1",
    "event" : {
      "ingested" : "2022-06-07T15:34:30.406056321Z",
      "original" : "Jun 7 17:34:26 ubuntuelk avahi-daemon[769]: Withdrawing address record for fe80::6b28:900b:b61d:b756 on ens33.",
      "timezone" : "+02:00",
      "kind" : "event",
      "module" : "system",
      "dataset" : "system.syslog"
    }
  }
}
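(For reference, the Dev Tools query I use is roughly along these lines — illustrative only, a generic match on process.name; the exact request body may differ:)

```
GET filebeat-*/_search
{
  "query": {
    "match": {
      "process.name": "avahi-daemon"
    }
  }
}
```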
As you can see, this document, like many others, has the field process.name with the value "avahi-daemon".
I'm trying to set up a filter in my Logstash pipeline configuration file to drop those events. This is how it looks:
input {
  beats {
    port => "5044"
  }
}
filter {
  if "avahi-daemon" in [process.name] {
    drop { }
  }
}
output {
  if [@metadata][pipeline] {
    elasticsearch {
      hosts => ["https://192.168.0.111:9200","https://192.168.0.112:9200","https://192.168.0.113:9200"]
      cacert => '/certs/elastic/http_ca.crt'
      pipeline => "%{[@metadata][pipeline]}"
      user => "${LS_USER}"
      password => "${LS_PWD}"
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}"
      action => "create"
    }
  } else {
    elasticsearch {
      hosts => ["https://192.168.0.111:9200","https://192.168.0.112:9200","https://192.168.0.113:9200"]
      cacert => '/certs/elastic/http_ca.crt'
      user => "${LS_USER}"
      password => "${LS_PWD}"
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}"
      action => "create"
    }
  }
}
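Reading about Logstash conditionals, I wonder whether my condition should reference the nested field with one bracket pair per path level instead of a dotted name. A variant of the filter written that way (just my understanding of the field-reference syntax, untested on my side) would be:

```
filter {
  # [process][name] is Logstash's field-reference syntax for the nested field process.name
  if [process][name] == "avahi-daemon" {
    drop { }
  }
}
```

Is that the right syntax, or is the problem somewhere else?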
But as you can guess, this is not working.
Could someone please tell me what I'm doing wrong?
Thank you very much in advance.
Carlos T.