Hi, I'm trying to pull BIND9 DNS logs into Elasticsearch.
My workflow is:
- Filebeat ships the BIND9 logs to Logstash
- Logstash publishes them to a RabbitMQ queue
- a second Logstash instance on another machine consumes from RabbitMQ and writes to Elasticsearch
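For reference, the two Logstash pipelines are set up roughly like this (hostnames, ports, exchange and queue names below are placeholders, not my actual config):

```
# Logstash instance 1 (DNS-log side): beats in, RabbitMQ out
input {
  beats { port => 5044 }
}
output {
  rabbitmq {
    host          => "mq.example.local"   # placeholder host
    exchange      => "logs"
    exchange_type => "direct"
    key           => "bind9"
  }
}

# Logstash instance 2 (ES side): RabbitMQ in, Elasticsearch out
input {
  rabbitmq {
    host  => "mq.example.local"           # placeholder host
    queue => "bind9"
  }
}
output {
  elasticsearch { hosts => ["localhost:9200"] }
}
```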
On the DNS server the log entries look like this:
2019-06-05T17:37:42.753Z client @0x7f0b9466c8f0 172.XX.XX.XXX#48739: query
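A line in that clean shape parses fine. Here's a rough Python-regex equivalent of the grok pattern I'm trying to use (the field names and the unmasked sample IP are my own, since the real address is redacted above):

```python
import re

# Regex equivalent of a grok pattern for the clean BIND9 query line:
# <ISO8601 timestamp> client @<pointer> <ip>#<port>: <action>
LINE = re.compile(
    r"(?P<timestamp>\S+) client @(?P<client_ptr>0x[0-9a-f]+) "
    r"(?P<client_ip>[\d.]+)#(?P<client_port>\d+): (?P<action>\w+)"
)

# Hypothetical sample in the same shape as the log line above
# (IP made up, since mine is masked).
sample = "2019-06-05T17:37:42.753Z client @0x7f0b9466c8f0 172.16.0.10#48739: query"

m = LINE.match(sample)
print(m.group("client_ip"), m.group("client_port"), m.group("action"))
```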
In RabbitMQ and beyond, the same entries look like this:
2019-06-05T17:37:42.753Z {name=edhelloXXXX, hostname=edhelloXXXXX, id=adee31d708b14ad19c1759d378a7788b, os={name=Ubuntu, family=debian, version=18.04.2 LTS (Bionic Beaver), kernel=4.15.0-47-generic, platform=ubuntu, codename=bionic}, containerized=false, architecture=x86_64} client @0x7f0b9466c8f0 172.XX.XX.XXX#48739: query
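One thing I noticed, though it may be a red herring: the field names in that inserted blob (name, hostname, id, os.family, containerized, architecture) match the `host` object that Filebeat's `add_host_metadata` processor attaches to events. A minimal filebeat.yml fragment that enables it looks like:

```
# filebeat.yml fragment -- add_host_metadata adds a "host" object with
# name/hostname/id/os/containerized/architecture fields, i.e. the same
# field names that show up in the blob above.
processors:
  - add_host_metadata: ~
```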
That additional block of data throws off every grok pattern I've found online. I'm mainly curious at which stage it's being inserted, and whether anyone else has run into something like this.
Thanks