Beginner / intermediate Elastic user here.
I have filebeat configured to listen on port 9001/UDP for iptables events that are sent to it via syslogd by my firewall. This has worked fine for years.
I recently noticed that syslogd (or, perhaps, just my version of it) is truncating the events it's sending due to the length of IPv6 addresses. To address that, I decided to switch to syslog-ng, which appears to handle the longer events without truncation. However, when syslog-ng is used instead of syslogd, the events don't appear to make it into Elasticsearch: they're not visible when searching, nor in Observability in Kibana.
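For reference, this is roughly the syslog-ng configuration I'm using to forward the kernel/iptables events to filebeat. The source name, destination host, and the `log_msg_size()` value are specific to my setup; `log_msg_size()` is the option that lets syslog-ng carry the longer IPv6 messages without truncating them:

```
# Raise the maximum message size so long IPv6 iptables lines aren't truncated
options { log_msg_size(8192); };

source s_kernel { system(); };

# Forward to the filebeat syslog input over UDP
destination d_filebeat { udp("elastic.example.lan" port(9001)); };

log { source(s_kernel); destination(d_filebeat); };
```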
I enabled debug on filebeat and I do see the events being received:
{"log.level":"debug","@timestamp":"2022-12-11T20:59:32.365Z","log.logger":"processors","log.origin":{"file.name":"processing/processors.go","file.line":210},"message":"Publish event: {\n \"@timestamp\": \"2022-12-11T14:59:32.000Z\",\n \"@metadata\": {\n \"beat\": \"filebeat\",\n \"type\": \"_doc\",\n \"version\": \"8.5.0\",\n \"truncated\": false,\n \"pipeline\": \"filebeat-8.5.0-iptables-log-pipeline\"\n },\n \"event\": {\n \"severity\": 4,\n \"module\": \"iptables\",\n \"dataset\": \"iptables.log\",\n \"timezone\": \"+00:00\"\n },\n \"log\": {\n \"source\": {\n \"address\": \"<redacted>:41505\"\n }\n },\n \"input\": {\n \"type\": \"syslog\"\n },\n \"fileset\": {\n \"name\": \"log\"\n },\n \"ecs\": {\n \"version\": \"1.12.0\"\n },\n \"message\": \"[229132.935598] ACCEPT IN=br0 OUT=vlan2 MAC=<redacted> SRC=<redacted> DST=<redacted> LEN=84 TC=0 HOPLIMIT=63 FLOWLBL=395008 PROTO=TCP SPT=56575 DPT=443 WINDOW=65535 RES=0x00 SYN URGP=0 \",\n \"syslog\": {\n \"severity_label\": \"Warning\",\n \"facility\": 0,\n \"facility_label\": \"kernel\",\n \"priority\": 4\n },\n \"process\": {\n \"program\": \"kernel\"\n },\n \"tags\": [\n \"iptables\",\n \"forwarded\"\n ],\n \"service\": {\n \"type\": \"iptables\"\n },\n \"agent\": {\n \"name\": \"elastic\",\n \"type\": \"filebeat\",\n \"version\": \"8.5.0\",\n \"ephemeral_id\": \"6fbf5167-5db9-419a-8af1-8e3fa61a93c8\",\n \"id\": \"1296f66c-f7ce-47dc-be4e-058c131cf53c\"\n },\n \"hostname\": \"router\"\n}","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"debug","@timestamp":"2022-12-11T20:59:32.773Z","log.logger":"elasticsearch","log.origin":{"file.name":"elasticsearch/client.go","file.line":247},"message":"PublishEvents: 2 events have been published to elasticsearch in 6.340351ms.","service.name":"filebeat","ecs.version":"1.6.0"}
I've run the event through the pipeline simulate API and it processes without error, so the ingest pipeline doesn't appear to be the problem. Strangely enough, I do see documents being added to the index; however, if I search the index for the most recent document using this query, I don't see the new events:
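This is the sort of simulate call I used (the `message` field here is shortened; the pipeline name is taken from the `pipeline` metadata in the debug output above):

```
POST _ingest/pipeline/filebeat-8.5.0-iptables-log-pipeline/_simulate
{
  "docs": [
    {
      "_source": {
        "message": "[229132.935598] ACCEPT IN=br0 OUT=vlan2 ...",
        "event": { "module": "iptables", "dataset": "iptables.log" }
      }
    }
  ]
}
```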
POST .ds-filebeat-8.5.0-2022.12.10-000005/_search
{
  "size": 1,
  "sort": { "@timestamp": "desc" },
  "query": {
    "match_all": {}
  }
}
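One thing I've considered: since I'm querying a single backing index directly, new events might be landing in a different (newer) backing index after a rollover. Searching the data stream itself should cover all backing indices; assuming the data stream is named `filebeat-8.5.0` (to match the `.ds-filebeat-8.5.0-*` backing index naming), that would be:

```
POST filebeat-8.5.0/_search
{
  "size": 1,
  "sort": { "@timestamp": "desc" },
  "query": {
    "match_all": {}
  }
}
```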
What I'd like to find is some kind of log of the filebeat / Elasticsearch ingestion and publication path, so I can see where the events are going. I've tried enabling different loggers via "/_cluster/settings", but I haven't found one that produces those logs.
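This is the shape of what I've been trying; I'm guessing at the logger name here (`org.elasticsearch.ingest` is my assumption for the ingest-pipeline package), so it may not be the right one:

```
PUT _cluster/settings
{
  "persistent": {
    "logger.org.elasticsearch.ingest": "debug"
  }
}
```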
I've gone as far as I can with this - any help or insight would be greatly appreciated!
Thanks!