Iptables events not being published in Elasticsearch

Beginner / intermediate Elastic user here.

I have filebeat configured to listen on port 9001/UDP for iptables events that are sent to it via syslogd by my firewall. This has worked fine for years.

I recently noticed that syslogd (or perhaps just my version of it) is truncating the events it sends because of the length of IPv6 addresses. To address that, I decided to switch to syslog-ng, which appears to handle the longer events without truncation. However, when syslog-ng is used instead of syslogd, the events don't appear to be added to Elasticsearch: they're not visible when searching or in Observability in Kibana.

I enabled debug on filebeat and I do see the events being received:

{"log.level":"debug","@timestamp":"2022-12-11T20:59:32.365Z","log.logger":"processors","log.origin":{"file.name":"processing/processors.go","file.line":210},"message":"Publish event: {\n  \"@timestamp\": \"2022-12-11T14:59:32.000Z\",\n  \"@metadata\": {\n    \"beat\": \"filebeat\",\n    \"type\": \"_doc\",\n    \"version\": \"8.5.0\",\n    \"truncated\": false,\n    \"pipeline\": \"filebeat-8.5.0-iptables-log-pipeline\"\n  },\n  \"event\": {\n    \"severity\": 4,\n    \"module\": \"iptables\",\n    \"dataset\": \"iptables.log\",\n    \"timezone\": \"+00:00\"\n  },\n  \"log\": {\n    \"source\": {\n      \"address\": \"<redacted>:41505\"\n    }\n  },\n  \"input\": {\n    \"type\": \"syslog\"\n  },\n  \"fileset\": {\n    \"name\": \"log\"\n  },\n  \"ecs\": {\n    \"version\": \"1.12.0\"\n  },\n  \"message\": \"[229132.935598] ACCEPT IN=br0 OUT=vlan2 MAC=<redacted> SRC=<redacted> DST=<redacted> LEN=84 TC=0 HOPLIMIT=63 FLOWLBL=395008 PROTO=TCP SPT=56575 DPT=443 WINDOW=65535 RES=0x00 SYN URGP=0 \",\n  \"syslog\": {\n    \"severity_label\": \"Warning\",\n    \"facility\": 0,\n    \"facility_label\": \"kernel\",\n    \"priority\": 4\n  },\n  \"process\": {\n    \"program\": \"kernel\"\n  },\n  \"tags\": [\n    \"iptables\",\n    \"forwarded\"\n  ],\n  \"service\": {\n    \"type\": \"iptables\"\n  },\n  \"agent\": {\n    \"name\": \"elastic\",\n    \"type\": \"filebeat\",\n    \"version\": \"8.5.0\",\n    \"ephemeral_id\": \"6fbf5167-5db9-419a-8af1-8e3fa61a93c8\",\n    \"id\": \"1296f66c-f7ce-47dc-be4e-058c131cf53c\"\n  },\n  \"hostname\": \"router\"\n}","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"debug","@timestamp":"2022-12-11T20:59:32.773Z","log.logger":"elasticsearch","log.origin":{"file.name":"elasticsearch/client.go","file.line":247},"message":"PublishEvents: 2 events have been published to elasticsearch in 6.340351ms.","service.name":"filebeat","ecs.version":"1.6.0"}

I've used the pipeline simulate API on the event and it processes without error, so the pipeline doesn't appear to be the problem. Strangely enough, I do see documents being added to the index; however, when I search the index for the most recent document with this query, I don't see the new events:

POST .ds-filebeat-8.5.0-2022.12.10-000005/_search
{
   "size": 1,
   "sort": { "@timestamp": "desc"},
   "query": {
      "match_all": {}
   }
}

What I'd like to see is some sort of log or event trace for the filebeat / Elasticsearch ingestion and publication path. I've tried enabling different loggers via "/_cluster/settings" but I haven't found one that produces those logs.
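For reference, this is roughly how I've been toggling logger levels. The logger name below is just one I tried (the bulk indexing path); I'm not sure it's the right one for what I'm after:

PUT _cluster/settings
{
   "persistent": {
      "logger.org.elasticsearch.action.bulk": "debug"
   }
}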

I've gone as far as I can with this - any help or insight would be greatly appreciated!

Thanks!

Hi @jasongil, welcome to the community!

The debug line seems to indicate that the event was ingested...

I notice you are searching against a specific backing index rather than the data stream. Perhaps try searching against the data stream instead; that backing index may have rolled over:

POST filebeat-8.5.0/_search
{
   "size": 1,
   "sort": { "@timestamp": "desc"},
   "query": {
      "match_all": {}
   }
}
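You can also list the data indices behind the stream to confirm whether a rollover happened (assuming your data stream is named filebeat-8.5.0):

GET _data_stream/filebeat-8.5.0

The response includes the ordered list of backing indices and which one is currently being written to.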

Simulate does not guarantee a document will be written: an event may pass through the pipeline without error but still hit a mapping type error, which would fail indexing (and which I would expect to show up as an error in the filebeat logs).
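For context, a simulate call like the one below only runs the pipeline processors and never touches the index mapping. This is a sketch using a trimmed-down version of the message from your debug output:

POST _ingest/pipeline/filebeat-8.5.0-iptables-log-pipeline/_simulate
{
   "docs": [
      {
         "_source": {
            "message": "[229132.935598] ACCEPT IN=br0 OUT=vlan2 SRC=<redacted> DST=<redacted> LEN=84 PROTO=TCP SPT=56575 DPT=443 SYN URGP=0 "
         }
      }
   ]
}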

But you can certainly post a real sample document through the pipeline:

POST filebeat-8.5.0/_doc/?pipeline=filebeat-8.5.0-iptables-log-pipeline
{
..... your sample document
}

Hey @stephenb, thanks for the response.

I'm still working on this and will give the two tips you passed along a shot to see if they help identify the problem.

Are you aware of any trace or debug logging in Elasticsearch that would produce a log entry when an event is published? Something like "Document ID xxxxxxx published from received event".

Thanks again for your help.

Yes, you can turn on full audit logging... but it will be quite verbose and hard to find what you are looking for.
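If you do want to try it, audit logging is enabled per node in elasticsearch.yml (a sketch; it needs security enabled, an appropriate license level, and a node restart to take effect):

# elasticsearch.yml
xpack.security.audit.enabled: true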

But I have rarely if ever needed that to debug filebeat.

Filebeat guarantees at-least-once delivery... can you show more of the logs?

There will be filebeat log entries with metrics that show what is published and acked.

Can you share your entire filebeat.yml

Have you tried just setting the output to console to see what is coming out?
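For example, a minimal console output in filebeat.yml looks like this (comment out the elasticsearch output first, since only one output can be enabled at a time):

# filebeat.yml - temporary debugging output
output.console:
  pretty: true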

Are you sending through logstash?

Did you try the search I showed you?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.