Auditbeat data not being indexed

Hello,

I can ingest the logs if I set the output directly to Elasticsearch (output.elasticsearch), but when routing them via Logstash I am unable to receive/view them in Elasticsearch (through Kibana).

The following is the setup with Logstash:

Auditbeat 8.6.0 (running on an Ubuntu Server 22.04.1) --> Logstash 7.17.8 on a Raspberry Pi running Ubuntu 22.04.1 --> sending data to Elasticsearch 8.6.0 on an Ubuntu 22.04.1 server.

I see that the connectivity is established:

[screenshot showing the established connection]
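
For completeness, a connection check along these lines (the standard Auditbeat test subcommand, run on the Auditbeat host) is how I verify that the Beat can reach Logstash:

# check that the configured output (Logstash) is reachable from this host
sudo auditbeat test output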

My Logstash configuration for forwarding Beats data to Elasticsearch is:

cat beats.conf

input {
  beats {
    port => 5074
    #id => "beats_ingest"
  }
}

output {
  elasticsearch {
    hosts => ["https://IP address:9200","https://IP address2:9200"]
    ssl => true
    user => ''
    password => ''
    cacert => '/etc/logstash/elasticsearch-ca.pem'
    ssl_certificate_verification => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM}"
    ilm_enabled => auto
    ilm_rollover_alias => "%{[@metadata][beat]}"
  }
}
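
(A quick way to rule out a pipeline syntax error on my side is the standard Logstash config test; paths below assume a default package install.)

# exits with "Configuration OK" if the pipeline parses
sudo /usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit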

Here is the output from auditbeat-20230117-16.ndjson

{"log.level":"info","@timestamp":"2023-01-17T10:50:38.910+0530","log.origin":{"file.name":"instance/beat.go","file.line":724},"message":"Home path: [/usr/share/auditbeat] Config path: [/etc/auditbeat] Data path: [/var/lib/auditbeat] Logs path: [/var/log/auditbeat]","service.name":"auditbeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2023-01-17T10:50:38.910+0530","log.origin":{"file.name":"instance/beat.go","file.line":732},"message":"Beat ID: 2e39d971-a33b-49e1-839b-e9d171f8e5c2","service.name":"auditbeat","ecs.version":"1.6.0"}
{"log.level":"warn","@timestamp":"2023-01-17T10:50:41.911+0530","log.logger":"add_cloud_metadata","log.origin":{"file.name":"add_cloud_metadata/provider_aws_ec2.go","file.line":81},"message":"read token request for getting IMDSv2 token returns empty: Put \"http://169.254.169.254/latest/api/token\": context deadline exceeded (Client.Timeout exceeded while awaiting headers). No token in the metadata request will be used.","service.name":"auditbeat","ecs.version":"1.6.0"}

Here is the grep from /var/log/syslog for "auditbeat"

Jan 17 11:05:50 hostname auditbeat[3166]: {"log.level":"warn","@timestamp":"2023-01-17T11:05:50.922+0530","log.logger":"process","log.origin":{"file.name":"process/process.go","file.line":249},"message":"failed to hash executable /usr/share/metricbeat/bin/metricbeat for PID 954: failed to hash file /usr/share/metricbeat/bin/metricbeat: hasher: file size 175988320 exceeds max file size","service.name":"auditbeat","ecs.version":"1.6.0"}
Jan 17 11:05:51 hostname auditbeat[3166]: {"log.level":"warn","@timestamp":"2023-01-17T11:05:51.005+0530","log.logger":"process","log.origin":{"file.name":"process/process.go","file.line":249},"message":"failed to hash executable /usr/share/auditbeat/bin/auditbeat for PID 3166: failed to hash file /usr/share/auditbeat/bin/auditbeat: hasher: file size 111095136 exceeds max file size","service.name":"auditbeat","ecs.version":"1.6.0"}
Jan 17 11:05:52 hostname auditbeat[3166]: {"log.level":"info","@timestamp":"2023-01-17T11:05:52.596+0530","log.logger":"file_integrity","log.origin":{"file.name":"file_integrity/scanner.go","file.line":124},"message":"File system scan completed","service.name":"auditbeat","scanner_id":1,"took":3435615392,"file_count":1696,"total_bytes":180585488,"bytes_per_sec":52562777.667285524,"files_per_sec":493.652462947168,"ecs.version":"1.6.0"}
Jan 17 11:06:13 hostname auditbeat[3166]: {"log.level":"info","@timestamp":"2023-01-17T11:06:13.599+0530","log.logger":"monitoring","log.origin":{"file.name":"log/log.go","file.line":187},"message":"Non-zero metrics in the last 30s","service.name":"auditbeat","monitoring":{"metrics":{"auditd":{"received_msgs":4},"beat":{"cgroup":{"cpu":{"id":"auditbeat.service"},"memory":{"id":"auditbeat.service","mem":{"usage":{"bytes":42618880}}}},"cpu":{"system":{"ticks":1120,"time":{"ms":1120}},"total":{"ticks":1590,"time":{"ms":1590},"value":1590},"user":{"ticks":470,"time":{"ms":470}}},"handles":{"limit":{"hard":524288,"soft":1024},"open":42},"info":{"ephemeral_id":"190f9338-eba7-4d3f-ac0d-f283fb810bbd","name":"auditbeat","uptime":{"ms":34605},"version":"8.6.0"},"memstats":{"gc_next":20982824,"memory_alloc":11780776,"memory_sys":33202380,"memory_total":128298272,"rss":114999296},"runtime":{"goroutines":60}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":57,"active":0,"batches":5,"total":57},"read":{"bytes":36},"type":"logstash","write":{"bytes":14132}},"pipeline":{"clients":8,"events":{"active":0,"published":57,"retry":4,"total":57},"queue":{"acked":57,"max_events":4096}}},"metricbeat":{"auditd":{"auditd":{"events":1,"success":1}},"file_integrity":{"file":{"events":1,"success":1}},"system":{"host":{"events":3,"success":3},"login":{"events":3,"success":3},"process":{"events":48,"success":48},"socket":{"events":1,"success":1}}},"system":{"cpu":{"cores":1},"load":{"1":0,"15":0,"5":0,"norm":{"1":0,"15":0,"5":0}}}},"ecs.version":"1.6.0"}}
Jan 17 11:06:43 hostname auditbeat[3166]: {"log.level":"info","@timestamp":"2023-01-17T11:06:43.599+0530","log.logger":"monitoring","log.origin":{"file.name":"log/log.go","file.line":187},"message":"Non-zero metrics in the last 30s","service.name":"auditbeat","monitoring":{"metrics":{"beat":{"cgroup":{"memory":{"mem":{"usage":{"bytes":40337408}}}},"cpu":{"system":{"ticks":1160,"time":{"ms":40}},"total":{"ticks":1660,"time":{"ms":70},"value":1660},"user":{"ticks":500,"time":{"ms":30}}},"handles":{"limit":{"hard":524288,"soft":1024},"open":42},"info":{"ephemeral_id":"190f9338-eba7-4d3f-ac0d-f283fb810bbd","uptime":{"ms":64605},"version":"8.6.0"},"memstats":{"gc_next":18932024,"memory_alloc":12673272,"memory_total":139831640,"rss":112754688},"runtime":{"goroutines":60}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":3,"active":0,"batches":2,"total":3},"read":{"bytes":12},"write":{"bytes":1965}},"pipeline":{"clients":8,"events":{"active":0,"published":3,"total":3},"queue":{"acked":3}}},"metricbeat":{"system":{"process":{"events":2,"success":2},"socket":{"events":1,"success":1}}},"system":{"load":{"1":0,"15":0,"5":0,"norm":{"1":0,"15":0,"5":0}}}},"ecs.version":"1.6.0"}}
Jan 17 11:07:13 hostname auditbeat[3166]: {"log.level":"info","@timestamp":"2023-01-17T11:07:13.598+0530","log.logger":"monitoring","log.origin":{"file.name":"log/log.go","file.line":187},"message":"Non-zero metrics in the last 30s","service.name":"auditbeat","monitoring":{"metrics":{"beat":{"cgroup":{"memory":{"mem":{"usage":{"bytes":42524672}}}},"cpu":{"system":{"ticks":1190,"time":{"ms":30}},"total":{"ticks":1720,"time":{"ms":60},"value":1720},"user":{"ticks":530,"time":{"ms":30}}},"handles":{"limit":{"hard":524288,"soft":1024},"open":42},"info":{"ephemeral_id":"190f9338-eba7-4d3f-ac0d-f283fb810bbd","uptime":{"ms":94604},"version":"8.6.0"},"memstats":{"gc_next":22490456,"memory_alloc":12358840,"memory_total":150007192,"rss":114933760},"runtime":{"goroutines":60}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":1,"active":0,"batches":1,"total":1},"read":{"bytes":6},"write":{"bytes":1001}},"pipeline":{"clients":8,"events":{"active":0,"published":1,"total":1},"queue":{"acked":1}}},"metricbeat":{"system":{"socket":{"events":1,"success":1}}},"system":{"load":{"1":0,"15":0,"5":0,"norm":{"1":0,"15":0,"5":0}}}},"ecs.version":"1.6.0"}}
Jan 17 11:07:43 hostname auditbeat[3166]: {"log.level":"info","@timestamp":"2023-01-17T11:07:43.599+0530","log.logger":"monitoring","log.origin":{"file.name":"log/log.go","file.line":187},"message":"Non-zero metrics in the last 30s","service.name":"auditbeat","monitoring":{"metrics":{"beat":{"cgroup":{"memory":{"mem":{"usage":{"bytes":41496576}}}},"cpu":{"system":{"ticks":1210,"time":{"ms":20}},"total":{"ticks":1780,"time":{"ms":60},"value":1780},"user":{"ticks":570,"time":{"ms":40}}},"handles":{"limit":{"hard":524288,"soft":1024},"open":42},"info":{"ephemeral_id":"190f9338-eba7-4d3f-ac0d-f283fb810bbd","uptime":{"ms":124604},"version":"8.6.0"},"memstats":{"gc_next":19945336,"memory_alloc":11232968,"memory_total":160466960,"rss":113938432},"runtime":{"goroutines":60}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":1,"active":0,"batches":1,"total":1},"read":{"bytes":6},"write":{"bytes":991}},"pipeline":{"clients":8,"events":{"active":0,"published":1,"total":1},"queue":{"acked":1}}},"metricbeat":{"system":{"socket":{"events":1,"success":1}}},"system":{"load":{"1":0,"15":0,"5":0,"norm":{"1":0,"15":0,"5":0}}}},"ecs.version":"1.6.0"}}
Jan 17 11:08:13 hostname auditbeat[3166]: {"log.level":"info","@timestamp":"2023-01-17T11:08:13.600+0530","log.logger":"monitoring","log.origin":{"file.name":"log/log.go","file.line":187},"message":"Non-zero metrics in the last 30s","service.name":"auditbeat","monitoring":{"metrics":{"beat":{"cgroup":{"memory":{"mem":{"usage":{"bytes":41222144}}}},"cpu":{"system":{"ticks":1250,"time":{"ms":40}},"total":{"ticks":1860,"time":{"ms":80},"value":1860},"user":{"ticks":610,"time":{"ms":40}}},"handles":{"limit":{"hard":524288,"soft":1024},"open":42},"info":{"ephemeral_id":"190f9338-eba7-4d3f-ac0d-f283fb810bbd","uptime":{"ms":154605},"version":"8.6.0"},"memstats":{"gc_next":19224312,"memory_alloc":15186224,"memory_total":173900536,"rss":113594368},"runtime":{"goroutines":60}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":5,"active":0,"batches":2,"total":5},"read":{"bytes":12},"write":{"bytes":2498}},"pipeline":{"clients":8,"events":{"active":0,"published":5,"total":5},"queue":{"acked":5}}},"metricbeat":{"system":{"process":{"events":4,"success":4},"socket":{"events":1,"success":1}}},"system":{"load":{"1":0,"15":0,"5":0,"norm":{"1":0,"15":0,"5":0}}}},"ecs.version":"1.6.0"}}
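
The monitoring lines above show non-zero "acked" counts under libbeat.output with "type":"logstash", so events do appear to be leaving Auditbeat. A simple way to keep an eye on that counter (assuming Auditbeat is managed by systemd) is:

# follow the 30s monitoring snapshots and pull out the acked counter
sudo journalctl -u auditbeat -f | grep -o '"acked":[0-9]*'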

What should be the next step(s) for diagnosing this issue?

Hello @parthmaniar !

Usually, when a Beat works fine sending directly to Elasticsearch but not when Logstash is in between, the next step is to check the Logstash logs.

What you are interested in finding out is why the documents are dropped by Elasticsearch, which is usually logged there.

Something that is a bit confusing to me is why Logstash is on 7.x while the rest of the stack is on 8.x; that could cause issues for you as well, but let's start with the Logstash logs and see what they say.
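
Something along these lines should surface the relevant entries (paths assume a default package install on the Logstash host; adjust as needed):

# recent errors/warnings from the Logstash plain log
sudo grep -iE "error|warn" /var/log/logstash/logstash-plain.log | tail -n 50

# confirm which Logstash version is actually running
/usr/share/logstash/bin/logstash --version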

Hey @Marius_Iversen , thank you very much for your reply.

The deployment is in a homelab supporting my four-year MSc degree. I did upgrade Logstash to 8.x but saw logs being dropped, so I immediately reverted to 7.x. That is the primary reason for using a Raspberry Pi: I can easily swap the memory card.

Second, please find below the Logstash logs for "beats_ingest", which is the pipeline that handles all Beats data.

From /var/log/syslog and from /var/log/logstash/logstash-plain.log, grepped for "beats_ingest":

It is odd that I ran a connection test from the Auditbeat node, yet that isn't noted anywhere here.

Am I missing something?

Jan 17 12:10:30 hostname logstash[714]: [2023-01-17T12:10:30,510][INFO ][org.logstash.beats.BeatsHandler][beats_ingest][297fb3f41654fa6551a1384c1d0f8bfe063ff9c8ae35f43ac7cf566aa32f234d] [local: 0.0.0.0:5074, remote: 192.168.0.50:53222] closing (Connection reset by peer)
Jan 17 12:10:30 hostname  logstash[714]: [2023-01-17T12:10:30,523][INFO ][org.logstash.beats.BeatsHandler][beats_ingest][297fb3f41654fa6551a1384c1d0f8bfe063ff9c8ae35f43ac7cf566aa32f234d] [local: 0.0.0.0:5074, remote: 192.168.0.50:53228] closing (Connection reset by peer)
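
As a next step on my side, I plan to verify basic reachability and whether anything has landed in Elasticsearch at all; <logstash-ip>, <es-ip>, <user>, and <password> below are placeholders for my actual values:

# from the Auditbeat node: confirm the Logstash beats port is reachable
nc -vz <logstash-ip> 5074

# against Elasticsearch: see whether any auditbeat-* indices have received documents
curl -k -u <user>:<password> "https://<es-ip>:9200/_cat/indices/auditbeat-*?v"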
