What are the system limits to check when Auditbeat stops sending messages

Thought I'd better start this as a separate topic. At 1am yesterday morning, the messages from Auditbeat stopped appearing in ELK. No changes were made to either Auditbeat or the Logstash filtering. Just stopped dead. I was told to check the Auditbeat log for a particular message (second one below), as there could be a system limit that has been hit. Trouble is, I don't have any idea what to check. Any help gratefully received.

2018-04-19T16:39:36.870+0100  INFO  [auditd]  auditd/audit_linux.go:192  audit status from kernel at start  {"audit_status": {"Mask":892548912,"Enabled":1,"Failure":1,"PID":0,"RateLimit":0,"BacklogLimit":8196,"Lost":3446808471,"Backlog":0,"FeatureBitmap":0,"BacklogWaitTime":0}}

2018-04-19T16:40:04.236+0100  INFO  [monitoring]  log/log.go:124  Non-zero metrics in the last 30s  {"monitoring": {"metrics": {"auditd":{"lost":293},"beat":{"cpu":{"system":{"ticks":17180,"time":17188},"total":{"ticks":46220,"time":46228,"value":46220},"user":{"ticks":29040,"time":29040}},"info":{"ephemeral_id":"c853ae42-1c9b-49b6-8416-b06b174d331d","uptime":{"ms":30041}},"memstats":{"gc_next":9708848,"memory_alloc":8230792,"memory_total":762385304,"rss":34197504}},"libbeat":{"config":{"module":{"running":0},"reloads":1},"output":{"events":{"acked":23511,"batches":26,"total":23511},"read":{"bytes":156},"type":"logstash","write":{"bytes":3366714}},"pipeline":{"clients":1,"events":{"active":559,"published":24070,"retry":864,"total":24070},"queue":{"acked":23511}}},"metricbeat":{"auditd":{"auditd":{"events":24073,"success":24073}}},"system":{"cpu":{"cores":38},"load":{"1":0.84,"15":0.24,"5":0.37,"norm":{"1":0.0221,"15":0.0063,"5":0.0097}}}}}}
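
For anyone hitting the same symptom: the "system limit" referred to here is usually the kernel audit backlog, and the BacklogLimit and Lost fields in the first log line are kernel-side counters that can be inspected and adjusted with auditctl. A minimal sketch, assuming auditctl is available on the Auditbeat host (the value shown is only an example, not a recommendation from this thread):

```
# Show the current kernel audit status; backlog_limit and lost correspond to
# the BacklogLimit and Lost fields that Auditbeat logs at startup
sudo auditctl -s

# Raise the backlog limit if events are being dropped because the backlog fills up
sudo auditctl -b 8192
```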

I have checked the Auditbeat config and output and they both test OK. I tried removing anything clever from the Auditbeat config and the Logstash filtering, and I'm still not receiving any messages.
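
For reference, "test OK" here presumably refers to Auditbeat's built-in test subcommands, which check the configuration file and the connection to the configured output. A sketch, assuming the default config path:

```
# Validate the configuration file
sudo auditbeat test config -c /etc/auditbeat/auditbeat.yml

# Check that the configured output (Logstash in this case) is reachable
sudo auditbeat test output -c /etc/auditbeat/auditbeat.yml
```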

Please share the following:

  • auditbeat version
  • OS version
  • Full configuration (auditbeat.yml) enclosed between triple backticks ```like this```
  • Debug log (run auditbeat with -e -d '*')

I also suggest enabling the HTTP profiler (--httpprof :8888). This allows you to connect to http://localhost:8888/debug/pprof/ and inspect a running beat. It helps diagnose memory and runtime problems. I am interested in the goroutine dump.
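
Putting the two suggestions together, a sketch of what the invocation and the goroutine dump request could look like (port and output file name are illustrative):

```
# Run Auditbeat in the foreground with full debug logging and the HTTP profiler enabled
sudo auditbeat -e -d '*' --httpprof :8888

# From another shell, fetch a full goroutine dump from the running beat
curl -s 'http://localhost:8888/debug/pprof/goroutine?debug=2' > auditbeat-goroutines.txt
```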

@adrisr Hi Adrian, thanks for getting back to me. I'm still very much in the learning phase with Auditbeat and ELK.

In the end I was able to prove to myself that this was not an Auditbeat issue. I spun up another system that I had used to prototype the work, and none of the messages from that came through either.

Not sure if this is what solved it, but it appears to be working again after over 24 hours of being dead, so I'm documenting this just in case it helps someone else.

Redhat was the Auditbeat server, Ubuntu the ELK server.

As I'm still in the development stage, I had already deleted everything from ELK and was still not getting anything through, but I was getting an indexing error showing in Kibana. I didn't understand it, as I hadn't seen it before, so I looked into resolving this.

Wiped ELK again and looked at how I had been creating the index. I had deployed the auditbeat template locally from the copy of Auditbeat installed on the Ubuntu server, which I had put there to monitor that server later.

Looking at the Redhat Auditbeat configuration file, I can see that the template can be installed from there, but overwrite was set to false. Auditbeat on both Redhat and Ubuntu was 6.2.4; one package was simply built for Redhat and the other for Ubuntu.
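
If the template itself is the problem, Auditbeat can be told to (re)load its index template into Elasticsearch and overwrite the existing one. A sketch, reusing the Elasticsearch host that appears later in this thread; the overwrite flag is an assumption about what you would want here, not something the thread confirms:

```
# Push the Auditbeat index template directly to Elasticsearch, overwriting any existing one
sudo auditbeat setup --template \
  -E output.logstash.enabled=false \
  -E 'output.elasticsearch.hosts=["soptct62-02.ptc.com:9200"]' \
  -E setup.template.overwrite=true
```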

Spun up ELK again and, rather than install the template locally, started Auditbeat on Redhat and got this message on ELK:

elkselasticsearch_1 | [2018-04-20T10:49:28,536][INFO ][o.e.c.m.MetaDataCreateIndexService] [TKz4KCH] [auditbeat-6.2.4-2018.04.20] creating index, cause [auto(bulk api)], templates [], shards [5]/[1], mappings []

So this clearly showed that the ELK server was at least getting something. Went into Kibana, which stated it did not have an index pattern selected, so I went through setting that up and, hey presto, messages are now appearing.
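
As a quick sanity check before creating the index pattern in Kibana, it can help to confirm that Elasticsearch really does hold Auditbeat indices; a sketch using the Elasticsearch host from this thread:

```
# List any Auditbeat indices that exist in Elasticsearch
curl -s 'http://soptct62-02.ptc.com:9200/_cat/indices/auditbeat-*?v'
```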

I have no idea if this cleared whatever stopped them at the time mentioned above; only time will tell if it stops again, but at least they are coming through now.

I also wanted to deploy the default dashboards. When I use the Auditbeat on the Ubuntu ELK (v6.2.2) server (locally), the dashboards do deploy from there, but I get a number of errors in the dashboards that stop the widgets from displaying. This makes me wonder whether there is a subtle difference between the Ubuntu and Redhat Auditbeat packages, although you'd expect them to be the same.

So I tried installing them from the Redhat Auditbeat instead. This used to work on older Auditbeat versions, but I now get:

[nhopper@ptc38501-01 ~]$ sudo auditbeat setup --dashboards -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["soptct62-02.ptc.com:9200"]'
Exiting: Error importing Kibana dashboards: fail to create the Kibana loader: Error creating Kibana client: fail to get the Kibana version:HTTP GET request to /api/status fails: fail to execute the HTTP GET request: Get http://localhost:5601/api/status: dial tcp 127.0.0.1:5601: getsockopt: connection refused. Response: .

So, considering that the Debian version of the Auditbeat template seems to cause the indexing problem, while the Redhat Auditbeat deploys it automatically with no problems, I would like to deploy the dashboards from the Redhat Auditbeat as well.

Sheesh, I wish this was easier, but getting there.

Glad you're making advances.

The last issue, dial tcp 127.0.0.1:5601: getsockopt: connection refused, means that Auditbeat can't contact Kibana. Make sure you have the right address for Kibana in auditbeat.yml, as it doesn't seem to be installed on localhost.

So something like:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localho$
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "soptct62-02.ptc.com:5601"

That must be it, as adding the setting to the yml file and deploying the dashboards that way worked. The problem then is this:

So default index, default dashboards and this error.

Any guidance or shall I create this as a new thread?

In the More Info section of the Visualize panel, you get:

Error: "field" is a required parameter
FieldParamTypeProvider/FieldParamType.prototype.write@http://soptct62-02.ptc.com:5601/bundles/commons.bundle.js?v=16588:1:1267309
AggTypesAggParamsProvider/AggParams.prototype.write/<@http://soptct62-02.ptc.com:5601/bundles/commons.bundle.js?v=16588:1:229895
AggTypesAggParamsProvider/AggParams.prototype.write@http://soptct62-02.ptc.com:5601/bundles/commons.bundle.js?v=16588:1:229854
VisAggConfigProvider/AggConfig.prototype.write@http://soptct62-02.ptc.com:5601/bundles/commons.bundle.js?v=16588:1:50590
VisAggConfigProvider/AggConfig.prototype.toDsl@http://soptct62-02.ptc.com:5601/bundles/commons.bundle.js?v=16588:1:51537
VisAggConfigsProvider/AggConfigs.prototype.toDsl/<@http://soptct62-02.ptc.com:5601/bundles/commons.bundle.js?v=16588:1:1457068
VisAggConfigsProvider/AggConfigs.prototype.toDsl@http://soptct62-02.ptc.com:5601/bundles/commons.bundle.js?v=16588:1:1456782
SavedVis.prototype._afterEsResp/</<@http://soptct62-02.ptc.com:5601/bundles/commons.bundle.js?v=16588:1:422272
SearchSourceProvider/SearchSource.prototype._mergeProp@http://soptct62-02.ptc.com:5601/bundles/commons.bundle.js?v=16588:1:71851
ittr@http://soptct62-02.ptc.com:5601/bundles/commons.bundle.js?v=16588:1:700382
baseMap/<@http://soptct62-02.ptc.com:5601/bundles/vendors.bundle.js?v=16588:30:633766
createBaseFor/<@http://soptct62-02.ptc.com:5601/bundles/vendors.bundle.js?v=16588:30:643946
baseForOwn@http://soptct62-02.ptc.com:5601/bundles/vendors.bundle.js?v=16588:30:630907
createBaseEach/<@http://soptct62-02.ptc.com:5601/bundles/vendors.bundle.js?v=16588:30:643506
baseMap@http://soptct62-02.ptc.com:5601/bundles/vendors.bundle.js?v=16588:30:633699
map@http://soptct62-02.ptc.com:5601/bundles/vendors.bundle.js?v=16588:30:669604
ittr@http://soptct62-02.ptc.com:5601/bundles/commons.bundle.js?v=16588:1:700232
AbstractDataSourceProvider/SourceAbstract.prototype._flatten@http://soptct62-02.ptc.com:5601/bundles/commons.bundle.js?v=16588:1:700206
SearchRequestProvider/</<.value@http://soptct62-02.ptc.com:5601/bundles/commons.bundle.js?v=16588:1:302506
Promise.try@http://soptct62-02.ptc.com:5601/bundles/commons.bundle.js?v=16588:1:506629
callClient/<@http://soptct62-02.ptc.com:5601/bundles/commons.bundle.js?v=16588:1:706962
Promise.try@http://soptct62-02.ptc.com:5601/bundles/commons.bundle.js?v=16588:1:506561
Promise.map/<@http://soptct62-02.ptc.com:5601/bundles/commons.bundle.js?v=16588:1:505941
Promise.map@http://soptct62-02.ptc.com:5601/bundles/commons.bundle.js?v=16588:1:505906
callClient@http://soptct62-02.ptc.com:5601/bundles/commons.bundle.js?v=16588:1:706909
fetchSearchResults/<@http://soptct62-02.ptc.com:5601/bundles/commons.bundle.js?v=16588:1:704362
processQueue@http://soptct62-02.ptc.com:5601/bundles/vendors.bundle.js?v=16588:58:132456
scheduleProcessQueue/<@http://soptct62-02.ptc.com:5601/bundles/vendors.bundle.js?v=16588:58:133349
$RootScopeProvider/this.$get</Scope.prototype.$digest@http://soptct62-02.ptc.com:5601/bundles/vendors.bundle.js?v=16588:58:144239
$RootScopeProvider/this.$get</Scope.prototype.$evalAsync/<@http://soptct62-02.ptc.com:5601/bundles/vendors.bundle.js?v=16588:58:146732
completeOutstandingRequest@http://soptct62-02.ptc.com:5601/bundles/vendors.bundle.js?v=16588:58:36782
Browser/self.defer/timeoutId<@http://soptct62-02.ptc.com:5601/bundles/vendors.bundle.js?v=16588:58:39923

Raising it as a separate thread.