Hello, I was deploying Auditbeat on several servers and noticed that our CouchDB server was having performance issues after the deployment. It turned out that far too many localhost socket connections were being traced.
While debugging this, I came across the option socket.include_localhost: false
in the blog post "Elastic SIEM for home and small business: Beats on CentOS".
I looked through the documentation and found it mentioned in the System module docs, but without any explanation; it isn't mentioned in the System socket dataset docs at all.
Since I couldn't find an explanation anywhere, I decided to just try it. Here is the relevant snippet from the auditbeat.yml
config I used:
Auditbeat config snippet:

- module: system
  datasets:
    - host    # General host information, e.g. uptime, IPs
    - login   # User logins, logouts, and system boots.
    - package # Installed, updated, and removed packages
    - process # Started and stopped processes
    - socket  # Opened and closed sockets
    - user    # User information
  socket.include_localhost: false

  # How often datasets send state updates with the
  # current state of the system (e.g. all currently
  # running processes, all open sockets).
  state.period: 12h
I was expecting to stop seeing documents where source.ip and destination.ip are 127.0.0.1 and the like, but I still get documents like the following:
{
"_index": "auditbeat-7.7.1-000003",
"_type": "_doc",
"_id": "v7l2wXIB1fmLmVwTHOra",
"_version": 1,
"_score": null,
"_source": {
"client": {
"ip": "127.0.0.1",
"bytes": 81,
"port": 49092,
"packets": 1
},
"agent": {
"hostname": "installer-test",
"version": "7.7.1",
"type": "auditbeat",
"ephemeral_id": "f623462c-8407-4352-a156-0801394e0641",
"id": "71c807b6-282a-4b96-9b9a-7b1d6ad374d9"
},
"system": {
"audit": {
"socket": {
"egid": 1000,
"uid": 1000,
"kernel_sock_address": "0xffff9ce1dea6a640",
"gid": 1000,
"euid": 1000
}
}
},
"host": {
"mac": [
"00:15:5d:19:8a:b9"
],
"hostname": "installer-test",
"containerized": false,
"ip": [
"10.10.20.112",
"fe80::dab0:e4d3:6d32:2d3d"
],
"name": "installer-test",
"os": {
"kernel": "5.3.0-28-generic",
"platform": "ubuntu",
"version": "18.04.4 LTS (Bionic Beaver)",
"name": "Ubuntu",
"codename": "bionic",
"family": "debian"
},
"id": "1ce2de8fba0846c5a19d6d330ba47d77",
"architecture": "x86_64"
},
"ecs": {
"version": "1.5.0"
},
"server": {
"ip": "127.0.0.53",
"bytes": 89,
"port": 53,
"packets": 1
},
"network": {
"transport": "udp",
"type": "ipv4",
"community_id": "1:tfg5fvM59Z2EQc2rAG0Wk37Kj24=",
"direction": "outbound",
"bytes": 170,
"packets": 2
},
"@version": "1",
"service": {
"type": "system"
},
"user": {
"name": "ubuntu",
"id": "1000"
},
"source": {
"ip": "127.0.0.1",
"bytes": 81,
"port": 49092,
"packets": 1
},
"process": {
"name": "netstat",
"args": [
"netstat"
],
"executable": "/bin/netstat",
"created": "2020-06-17T08:47:16.278Z",
"pid": 172325
},
"@timestamp": "2020-06-17T08:47:22.382Z",
"group": {
"name": "ubuntu",
"id": "1000"
},
"flow": {
"final": true,
"complete": false
},
"tags": [
"auditbeat",
"beats_input_raw_event"
],
"destination": {
"ip": "127.0.0.53",
"bytes": 89,
"port": 53,
"packets": 1
},
"event": {
"duration": 54837923,
"end": "2020-06-17T08:47:16.626Z",
"kind": "event",
"category": "network_traffic",
"start": "2020-06-17T08:47:16.571Z",
"dataset": "socket",
"action": "network_flow",
"module": "system"
}
},
"fields": {
"event.end": [
"2020-06-17T08:47:16.626Z"
],
"@timestamp": [
"2020-06-17T08:47:22.382Z"
],
"event.start": [
"2020-06-17T08:47:16.571Z"
]
},
"highlight": {
"event.dataset": [
"@kibana-highlighted-field@socket@/kibana-highlighted-field@"
]
},
"sort": [
1592383642382
]
}
I'm really not sure what this config option actually does (I know the socket dataset is still in beta), so I would appreciate an explanation.
Also, I know I could exclude these logs with processors, but I don't believe that would solve my issue: I suspect the performance problems were caused by the kprobe event tracing itself, not by processing the resulting documents. That's just my current hypothesis, though.
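For reference, the processor-based workaround I mentioned would look something like the sketch below (assuming the standard Beats drop_event processor with a network condition; the exact CIDR is my choice, not something from the docs). Note that this only drops the documents after they are captured, so any kprobe tracing overhead would remain:

```yaml
processors:
  # Drop flows where both endpoints are on the loopback network.
  # This filters documents after capture; it does NOT stop the
  # socket dataset from tracing these connections in the kernel.
  - drop_event:
      when:
        and:
          - network:
              source.ip: "127.0.0.0/8"
          - network:
              destination.ip: "127.0.0.0/8"
```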