TTY logging decoding

I have not tried Auditbeat yet, but based on what I have read so far, it does not support decoding type=USER_TTY or type=TTY audit records. Is this correct?

If that is correct, it would be extremely disappointing. It would be very useful if it could decode the TTY logging keystrokes for storage in an ELK stack.

Hi,

Auditbeat is perfectly capable of parsing TTY audit records.

May I ask where you read that information?

For testing I set up a simple file output on Logstash; here is an example TTY record:

{"source":"/var/log/audit/audit.log","fileset":{"module":"auditd","name":"log"},"host":{"name":"host.example.com"},"@timestamp":"2018-10-02T06:58:50.847Z","off
set":19301726,"prospector":{"type":"log"},"@version":"1","tags":["beats_input_codec_plain_applied"],"input":{"type":"log"},"message":"node=host2.example.com type=TTY msg=audit(1538453437.764:15045): tty pid=15571 uid=0 auid=54162 ses=2477 major=136 minor=0 comm=\"bash\" data=6C73202D6C617472202F7661092E6C67097F7F7F6C6F67096E670909096909207C74616B7F696C0D","beat":{"version":"6.4.1","name":"host.example.com","hostname":"host.example.com"}}

The "data" field "6C732[...]" is not decoded to keystrokes.

Am I doing something wrong? I have not dug into the settings much yet - these are the defaults, with Auditbeat exporting to Logstash, which writes the output to a file.

Edit: I do not see any relevant settings in the Auditbeat configuration.

In my case, it decodes keystrokes successfully.

What are your Auditbeat version, kernel version, and OS release?

Thanks for helping with this problem!

Auditbeat is the latest RPM, 6.4.1.
Logstash is the latest RPM, 6.4.1.

Auditbeat is running on EL6, 64-bit kernel 2.6.32-754.3.5, with the audit 2.4.5-6 RPM.

I shall test with EL7 next.

Edit:
Tested with EL7, 64-bit kernel 3.10.0-862.11.6, audit 2.8.1-3 RPM.

The result is the same; example output to the Logstash file, with the USER_TTY data not decoded:
{"source":"/var/log/audit/audit.log","fileset":{"module":"auditd","name":"log"},"host":{"name":"host3.example.com"},"@timestamp":"2018-10-02T09:48:19.350Z","offset":4315449,"@version":"1","prospector":{"type":"log"},"tags":["beats_input_codec_plain_applied"],"input":{"type":"log"},"message":"node=host3.example.com type=USER_TTY msg=audit(1538473695.304:1199): pid=5535 uid=0 auid=5462 ses=141 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 **data=6C73202D6C61**","beat":{"version":"6.4.1","name":"host3.example.com","hostname":"host3.example.com"}}

(the data=6C73202D6C61 should be decoded as "ls -la" somewhere...)

Edit: I guess I could use a mutate filter and gsub each ASCII hex byte out of the "data" field if there are seriously no other options...
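
Something like this filter might work if it comes to that (an untested sketch; the grok pattern and the tty_hex / tty_keystrokes field names are made up for illustration):

filter {
  # Pull the hex payload out of the raw audit line (field names are hypothetical)
  grok {
    match => { "message" => "data=%{WORD:tty_hex}" }
  }
  # Decode the hex byte pairs back into keystrokes, e.g. "6C73202D6C61" -> "ls -la"
  ruby {
    code => '
      hex = event.get("tty_hex")
      event.set("tty_keystrokes", [hex].pack("H*")) if hex
    '
  }
}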

I realised that the events you are pasting don't come from Auditbeat, but from Filebeat. Those are log lines read from /var/log/audit/audit.log, not auditd events reported by Auditbeat.

I think you have Filebeat feeding logs to Elasticsearch too and got confused.

An Auditbeat event looks like this:

  "@timestamp": "2018-10-02T10:20:56.849Z",
  "@metadata": {
    "beat": "auditbeat",
    "type": "doc",
    "version": "7.0.0-alpha1"
  },
  "beat": {
    "name": "localhost.localdomain",
    "hostname": "localhost.localdomain",
    "version": "7.0.0-alpha1"
  },
  "event": {
    "category": "TTY",
    "type": "tty",
    "action": "typed",
    "module": "auditd"
  },
  "user": {
    "name_map": {
      "auid": "vagrant",
      "uid": "root"
    },
    "auid": "1000",
    "uid": "0"
  },
  "process": {
    "pid": "1680",
    "name": "yum"
  },
  "auditd": {
    "data": {
      "data": "y\n",   # <- KEYSTROKES HERE
      "major": "136",
      "minor": "0"
    },
    "summary": {
      "how": "yum",
      "actor": {
        "primary": "vagrant",
        "secondary": "root"
      },
      "object": {
        "type": "keystrokes",
        "primary": "y\n"
      }
    },
    "sequence": 604,
    "result": "unknown",
    "session": "3"
  },
  "host": {
    "name": "localhost.localdomain"
  }
}

If you're using Kibana to inspect the events, make sure you have an auditbeat index pattern selected, not filebeat.

Interesting. On this machine, Filebeat was installed only for Auditbeat usage. It outputs to Logstash, which writes to a file, in a pipeline separate from everything else; it is the only Beats input on that Logstash.

The config on the client is similarly very simple:
output.logstash:
  hosts: ["192.0.2.1:5044"]

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 3

The file modules.d/auditd.yml is at its defaults, and there is no other configuration on the client:
- module: auditd
  log:
    enabled: true

And the pipeline on Logstash:
input {
  beats {
    port => 5044
  }
}

output {
  file {
    path => "/opt/logstash/audit/%{host}/%{+YYYY-MM-dd}.log"
  }
}

Based on what you say, it sounds like I cannot output to Logstash and should output directly to Elasticsearch instead. I will try that next.

I was trying to avoid running the filebeat "setup initial environment" step, since it requires the ingest-geoip plugin (which I do not care about), forces me to use a specific template, and creates a ton of dashboards I do not care about.

I did it anyway, and now there is the index "filebeat-6.4.1-2018.10.02". The mappings include an "auditd" section, but there is no "type": "keystrokes" in there.

I see you are running a 7.0.0 alpha. Are you sure this is not a 7.0 feature?

Edit: there is also a grok error:
Provided Grok expressions do not match field value: [node=host.example.com type=USER_TTY msg=audit(1538482400.901:121392): pid=26742 uid=0 auid=5462 ses=15978 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 data=6C2020(rest cut out)]

Edit: the only index created was the aforementioned filebeat index; there is no auditbeat index to be seen anywhere.

There's a misunderstanding here.

What you're using is Filebeat with the auditd module. This doesn't decode TTY.

What you want to use is Auditbeat, which is a different Beat altogether, and does TTY data decoding.

This has nothing to do with outputting to Logstash or directly to Elasticsearch.
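
For reference, a minimal auditbeat.yml would look something like this (a sketch, reusing the Logstash output from your Filebeat config; TTY auditing itself still has to be enabled on the host, e.g. via pam_tty_audit):

auditbeat.modules:
- module: auditd
  # No special rules are needed for TTY decoding: the kernel emits the
  # TTY records (once something like pam_tty_audit enables them) and
  # Auditbeat reads them from the audit netlink socket.

output.logstash:
  hosts: ["192.0.2.1:5044"]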

Duh. Well, that makes total sense. Thanks.

Working great now that this "small" misunderstanding has been corrected. Sorry to waste your time.

Don't worry, no waste at all 🙂

It also took me a long time to realise that the events you were sharing didn't come from Auditbeat.

It looks very good so far - deployed to a few dozen machines for testing. Much better than auditd+audisp-remote for centralizing audit logs.
