Duplicate audit logs for successful logins in Auditbeat

When Auditbeat logs a successful login on Ubuntu, it records both a success event and a failure event.
For example, I logged in to my Ubuntu instance successfully, yet I get both a success log and a failure log. The failure log shouldn't be there.
Logs (the failure event from Auditbeat for the successful login to the instance):

{
  "_index": "auditbeat-6.7.1-2019.04.23",
  "_type": "doc",
  "_id": "ox_QSmoBVWWEnTzwiivn",
  "_version": 1,
  "_score": null,
  "_source": {
    "@timestamp": "2019-04-23T15:29:01.419Z",
    "tags": [
      "auditbeat18"
    ],
    "event": {
      "type": "user_login",
      "action": "logged-in",
      "module": "auditd",
      "category": "user-login"
    },
    "process": {
      "pid": "17601",
      "exe": "/usr/sbin/sshd"
    },
    "network": {
      "direction": "incoming"
    },
    "auditd": {
      "session": "unset",
      "data": {
        "terminal": "sshd",
        "acct": "ubuntu",
        "op": "login"
      },
      "summary": {
        "how": "/usr/sbin/sshd",
        "actor": {
          "primary": "unset",
          "secondary": "ubuntu"
        },
        "object": {
          "primary": "sshd",
          "secondary": "xxxxxxxxxxxxx",
          "type": "user-session"
        }
      },
      "messages": [
        "type=USER_LOGIN msg=audit(1556033341.419:62645): pid=17601 uid=0 auid=4294967295 ses=4294967295 msg=op=login acct=ubuntu exe=/usr/sbin/sshd hostname=? addr=xxxxxxxxxx terminal=sshd res=failed"
      ],
      "sequence": 62645,
      "result": "fail"
    },
    "beat": {
      "version": "6.7.1",
      "name": "auditbeat18",
      "hostname": "ip-xxxxxxxxxx"
    },
    "host": {
      "name": "auditbeat18"
    },
    "user": {
      "uid": "0",
      "name_map": {
        "uid": "root"
      },
      "auid": "unset"
    },
    "source": {
      "ip": "xxxxxxxxxxxxxxx"
    }
  },
  "fields": {
    "[@timestamp](https://github.com/timestamp)": [
      "2019-04-23T15:29:01.419Z"
    ]
  },
  "highlight": {
    "event.action": [
      "@kibana-highlighted-field@logged-in@/kibana-highlighted-field@"
    ]
  },
  "sort": [
    1556033341419
  ]
}

This is affecting the success and failure login filtering in Kibana.
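
For context, the kind of Kibana aggregation this skews looks roughly like the Python sketch below. The elasticsearch-py client, the localhost endpoint, and the wildcard index pattern are my assumptions; the field names come from the event above.

# Hypothetical sketch: tally login outcomes the way a Kibana terms
# aggregation on auditd.result would. Endpoint and index pattern are
# placeholders, not values from this thread.
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

query = {
    "size": 0,
    "query": {
        "bool": {
            "filter": [
                {"term": {"event.module": "auditd"}},
                {"term": {"event.action": "logged-in"}},
            ]
        }
    },
    "aggs": {"outcomes": {"terms": {"field": "auditd.result"}}},
}

resp = es.search(index="auditbeat-*", body=query)
for bucket in resp["aggregations"]["outcomes"]["buckets"]:
    # With the duplicate events, "fail" tracks "success" one-for-one
    # even though every login actually succeeded.
    print(bucket["key"], bucket["doc_count"])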

So this was a successful login over SSH? Seems like the SSH daemon is reporting "res=failed" to the Linux Audit Framework.

Yes, it was a successful login over SSH, but I see both success and failure logs in Auditbeat.
OS: Ubuntu 16.04/18.04.
Is there any suggested resolution for this?

Auditbeat translates what it receives as text (type=USER_LOGIN msg=audit(1556033341.419:62645): pid=17601 uid=0 auid=4294967295 ses=4294967295 msg=op=login acct=ubuntu exe=/usr/sbin/sshd hostname=? addr=xxxxxxxxxx terminal=sshd res=failed) into a structured event. So the result is fail because that is what was reported.
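
In rough terms, that translation is just splitting out the key=value pairs and normalizing a few magic values. Auditbeat does this in Go (via go-libaudit), but here is a simplified Python sketch of the idea, using the record quoted above:

import re

# The raw USER_LOGIN record quoted above, as received from the kernel.
raw = (
    "type=USER_LOGIN msg=audit(1556033341.419:62645): pid=17601 uid=0 "
    "auid=4294967295 ses=4294967295 msg=op=login acct=ubuntu "
    "exe=/usr/sbin/sshd hostname=? addr=xxxxxxxxxx terminal=sshd res=failed"
)

# Split out the key=value pairs.
fields = dict(re.findall(r"(\w+)=(\S+)", raw))

# 4294967295 is (uint32)-1, which the kernel uses to mean "unset".
UNSET = str(2**32 - 1)

event = {
    "auditd.result": "success" if fields["res"] == "success" else "fail",
    "auditd.session": "unset" if fields["ses"] == UNSET else fields["ses"],
    "user.auid": "unset" if fields["auid"] == UNSET else fields["auid"],
    "auditd.data.acct": fields["acct"],
    "process.exe": fields["exe"],
}
print(event)  # auditd.result is "fail" purely because sshd sent res=failed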

I wonder whether SSH is reporting failed attempts because the SSH client first tries one of several public keys before finding the right one, or falls back to password authentication?

I was looking through the OpenSSH source, and it appears this is where it writes audit messages: https://github.com/openssh/openssh-portable/blob/9b655dc9c9a353f0a527f0c6c43a5e35653c9503/audit-linux.c#L54.

We are not using password authentication; we SSH using keys.
If the SSH client were first trying one of several public keys, it could return any number of failed attempts. But it returns exactly one failed attempt, every time, on any Ubuntu server, along with the successful login (see the check sketched below).
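
One way to double-check that pattern on the server itself, bypassing Auditbeat entirely, is to tally USER_LOGIN outcomes straight from the kernel audit log. A rough sketch, assuming auditd is writing to the default /var/log/audit/audit.log (ausearch -m USER_LOGIN -i would show the same records, interpreted):

# Rough check: count USER_LOGIN results in the raw audit log. With
# key-only logins, a persistent 1:1 failed/success ratio would match
# the "exactly one failed attempt per login" observation above.
from collections import Counter

counts = Counter()
with open("/var/log/audit/audit.log") as log:
    for line in log:
        if "type=USER_LOGIN" not in line:
            continue
        counts["success" if "res=success" in line else "failed"] += 1

print(counts)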

What's the message field for the successful event that's being reported by Linux for the login?

It could be interesting to review the ssh -v <host> output to see if more than one authentication is attempted.

Message field of a successful login:
auditd.message_type: user_login

ssh -v debug log:
Reading configuration data /home/ritesh/.ssh/config
debug1: /home/ritesh/.ssh/config line 16: Applying options for auditbeat
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug2: resolving "xx.xx.xx.xx" port 22
debug2: ssh_connect_direct: needpriv 0
debug1: Connecting to xx.xx.xx.xx [xx.xx.xx.xx] port 22.
debug1: Connection established.
debug1: key_load_public: No such file or directory
debug1: identity file /home/ritesh/jenkins.private type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/ritesh/jenkins.private-cert type -1
debug1: identity file /home/ritesh/.ssh/id_rsa type 0
debug1: key_load_public: No such file or directory
debug1: identity file /home/ritesh/.ssh/id_rsa-cert type -1
debug1: Local version string SSH-2.0-OpenSSH_7.6p1 Ubuntu-4ubuntu0.3
debug1: Remote protocol version 2.0, remote software version OpenSSH_7.2p2 Ubuntu-4ubuntu2.8
debug1: match: OpenSSH_7.2p2 Ubuntu-4ubuntu2.8 pat OpenSSH* compat 0x04000000
debug2: fd 3 setting O_NONBLOCK
debug1: Authenticating to xx.xx.xx.xx as 'ubuntu'
debug3: hostkeys_foreach: reading file "/home/ritesh/.ssh/known_hosts"
debug3: record_hostkey: found key type ECDSA in file /home/ritesh/.ssh/known_hosts:74
debug3: load_hostkeys: loaded 1 keys from xx.xx.xx.xx
debug3: order_hostkeyalgs: prefer hostkeyalgs: ecdsa-sha2-nistp256-cert-v01@openssh.com,ecdsa-sha2-nistp384-cert-v01@openssh.com,ecdsa-sha2-nistp521-cert-v01@openssh.com,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521
debug3: send packet: type 20
debug1: SSH2_MSG_KEXINIT sent
debug3: receive packet: type 20
debug1: SSH2_MSG_KEXINIT received
debug2: local client KEXINIT proposal
