6.4 xpack.security.audit.logfile.events.ignore_filters.*.realms broken?

In our cluster, we have added the following snippet to our elasticsearch.yml across all nodes:

xpack.security.audit.enabled: true
xpack.security.audit.outputs: [ index ]
xpack.security.audit.index.events.emit_request_body: true

We have restarted all nodes in the cluster and applied the following via Kibana's Dev Tools:

PUT /_cluster/settings
{
  "transient": {
    "xpack.security.audit.logfile.events.ignore_filters": {
      "ourpolicyname": {
        "realms": [
          "__attach",
          "__anonymous",
          "reserved"
        ]
      }
    }
  }
}

We have removed the daily .security_audit_log-* index and waited for it to be recreated.

Upon recreation, our expectation was to see the __attach, __anonymous and reserved realms filtered out of the audit log. Instead, events from the aforementioned realms keep flowing in, which indicates that the audit log is not filtering them out as we intended. Our intention is to have the audit log ignore all events from those realms across all indices.

We have also tried to apply the equivalent ignore_filters configuration via elasticsearch.yml, with the same result.
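
For reference, the elasticsearch.yml equivalent we applied was along these lines (a direct translation of the cluster-settings snippet above):

xpack.security.audit.logfile.events.ignore_filters:
  ourpolicyname:
    realms: [ "__attach", "__anonymous", "reserved" ]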

Please tell us whether this is a misconfiguration on our end or a bug in this piece of functionality.

Best regards

Hi @mmisztal1980,

As the setting name xpack.security.audit.logfile.events.ignore_filters suggests, the ignore_filters capability is available only for the logfile audit output type.

We are working on moving away from the index audit output, as there are design limitations we cannot otherwise overcome. For example, it is too complicated to throttle the calls generating audit events when the indexing throughput fluctuates. We recommend using logfile auditing, and we will soon have a convenient in-stack solution for indexing it.
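
To make the filters take effect, the audit output has to include logfile. A minimal elasticsearch.yml sketch, reusing your policy name (illustrative, not a complete config):

xpack.security.audit.enabled: true
xpack.security.audit.outputs: [ logfile ]

# ignore_filters apply to the logfile output only
xpack.security.audit.logfile.events.ignore_filters:
  ourpolicyname:
    realms: [ "__attach", "__anonymous", "reserved" ]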

Hi @Albert_Zaharovits, thanks for the reply.

Our requirement is to persist the audit events in an index, so we will try to use Filebeat to ingest the audit log after filtering out the unwanted realms. That ought to do it; however, we are still missing an ingest pipeline.

Can you point me to one?
Best regards

Using Filebeat is the right strategy. The out-of-the-box solution for this problem would be something very similar to what you're describing.

I recommend you upgrade to ES 6.5, as the audit logfile is structured there. The one-JSON-object-per-line format should be easy for Filebeat to digest and then ship to ES (with or without a pipeline). You'd still have to define the index template yourself; for inspiration, you can look at the log4j2.properties file for the audit logfile appender (in ES).
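
As a sketch of what the Filebeat side could look like, here is a log input that decodes the one-JSON-object-per-line audit file at read time (the path and options are illustrative; see the json settings of the log input for the full reference):

filebeat.inputs:
  - type: log
    paths:
      - /var/log/elasticsearch/*_audit.log   # adjust to your audit log location
    json.keys_under_root: true   # place the decoded JSON keys at the event root
    json.overwrite_keys: true    # decoded values win over Filebeat's own fields on conflict
    json.add_error_key: true     # surface JSON decoding failures on the event

With decoding done in Filebeat, you may not need an ingest pipeline for the flattening itself.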

We are working on making this process an out-of-the-box experience, but we're not there yet...

@Albert_Zaharovits we're currently running ES 6.4; does it support the same feature as 6.5?

The structured audit log was introduced in 6.5.

Got it, thank you. We are preparing to upgrade one of our clusters to 6.5 to try this feature out.

Hi @Albert_Zaharovits, we've upgraded to 6.5.3, enabled the logfile output, and are ingesting it with Filebeat.

This is what Filebeat produces, so we'll need an ingest pipeline with a json processor to extract the message field and add its contents to the root of the document:

{
  "_index": "auditlog-2018.12.18",
  "_type": "doc",
  "_id": "IxIRwmcB_DAbTiSg2aMw",
  "_version": 1,
  "_score": null,
  "_source": {
    "@timestamp": "2018-12-18T16:06:47.880Z",
    "host": {
      "name": "Elastic01"
    },
    "source": "D:\\logs\\elasticsearch\\MachX-Test_audit.log",
    "offset": 104236160,
    "message": "{\"@timestamp\":\"2018-12-18T17:06:46,574\", \"node.name\":\"elastic01.schultzdev.local\", \"node.id\":\"VlN-m2IVR2-GsIkcGtY_pQ\", \"event.type\":\"transport\", \"event.action\":\"access_granted\", \"user.name\":\"mam@schultz.dk\", \"user.realm\":\"schultzms\", \"user.roles\":[\"superuser\"], \"origin.type\":\"rest\", \"origin.address\":\"10.32.219.52:55485\", \"action\":\"cluster:admin/xpack/security/user/authenticate\", \"request.name\":\"AuthenticateRequest\"}",
    "input": {
      "type": "log"
    },
    "prospector": {
      "type": "log"
    },
    "beat": {
      "hostname": "Elastic01",
      "version": "6.4.0",
      "name": "Elastic01"
    }
  },
  "fields": {
    "@timestamp": [
      "2018-12-18T16:06:47.880Z"
    ]
  },
  "sort": [
    1545149207880
  ]
}

While doing early work on the ingest pipeline:

PUT _ingest/pipeline/audit-log-ingestion-pipeline
{
  "description": "Audit Log Ingestion Pipeline Q4 2018",
  "processors": [
    {
      "json": {
        "field": "message",
        "target_field": "msg"
      }
    }
  ],
  "on_failure": [
    {
      "set": {
        "field": "_index",
        "value": "audit-log-ingestion-pipeline-errors"
      }
    },
    {
      "set": {
        "field": "error",
        "value": "{{ _ingest.on_failure_message }}"
      }
    }
  ]
}
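
Eventually we want the parsed fields at the root of the document rather than under msg. Assuming the json processor's add_to_root option (mutually exclusive with target_field) behaves as documented, a sketch of that variant, with a remove processor to drop the now-redundant raw message:

PUT _ingest/pipeline/audit-log-ingestion-pipeline
{
  "description": "Audit Log Ingestion Pipeline Q4 2018",
  "processors": [
    {
      "json": {
        "field": "message",
        "add_to_root": true
      }
    },
    {
      "remove": {
        "field": "message"
      }
    }
  ],
  "on_failure": [
    {
      "set": {
        "field": "_index",
        "value": "audit-log-ingestion-pipeline-errors"
      }
    },
    {
      "set": {
        "field": "error",
        "value": "{{ _ingest.on_failure_message }}"
      }
    }
  ]
}

One caveat: the audit log's own @timestamp (e.g. 2018-12-18T17:06:46,574) uses a comma before the milliseconds, so a date processor may be needed to normalize it before it can be indexed into a date field.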

I've attempted to use the Simulate Pipeline API:

POST _ingest/pipeline/audit-log-ingestion-pipeline/_simulate
{
  "docs": [
    {
      "_index": "auditlog-2018.12.18",
      "_type": "doc",
      "_id": "IxIRwmcB_DAbTiSg2aMw",
      "_version": 1,
      "_score": null,
      "_source": {
        "@timestamp": "2018-12-18T16:06:47.880Z",
        "host": {
          "name": "Elastic01"
        },
        "source": "D:\\logs\\elasticsearch\\MachX-Test_audit.log",
        "offset": 104236160,
        "message": "{\"@timestamp\":\"2018-12-18T17:06:46,574\", \"node.name\":\"elastic01.schultzdev.local\", \"node.id\":\"VlN-m2IVR2-GsIkcGtY_pQ\", \"event.type\":\"transport\", \"event.action\":\"access_granted\", \"user.name\":\"mam@schultz.dk\", \"user.realm\":\"schultzms\", \"user.roles\":[\"superuser\"], \"origin.type\":\"rest\", \"origin.address\":\"10.32.219.52:55485\", \"action\":\"cluster:admin/xpack/security/user/authenticate\", \"request.name\":\"AuthenticateRequest\"}",
        "input": {
          "type": "log"
        },
        "prospector": {
          "type": "log"
        },
        "beat": {
          "hostname": "Elastic01",
          "version": "6.4.0",
          "name": "Elastic01"
        }
      },
      "fields": {
        "@timestamp": [
          "2018-12-18T16:06:47.880Z"
        ]
      },
      "sort": [
        1545149207880
      ]
    }
  ]
}

And got an interesting error message in return:

{
  "error": {
    "root_cause": [
      {
        "type": "class_cast_exception",
        "reason": "java.lang.Integer cannot be cast to java.lang.Long"
      }
    ],
    "type": "class_cast_exception",
    "reason": "java.lang.Integer cannot be cast to java.lang.Long"
  },
  "status": 500
}

I also tried it with message set to the equivalent of:

{ "x": "y" }

And got the same error.

Can you advise what is wrong here?

Hi @mmisztal1980

The Filebeat output document doesn't look right, but I can't say exactly what's missing (all the more since the Filebeat configuration wasn't posted). For an example, see the structured-logging-filebeat blog post, even though it's a bit dated.
For the settings reference of the Filebeat log input, see https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-log.html#filebeat-input-log-config-json
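
As for the class_cast_exception from the simulate call: the docs array in a simulate request normally carries only _index, _type, _id and _source. Here is a sketch of the same call with the extra search-hit metadata (_version, _score, fields, sort) stripped out, on the assumption that one of those fields (most likely the integer _version) is what trips the Integer-to-Long cast (message shortened for brevity):

POST _ingest/pipeline/audit-log-ingestion-pipeline/_simulate
{
  "docs": [
    {
      "_index": "auditlog-2018.12.18",
      "_type": "doc",
      "_id": "IxIRwmcB_DAbTiSg2aMw",
      "_source": {
        "message": "{\"event.type\":\"transport\", \"event.action\":\"access_granted\"}"
      }
    }
  ]
}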
