Kibana automatic activity is flooding audit log

Hello

I enabled auditing in my cluster using x-pack. In the log file I can see that Kibana is constantly sending monitoring and health check requests to the cluster, even when there is no user activity. Every few seconds I get a bunch of messages like this:
[2017-03-20T23:08:04,587] [transport] [access_granted] origin_type=[rest], origin_address=[192.168.1.240], principal=[kibana], action=[cluster:monitor/main], request=[MainRequest]
[2017-03-20T23:08:04,592] [transport] [access_granted] origin_type=[rest], origin_address=[192.168.1.240], principal=[kibana], action=[cluster:monitor/nodes/info], request=[NodesInfoRequest]
[2017-03-20T23:08:04,593] [transport] [access_granted] origin_type=[rest], origin_address=[192.168.1.240], principal=[kibana], action=[cluster:monitor/nodes/info[n]], request=[NodeInfoRequest]
[2017-03-20T23:08:04,609] [transport] [access_granted] origin_type=[rest], origin_address=[192.168.1.240], principal=[kibana], action=[cluster:monitor/nodes/info], request=[NodesInfoRequest]
[2017-03-20T23:08:04,610] [transport] [access_granted] origin_type=[rest], origin_address=[192.168.1.240], principal=[kibana], action=[cluster:monitor/nodes/info[n]], request=[NodeInfoRequest]
[2017-03-20T23:08:04,614] [transport] [access_granted] origin_type=[rest], origin_address=[192.168.1.240], principal=[kibana], action=[cluster:monitor/health], indices=[.kibana], request=[ClusterHealthRequest]

This clutters the log and makes it hard to find the really important messages. On the other hand, I cannot simply filter those messages out based on IP address or user name, since I still want "real" Kibana activity generated by users to be caught by the audit process.

Does anyone know how to stop those messages from appearing in the audit log file?

Thanks

Guy

If you are on 5.0+ there is this option:

Modify the config/x-pack/log4j2.properties file.

Add the following to the file and restart the ES node(s).

appender.audit_rolling.filter.regex.type = RegexFilter
appender.audit_rolling.filter.regex.onMatch = DENY
appender.audit_rolling.filter.regex.regex = .*principal=\\[Kibana\\].*|.*indices=\\[.monitoring-data-2\\].*
appender.audit_rolling.filter.regex.onMisMatch = ACCEPT

The above example denies any log entries for the Kibana user as well as access to the .monitoring-data-2 index.

Note: the syntax above is case-sensitive, and double escaping is required for the [ and ] characters.
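
If you want to sanity-check the pattern before restarting, it is a plain Java regular expression once the properties escaping is stripped, so a quick throwaway test like the sketch below will tell you whether it covers your audit lines (the class name is just an example, and I'm assuming the timestamp is not part of the text the filter sees). Note the audit entries above show principal=[kibana] in lower case, so match the case you actually see:

import java.util.regex.Pattern;

public class AuditRegexCheck {
    public static void main(String[] args) {
        // Effective pattern once the properties file collapses \\[ into \[ .
        // principal is lower-cased here to mirror the audit lines above.
        String regex = ".*principal=\\[kibana\\].*|.*indices=\\[.monitoring-data-2\\].*";

        // One of the messages from the original post, without the timestamp.
        String sample = "[transport] [access_granted] origin_type=[rest], "
                + "origin_address=[192.168.1.240], principal=[kibana], "
                + "action=[cluster:monitor/main], request=[MainRequest]";

        // The filter only drops an entry when the pattern covers the whole
        // message, hence the .* at both ends.
        System.out.println(Pattern.matches(regex, sample)); // true -> entry is denied
    }
}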

(We have an outstanding enhancement request to provide more native filtering capabilities for X-Pack security.)

This is an example from one of our engineers, but I think you can adapt it to your needs from here. :slight_smile:

Hello

I will give it a try. Too bad X-Pack does not have this built in; it should recognize Kibana's monitoring activity and not treat it as a standard event.

There is an enhancement request for this; I found it yesterday, but I don't know when it will be picked up.

Hi Marius,

That config worked like a charm for the log files; that'll save some disk space.

Is there an equivalent for the audit logs indexed in ES rather than a logfile?

Thanks
JS

Oh, there is. You need to add this to the elasticsearch.yml file:
xpack.security.audit.outputs: [ index, logfile ]
The filter will only work for the log file, but you can browse the indexed events a lot more easily in ES + Kibana.
There are a few drawbacks to logging to an index only, which is why the option above logs both to file and to ES.
Read more about them here (it's in the first article of the page):
https://www.elastic.co/guide/en/x-pack/current/auditing.html
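
For reference, here is a rough elasticsearch.yml sketch that combines both outputs with the per-output event exclusions described on that page (setting names are from the 5.x docs, so double-check them against your version, and adjust the excluded event types to taste):

xpack.security.audit.enabled: true
xpack.security.audit.outputs: [ index, logfile ]
# Drop noisy event types from each output independently.
xpack.security.audit.logfile.events.exclude: [ authentication_success ]
xpack.security.audit.index.events.exclude: [ authentication_success ]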

Hi Marius,

Thank you for the prompt response.

I already have the audit log sent to an index using the method described above, and I've added exclusions for authentication_success. However, I cannot do the same for access_granted, since the messages I'm most interested in, other than failed auth, come from those events (deletion events).

This in turn generates 3.2 to 3.3 million events per day for the kibana and logstash users alone. Since I need a minimum retention of 90 days for active logs and a year for archived logs, that's a lot of clutter.

If there are no filtering options for audit logs sent directly to an index, other than event exclusion, would that be covered by the enhancement request you described above, or would a new request need to be created?

In the meantime I'm thinking I'll either create a daily cron job to perform a delete_by_query or simply ingest the logfile through Logstash.
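
For the delete_by_query option, I'm picturing something along these lines; the .security_audit_log-* index pattern and the principal field are assumptions based on how the audit documents look, so I'll verify both against a real document before putting it in cron:

POST /.security_audit_log-*/_delete_by_query
{
  "query": {
    "terms": {
      "principal": [ "kibana", "logstash" ]
    }
  }
}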

Thanks
JS

Hello

It doesn't seem to work for me.
I tried the very same regular expression you suggested, and it did not like the hyphens (even when I escaped them) and threw errors.

I tried a slightly different regular expression: .principal=[Kibana].|.action=[.monitoring.*]
Elasticsearch just ignored it and kept flooding the log with audit messages that were supposed to be filtered out.

What am I doing wrong?

principal=[Kibana].|.action=[.monitoring.*]

That needs to be a Java regular expression, and the [ character has special meaning there.

The trouble with escaping is that you have to deal with properties files and regular expressions, so some things need multiple escapes.
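
Concretely, to end up with a literal [ in the match, the layers stack like this (which is the double escaping mentioned earlier in the thread):

what you type in log4j2.properties:  \\[
what the regex engine receives:      \[
what it matches in the audit line:   [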

The simplest expression that doesn't need any escaping would be:

principal=.Kibana.|.indices=..monitoring-data-2..

There are some improvements that could be made, but that should do the job for you.

Hello

I am not that great with regular expressions; however, trying to filter out Kibana monitoring messages, I came up with this regexp:
principal=.Kibana.|.indices=..(monitoring-data-2|kibana)..|action=.*cluster:monitor*

I tested it with an online Java regexp tester and it matched all the required messages. But when I inserted it into log4j2.properties and restarted, it seems like Elasticsearch is completely ignoring it!

I keep seeing the messages in the audit file...

Any ideas ?

Thanks

Guy

it looks like the regex has to match the WHOLE string, not just a part of it. The following works for me:

appender.audit_rolling.filter.regex.regex = .*principal=.elastic...action=.indices:data/write/bulk.*|.*principal=.kibana.,.*

it filters out all kibana user requests and the following index requests from the elastic user:

[2017-03-27T13:01:30,486] [transport] [access_granted]  origin_type=[rest], origin_address=[::1], principal=[elastic], action=[indices:data/write/bulk[s][p]], request=[ConcreteShardRequest]

Please try it out!
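
That would also explain why the substring-style patterns earlier in the thread were silently ignored; in Java terms it behaves like matches() rather than find(). A tiny illustration (class name and the shortened sample line are just for the example):

import java.util.regex.Pattern;

public class WholeStringCheck {
    public static void main(String[] args) {
        // Shortened version of an audit message, just for illustration.
        String message = "origin_type=[rest], principal=[kibana], action=[cluster:monitor/main]";

        // Substring-style pattern: found inside the line, but not a full match.
        Pattern partial = Pattern.compile("principal=.kibana.");
        System.out.println(partial.matcher(message).find());    // true
        System.out.println(partial.matcher(message).matches()); // false -> filter ignores it

        // With .* at both ends the pattern covers the whole line.
        Pattern whole = Pattern.compile(".*principal=.kibana..*");
        System.out.println(whole.matcher(message).matches());   // true
    }
}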

Hello

This is what eventually eliminated all unwanted messages:
.*principal=.kibana...action=.cluster:monitor.*|.*action=.cluster:admin.*|.*indices=..kibana.,.*|.*indices=..\*.,.*
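
And for anyone who finds this thread later, here it is in the full filter block (same structure as Marius's snippet above, with my regex pasted in exactly as shown):

appender.audit_rolling.filter.regex.type = RegexFilter
appender.audit_rolling.filter.regex.onMatch = DENY
appender.audit_rolling.filter.regex.regex = .*principal=.kibana...action=.cluster:monitor.*|.*action=.cluster:admin.*|.*indices=..kibana.,.*|.*indices=..\*.,.*
appender.audit_rolling.filter.regex.onMisMatch = ACCEPT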

Thank you all for your help

Guy

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.