Auditbeat: Broken kibana dashboards – missing .keyword in the fields

  • Version: 8.14.0
  • Operating System: Ubuntu 22.04.4 LTS
  • Steps to Reproduce:

Load the dashboards as recommended in the documentation:

auditbeat setup -e \
  -E output.logstash.enabled=false \
  -E output.elasticsearch.hosts=['http://log-server:9200'] \
  -E output.elasticsearch.username=\${ES_USERNAME} \
  -E output.elasticsearch.password=\${ES_PASSWORD} \
  -E setup.kibana.host=http://log-server:5601

Sample broken dashboard: Process OS Distribution [Auditbeat System] ECS
Starts working after editing and changing fields:

  • host.id -> host.id.keyword
  • host.os.name -> host.os.name.keyword
  • host.os.version -> host.os.version.keyword

By the way, the path to the Auditbeat Kibana dashboards in the package still includes the number 7: /usr/share/auditbeat/kibana/7/{dashboard,search,visualization}

Is that a bug?

This sounds like you are missing the index template provided by Auditbeat. Its index template should prevent multi-fields like host.id.keyword from being created (that would be the default Elasticsearch behavior when a string field like host.id is indexed).
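For reference, without the Beats index template, Elasticsearch's default dynamic mapping indexes an incoming string like host.id as a text field with a .keyword multi-field, roughly like this:

```json
{
  "host": {
    "properties": {
      "id": {
        "type": "text",
        "fields": {
          "keyword": { "type": "keyword", "ignore_above": 256 }
        }
      }
    }
  }
}
```

With the Auditbeat template in place, host.id is mapped as a plain keyword field, which is why the stock dashboards reference host.id rather than host.id.keyword.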

I was able to load the index template manually with the following command:

auditbeat setup --index-management -e \
  -E output.logstash.enabled=false \
  -E output.elasticsearch.hosts=['http://log-server:9200'] \
  -E output.elasticsearch.username=\${ES_USERNAME} \
  -E output.elasticsearch.password=\${ES_PASSWORD} \
  -E setup.ilm.overwrite=true \
  -E setup.template.overwrite=true

However, in Kibana I receive illegal_argument_exception error:

Fielddata is disabled on [host.os.name] in [auditbeat-20240613]. Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [host.os.name] in order to load field data by uninverting the inverted index. Note that this can use significant memory.

I see there is a hint in the error message, but I am not sure what the best practice is here. Isn't this something that should work out of the box when using Auditbeat?
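One way to confirm what the index actually contains is to ask Elasticsearch for the mapping of the offending field. This is a sketch that assumes the log-server host and keystore credentials from the commands above:

```shell
# Check how host.os.name is mapped in the existing index.
# If it comes back as "type": "text", the index was created
# before the Auditbeat template was installed.
curl -s -u "$ES_USERNAME:$ES_PASSWORD" \
  'http://log-server:9200/auditbeat-20240613/_mapping/field/host.os.name?pretty'
```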

Another thing: setting the Kibana space ID doesn't seem to work when loading the dashboards. Command used:

auditbeat setup --dashboards -e \
  -E output.logstash.enabled=false \
  -E output.elasticsearch.hosts=['http://log-server:9200'] \
  -E output.elasticsearch.username=\${ES_USERNAME} \
  -E output.elasticsearch.password=\${ES_PASSWORD} \
  -E setup.ilm.overwrite=true \
  -E setup.kibana.space.id=audit

The dashboards are still loaded into the Default space. I also tried to set setup.kibana -> space.id in the config file.
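As a possible workaround (a sketch, not verified against 8.14): previously exported dashboards can be imported into a specific space through Kibana's saved objects import API, which is space-aware via the /s/&lt;space-id&gt;/ URL prefix. This assumes the Kibana host from the commands above and a hypothetical export file dashboards.ndjson:

```shell
# Import an exported saved-objects file into the "audit" space.
# The kbn-xsrf header is required by the Kibana API.
curl -s -u "$ES_USERNAME:$ES_PASSWORD" \
  -X POST 'http://log-server:5601/s/audit/api/saved_objects/_import?overwrite=true' \
  -H 'kbn-xsrf: true' \
  --form file=@dashboards.ndjson
```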

Side note for anyone dealing with the same problem and running the above commands.

ES_PASSWORD (and ES_USERNAME) are escaped with \$ and taken from the keystore. Normally the variable would be expanded by the shell before running the command, so the plain-text password would be logged into Elasticsearch if you are auditing execve() syscalls.
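The difference the escaping makes can be shown with a minimal shell demo (plain shell, no Auditbeat needed):

```shell
# A minimal demo of the quoting pitfall.
PASS=secret

# Unescaped: the shell expands the variable, so the plain-text password
# ends up on the command line (and in execve() audit records):
printf '%s\n' "-E output.elasticsearch.password=${PASS}"
# -> -E output.elasticsearch.password=secret

# Escaped: the literal ${PASS} string is passed through unexpanded, and
# Auditbeat resolves it from its keystore at runtime instead:
printf '%s\n' "-E output.elasticsearch.password=\${PASS}"
# -> -E output.elasticsearch.password=${PASS}
```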

I added the variable to the keystore with: cat password_file | auditbeat keystore add ES_PASSWORD --stdin --force

Alternatively, you can simply stop the auditbeat service first.

On Ubuntu the keystore file is kept in /var/lib/auditbeat/auditbeat.keystore by default, with root:root 0600 permissions.

That index name (auditbeat-20240613) is not the naming convention Auditbeat uses. Something still doesn't seem right.

How do you have the ES output configured in Logstash? There is an example at Configure the Logstash output | Auditbeat Reference [8.14] | Elastic.

What should happen is that Auditbeat data goes into a data stream like auditbeat-8.14.1, which is backed by indices that automatically roll over based on time and size.

If you have regular indices that were created before the index template was set up, those probably need to be deleted or reindexed. The index template that you installed will only apply to newly created indices.
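A cleanup along those lines might look like the following. This is an illustrative sketch using the host and credentials from earlier in the thread; double-check the index names before deleting anything:

```shell
# List auditbeat indices to see what actually exists.
curl -s -u "$ES_USERNAME:$ES_PASSWORD" \
  'http://log-server:9200/_cat/indices/auditbeat-*?v'

# Delete a stale index that was created without the template.
# Warning: this drops those events permanently.
curl -s -u "$ES_USERNAME:$ES_PASSWORD" \
  -X DELETE 'http://log-server:9200/auditbeat-20240613'
```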

Also configure Auditbeat to point to Elasticsearch and Kibana.

Then just run:

auditbeat setup -e

without all the options; that's the best method.

Then you can point the output to Logstash later if you want.

I always get it working directly from Auditbeat to Elasticsearch first, before putting Logstash in the middle.

Also, as Andrew mentioned, if you already started auditbeat before setting up the template correctly, you need to clean up.

Thank you for helping. I think I've got it now. The thing is that Elasticsearch uses data streams for logs, and if one wants Logstash in the path it must be a pass-through, writing to whatever auditbeat setup creates, e.g. the auditbeat-8.14.0 data stream for version 8.14.0.

output {
  elasticsearch {
    user => "${ES_USERNAME}"
    password => "${ES_PASSWORD}"
    hosts => ['http://log-server:9200']
    index => "%{[@metadata][beat]}-%{[@metadata][version]}"
    action => "create"
  }
}
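The action => "create" setting matters here because data streams only accept create operations. To verify events are landing in the data stream rather than a plain index, something like this should show auditbeat-8.14.0 with its backing indices (same assumed host and credentials as above):

```shell
curl -s -u "$ES_USERNAME:$ES_PASSWORD" \
  'http://log-server:9200/_data_stream/auditbeat-8.14.0?pretty'
```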

On upgrades I will run auditbeat setup with arguments to go directly to elasticsearch:

    - name: Auditbeat setup
      command: >
        auditbeat setup -e \
          -E output.logstash.enabled=false \
          -E output.elasticsearch.hosts={{ auditbeat_elasticsearch_hosts | string }} \
          -E output.elasticsearch.username=\${ES_USERNAME} \
          -E output.elasticsearch.password=\${ES_PASSWORD} \
          -E setup.ilm.overwrite=false \
          -E setup.template.overwrite=false
      register: _setup_output

Note that ES_USERNAME and ES_PASSWORD are values stored in the Auditbeat keystore, and the Elasticsearch hosts are filled in via Jinja2 templating from Ansible.

All in all, it works fine now.
