Metricbeat + Kibana Dashboards [esaggs] > "field" is a required parameter

Hello,

I've just deployed a cluster on Elastic Cloud and configured a Metricbeat agent to send metrics to it.

Here is my metricbeat.yml:

name: "xyz.acme.co"

tags: ["ACME", "acme.co"]

setup.ilm.check_exists: false

cloud.id: "..."

setup.kibana.host: "..."
setup.kibana.protocol: "https"

setup.dashboards.enabled: true

metricbeat.modules:
- module: system
  enabled: true
  period: 1m

1. Shouldn't Metricbeat's "setup --dashboards" and "setup.dashboards.enabled: true" only set up dashboards for the agent's configured modules?

2. I am getting a lot of errors when accessing the dashboards:

Below you can see the [esaggs] > "field" is a required parameter errors...

When taking a look at one specific visualization, I can see that the field config is not set (it should be set to "host.name" according to the saved object config, but in the index pattern that's not an aggregatable field):

If I set it to host.name.keyword, it works.
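(In case it helps with debugging: the field mapping API shows how host.name actually got mapped. A quick sketch; the cluster URL and credentials below are placeholders:)

# Ask Elasticsearch how host.name is mapped in the metricbeat indices.
curl -s -u elastic:changeme \
  'https://my-cluster.example.com:9243/metricbeat-*/_mapping/field/host.name?pretty'

# Here it comes back as "type": "text" with a "keyword" sub-field, which
# matches what I see: host.name itself is not aggregatable, only
# host.name.keyword is.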

Here is the Saved Object code. You can see that it actually is set to use host.name:
(Screenshot from 2020-03-31 09-27-24)

This is related to [Metricbeat Docker] Overview ECS is showing no data and error: [esaggs] > "field" is a required parameter, but I've already tried deleting the index, the template, the index pattern, even the saved objects... no luck.

Thank you!

I've managed to get some visualizations to work by setting "some-field.keyword" (host.name.keyword, process.name.keyword, ...), but the whole thing just looks broken.

Did I get anything wrong? I took a look at the dashboards that come with the metricbeat.tar.gz and they all refer to fields without the ".keyword".

It will set up all the dashboards, not only the ones for your configured modules.

In general I would suggest carefully setting up the indices, templates, etc. first, checking that the field mappings are stored correctly, and only then, as a separate step, setting up the dashboards.
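Roughly like this (a sketch; the cluster URL is a placeholder):

# 1. Load only the index template.
./metricbeat setup --index-management \
  -E 'output.elasticsearch.hosts: ["https://my-cluster.example.com:9243"]'

# 2. Check that the template is there...
curl -s 'https://my-cluster.example.com:9243/_template/metricbeat-7.6.1?pretty'
# ...and, once data is flowing, that host.name is mapped as keyword:
curl -s 'https://my-cluster.example.com:9243/metricbeat-*/_mapping/field/host.name?pretty'

# 3. Only then load the dashboards.
./metricbeat setup --dashboards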

It's just that I have a 7.6 cluster and I am using a 7.6.1 Metricbeat agent... I don't see how this could not work out of the box... I mean, did I do something in the wrong order? It should not be this complicated...

I basically followed the docs:

1. metricbeat setup --dashboards
2. metricbeat -e

Just took a look at the same visualization on demo.elastic.co and it uses "host.name", not "host.name.keyword".

But demo.elastic.co is a 7.5.1 cluster, and I am running 7.6.1. There's a difference in the index patterns of the two (the question is: is that on purpose, or did I do something wrong?)

(Screenshots: the host.name field in the 7.5.1 index pattern vs. the 7.6.1 index pattern)

Did some testing and I am now positive that the issue has to do with the index pattern... or with whatever sets it.

When I run ./metricbeat setup --dashboards it creates the dashboards and the index pattern with 3131 fields mapped. In this index pattern the host.name is correctly mapped.

But then when I run ./metricbeat -e something overwrites the index pattern. The new index pattern has only 181 fields and the host.name is wrongly mapped.
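(If you want to compare the two states from the command line: in Kibana 7.x the field list is stored on the index pattern saved object, and Beats loads that object with its title as the id. A sketch, with placeholder Kibana URL and credentials:)

# Fetch the metricbeat-* index pattern and count its fields.
curl -s -u elastic:changeme \
  'https://my-kibana.example.com:9243/api/saved_objects/index-pattern/metricbeat-*' \
  | jq '.attributes.fields | fromjson | length'

# I get 3131 right after "setup --dashboards" and 181 after "metricbeat -e".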

It turns out, I was doing something wrong...

I think at the beginning I indexed some metrics documents without having loaded the index templates, so Elasticsearch dynamically mapped, for example, host.name as text + keyword. Those documents were kept in a metricbeat-7.6.1-* index.
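(This is easy to reproduce against a scratch index if you want to see the dynamic mapping effect; the index and host names below are made up:)

# Index one document into a fresh index that has no matching template...
curl -s -X POST 'https://my-cluster.example.com:9243/scratch-test/_doc' \
  -H 'Content-Type: application/json' \
  -d '{"host": {"name": "xyz.acme.co"}}'

# ...then look at the resulting mapping: dynamic mapping produces text plus
# a .keyword sub-field instead of the plain keyword the metricbeat template
# would have set.
curl -s 'https://my-cluster.example.com:9243/scratch-test/_mapping/field/host.name?pretty'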

Throughout all my debugging I was cleaning things up by deleting the _template/metricbeat-7.6.1 template and the metricbeat-7.6.1 index. But the actual indices are date-suffixed, so what I should have done to delete them is DELETE metricbeat-7.6.1-*.

After I did that, I just:

  1. Created the index templates: sudo ./metricbeat setup --index-management -E 'output.elasticsearch.hosts: ["..."]' -E 'output.elasticsearch.username: "..."' -E 'output.elasticsearch.password: "..."'

  2. Set Kibana dashboards up: sudo ./metricbeat setup --dashboards

  3. Got rid of the following parameters in metricbeat.yml:

setup.ilm.check_exists: false
setup.template.overwrite: false
setup.template.enabled: false

  4. Ran Metricbeat: sudo ./metricbeat -e
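The whole recovery in one place, in case it saves someone the retrace (a sketch; the cluster URL and credentials are placeholders):

# 0. Delete the wrongly mapped indices (note the wildcard) and the template.
curl -s -u elastic:changeme -X DELETE 'https://my-cluster.example.com:9243/metricbeat-7.6.1-*'
curl -s -u elastic:changeme -X DELETE 'https://my-cluster.example.com:9243/_template/metricbeat-7.6.1'

# 1. Load the index template.
sudo ./metricbeat setup --index-management

# 2. Load the Kibana dashboards and the index pattern.
sudo ./metricbeat setup --dashboards

# 3. Start shipping metrics.
sudo ./metricbeat -e

# 4. Verify that host.name is now a plain keyword field.
curl -s -u elastic:changeme 'https://my-cluster.example.com:9243/metricbeat-*/_mapping/field/host.name?pretty'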

I think that's it. Just wanted to write the solution down in case anyone else faces the same issue.
