Restoring "Stack Monitoring" after 6.8->7.2 upgrade

So - this functionality used to work. How do I get it back?

I've checked and I'm collecting all the (seemingly) relevant system indices: .monitoring-[kibana|es|logstash].

When I click on this option in Kibana a panel opens on the right side with repeated text saying:

"Monitoring Request Error

[illegal_argument_exception] Fielddata is disabled on text fields by default. Set fielddata=true on [event.dataset] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory. Alternatively use a keyword field instead. "

@chrisronline Any ideas here? Should the fielddata setting be set on the mappings for monitoring indices? If so, why wouldn't it be set by default after an upgrade?
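As an aside, the current mapping of the offending field can be inspected directly with the field-level mapping endpoint (the `filebeat-*` pattern here is just an example; any index pattern works):

```
GET filebeat-*/_mapping/field/event.dataset
```

A correctly mapped index would show `"type": "keyword"` for this field rather than `"type": "text"`.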

Hi @ethrbunny,

Could you post the outputs of GET _template/.monitoring* and GET .monitoring-*/_mapping please? Note that both will be rather large, so you might want to post them elsewhere (e.g. in a gist) and then post the link here.



Hopefully you can see both of those

Thanks, both of those look good to me.

Next, could you please post the outputs of GET _cat/indices/f*?v, GET _template/filebeat-* and GET filebeat-*/_mapping please?




NOTE: I get a similar error message (see top of thread) when I click on the new "SIEM" menu item and "View Hosts".

Looking at the mappings for the various filebeat-* indices, I see that the event.dataset field is mapped as text. This is the cause of your error. Now the question is why this field is being mapped incorrectly.

The thing that looks most suspicious to me is that there appear to be no Filebeat index templates. This is almost certainly why the field is being mapped incorrectly. Just to double-check that you have no Filebeat templates, could you post the response from GET _cat/templates?v, please?

Also, looking at your filebeat-2019.07.* indices, I notice an interesting pattern. All indices created with date <= 2019.07.06 have 3 primary shards and 1 replica. The ones created with date > 2019.07.06 have 1 primary shard and 1 replica. Did something change on/around 2019.07.06? Is this when you upgraded from 6.8 -> 7.2?

Finally, when you say you upgraded from 6.8 -> 7.2, exactly what parts of the Elastic stack have you upgraded? Specifically, what version of Elasticsearch are you currently running? Same for Kibana? Same for Filebeat?



That's the upgrade date, so it would make sense to have a transition there. I upgraded the entire stack: Elasticsearch, Logstash, Kibana, etc. Everything I could find.

Here is the _cat/templates output:

name                        index_patterns                order      version
mysql                       [mysql*]                      0          
logstash                    [logstash-*]                  0          60001
.ml-state                   [.ml-state*]                  0          7020099
apache2                     [logstash-*]                  0          60001
.watch-history-9            [.watcher-history-9*]         2147483647 
.monitoring-logstash        [.monitoring-logstash-7-*]    0          7000199
.ml-meta                    [.ml-meta]                    0          7020099
filebeat                    [filebeat*]                   0          
.monitoring-kibana          [.monitoring-kibana-7-*]      0          7000199
.monitoring-alerts-7        [.monitoring-alerts-7]        0          7000199
kafka_consumer_lag          [kafka_consumer_lag*]         0          
metricbeat                  [metricbeat*]                 0          
.ml-config                  [.ml-config]                  0          7020099
.kibana_task_manager        [.kibana_task_manager]        0          7020099
.monitoring-es              [.monitoring-es-7-*]          0          7000199
.data-frame-internal-1      [.data-frame-internal-1]      0          7020099
.logstash-management        [.logstash]                   0          
.management-beats           [.management-beats]           0          70000
.monitoring-beats           [.monitoring-beats-7-*]       0          7000199
.watches                    [.watches*]                   2147483647 
.ml-notifications           [.ml-notifications]           0          7020099
.ml-anomalies-              [.ml-anomalies-*]             0          7020099
.triggered_watches          [.triggered_watches*]         2147483647 
.data-frame-notifications-1 [.data-frame-notifications-*] 0          7020099

Okay, I do see the Filebeat template. It's named filebeat (which is why my earlier request, GET _template/filebeat-*, didn't find it).

Could you please post the output of GET _template/filebeat?




Okay, that template definitely doesn't look right at all.

At this point I suggest the following steps:

  • Stop all your Filebeat instances.
  • Delete the bad template by running DELETE _template/filebeat
  • Restart your Filebeat instances. This will create a new template.
  • Please re-run GET _template/filebeat* and post the results here so we can check if the new template looks good.
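The steps above, sketched as console/shell commands (Filebeat service names will vary by install):

```
# 1. Stop Filebeat on each host, e.g.:
#    sudo systemctl stop filebeat

# 2. Delete the bad template:
DELETE _template/filebeat

# 3. Restart Filebeat on each host, e.g.:
#    sudo systemctl start filebeat

# 4. Verify the recreated template:
GET _template/filebeat*
```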

I deleted the filebeat template but nothing is being created (except an empty "{ }").

What should "event.dataset" be mapped to?
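For reference, ECS (and the stock Filebeat template) maps event.dataset as a keyword; the relevant mapping fragment looks roughly like this:

```
"event": {
  "properties": {
    "dataset": {
      "type": "keyword",
      "ignore_above": 1024
    }
  }
}
```

A keyword mapping is what allows aggregations on the field without enabling fielddata.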

I restored / recreated the default filebeat template as follows:

  1. filebeat export template > filebeat.template.json
  2. curl --data "@filebeat.template.json" -XPUT "http://elastic.ip:9200/_template/filebeat" -H 'Content-Type: application/json'
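A quick sanity check after reloading the template, without waiting for rollover, is to confirm the field type inside the template itself:

```
GET _template/filebeat*
```

The event.dataset entry under the template's mappings should show "type": "keyword".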

When it rolls over tonight we'll see what changes.

No change in behavior or error message this morning.

That's probably because old mappings are still present. The template change will only affect mappings created after the change.

Is it an option for you to delete the old Filebeat indices (the ones created before the template change)?
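If the old data isn't needed, the simplest path is to remove the mis-mapped indices so that newly created ones pick up the corrected template. A sketch (double-check the pattern against _cat/indices first, since this is destructive):

```
# List what would be affected first:
GET _cat/indices/filebeat-*?v

# Then delete the mis-mapped indices, e.g. a specific month:
DELETE filebeat-2019.07.*
```

If the data must be kept, the alternative is to reindex each old index into a new index that picks up the corrected mapping.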