Host data won't populate in dashboards when upgrading Beats from 7.0 to 7.3 and using Logstash

ECE - Elasticsearch 7.3
Kibana - 7.3

Our Metricbeat System Overview dashboard will not display a Beat that's running 7.3 after upgrading from 7.0 if we run the Beat through Logstash. The Beat will populate in the dashboard if we point it directly at Elasticsearch, but not through Logstash. The data is reaching Elasticsearch, as I can search for the tag we want and the host.name we are looking for, but the dashboard will not show the Beat even though it is set to check metricbeat-*.

We've tried a few other versions of Metricbeat higher than the initial 7.0 version, and none of them will populate in the dashboard if we run the Beat through Logstash. Any insight would be appreciated.
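For reference, this is the kind of query I'm using to confirm the events are reaching Elasticsearch. It's just a sanity check from Kibana Dev Tools; the index name and tag are examples from our setup, and it assumes tags and agent.version are keyword fields as in the default ECS template:

GET metricbeat-7.3.0/_search
{
  "size": 1,
  "query": {
    "bool": {
      "filter": [
        { "term": { "tags": "AD" } },
        { "term": { "agent.version": "7.3.0" } }
      ]
    }
  }
}

This returns hits, so the pipeline itself is delivering the events.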

input {
  beats {
   port => 5044
  }
}

filter {
    if [host][hostname] =~ "(abc|def|ghi|jkl|mno)" {
        mutate {
            add_tag => [ "AD" ]
        }
    }
    if [host][hostname] =~ "(pqr|stu|vwx)" {
        mutate {
            add_tag => [ "opentest" ]
        }
    }
}

output {
  # [agent][version] is a string such as "7.3.0", so match it as a string/regex
  # (a bare numeric comparison like == 7.3 will never match)
  if [agent][type] == "metricbeat" and [agent][version] =~ /^7\.3/ {
    elasticsearch {
      hosts => ["https://abc.gov:9243"]
      manage_template => true
      index => "metricbeat-%{[agent][version]}"
      user => "xxxxx"
      password => "xxxxx"
      ssl => true
      cacert => "/etc/logstash/cert/cert.pem"
    }
  }
  if [agent][type] == "metricbeat" {
    elasticsearch {
      hosts => ["https://efg.gov:9243"]
      manage_template => false
      index => "metricbeat-%{[agent][version]}"
      user => "xxxxx"
      password => "xxxxx"
      ssl => true
      cacert => "/etc/logstash/cert/cert.pem"
    }
    elasticsearch {
      hosts => ["xx.xxx.xx.xx"]
      manage_template => false
      index => "metricbeat-%{[agent][version]}"
      #index => "*"
    }
  }
  if [agent][type] == "filebeat" {
    elasticsearch {
      hosts => ["https://hij.gov:9243"]
      manage_template => false
      index => "filebeat-%{[agent][version]}"
      user => "xxxxx"
      password => "xxxxx"
      ssl => true
      cacert => "/etc/logstash/cert/cert.pem"
    }
    elasticsearch {
      hosts => ["xx.xxx.xx.xx"]
      manage_template => false
      index => "filebeat-%{[agent][version]}"
    }
  }
  if [agent][type] == "heartbeat" {
    elasticsearch {
      hosts => ["https://jkl.gov:9243"]
      manage_template => false
      index => "heartbeat-%{[agent][version]}"
      user => "xxxxx"
      password => "xxxxx"
      ssl => true
      cacert => "/etc/logstash/cert/cert.pem"
    }
    elasticsearch {
      hosts => ["xx.xxx.xx.xx"]
      manage_template => false
      index => "heartbeat-%{[agent][version]}-%{+YYYY.MM.dd}"
    }
  }
}

I set this up in a fresh testing environment and the problem persists. Kibana now actually shows an error:

{
  "took": 8,
  "timed_out": false,
  "_shards": {
    "total": 3,
    "successful": 2,
    "skipped": 0,
    "failed": 1,
    "failures": [
      {
        "shard": 0,
        "index": "metricbeat-7.3.0",
        "node": "RZ4p0uSxT0SwZSnqyBsbLA",
        "reason": {
          "type": "illegal_argument_exception",
          "reason": "Fielddata is disabled on text fields by default. Set fielddata=true on [host.name] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory. Alternatively use a keyword field instead."
        }
      }
    ]
  },
  "hits": {
    "total": 0,
    "max_score": null,
    "hits": []
  },
  "aggregations": {
    "1": {
      "value": 0
    }
  }
}

I will keep an eye on this topic, but I feel like it's something more related to Logstash, so I moved the topic to the Logstash forum.

There's a known issue when loading Kibana dashboards between a couple of 7.x versions (I don't remember exactly which ones right now), but it's unrelated to whether Logstash is used. That said, as you mentioned, you don't have this issue when you point Metricbeat directly at Elasticsearch, so I'm leaning towards an issue unrelated to Metricbeat itself.

Can you provide the mapping you are using? The error is related to this: https://www.elastic.co/guide/en/elasticsearch/reference/current/fielddata.html
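If the full mapping is too large to paste, the mapping for just the problematic field is enough. You can fetch it with the get-field-mapping API (index name taken from the shard failure above):

GET metricbeat-7.3.0/_mapping/field/host.name

If host.name comes back as "type": "text" rather than "keyword", the index was created without the Metricbeat template, which would explain the fielddata error.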

Mario, what's the best way to attach the mappings? Trying to add even one of them puts me over the character limit.

I think I figured it out:


After reading through the fielddata link, I'm going to try a test using that workaround. Outside of the heap issue, are there any other potential problems I should be aware of when implementing this?

It does seem like a Logstash issue, so any help from the Logstash developers would be appreciated. We currently have metricbeat-7.0.0 on a multitude of servers, a few of which are having the memory leak issue, and we'd like to move all of them to 7.5 to stop it, but as of right now, if we do that, the dashboards aren't going to work correctly.

After enabling fielddata with the command below, the Number of hosts [Metricbeat System] ECS visualization in the [Metricbeat System] Overview ECS dashboard populates correctly. The CPU Usage Gauge and Disk used [Metricbeat System] ECS visualizations are still not populating. I've also noticed that the shard-failure error I was originally getting has gone away after deleting the two indices it was failing on when trying to populate the dashboards.

PUT metricbeat-7.5.0/_mapping
{
  "properties": {
    "host.name": {
      "type": "text",
      "fielddata": true
    }
  }
}

Update: I think this happens if you don't load the template into Elasticsearch before or during the first connection of a new Beat version. We ran into the same issue when we set up a new Beat to go through Logstash, which meant it didn't load the template first.
The only way I was able to get the dashboards to populate correctly was to delete all traces of any index, index pattern, template, etc. belonging to a Beat version higher than 7.0.0. I have a theory I'm going to test: the problem was due to the fact that I accidentally put the Metricbeat mapping in as an index at some point when I was loading the template manually. After going through my steps I have no idea why I would have done this, but I feel like I did at some point. If I find the time I'm going to try to recreate this issue and update this post.
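If that theory holds, loading the template manually from one of the Beat hosts before pointing it at Logstash should avoid the problem. Something along these lines worked for me on 7.x (URL and credentials are placeholders; on some earlier 7.x versions the flag is --template instead of --index-management):

metricbeat setup --index-management \
  -E output.logstash.enabled=false \
  -E 'output.elasticsearch.hosts=["https://abc.gov:9243"]' \
  -E output.elasticsearch.username=xxxxx \
  -E output.elasticsearch.password=xxxxx

The -E overrides temporarily disable the Logstash output so that setup can talk to Elasticsearch directly; they don't change the config file on disk.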

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.