Metricbeat doesn't fill all available fields (e.g. system.memory.*)

Hi there,

We have just started with the ELK stack to monitor our servers. The servers are running on Debian Jessie (8.8) and ELK Stack 5.5.

On most of the machines Metricbeat works as expected, but I have an issue with some servers: not all system.x.y fields are filled in Kibana. Only "system.process.x.y" is being filled with data; other fields (e.g. system.memory.*) are empty / unused in Kibana.

Both groups of servers are installed with an automatic installation tool, so the installed software is nearly identical.

The installed OS is
Linux 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2 (2017-04-30) x86_64 GNU/Linux

Metricbeat is
metricbeat 5.5.0

metricbeat.yml (the same on all servers)

#==========================  Modules configuration ============================
metricbeat.modules:

#------------------------------- System Module -------------------------------
- module: system
  metricsets:
    - cpu
    - load
    - core
    - diskio
    - filesystem
    - fsstat
    - memory
    - network
    - process

    # Sockets (linux only)
    #- socket
  enabled: true
  period: 30s
  processes: ['.*']

#================================ Outputs =====================================
#----------------------------- Logstash output --------------------------------

output.logstash:
  # The Logstash hosts
  hosts: ["123.123.123.123:5044"]

I have started Metricbeat in debug mode to collect logs on both groups of servers (snippet from a "non-working" server below). The output looks to me as if Metricbeat collects data for fields like system.memory.actual.free, but they are not visible in Kibana, or maybe they were not sent via Logstash to Elasticsearch.

root@server: bin/metricbeat -c /etc/metricbeat/metricbeat.yml -e -d "*" 2>&1 | tee /root/mbeat-outp-root.log
...

  "system": {
    "memory": {
      "actual": {
        "free": 1068235440128,
        "used": {
          "bytes": 16046555136,
          "pct": 0.014800
        }
      },
      "free": 1064366252032,
      "swap": {
        "free": 17179865088,
        "total": 17179865088,
        "used": {
          "bytes": 0,
          "pct": 0.000000
        }
      },
      "total": 1084281995264,
      "used": {
        "bytes": 19915743232,
        "pct": 0.018400
      }
    }
....
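As a sanity check on the debug output above: the reported percentages do match the raw byte counts, which suggests the collection side is working fine. A small Python sketch over the numbers copied from the snippet:

```python
# Values copied from the Metricbeat debug snippet above.
total = 1084281995264          # system.memory.total
used = 19915743232             # system.memory.used.bytes
actual_used = 16046555136      # system.memory.actual.used.bytes
actual_free = 1068235440128    # system.memory.actual.free

# Metricbeat reports pct rounded to four decimal places.
print(round(used / total, 4))              # → 0.0184 (matches used.pct)
print(round(actual_used / total, 4))       # → 0.0148 (matches actual.used.pct)
print(actual_free == total - actual_used)  # → True (actual.free is consistent)
```

So the event itself is internally consistent; the problem is more likely downstream.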

What is the best way to debug this issue? Is there an easy way to check the data flow?
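One way to isolate the Beat from the Logstash/Elasticsearch side is to temporarily swap the Logstash output for the console output in metricbeat.yml (only one output may be enabled at a time) and inspect the emitted events directly; a sketch:

```yaml
#================================ Outputs =====================================
# Temporary, for debugging only: print events to stdout instead of Logstash.
# Comment out output.logstash while this is enabled.
output.console:
  pretty: true
```

If the full system.memory events show up here, the data loss (or filtering) is happening after the Beat.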

It drives me crazy that one group of servers sends the complete dataset via Logstash to the Elasticsearch instance while another group of servers sends only a part of the dataset ...

Best regards
Olaf

It's me again ...

I have just found that Metricbeat is most likely not the cause of this issue on the "non-working" servers.

  • Fields like "system.process..." from the non-working group of servers are available in Kibana.

  • The other "system" fields (e.g. system.memory.* or system.load.*) are simply hidden.

If I go to "Available Fields" in Kibana and uncheck "Hide Missing Fields", I can open for example "system.memory.swap.free". The message I get is "This field is present in your elasticsearch mapping but not in any documents ...", yet this field is filled with data. The same goes for the other invisible "system" fields.

So my issue is: why are all "system" fields except "system.process." hidden for one group of servers?
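One way to check on the Elasticsearch side whether such documents really exist, independent of Kibana's field list, is an `exists` query against the Metricbeat indices (the hostname below is a placeholder; run it e.g. via curl or the Console against `metricbeat-*/_search`):

```json
{
  "size": 1,
  "query": {
    "bool": {
      "filter": [
        { "term":   { "beat.hostname": "host-xy" } },
        { "exists": { "field": "system.memory.total" } }
      ]
    }
  }
}
```

A non-zero hit count would prove the memory documents are indexed, which would point at Kibana's field display rather than the ingest pipeline.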

Thanks & best regards
Olaf

Hi @elkinspace,

I'm wondering, do you have any search filter applied when checking the missing fields? The message seems to be unclear in some cases (https://github.com/elastic/kibana/issues/2180); the field would be missing from the results of the current search, not from the whole index.

Hi exekias,

Thanks a lot for your reply.

Yes, I have applied one filter when checking for the "missing" fields. The filter is "beat.hostname: host-xy", and I added it just to separate this host from the other "working" servers.

I have added 2 screenshots from 2 servers to explain the issue which I don't understand.

The first one from a cluster server (40 cores, 64GB RAM, debian 8.8) with the fields shown as I would expect (metricset.rtt followed by system.core.id):

The next one from the server (64 cores, 1TB Ram, debian 8.8) with the "missing" fields (metricset.rtt followed by system.process.cmdline):

And now the strange thing I don't understand:

If I add a filter like "beat.hostname: ***nder9 AND system.load.1: >0", the list of fields changes and I see the following:

The search response gives me the following output:

{
  "took": 33,
  "hits": {
    "hits": [
      {
        "_index": "metricbeat-2017.07.24",
        "_type": "metricsets",
        "_id": "AV1zaKUFNe3rK6uVuMwd",
        "_version": 1,
        "_score": null,
        "_source": {
          "system": {
            "load": {
              "1": 0.02,
              "5": 0.04,
              "15": 0,
              "norm": {
                "1": 0.0003,
                "5": 0.0006,
                "15": 0
              }
            }
          },
          "@timestamp": "2017-07-24T07:03:43.095Z",
          "beat": {
            "hostname": "***der9",
            "name": "***der9",
            "version": "5.5.0"
          },
          "@version": "1",
          "host": "***nder9",
          "metricset": {
            "rtt": 4104,
            "module": "system",
            "name": "load"
          },
          "type": "metricsets",
          "tags": [
            "beats_input_raw_event"
          ]
        },
        "fields": {
          "@timestamp": [
            1500879823095
          ]
        },
        "highlight": {
          "beat.hostname": [
            "@kibana-highlighted-field@***der9@/kibana-highlighted-field@"
          ]
        },
        "sort": [
          1500879823095
        ]
      }, ...

The same happens when I add a filter with a field like "system.memory.whatever": the list of fields changes and now contains all system.memory.* fields. But on the "working" servers I always see all available fields. All checks and screenshots were made with the following settings in "Available Fields": Aggregatable: any, Searchable: any, Type: any, and "Hide missing fields" checked.
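If I understand the linked Kibana issue correctly, the "Available Fields" list in Discover is derived only from the documents sampled for the current search, so changing the filter changes which fields look "present". A tiny Python sketch of that behavior (with hypothetical sample documents, not the real ones):

```python
def visible_fields(hits):
    """Union of flattened field names present in the sampled hits,
    mimicking how a per-search field list would be derived."""
    def flatten(prefix, obj, out):
        for key, value in obj.items():
            path = f"{prefix}.{key}" if prefix else key
            if isinstance(value, dict):
                flatten(path, value, out)
            else:
                out.add(path)
        return out

    fields = set()
    for hit in hits:
        flatten("", hit["_source"], fields)
    return fields

# With a filter that happens to match only process documents,
# the memory/load fields vanish from the list:
process_doc = {"_source": {"system": {"process": {"pid": 1}}}}
load_doc = {"_source": {"system": {"load": {"1": 0.02}}}}
print(visible_fields([process_doc]))            # only system.process.pid
print(visible_fields([process_doc, load_doc]))  # both field groups appear
```

That would explain why adding "system.load.1: >0" to the filter suddenly makes the load fields appear: the sampled hits now include load documents.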

I am just a beginner with the ELK stack and I hope the explanation of my issue is understandable ...

Thanks for your help & regards
Olaf

I'm wondering, are all servers using the same version of Kibana & Elasticsearch?

I have only one central server running the ELK stack. It receives the input from all servers.

Elasticsearch is:

{
  "name" : "logsrv",
  "cluster_name" : "xyz-logging",
  "cluster_uuid" : "Yfy8pcbpTByWaKYBSNDfTQ",
  "version" : {
    "number" : "5.5.0",
    "build_hash" : "260387d",
    "build_date" : "2017-06-30T23:16:05.735Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.0"
  },
  "tagline" : "You Know, for Search"
}

and Kibana is at version 5.5.1

The Debian version is 8.8 and the kernel is 3.16.0-4-amd64

/BR
Olaf

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.