Kibana showing wrong histogram interval in output


(Varun) #1

I am writing a query to get a histogram from Metricbeat CPU metrics.

I tried the query below:

POST metricbeat/_search
{
  "size": 0,
  "query": {
    "bool": {
      "must": [
        { "range": { "@timestamp": { "gte": "05-03-2018", "lte": "05-03-2018", "format": "MM-dd-yyyy" } } },
        { "range": { "system.cpu.system.pct": { "gte": "0.0", "lte": "2.0" } } },
        { "match": { "metricset.name.keyword": "cpu" } }
      ]
    }
  },
  "aggs": {
    "count": {
      "date_histogram": {
        "field": "@timestamp",
        "interval": "1d",
        "format": "MM-dd-yyyy",
        "min_doc_count": 1
      },
      "aggs": {
        "PctVal": {
          "histogram": {
            "field": "system.cpu.system.pct",
            "interval": "0.25",
            "min_doc_count": 0
          }
        }
      }
    }
  }
}

The result I got is below:

{
  "took": 12,
  "timed_out": false,
  "_shards": {
    "total": 5,
    "successful": 5,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": 7307,
    "max_score": 0,
    "hits": []
  },
  "aggregations": {
    "count": {
      "buckets": [
        {
          "key_as_string": "05-03-2018",
          "key": 1525305600000,
          "doc_count": 7307,
          "PctVal": {
            "buckets": [
              {
                "key": 0,
                "doc_count": 7304
              },
              {
                "key": 0.25,
                "doc_count": 0
              },
              {
                "key": 0.5,
                "doc_count": 0
              },
              {
                "key": 0.75,
                "doc_count": 0
              },
              {
                "key": 1,
                "doc_count": 3
              }
            ]
          }
        }
      ]
    }
  }
}
  • I got all values between 0 and 1 as zero. Why is that? It's a float value field.
  • I know there are values in that range; I saw them in Kibana.
  • Somehow the system.cpu.system.pct field is being treated as an integer, not as a float.
  • I am using the default Metricbeat config file and the system module.

(Jaime Soriano) #2

What version of metricbeat are you using?
Did you run the metricbeat setup before starting to send metrics? You need to do it to have the correct mappings for your fields.
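
If it hasn't been run yet, the setup step is a one-off command run on the machine where Metricbeat is installed. A minimal sketch (the host is an assumption; adjust it for your environment):

metricbeat setup --template -E 'output.elasticsearch.hosts=["localhost:9200"]'

The --template flag loads only the index template, which is the part that controls field mappings; running metricbeat setup without flags also loads the Kibana dashboards.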


(Varun) #3

I am using version 6.2.4 of the ELK stack.
My config file is as below:

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]
  index: "metricbeat"

#==========================  Modules configuration ============================

metricbeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false


#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 1
  index.codec: best_compression

setup.template.name: "metricbeat"
setup.template.fields: "fields.yml"
setup.template.pattern: "metricbeat-*"


#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

(Varun) #4

Can anybody please help?
How can I solve it?


(Jaime Soriano) #5

Did you run metricbeat setup?


(Varun) #6

What do you mean by metricbeat setup?


(Varun) #7

I changed the configuration YML file as above, then loaded the template manually using PowerShell.


(Jaime Soriano) #8

OK, let's check the type of this field in the mapping. Make this request from the Kibana developer console:

GET /metricbeat-6.2.4-*/_mapping/field/system.cpu.system.pct

You should get something like:

{
  "metricbeat-6.2.4-2018.05.22": {
    "mappings": {
      "doc": {
        "system.cpu.system.pct": {
          "full_name": "system.cpu.system.pct",
          "mapping": {
            "pct": {
              "type": "scaled_float",
              "scaling_factor": 1000
            }
          }
        }
      }
    }
  }
}

The type should be scaled_float.


(Varun) #9
{
  "metricbeat": {
    "mappings": {
      "doc": {
        "system.cpu.system.pct": {
          "full_name": "system.cpu.system.pct",
          "mapping": {
            "pct": {
              "type": "long"
            }
          }
        }
      }
    }
  }
}

I got long.
How can I change it?


(Jaime Soriano) #10

Maybe the problem is in the template name: you configured the setup with the template pattern metricbeat-*, but your Elasticsearch output is storing events in the metricbeat index. That doesn't match the template's pattern, so even if the setup was done, the mapping is not being applied.

I'd recommend using the default index names. Is there any reason you need to change them?
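
One way to confirm the mismatch is to fetch the installed template and compare its index pattern against the actual index name. From the Kibana developer console:

GET _template/metricbeat

If the response shows a pattern like "metricbeat-*" while your data lives in an index named plain metricbeat, the template never applied to that index and fields fall back to dynamically mapped types.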


(Varun) #11

The default index name contains a timestamp, which I don't need.
That's why I changed the index.


(Jaime Soriano) #12

Having time-based index names is useful: for example, if you want to remove old data you can just delete the old indexes, and if you want to move old data to nodes with slower disks (the hot-warm architecture), you can just move the old indexes.
The default indexes also include version information, which is important because Beats of different versions may send events with different type mappings.
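
For instance, with the default daily pattern, dropping a whole day of old data is a single index deletion (the index name below is an assumption based on the default naming scheme):

DELETE metricbeat-6.2.4-2018.04.01

With a single shared index, the same cleanup would require a much more expensive delete-by-query over the documents.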


(Varun) #13

OK, thanks for the help.

Is there a way to update the template of an index after adding data?
Like changing a field from number to float?


(Jaime Soriano) #14

It is not possible to update the types of existing data; for that you need to create a new index with the proper mappings, and reindex.
In your case, if you want to keep your existing data, you can try to reindex directly from your metricbeat index to an index whose name matches the template pattern. For new data I'd recommend using the default names and patterns.
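
A reindex along those lines could look like the sketch below (the destination index name is an assumption; make sure its name matches a template with the correct mappings, or create it with those mappings first):

POST _reindex
{
  "source": { "index": "metricbeat" },
  "dest": { "index": "metricbeat-6.2.4-reindexed" }
}

After the reindex completes you can verify the field type on the new index with the same _mapping/field/system.cpu.system.pct request as before.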


(system) #15

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.