Average calculation problem (rounds off to 0) on CPULoad via JMX

Hi,

I've installed an ELK stack on Ubuntu, and I'm trying to monitor some data from a Java app via JMX (I followed this tutorial).

When I try to create a visualization for my CPU load, the average always equals 0.

However, Kibana seems to receive the correct values.

I think it may be due to the type of the data, which doesn't seem to be float, but I don't know how or where to change it.
Is this a Kibana or Elasticsearch misconfiguration?

Thanks

Hi, a little bump here.

I still need some help on this. All my other data works fine; only this metric has a problem, and I have no idea where it comes from. Maybe Kibana rounds off metric_value_number, but I don't know how or where.
I'm still discovering ELK.

Edit:
While trying different Time Ranges, it displayed something for CPU_Load, but I can only see a value when I select Last 7 days as the Time Range. Weird.

(I changed the visualization to Gauge, but it had no effect.)

It seemed to work once, but I have no idea why. I still have data.


Hi Chuck,

that really sounds like a strange one. Could you copy the "Request" and "Response" from the debug panel of the visualization?

And could you switch the metric aggregation to "Unique Count" on the metric_value_number field, to check how many different values there actually are?

Cheers,
Tim

Request

{
  "size": 0,
  "aggs": {
    "2": {
      "date_histogram": {
        "field": "@timestamp",
        "interval": "1h",
        "time_zone": "Europe/Berlin",
        "min_doc_count": 1
      },
      "aggs": {
        "1": {
          "avg": {
            "field": "metric_value_number"
          }
        }
      }
    }
  },
  "version": true,
  "query": {
    "bool": {
      "must": [
        {
          "query_string": {
            "query": "type:jmx AND metric_path:jvm.OperatingSystem.SystemCpuLoad",
            "analyze_wildcard": true
          }
        },
        {
          "range": {
            "@timestamp": {
              "gte": 1508166185803,
              "lte": 1508770985803,
              "format": "epoch_millis"
            }
          }
        }
      ],
      "must_not": []
    }
  },
  "_source": {
    "excludes": []
  },
  "highlight": {
    "pre_tags": [
      "@kibana-highlighted-field@"
    ],
    "post_tags": [
      "@/kibana-highlighted-field@"
    ],
    "fields": {
      "*": {
        "highlight_query": {
          "bool": {
            "must": [
              {
                "query_string": {
                  "query": "type:jmx AND metric_path:jvm.OperatingSystem.SystemCpuLoad",
                  "analyze_wildcard": true,
                  "all_fields": true
                }
              },
              {
                "range": {
                  "@timestamp": {
                    "gte": 1508166185803,
                    "lte": 1508770985803,
                    "format": "epoch_millis"
                  }
                }
              }
            ],
            "must_not": []
          }
        }
      }
    },
    "fragment_size": 2147483647
  }
}

Response

{
  "took": 7,
  "timed_out": false,
  "_shards": {
    "total": 15,
    "successful": 15,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": 5266,
    "max_score": 0,
    "hits": []
  },
  "aggregations": {
    "2": {
      "buckets": [
        {
          "1": {
            "value": 0
          },
          "key_as_string": "2017-10-19T17:00:00.000+02:00",
          "key": 1508425200000,
          "doc_count": 15
        },
        {
          "1": {
            "value": 0
          },
          "key_as_string": "2017-10-20T09:00:00.000+02:00",
          "key": 1508482800000,
          "doc_count": 10
        },
        {
          "1": {
            "value": 0
          },
          "key_as_string": "2017-10-20T10:00:00.000+02:00",
          "key": 1508486400000,
          "doc_count": 360
        },
        {
          "1": {
            "value": 0
          },
          "key_as_string": "2017-10-20T11:00:00.000+02:00",
          "key": 1508490000000,
          "doc_count": 359
        },
        {
          "1": {
            "value": 0
          },
          "key_as_string": "2017-10-20T12:00:00.000+02:00",
          "key": 1508493600000,
          "doc_count": 360
        },
        {
          "1": {
            "value": 0
          },
          "key_as_string": "2017-10-20T13:00:00.000+02:00",
          "key": 1508497200000,
          "doc_count": 356
        },
        {
          "1": {
            "value": 0
          },
          "key_as_string": "2017-10-20T14:00:00.000+02:00",
          "key": 1508500800000,
          "doc_count": 360
        },
        {
          "1": {
            "value": 0
          },
          "key_as_string": "2017-10-20T15:00:00.000+02:00",
          "key": 1508504400000,
          "doc_count": 359
        },
        {
          "1": {
            "value": 0
          },
          "key_as_string": "2017-10-20T16:00:00.000+02:00",
          "key": 1508508000000,
          "doc_count": 360
        },
        {
          "1": {
            "value": 0.00967741935483871
          },
          "key_as_string": "2017-10-20T17:00:00.000+02:00",
          "key": 1508511600000,
          "doc_count": 310
        },
        {
          "1": {
            "value": 0
          },
          "key_as_string": "2017-10-23T09:00:00.000+02:00",
          "key": 1508742000000,
          "doc_count": 31
        },
        {
          "1": {
            "value": 0
          },
          "key_as_string": "2017-10-23T10:00:00.000+02:00",
          "key": 1508745600000,
          "doc_count": 360
        },
        {
          "1": {
            "value": 0
          },
          "key_as_string": "2017-10-23T11:00:00.000+02:00",
          "key": 1508749200000,
          "doc_count": 360
        },
        {
          "1": {
            "value": 0
          },
          "key_as_string": "2017-10-23T12:00:00.000+02:00",
          "key": 1508752800000,
          "doc_count": 360
        },
        {
          "1": {
            "value": 0
          },
          "key_as_string": "2017-10-23T13:00:00.000+02:00",
          "key": 1508756400000,
          "doc_count": 360
        },
        {
          "1": {
            "value": 0
          },
          "key_as_string": "2017-10-23T14:00:00.000+02:00",
          "key": 1508760000000,
          "doc_count": 284
        },
        {
          "1": {
            "value": 0
          },
          "key_as_string": "2017-10-23T15:00:00.000+02:00",
          "key": 1508763600000,
          "doc_count": 284
        },
        {
          "1": {
            "value": 0
          },
          "key_as_string": "2017-10-23T16:00:00.000+02:00",
          "key": 1508767200000,
          "doc_count": 360
        },
        {
          "1": {
            "value": 0
          },
          "key_as_string": "2017-10-23T17:00:00.000+02:00",
          "key": 1508770800000,
          "doc_count": 18
        }
      ]
    }
  },
  "status": 200
}

For the count (seems normal):

Thanks for your help ! :wink:

Could you add one more screenshot, this time adding a sub-bucket aggregation of type "Split Series" with a terms aggregation on the metric_value_number field (and set the size to 10 or 20 or so, please)?

Like this ?

Ah sorry, I missed half of the information :frowning:

On the y-axis you should just draw the count (and it looks like you selected a pretty small time range now, Last 15 minutes?).

Cheers,
Tim

My bad, end of the day, ahah.

A small time range shouldn't change anything, because the Java app is running, so I should be receiving data; but I've put it back to 7 days.

Okay, that is actually very weird behavior, since the bucket aggregation also says that there are mostly documents with value 0 and just a very few with 1, which could still average out, at a given precision, to 0.

I think I need to dig into that issue a bit, so you might not get a response from me today anymore, but I will respond in the next few days.


My guess about what could cause that issue is that you have the metric_value_number field indexed as an integer instead of a float. In that case the _source of the documents, which is what Discover shows, will still contain the original value, but whenever you search or aggregate on that field it behaves like an integer field.
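To illustrate the effect (outside of Elasticsearch itself): if fractional CPU-load values are coerced to integers at index time, nearly every sample truncates to 0, so their average is 0 as well. A minimal Python sketch with made-up sample values:

```python
# Hypothetical CPU load samples as JMX reports them (fractions between 0 and 1)
samples = [0.12, 0.08, 0.31, 0.05, 0.27]

# What an integer-mapped field effectively aggregates over: truncated values
as_long = [int(v) for v in samples]

# The truncated values all become 0, so their average is 0,
# even though the true average of the original samples is not.
avg_long = sum(as_long) / len(as_long)
avg_float = sum(samples) / len(samples)

print(as_long)    # [0, 0, 0, 0, 0]
print(avg_long)   # 0.0
print(avg_float)  # ~0.166
```

This matches the symptom in the response above: buckets full of documents, yet an average of 0 almost everywhere.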

Could you please check your mappings (e.g. in the Dev Tools Console via GET /your_index_name/_mappings) and see what type the metric_value_number field has?

That's it (hehe, just like my guess in the first post ^^): metric_value_number is a long.

How can I change it to double or float?

And for my own knowledge, how do I get this result in the console? When I tried the GET method, I got an error ^^

Thanks a lot for your help ! :slight_smile::sunny:

Try removing the _search part and the query body from your request.
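In other words, a mapping request in the Dev Tools Console takes no body at all; something like this should work (the index name is a placeholder):

```
GET /your_index_name/_mappings
```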

To update the mapping you will basically need to write the new mapping to a new index (see the mapping documentation) and then reindex your data into that new index (see e.g. this reindex blog post). After that you should be able to achieve what you want.
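A rough sketch of those two steps in the Dev Tools Console. The index names and the mapping type name ("doc") are placeholders, and you should copy the rest of your existing mapping over, changing only this field's type:

```
PUT /your_new_index
{
  "mappings": {
    "doc": {
      "properties": {
        "metric_value_number": { "type": "double" }
      }
    }
  }
}

POST /_reindex
{
  "source": { "index": "your_old_index" },
  "dest":   { "index": "your_new_index" }
}
```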

Cheers,
Tim


Perfect

Once again, thanks for your help :slight_smile:

I found this on StackOverflow, in case someone has the same problem and wants to change the type / reindex step by step :wink:

Hi,

I'm back with a little question: how can I change the type permanently?

I saw this in the logs

And each day, because the index name changes with the date, the field is reset from double to long.
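Since a new index is created every day, the mapping has to be applied automatically to each new index; Elasticsearch does that via an index template, which matches new indices by name pattern. A sketch, where the template name, index pattern, and type name are all placeholders that depend on your setup:

```
PUT /_template/jmx_metrics
{
  "template": "logstash-*",
  "mappings": {
    "doc": {
      "properties": {
        "metric_value_number": { "type": "double" }
      }
    }
  }
}
```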

Thanks

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.