Original install method (e.g. download page, yum, deb, from source, etc.) and version: Elastic Cloud
Is there anything special in your setup? No
Description of the problem including expected versus actual behavior. Please include screenshots (if relevant):
A Spring Boot Java application including Micrometer has a Gauge meter defined to monitor a timestamp. Gauges are correctly auto-detected by the APM Agent, but their values are written to Elasticsearch as floats, so the field is dynamically mapped in the apm-7.10.0-metric-* indices as a float, which prevents using date-based functions on that field.
For what it is worth, Micrometer does support a TimeGauge which APM could use to serialize the value as a Long instead, but I do not believe that changing our metric to a TimeGauge would make any difference, because else if (meter instanceof Gauge) would still end up writing it as a Float.
Steps to reproduce:
Define a custom metric as Meter of type Gauge (or TimeGauge I presume)
Use that Meter to monitor a timestamp as epoch milliseconds of type Long (new Date().getTime())
Observe APM auto-maps the metric as a Float
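For reference, a minimal sketch of the setup (the metric name scheduler.next_run.timestamp and the use of SimpleMeterRegistry are just placeholders for illustration; in the real application the registry is the one auto-configured by Spring Boot):

```java
import java.util.Date;
import java.util.function.Supplier;

import io.micrometer.core.instrument.Gauge;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

public class TimestampGaugeRepro {
    public static void main(String[] args) {
        // In a Spring Boot app the MeterRegistry would normally be injected;
        // a SimpleMeterRegistry keeps this sketch self-contained.
        MeterRegistry registry = new SimpleMeterRegistry();

        // Hypothetical metric: any supplier of epoch milliseconds shows the behavior.
        Supplier<Number> nextRunEpochMillis = () -> new Date().getTime();

        Gauge.builder("scheduler.next_run.timestamp", nextRunEpochMillis)
             .description("Epoch millis of the next scheduled run")
             .register(registry);

        // The gauge value is read back as a double; this is what the agent
        // reports and what Elasticsearch dynamically maps as a float.
        System.out.println(registry.get("scheduler.next_run.timestamp").gauge().value());
    }
}
```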
Errors in browser console (if relevant): N/A
Provide logs and/or server output (if relevant): N/A
Do you have a bit more context on your usage of a timestamp as a metric?
What does this timestamp value represent in your application?
How do you use it (or plan to use it) in Kibana?
Our current definition of metrics does not really fit your usage here, as metric values are captured as floats, and we also have a few metric fields like count that are mapped as long. We don't (as far as I am aware) have a simple way to capture a raw long value and report it as a metric.
Depending on your use-case, you might thus need to do one of the following:
transform your observed timestamp into an observable metric: for example, if the timestamp represents the time of the last database update, you could instead track the number of database updates over a period of time (see the sketch after this list).
build an ingest pipeline that transforms the received agent metric from float into long to make it fit your use-case.
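As a rough sketch of the first option, assuming a hypothetical onDatabaseUpdate() hook and metric name db.updates (neither is part of any existing API):

```java
import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

public class DatabaseUpdateMetrics {
    private final Counter databaseUpdates;

    public DatabaseUpdateMetrics(MeterRegistry registry) {
        // Counting events instead of exposing the last-update timestamp lets
        // Kibana show a rate ("updates per minute") with the existing mapping.
        this.databaseUpdates = Counter.builder("db.updates")
                .description("Number of database updates")
                .register(registry);
    }

    // Hypothetical hook: call this wherever the application performs an update.
    public void onDatabaseUpdate() {
        databaseUpdates.increment();
    }

    public static void main(String[] args) {
        DatabaseUpdateMetrics metrics = new DatabaseUpdateMetrics(new SimpleMeterRegistry());
        metrics.onDatabaseUpdate();
    }
}
```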
Imagine from a dashboard perspective wanting a Gauge ("indicates the status of a metric") element that is simply a timestamp of when the next something is scheduled to occur. A Counter won't make sense.
Micrometer provides the TimeGauge for this purpose:
A specialized gauge that tracks a time value
Although it monitors a java.lang.Number, its interface still only exposes the value as a double.
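A small sketch of what such a TimeGauge registration looks like (the metric name and the AtomicLong holder are hypothetical); note the value is still read through a ToDoubleFunction:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.TimeGauge;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

public class NextRunTimeGauge {
    public static void main(String[] args) {
        MeterRegistry registry = new SimpleMeterRegistry();

        // Hypothetical state: epoch millis of when the next run is scheduled.
        AtomicLong nextRunEpochMillis = new AtomicLong(System.currentTimeMillis() + 60_000);

        // TimeGauge carries a TimeUnit, but its value is still read via a
        // ToDoubleFunction, so consumers only ever see a double.
        TimeGauge.builder("scheduler.next_run", nextRunEpochMillis,
                          TimeUnit.MILLISECONDS, AtomicLong::doubleValue)
                 .description("Epoch millis of the next scheduled run")
                 .register(registry);

        System.out.println(
            registry.get("scheduler.next_run").timeGauge().value(TimeUnit.MILLISECONDS));
    }
}
```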
My point is, if someone were to go through the trouble of using a TimeGauge, shouldn't the APM Agent be smart enough to realize this metric has something to do with time and map it accordingly? In particular as a Date type, because the value really is a long representing milliseconds since the epoch.
I grant that a TimeGauge doesn't necessarily have to be a timestamp/instant and could simply be a measure of time.
nextProcessingStartTime: <timestamp> vs lastProcessingDuration: <milliseconds>
But I'd argue that if one wanted lastProcessingDuration, an ordinary Gauge would suffice rather than a TimeGauge. So TimeGauge would seem to me to be better suited to the timestamp case.
While I understand your use-case, we haven't had many requests for similar behavior so far, so it's hard to conclude what exactly to do.
Also, changing this would likely be a breaking change, as the metric would get a different mapping in the index, which makes it not really trivial.
Also, I agree with you that mapping every TimeGauge to a long value would make sense, since it would cover both the timestamp and duration-in-milliseconds use-cases; however, doing so does not really fit our current metrics model. We probably lack a proper way to have metrics that are mapped as long instead of always being mapped as double.