How to add application metrics to Elastic APM dashboards

Elasticsearch Version: 7.3.1
APM Server Version: 7.3.1
APM Client Version: APM Java Agent: 1.x (current)
APM Agent language and version: Java 1.8

We are evaluating Elastic APM for our microservice stack, and we are also evaluating the Elastic Stack for application metrics, application log aggregation, and Kubernetes infrastructure monitoring.
Our microservices are Spring Boot based, and we use Micrometer for application metrics. Micrometer currently pushes the metrics to Elasticsearch. We are trying to build dashboards around the application metrics and make them available from the prebuilt APM metrics dashboards in Kibana.

I am looking for help and best practices on the following questions.

  1. How can application metrics generated by frameworks like Micrometer be correlated in Kibana?
  2. How can we build dashboards for application metrics and make them available from the APM metrics dashboards that ship with Kibana?
  3. Does Elastic recommend using java-ecs-logging to correlate logs and APM traces?
    https://github.com/elastic/java-ecs-logging
  4. java-ecs-logging seems to have longer JSON keys. Will this impact the log index size?

Thanks for the support.

Hi and thanks for checking out Elastic APM :+1:

You could use common tags that correspond to the fields Elastic APM sets, for example `service.name` and `service.environment`. We also have a longstanding open issue for better integration with Micrometer. One idea is that if the agent is active, it automatically registers a registry which sends the metrics to the APM Server.
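As a minimal sketch of that common-tags approach with Spring Boot and Micrometer (the service name and environment values below are illustrative; the bracket syntax preserves the dots in the tag keys so they line up with the field names the APM agent uses):

```yaml
# application.yml -- illustrative values, not a definitive setup.
# The bracketed keys keep "service.name" / "service.environment" as
# literal tag names instead of being split into nested properties.
management:
  metrics:
    tags:
      "[service.name]": order-service
      "[service.environment]": production
```

This assumes your Micrometer registry ships tags to Elasticsearch as fields of the same name, so the metric documents can be matched against APM data on those fields.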

You can build your own visualizations and dashboards with the Time Series Visual Builder or the regular Kibana visualizations. There is currently no way, however, to include those dashboards in the metrics tab of the APM app in Kibana.

Yes, that's what I made it for :slight_smile:

Could you clarify what you mean by longer JSON keys? Longer compared to what? Whether a long field name in general takes up more storage in Elasticsearch than a shorter one, I'm not too sure. Maybe @spinscale knows?

Long names like `log.level` compared to shorter keys like `ll`, or `message` compared to `msg`, etc. Also, java-ecs-logging does not seem to offer any mechanism to populate structured arguments as part of the JSON: https://github.com/logstash/logstash-logback-encoder/blob/master/src/main/java/net/logstash/logback/argument/StructuredArguments.java

It does for Log4j2: https://github.com/elastic/java-ecs-logging/blob/master/log4j2-ecs-layout/README.md#structured-logging, but unfortunately Logback is not as flexible.
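For reference, a rough sketch of structured logging with Log4j2's `StringMapMessage`, which the ECS layout can serialize as individual JSON fields (the class name and field names here are illustrative, not prescribed by java-ecs-logging):

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.message.StringMapMessage;

public class StructuredLogExample {
    private static final Logger logger = LogManager.getLogger(StructuredLogExample.class);

    public static void main(String[] args) {
        // With the ECS layout configured, each map entry becomes its own
        // JSON field, similar to logstash-logback-encoder's StructuredArguments.
        logger.info(new StringMapMessage()
                .with("message", "order placed")
                .with("trade.id", "42")); // "trade.id" is an illustrative field name
    }
}
```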

What you could do is add an MDC entry before logging and remove it afterwards. The StructuredArguments are a bit of a hack, but they seem to work well. Maybe there is a way to support them within java-ecs-logging rather than trying to reinvent them.
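A minimal sketch of that MDC pattern with SLF4J (the `trade.id` key and the surrounding class are illustrative; this assumes an ECS-style JSON encoder that emits MDC entries as JSON fields):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class MdcExample {
    private static final Logger logger = LoggerFactory.getLogger(MdcExample.class);

    static void logOrder(String tradeId) {
        // Put the value into the MDC so the JSON layout can emit it as its
        // own field, then remove it so it doesn't leak into later log lines.
        MDC.put("trade.id", tradeId);
        try {
            logger.info("order placed");
        } finally {
            MDC.remove("trade.id");
        }
    }

    public static void main(String[] args) {
        logOrder("42");
    }
}
```

The try/finally is the important part: without it, an exception between `put` and `remove` would attach `trade.id` to every subsequent log line on that thread.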

There are some small factors where this plays a role, but in the end the gains are negligible compared to the maintenance horror you introduce by forcing everyone to know every abbreviation. The cluster state is a bit bigger due to the different mapping, and the compressed `_source` is a bit bigger.

Changing field names is not where you gain size advantages. Making unneeded fields non-indexed, learning about the `index_options` of the `text` field type, and properly configuring the correct numeric data types are the things that will have a long-term impact, in my experience.
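To illustrate those levers (the index layout and field names below are made up for the example), an explicit mapping might look like:

```json
{
  "mappings": {
    "properties": {
      "message":        { "type": "text", "index_options": "docs" },
      "debug.payload":  { "type": "keyword", "index": false },
      "response.bytes": { "type": "integer" }
    }
  }
}
```

Here `"index_options": "docs"` stops storing term frequencies and positions for `message`, `"index": false` keeps `debug.payload` retrievable in `_source` but not searchable, and `integer` instead of the default `long` is enough when the values fit the smaller range.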


This topic was automatically closed 20 days after the last reply. New replies are no longer allowed.