I have a Spring Boot application running in a Docker container. I have set up Filebeat, and I can read my dockerized Spring Boot application's logs from the Kibana Logs Stream view:
More useful info on the matter can be found in our application logs guide and in the agent's docs.
In short, you can now have your logs automatically reformatted to Elastic Common Schema (ECS), and as of the latest APM agent version you can even get your logs indexed without Filebeat, although you may need to wait for 8.7 for the full log-sending capability. @Sylvain_Juge does this intake endpoint already exist in 8.6, or will it only be added in 8.7?
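For reference, a minimal sketch of how those two agent options can be set when starting a dockerized Spring Boot app. The agent jar path, service name, and APM Server URL below are placeholders for your own setup, not values taken from this thread:

```
# Sketch: attach the Elastic APM Java agent, reformat logs to ECS,
# and send them directly to APM Server (no Filebeat needed).
# Paths, service name, and server URL are placeholders.
java -javaagent:/opt/elastic-apm-agent.jar \
     -Delastic.apm.service_name=my-spring-boot-app \
     -Delastic.apm.server_url=http://apm-server:8200 \
     -Delastic.apm.log_ecs_reformatting=override \
     -Delastic.apm.log_sending=true \
     -jar /app/app.jar
```

The same options can also be passed as environment variables in the container (uppercase with an `ELASTIC_APM_` prefix) if you prefer not to touch the JVM command line.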
I already use the log_sending parameter, but I don't see any logs in Kibana... I would prefer to use your solution, so if you have any idea why it doesn't work with the log_sending parameter, please let me know.
In the meantime, I will try the solution described in the document sent by @rishikeshr.
The log-sending endpoint is available as of 8.6.0; prior to that, Filebeat was the only option to capture the logs.
For Java, we now have examples of the most common log ingestion workflows in our contrib repo.
If you want to keep Filebeat for log ingestion, I would suggest just using elastic.apm.log_ecs_reformatting=override and letting Filebeat capture the standard output of the container (which I am assuming you are currently doing).
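If it helps, here is a minimal sketch of a Filebeat configuration that picks up container stdout and parses the ECS JSON produced by the reformatting. The container log path and Elasticsearch host are assumptions for a typical Docker host, not values from this thread:

```yaml
# Sketch: read container stdout (where the ECS-reformatted JSON logs end up)
filebeat.inputs:
  - type: container
    paths:
      - /var/lib/docker/containers/*/*.log

processors:
  # enrich events with container/image metadata
  - add_docker_metadata: ~
  # parse the ECS JSON emitted by log_ecs_reformatting=override
  - decode_json_fields:
      fields: ["message"]
      target: ""
      overwrite_keys: true

output.elasticsearch:
  hosts: ["http://elasticsearch:9200"]   # placeholder
```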
If you want to use log_sending, then we need to investigate a bit here: do you have anything visible in the general log stream (not the per-service one in your screenshot)? Also, do you see any log documents in Discover?
I missed it in the original comment: logs do seem to be sent and stored in ECS format with the proper service.name, and those captured within transactions also contain transaction.id and trace.id.
So what we need to understand is why this correlation doesn't show up in the UI.
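One quick way to double-check the correlation fields from Discover is a KQL query over the logs index; the service name below is a placeholder for your own:

```
service.name : "my-spring-boot-app" and trace.id : *
```

If documents match, the fields are indexed correctly and the issue is likely on the UI/correlation side rather than ingestion.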
Let me check