Hello,
I have successfully set up an Elasticsearch license and started working on a proof of concept (POC). Following Elastic's documentation, I have configured a self-managed Elastic Stack 8.7 deployment. The following resources were helpful:
Docker documentation: Install Elasticsearch with Docker | Elasticsearch Guide [8.7] | Elastic
Secure connections in Fleet: Configure SSL/TLS for self-managed Fleet Servers | Fleet and Elastic Agent Guide [master] | Elastic
Additionally, I have set up a Fleet Server. As part of the configuration, I added the Kubernetes integration to the agent policy via the Kibana UI. This generates a manifest file that has to be applied to the Kubernetes (K8s) cluster.
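For reference, I applied the generated manifest with kubectl along these lines (the file name below is just what I saved the download as):

```sh
# Apply the Fleet-generated Elastic Agent manifest
kubectl apply -f elastic-agent-managed-kubernetes.yaml

# Check that the agent DaemonSet pods came up
# (namespace and label taken from the generated manifest)
kubectl -n kube-system get pods -l app=elastic-agent
```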
Everything is functioning properly. However, when the Elastic Agent on Kubernetes ships logs to Elasticsearch, they all go to the default data stream, "logs-kubernetes.container_logs-default", whose current backing index is ".ds-logs-kubernetes.container_logs-default-2023.05.22-000002".
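The data stream and its backing indices are visible in Kibana Dev Tools; the backing index above follows the usual .ds-&lt;data-stream&gt;-&lt;yyyy.MM.dd&gt;-&lt;generation&gt; naming:

```
# List the data stream and its current backing indices
GET _data_stream/logs-kubernetes.container_logs-default
```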
I would like to know how to configure the Kubernetes integration policy in Kibana so that logs from different Kubernetes containers are routed to separate, per-service data streams (and therefore separate backing indices).
For instance, let's consider two microservices running as pods on the Kubernetes cluster:
my-service-1
my-service-2
I would like the logs of my-service-1 to be sent to a separate data stream called "ds-my-service-1", with its backing indices named "ds-log-my-service-1-". Similarly, the logs of my-service-2 should go to a separate data stream named "ds-my-service-2", with backing indices named "ds-log-my-service-2-".
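To make the goal concrete, the kind of per-pod hint I was hoping to find looks roughly like the sketch below. The annotation name here is purely my guess to illustrate the idea; I have not found it in the docs:

```yaml
# Hypothetical sketch only: the elastic.co/dataset annotation is my assumption,
# not a documented API. It illustrates the per-service routing I am after.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service-1
spec:
  selector:
    matchLabels:
      app: my-service-1
  template:
    metadata:
      labels:
        app: my-service-1
      annotations:
        elastic.co/dataset: my-service-1   # assumed routing hint (unverified)
    spec:
      containers:
        - name: my-service-1
          image: example.registry/my-service-1:latest   # placeholder image
```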
I would appreciate guidance on how to achieve this configuration. Thank you.
I did go through the following link, but I didn't exactly understand how to apply the configuration I am looking for.