Hi all,
I noticed an issue with the standalone Elastic Agent shipping Kubernetes pod logs after upgrading the Agent version from 8.5.3 to 8.6.x.
We run the Elastic-Agent standalone on Kubernetes as a DaemonSet.
After deploying version 8.6.x (tested with 8.6.0 and 8.6.1), logs for some pods are no longer shipped to Elasticsearch. When I restart the affected Elastic Agents, they start working correctly again.
However, when an application pod is redeployed, the logs of the new pod are again not shipped until the agent is restarted manually.
I tested with both input types, filestream and logfile.
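For reference, the agents are deployed as a plain standalone DaemonSet, essentially following the reference manifest. A trimmed sketch of what that looks like (RBAC, the agent-policy ConfigMap mount, and the Elasticsearch credentials are omitted; names, namespace, and image tag are illustrative):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: elastic-agent
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: elastic-agent
  template:
    metadata:
      labels:
        app: elastic-agent
    spec:
      serviceAccountName: elastic-agent
      containers:
        - name: elastic-agent
          image: docker.elastic.co/beats/elastic-agent:8.6.1
          env:
            # The node name is used by the kubernetes provider to watch pods on this node only.
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          volumeMounts:
            # Container log files (and the symlinks in /var/log/containers) that the input reads.
            - name: varlog
              mountPath: /var/log
              readOnly: true
      volumes:
        - name: varlog
          hostPath:
            path: /var/log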
This is the agent policy inputs config we are using for the Kubernetes pod logs:
- id: kubernetes-application-container-logs
  name: kubernetes-application-container-logs
  revision: 1
  #type: filestream
  type: logfile
  use_output: default
  meta:
    package:
      name: kubernetes
      version: 1.31.2
  data_stream:
    namespace: applications
  streams:
    - id: kubernetes-application-logs-${kubernetes.pod.name}-${kubernetes.container.id}
      data_stream:
        dataset: kubernetes.container
        type: logs
      paths:
        - '/var/log/containers/*${kubernetes.container.id}.log'
      #prospector.scanner.symlinks: true
      symlinks: true
      pipeline: logs-kubernetes-pipeline
      condition: ${kubernetes.namespace} != 'kube-system'
      # parsers:
      #   - container:
      #       stream: all
      #       format: auto
      processors:
        - add_fields:
            target: ''
            fields:
              environment: abc
              cluster: xyz
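For completeness, the ${kubernetes.pod.name}, ${kubernetes.container.id} and ${kubernetes.namespace} variables above are resolved by the kubernetes dynamic provider in the agent config, not by the input itself. The relevant provider block, as in the reference manifest, looks roughly like this (exact settings in our config may differ):

providers.kubernetes:
  # Watch only the pods scheduled on the node this agent runs on.
  node: ${NODE_NAME}
  scope: node

As far as I understand, the provider emits variable mappings per discovered pod/container and the agent is supposed to spawn a new stream for each one, which is why a new pod should be picked up without an agent restart.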
As the same config works fine on Elastic-Agent 8.5.3, I assume there's a bug in the new version.
Steps to reproduce:
- Deploy Elastic Agent 8.6.0 standalone on Kubernetes as a DaemonSet.
- Check that Kubernetes application pod logs are shipped to Elasticsearch.
- Delete a Kubernetes application pod that is managed by a Deployment (an example manifest is included after this list).
- Wait for Kubernetes to redeploy the pod.
- Observe that the logs from the new pod are not shipped to Elasticsearch.
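For the reproduction, any workload managed by a Deployment works; this is a placeholder along the lines of what I use (name, namespace, and image are arbitrary, it just has to log something and must not run in kube-system because of the condition above):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: log-test
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: log-test
  template:
    metadata:
      labels:
        app: log-test
    spec:
      containers:
        - name: log-test
          image: busybox:1.36
          # Emit a log line every few seconds so there is something to ship.
          command: ['sh', '-c', 'while true; do echo "log-test $(date)"; sleep 5; done']

Deleting its pod with kubectl delete pod -l app=log-test makes the ReplicaSet create a replacement; on 8.6.x the replacement pod's logs never appear in Elasticsearch until the agent on that node is restarted, while on 8.5.3 they show up as expected.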
Can anybody confirm this behavior/bug? Is there a known solution or workaround?
Thanks!