Where to Find Elasticsearch Internal Logs in ECK
When using ECK, Elasticsearch logs are handled in two main ways:
1. Default Behavior: Logs via kubectl (No Discover Access)
By default, Elasticsearch pods output their logs to stdout/stderr, which means they're accessible via standard Kubernetes logging:
# View logs from a specific Elasticsearch pod
kubectl logs <elasticsearch-pod-name> -n <namespace>
# Follow logs in real-time
kubectl logs -f <elasticsearch-pod-name> -n <namespace>
# View logs from all pods in a cluster
kubectl logs -l elasticsearch.k8s.elastic.co/cluster-name=<cluster-name> -n <namespace>
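If the pod runs additional containers (for example the Metricbeat and Filebeat sidecars added by stack monitoring, described below), you can target the main container explicitly; in ECK it is named elasticsearch:
# View logs from the main Elasticsearch container only
kubectl logs <elasticsearch-pod-name> -c elasticsearch -n <namespace>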
To check for flood-stage disk watermark events, you can grep for relevant messages:
kubectl logs <es-pod-name> -n <namespace> | grep -i "flood\|watermark\|disk"
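To scan every node of the cluster at once, you can combine the label selector shown earlier with the same grep. A small sketch, assuming GNU grep and your own cluster name and namespace (kubectl limits label-selector output to the last 10 lines per pod unless you pass --tail=-1):
# Grep watermark messages across all pods of the cluster, prefixing each line with its pod name
kubectl logs -l elasticsearch.k8s.elastic.co/cluster-name=<cluster-name> -n <namespace> --prefix --tail=-1 | grep -i "flood\|watermark\|disk"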
However, these logs are not indexed in Elasticsearch and cannot be queried via Discover out of the box.
2. Shipping Logs to Elasticsearch for Discover Access (Requires Configuration)
ECK supports built-in stack monitoring that automatically ships Elasticsearch logs to a monitoring cluster. There are two approaches:
Option A: Built-in Stack Monitoring (Recommended)
ECK can deploy Filebeat as a sidecar container to ship logs automatically. Add the monitoring section to your Elasticsearch spec:
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: my-elasticsearch
spec:
  version: 9.2.2
  monitoring:
    metrics:
      elasticsearchRefs:
      - name: monitoring-cluster  # Reference to the monitoring ES cluster
    logs:
      elasticsearchRefs:
      - name: monitoring-cluster  # Reference to the monitoring ES cluster
  nodeSets:
  - name: default
    count: 3
This will:
- Deploy Metricbeat and Filebeat as sidecar containers
- Ship metrics to the monitoring cluster
- Ship logs to the filebeat-* indices in the monitoring cluster
You can then query these logs in Discover in the monitoring cluster's Kibana.
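Both elasticsearchRefs entries point at a separate monitoring cluster, which must already exist and be managed by the same ECK operator. As a minimal sketch of such a cluster plus a Kibana instance to query it from (the names here are placeholders; monitoring-cluster must match the reference above):
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: monitoring-cluster  # Must match the elasticsearchRefs name above
spec:
  version: 9.2.2
  nodeSets:
  - name: default
    count: 1                # A single node is enough for a small monitoring cluster
---
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: monitoring-kibana   # Open Discover in this Kibana to see the shipped logs
spec:
  version: 9.2.2
  count: 1
  elasticsearchRef:
    name: monitoring-cluster
Once both clusters are healthy, the operator injects the Metricbeat and Filebeat sidecars into each Elasticsearch pod automatically; kubectl get pod <es-pod-name> -o jsonpath='{.spec.containers[*].name}' should list them alongside the elasticsearch container.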
Option B: Deploy Filebeat as a DaemonSet
If you want to collect logs from all containers (including Elasticsearch) and ship them to the same cluster, deploy Filebeat as a DaemonSet:
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: filebeat
spec:
  type: filebeat
  version: 9.2.2
  elasticsearchRef:
    name: elasticsearch
  kibanaRef:
    name: kibana
  config:
    filebeat.inputs:
    - type: filestream
      paths:
      - /var/log/containers/*.log
      parsers:
      - container: {}
      prospector:
        scanner:
          fingerprint.enabled: true
          symlinks: true
      file_identity.fingerprint: {}
    processors:
    - add_host_metadata: {}
    - add_cloud_metadata: {}
  daemonSet:
    podTemplate:
      spec:
        automountServiceAccountToken: true
        terminationGracePeriodSeconds: 30
        dnsPolicy: ClusterFirstWithHostNet
        hostNetwork: true
        containers:
        - name: filebeat
          securityContext:
            runAsUser: 0
          volumeMounts:
          - name: varlogcontainers
            mountPath: /var/log/containers
          - name: varlogpods
            mountPath: /var/log/pods
          - name: varlibdockercontainers
            mountPath: /var/lib/docker/containers
        volumes:
        - name: varlogcontainers
          hostPath:
            path: /var/log/containers
        - name: varlogpods
          hostPath:
            path: /var/log/pods
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
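After applying the manifest, the operator reports the Beat's health, and you can confirm that a Filebeat pod is running on each node and shipping events. A quick check, assuming ECK's standard beat.k8s.elastic.co/name pod label and the resource names from the manifest above:
# Health of the Beat resource as seen by the operator
kubectl get beat filebeat -n <namespace>
# One Filebeat pod per node via the DaemonSet
kubectl get pods -n <namespace> -l beat.k8s.elastic.co/name=filebeat
# Filebeat's own logs, useful if no events show up in Discover
kubectl logs -f <filebeat-pod-name> -n <namespace>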
Searching for Disk Watermark Events in Discover
Once logs are being shipped to Elasticsearch, you can query them in Discover. Example queries for disk watermark events:
message: "flood stage disk watermark"
message: "high disk watermark"
message: "low disk watermark"
Or, narrowing to a specific pod with KQL:
kubernetes.pod.name: "<your-es-pod-name>" and message: *watermark*
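If you would rather verify the data from Kibana Dev Tools (or curl) than from Discover, a rough equivalent against the default filebeat-* index pattern looks like this (adjust the pattern if you changed the output settings):
GET filebeat-*/_search
{
  "query": {
    "match_phrase": { "message": "flood stage disk watermark" }
  },
  "sort": [ { "@timestamp": "desc" } ],
  "size": 20
}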