I'm trying to install the newest EFK stack on a Kubernetes cluster. The ES and Kibana Helm charts are from Bitnami, while Fluentd is from Kokuwa (Bitnami's Fluentd simply doesn't work for me, it fails to link to ES). The versions are 8.5.2 for Elasticsearch and Kibana, and 4.2.3 for Fluentd.
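For reference, this is roughly how I installed the three charts; the release name bitnami-es and the logging namespace match what shows up in the error record further down, but the repo URLs, chart names and flags are written from memory and may not be exactly what I ran:

# approximate install commands, not copied verbatim from my shell history
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo add kokuwa https://kokuwaio.github.io/helm-charts
helm install bitnami-es bitnami/elasticsearch -n logging --create-namespace
helm install kibana bitnami/kibana -n logging
helm install fluentd kokuwa/fluentd-elasticsearch -n logging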
The stack mostly works, i.e. I see log statements from kube-proxy and a few other containers, BUT I get a lot of errors like the one in the subject of this post. And I can't see log statements from the cluster autoscaler, for example.
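The errors quoted below are taken straight from the Fluentd pods, roughly like this (the label selector is approximate, it depends on the chart's labels):

kubectl logs -n logging -l app.kubernetes.io/name=fluentd-elasticsearch --tail=200 | grep "Rejected by Elasticsearch"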
In my containers.input.conf file I have the following (which is the vanilla config plus one line to exclude Fluentd's own logs):
<source>
  @id fluentd-containers.log
  @type tail
  path /var/log/containers/*.log
  exclude_path /var/log/containers/*fluentd*.log
  pos_file /var/log/containers.log.pos
  tag raw.kubernetes.*
  read_from_head true
  <parse>
    @type multi_format
    <pattern>
      format json
      time_key time
      time_format %Y-%m-%dT%H:%M:%S.%NZ
    </pattern>
    <pattern>
      format /^(?<time>.+) (?<stream>stdout|stderr) [^ ]* (?<log>.*)$/
      time_format %Y-%m-%dT%H:%M:%S.%N%:z
    </pattern>
  </parse>
</source>
# Detect exceptions in the log output and forward them as one log entry.
<match raw.kubernetes.**>
  @id raw.kubernetes
  @type detect_exceptions
  remove_tag_prefix raw
  message log
  stream stream
  multiline_flush_interval 5
  max_bytes 500000
  max_lines 1000
</match>
# Concatenate multi-line logs
<filter **>
  @id filter_concat
  @type concat
  key message
  multiline_end_regexp /\n$/
  separator ""
  timeout_label @NORMAL
  flush_interval 5
</filter>
# Enriches records with Kubernetes metadata
<filter kubernetes.**>
  @id filter_kubernetes_metadata
  @type kubernetes_metadata
</filter>
# Fixes json fields in Elasticsearch
<filter kubernetes.**>
  @id filter_parser
  @type parser
  key_name log
  reserve_time true
  reserve_data true
  remove_key_name_field true
  <parse>
    @type multi_format
    <pattern>
      format json
    </pattern>
    <pattern>
      format none
    </pattern>
  </parse>
</filter>
One of the errors I get looks like this. Does the log statement that fails to be indexed look suspicious to you?
{
  "error": "#<Fluent::Plugin::ElasticsearchErrorHandler::ElasticsearchError: 400 - Rejected by Elasticsearch>",
  "location": null,
  "tag": "kubernetes.var.log.containers.bitnami-es-elasticsearch-data-0_logging_elasticsearch-3bb8cb3c06bd0306931ce88e2cb8a15ffb8663c0c67f0d0a56edd0f73e834edd.log",
  "time": 1681132268,
  "record": {
    "stream": "stdout",
    "docker": {
      "container_id": "3bb8cb3c06bd0306931ce88e2cb8a15ffb8663c0c67f0d0a56edd0f73e834edd"
    },
    "kubernetes": {
      "container_name": "elasticsearch",
      "namespace_name": "logging",
      "pod_name": "bitnami-es-elasticsearch-data-0",
      "container_image": "docker.io/bitnami/elasticsearch:8.6.2-debian-11-r10",
      "container_image_id": "docker.io/bitnami/elasticsearch@sha256:24bf1a9b04d045dd3e5efbcdd3bc00c18959a2f1cd4bcc96ef4617f937d2e9d0",
      "pod_id": "f1d22705-9f9d-4119-b277-335a0885c7f8",
      "pod_ip": "10.65.58.79",
      "host": "ip-10-65-57-214.eu-west-1.compute.internal",
      "labels": {
        "app": "data",
        "app.kubernetes.io/component": "data",
        "app.kubernetes.io/instance": "bitnami-es",
        "app.kubernetes.io/managed-by": "Helm",
        "app.kubernetes.io/name": "elasticsearch",
        "controller-revision-hash": "bitnami-es-elasticsearch-data-df8d879df",
        "helm.sh/chart": "elasticsearch-19.6.0",
        "statefulset.kubernetes.io/pod-name": "bitnami-es-elasticsearch-data-0"
      },
      "master_url": "https://172.20.0.1:443/api",
      "namespace_id": "e4fb9570-720a-48da-a211-b4e6f6217df5",
      "namespace_labels": {
        "kubernetes.io/metadata.name": "logging"
      }
    },
    "message": "[2023-04-10T13:11:08,957][INFO ][o.e.m.j.JvmGcMonitorService] [bitnami-es-elasticsearch-data-0] [gc][18967] overhead, spent [259ms] collecting in the last [1s]",
    "@timestamp": "2023-04-10T13:11:08.957842094+00:00",
    "tag": "kubernetes.var.log.containers.bitnami-es-elasticsearch-data-0_logging_elasticsearch-3bb8cb3c06bd0306931ce88e2cb8a15ffb8663c0c67f0d0a56edd0f73e834edd.log"
  },
"message": "dump an error event: error_class=Fluent::Plugin::ElasticsearchErrorHandler::ElasticsearchError error=\"400 - Rejected by Elasticsearch\" location=nil tag=\"kubernetes.var.log.containers.bitnami-es-elasticsearch-data-0_logging_elasticsearch-3bb8cb3c06bd0306931ce88e2cb8a15ffb8663c0c67f0d0a56edd0f73e834edd.log\" time=2023-04-10 13:11:08.957842094 +0000 record={\"stream\"=>\"stdout\", \"docker\"=>{\"container_id\"=>\"3bb8cb3c06bd0306931ce88e2cb8a15ffb8663c0c67f0d0a56edd0f73e834edd\"}, \"kubernetes\"=>{\"container_name\"=>\"elasticsearch\", \"namespace_name\"=>\"logging\", \"pod_name\"=>\"bitnami-es-elasticsearch-data-0\", \"container_image\"=>\"docker.io/bitnami/elasticsearch:8.6.2-debian-11-r10\", \"container_image_id\"=>\"docker.io/bitnami/elasticsearch@sha256:24bf1a9b04d045dd3e5efbcdd3bc00c18959a2f1cd4bcc96ef4617f937d2e9d0\", \"pod_id\"=>\"f1d22705-9f9d-4119-b277-335a0885c7f8\", \"pod_ip\"=>\"10.65.58.79\", \"host\"=>\"ip-10-65-57-214.eu-west-1.compute.internal\", \"labels\"=>{\"app\"=>\"data\", \"app.kubernetes.io/component\"=>\"data\", \"app.kubernetes.io/instance\"=>\"bitnami-es\", \"app.kubernetes.io/managed-by\"=>\"Helm\", \"app.kubernetes.io/name\"=>\"elasticsearch\", \"controller-revision-hash\"=>\"bitnami-es-elasticsearch-data-df8d879df\", \"helm.sh/chart\"=>\"elasticsearch-19.6.0\", \"statefulset.kubernetes.io/pod-name\"=>\"bitnami-es-elasticsearch-data-0\"}, \"master_url\"=>\"https://172.20.0.1:443/api\", \"namespace_id\"=>\"e4fb9570-720a-48da-a211-b4e6f6217df5\", \"namespace_labels\"=>{\"kubernetes.io/metadata.name\"=>\"logging\"}}, \"message\"=>\"[2023-04-10T13:11:08,957][INFO ][o.e.m.j.JvmGcMonitorService] [bitnami-es-elasticsearch-data-0] [gc][18967] overhead, spent [259ms] collecting in the last [1s]\", \"@timestamp\"=>\"2023-04-10T13:11:08.957842094+00:00\", \"tag\"=>\"kubernetes.var.log.containers.bitnami-es-elasticsearch-data-0_logging_elasticsearch-3bb8cb3c06bd0306931ce88e2cb8a15ffb8663c0c67f0d0a56edd0f73e834edd.log\"}"
}
Thanks a lot for any pointers.