Hi guys,
I have noticed that the Elasticsearch nodes in my ECK cluster are producing a very large amount of logs.
Is there a way to reduce the amount of logs?
Right now I am seeing the index being flooded by entries like the ones pasted further down in this thread.
We aren't all guys
How are you pushing these logs, using the Elasticsearch module in Filebeat, or something else?
You sure?
The logs are being collected using Filebeat, but without any modules. Since Elasticsearch is running inside the k8s cluster, everything written to stdout and stderr is automatically picked up by Filebeat.
You might want to look at the Elasticsearch module; it'll stop the multiline logs from being split like that.
Once that is done you can look at what's causing the error.
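For context, a minimal sketch of one way to wire that up with hint-based autodiscover on ECK is to annotate the Elasticsearch pods with Filebeat's co.elastic.logs/module hint, so autodiscover applies the elasticsearch module (including its multiline handling) to those containers. The annotation name is the standard Filebeat hint; the nodeSet name and count below are placeholders, not taken from this thread.

# Illustrative fragment of the Elasticsearch manifest (ECK), not a full spec:
# the annotation tells Filebeat's hint-based autodiscover to use the
# elasticsearch module for logs from these pods.
spec:
  nodeSets:
    - name: master            # placeholder nodeSet name
      count: 3                # placeholder count
      podTemplate:
        metadata:
          annotations:
            co.elastic.logs/module: elasticsearch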
Not sure what's wrong.
But should this be working?
setup.ilm.enabled: false
filebeat.autodiscover.providers:
  - type: kubernetes
    node: ${NODE_NAME}
    hints.enabled: true
    hints.default_config:
      type: container
      paths: ["/var/log/containers/*-${data.kubernetes.container.id}.log"]
      multiline.pattern: '^[[:space:]]'
      multiline.negate: false
      multiline.match: after
      exclude_lines: ["^\\s+[\\-`('.|_]"]   # drop asciiart lines
filebeat.modules:
  - module: elasticsearch
processors:
  - add_host_metadata:
      netinfo.enabled: false
  - add_cloud_metadata:
  - add_kubernetes_metadata:
      host: ${NODE_NAME}
      matchers:
        - logs_path:
            logs_path: "/var/log/containers/"
  - drop_event:   # namespaces to be excluded from logging
      when.or:
      {{- range .Values.excludedNamespaces }}
        - equals.kubernetes.namespace: {{ . | quote }}
      {{- end }}
I think this could be the issue:
2020-08-11T10:24:05.656Z ERROR [autodiscover] autodiscover/autodiscover.go:210 Auto discover config check failed for config '{
"audit": {
"enabled": true,
"input": {
"exclude_lines": [
"^\\s+[\\-`('.|_]"
],
"multiline": {
"match": "after",
"negate": false,
"pattern": "^[[:space:]]"
},
"paths": [
"/var/log/containers/*-67ff091788282884a43d13156ecbe377050b97cea7ce94a8700def85b451f467.log"
],
"stream": "all",
"type": "container"
}
},
"deprecation": {
"enabled": true,
"input": {
"exclude_lines": [
"^\\s+[\\-`('.|_]"
],
"multiline": {
"match": "after",
"negate": false,
"pattern": "^[[:space:]]"
},
"paths": [
"/var/log/containers/*-67ff091788282884a43d13156ecbe377050b97cea7ce94a8700def85b451f467.log"
],
"stream": "all",
"type": "container"
}
},
"gc": {
"enabled": true,
"input": {
"exclude_lines": [
"^\\s+[\\-`('.|_]"
],
"multiline": {
"match": "after",
"negate": false,
"pattern": "^[[:space:]]"
},
"paths": [
"/var/log/containers/*-67ff091788282884a43d13156ecbe377050b97cea7ce94a8700def85b451f467.log"
],
"stream": "all",
"type": "container"
}
},
"module": "elasticsearch",
"server": {
"enabled": true,
"input": {
"exclude_lines": [
"^\\s+[\\-`('.|_]"
],
"multiline": {
"match": "after",
"negate": false,
"pattern": "^[[:space:]]"
},
"paths": [
"/var/log/containers/*-67ff091788282884a43d13156ecbe377050b97cea7ce94a8700def85b451f467.log"
],
"stream": "all",
"type": "container"
}
},
"slowlog": {
"enabled": true,
"input": {
"exclude_lines": [
"^\\s+[\\-`('.|_]"
],
"multiline": {
"match": "after",
"negate": false,
"pattern": "^[[:space:]]"
},
"paths": [
"/var/log/containers/*-67ff091788282884a43d13156ecbe377050b97cea7ce94a8700def85b451f467.log"
],
"stream": "all",
"type": "container"
}
}
}', won't start runner: Can only start an input when all related states are finished: {Id:163938459-66313 Finished:false Fileinfo:0xc000247790 Source:/var/log/containers/elastic-es-data-100-1_elastic-system_elasticsearch-67ff091788282884a43d13156ecbe377050b97cea7ce94a8700def85b451f467.log Offset:1524076 Timestamp:2020-08-11 10:24:05.649762716 +0000 UTC m=+1.048324812 TTL:-1ns Type:container Meta:map[] FileStateOS:163938459-66313}
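As an aside, that Filebeat error ("won't start runner: Can only start an input when all related states are finished") typically appears when two generated configs end up harvesting the same file, or when the previous runner for that file has not been released yet. One plausible reading here is that the module-based config built by autodiscover and the generic container input from hints.default_config both point at the same Elasticsearch container log. A hedged sketch of an autodiscover section without that overlap, assuming the pods carry the co.elastic.logs/module annotation from the earlier example so the module is applied per pod rather than through a global filebeat.modules block:

filebeat.autodiscover.providers:
  - type: kubernetes
    node: ${NODE_NAME}
    hints.enabled: true
    # Fallback for pods without hints; annotated pods get the module config.
    hints.default_config:
      type: container
      paths: ["/var/log/containers/*-${data.kubernetes.container.id}.log"]
# No global filebeat.modules entry for elasticsearch here, so the same
# container log file is not claimed by two inputs at once.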
@warkolm
These are the errors that are flooding the logs:
{"type": "server", "timestamp": "2020-08-12T10:08:02,967Z", "level": "WARN", "component": "o.e.x.m.e.l.LocalExporter", "cluster.name": "elastic", "node.name": "elastic-es-master-2", "message": "unexpected error while indexing monitoring document", "cluster.uuid": "OfB8GyE3S-GoLHQr9se2BA", "node.id": "XKKpgvjxR3GMhL4FfK_RCQ" ,
"stacktrace": ["org.elasticsearch.xpack.monitoring.exporter.ExportException: java.lang.IllegalArgumentException: Limit of total fields [1000] in index [.monitoring-es-7-2020.08.12] has been exceeded",
"at org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.lambda$throwExportException$2(LocalBulk.java:125) ~[x-pack-monitoring-7.8.1.jar:7.8.1]",
"at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195) ~[?:?]",
"at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:177) ~[?:?]",
"at java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948) ~[?:?]",
"at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484) ~[?:?]",
"at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474) ~[?:?]",
"at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150) ~[?:?]",
"at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173) ~[?:?]",
....
org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:695) ~[elasticsearch-7.8.1.jar:7.8.1]",
"at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.8.1.jar:7.8.1]",
"at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) ~[?:?]",
"at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) ~[?:?]",
"at java.lang.Thread.run(Thread.java:832) ~[?:?]"] }
{"type": "server", "timestamp": "2020-08-12T10:08:02,975Z", "level": "WARN", "component": "o.e.x.m.MonitoringService", "cluster.name": "elastic", "node.name": "elastic-es-master-2", "message": "monitoring execution failed", "cluster.uuid": "OfB8GyE3S-GoLHQr9se2BA", "node.id": "XKKpgvjxR3GMhL4FfK_RCQ" ,
1.49.Final]",
Seems like there is an issue with the xpack.monitoring package.
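For completeness, the entries above come from the LocalExporter repeatedly failing with "Limit of total fields [1000] in index [.monitoring-es-7-2020.08.12] has been exceeded". Two common workarounds (neither confirmed as the fix in this thread) are raising index.mapping.total_fields.limit on the monitoring indices, or, if the legacy self-monitoring data is not actually needed, disabling collection so the exporter stops retrying and logging. A sketch of the latter on ECK, with placeholder names:

# Illustrative fragment of the Elasticsearch manifest (ECK): disable legacy
# self-monitoring collection so the LocalExporter stops producing these WARN
# entries. Skip this if you rely on the .monitoring-es-* indices.
spec:
  nodeSets:
    - name: master            # placeholder nodeSet name
      count: 3                # placeholder count
      config:
        xpack.monitoring.collection.enabled: false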
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.