Why are Elasticsearch stacktraces printed on newlines in JSON/stdout logging?

We run our Elasticsearch clusters in Kubernetes, managed by ECK. Elasticsearch uses the default logging settings (JSON to stdout). We have a Filebeat DaemonSet scraping /var/log/containers/*.log on each node, which picks up the Elasticsearch logs (along with those of all our other pods) and ships them back to Elasticsearch for indexing. The trouble is that even though Elasticsearch logs in JSON, and even though each line of the stack trace is an element of the "stacktrace" array, each element is still emitted on its own line in stdout. For example:

{"type": "server", "timestamp": "2020-04-21T08:00:15,624Z", "level": "WARN", "component": "r.suppressed", "cluster.name": "elastic-logs", "node.name": "elastic-logs-es-data1-1", "message": "path: /elastalert_status/_search, params: {size=1, index=elastalert_status, _source_includes=endtime,rule_name}", "cluster.uuid": "qWZni4INRtqbaExhnXD4eA", "node.id": "DIjWMm9vRjaYaV4SItWSkw" ,
"stacktrace": ["org.elasticsearch.action.search.SearchPhaseExecutionException: all shards failed",
"at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseFailure(AbstractSearchAsyncAction.java:534) [elasticsearch-7.5.2.jar:7.5.2]",
"at org.elasticsearch.action.search.AbstractSearchAsyncAction.executeNextPhase(AbstractSearchAsyncAction.java:305) [elasticsearch-7.5.2.jar:7.5.2]",
"at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseDone(AbstractSearchAsyncAction.java:563) [elasticsearch-7.5.2.jar:7.5.2]",
"at org.elasticsearch.action.search.AbstractSearchAsyncAction.onShardFailure(AbstractSearchAsyncAction.java:384) [elasticsearch-7.5.2.jar:7.5.2]",
"at org.elasticsearch.action.search.AbstractSearchAsyncAction.lambda$performPhaseOnShard$0(AbstractSearchAsyncAction.java:219) [elasticsearch-7.5.2.jar:7.5.2]",
"at org.elasticsearch.action.search.AbstractSearchAsyncAction$2.doRun(AbstractSearchAsyncAction.java:284) [elasticsearch-7.5.2.jar:7.5.2]",
"at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.5.2.jar:7.5.2]",
"at org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:44) [elasticsearch-7.5.2.jar:7.5.2]",
"at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:773) [elasticsearch-7.5.2.jar:7.5.2]",
"at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.5.2.jar:7.5.2]",
"at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]",
"at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]",
"at java.lang.Thread.run(Thread.java:830) [?:?]"] }

...so Filebeat doesn't pick the whole thing up as a single event. We can work around this with Filebeat's multiline options, but shouldn't Elasticsearch be logging the whole event on one line anyway?
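For reference, the multiline workaround we're considering looks roughly like this (a sketch, not our exact config): since the first line of each JSON log event starts with `{` and the continuation lines (the stacktrace elements) don't, we can tell Filebeat to append any non-`{` line to the preceding event:

```yaml
# Filebeat container input (7.x) — illustrative sketch only.
filebeat.inputs:
  - type: container
    paths:
      - /var/log/containers/*.log
    multiline:
      # A new event begins with a line starting with '{';
      # every other line is appended to the previous event.
      pattern: '^\{'
      negate: true
      match: after
```

This reassembles the event before it's shipped, but it feels like treating the symptom rather than the cause.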
