How to send logs directly to ELK (no intermediate file storage involved)

The applications I'm running in Docker containers, managed by OpenShift, should not use volumes for log files. So I would like to write directly from the applications, using some module (appender) in log4j/logback/... that sends the log entries directly to Logstash (and therefore into ELK) without storing them in files (or letting syslogd store them in files).

Preferably directly from the dockerized app to the central ELK instance.
If that is not possible, a daemon running either in another container or on the bare-metal server could be used; I don't want to run a daemon inside the container. I plan to use RFC5424 as the serialization format for the log messages, because it's understood by all central log collection systems.
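To make it concrete, this is roughly the shape of what I have in mind, sketched with Logback's stock SyslogAppender configured programmatically (in the real app this would of course live in logback.xml). One caveat: that appender speaks the old BSD syslog format over UDP, not RFC5424, so it's only an approximation of what I want; the host and port are placeholders.

```java
import ch.qos.logback.classic.Logger;
import ch.qos.logback.classic.LoggerContext;
import ch.qos.logback.classic.net.SyslogAppender;
import org.slf4j.LoggerFactory;

public class DirectSyslogLogging {
    public static void main(String[] args) {
        LoggerContext context = (LoggerContext) LoggerFactory.getILoggerFactory();

        // Appender that writes straight to the network -- no log files, no volumes.
        SyslogAppender syslog = new SyslogAppender();
        syslog.setContext(context);
        syslog.setSyslogHost("logstash.example.com"); // placeholder central collector
        syslog.setPort(5514);                         // placeholder syslog/Logstash input port
        syslog.setFacility("USER");
        syslog.start();

        Logger root = context.getLogger(Logger.ROOT_LOGGER_NAME);
        root.addAppender(syslog);

        LoggerFactory.getLogger(DirectSyslogLogging.class)
                .info("this line goes out over the network, never onto disk");
    }
}
```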

Splunk offers something like this with its HTTP Event Collector library, which can be configured as an appender in the Java logging stack.

How can I do this with ELK?

Let Docker send the logs to stdout and then use Filebeat to capture that and send it to Logstash and/or Elasticsearch.

I'm talking about the application(s) inside the Docker container.
There might be more than one running. Also, dumping Java stack traces to stdout in parallel from multiple applications doesn't work so well, because the output gets interleaved. It's even worse on stderr, which is unbuffered rather than line buffered, so interleaved stack traces are even harder to detect.

That's not a typical Docker deployment.

But if that's the case, then use Filebeat in the container as well.

Well, running a stateless Spring Boot Java application that uses Logback for logging is pretty standard, IMHO. And configuring a volume that log messages are written to, then letting Filebeat read them from files in that volume, sounds contrary to the statelessness of the container.

There's also a window, if the application is crash looping, where the container (and therefore the volume) is destroyed by OpenShift because the app is not reporting healthy before Filebeat has had a chance to completely read the log files and send them to ELK. That makes the important log data showing why the app is crash looping inaccessible.

Sending the log messages directly over the network to some central logging instance looks more robust to me. I saw some code for an RFC5424 parser for Logstash in https://github.com/logstash-plugins/logstash-input-syslog/issues/15 that I could use if I just send out RFC5424-formatted syslog messages, but the pipeline of structured-logging-API -> RFC5424 serialization -> regexp parsing in Logstash (which is apparently not supported out of the box) -> forward to Elasticsearch makes me feel a little uneasy.
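What would make me feel better is skipping the regexp step entirely and shipping JSON straight over TCP, for example with something like the logstash-logback-encoder library pointed at a Logstash tcp input. A minimal sketch of what I imagine, with the library choice, host and port being my assumptions:

```java
import ch.qos.logback.classic.Logger;
import ch.qos.logback.classic.LoggerContext;
import net.logstash.logback.appender.LogstashTcpSocketAppender;
import net.logstash.logback.encoder.LogstashEncoder;
import org.slf4j.LoggerFactory;

public class DirectJsonLogging {
    public static void main(String[] args) {
        LoggerContext context = (LoggerContext) LoggerFactory.getILoggerFactory();

        // Encoder that serializes every log event as one JSON document per line.
        LogstashEncoder encoder = new LogstashEncoder();
        encoder.setContext(context);
        encoder.start();

        // TCP appender that ships those JSON lines directly to Logstash -- no files involved.
        LogstashTcpSocketAppender appender = new LogstashTcpSocketAppender();
        appender.setContext(context);
        appender.addDestination("logstash.example.com:5000"); // placeholder tcp input address
        appender.setEncoder(encoder);
        appender.start();

        Logger root = context.getLogger(Logger.ROOT_LOGGER_NAME);
        root.addAppender(appender);

        LoggerFactory.getLogger(DirectJsonLogging.class)
                .info("structured event sent as JSON over TCP");
    }
}
```

As far as I understand, the Logstash side would then only need a tcp input with a JSON codec and no grok at all, but I haven't tried this yet.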

Splunk, for example, allows ingestion of JSON-formatted messages sent from the application's logging system via HTTP/TCP. Something like this would be really nice.
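To illustrate what I mean: with an HTTP endpoint on the ELK side (Logstash does have an http input plugin), the application side would boil down to roughly the following sketch, here using the JDK's built-in HttpClient (Java 11+); the endpoint URL, port, and event fields are made up.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class HttpLogShipper {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // One log event; in a real setup the logging framework would produce this JSON.
        String event = "{\"level\":\"ERROR\",\"logger\":\"com.example.App\","
                + "\"message\":\"something went wrong\"}";

        // POST the event straight to a central collector endpoint -- no local file, no volume.
        HttpRequest request = HttpRequest.newBuilder(URI.create("http://logstash.example.com:8080/"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(event))
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("collector answered with HTTP " + response.statusCode());
    }
}
```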
