Send JSON formatted logs directly to elasticsearch

Hi there,

I have modules that ship JSON-formatted logs to an ELK stack. I'd like to omit Logstash, because I don't really need any additional parsing. I use jsonevent-layout for Java and logstashFormatter for Python sources. I'd like to send those JSON events over TCP or UDP directly to Elasticsearch. Is there an existing appender that does this properly?


There are a few good arguments for having something in between the application and Elasticsearch, e.g. Logstash or Filebeat. One of these is that Logstash and Filebeat will send data to Elasticsearch using bulk requests, which is significantly more efficient than sending individual events. Another benefit is that they are able to buffer data so that your application does not get held up in case Elasticsearch temporarily stops accepting indexing requests, e.g. due to master election, outage or maintenance.
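To illustrate the bulk point: the Elasticsearch `_bulk` API takes a newline-delimited body where each event costs one action/metadata line plus one document line, and many events travel in a single request. A rough sketch of building such a payload (the index name and events here are made up):

```python
import json

def build_bulk_payload(index, events):
    """Build an NDJSON body for the Elasticsearch _bulk API."""
    lines = []
    for event in events:
        # Action line: tells Elasticsearch to index into the given index.
        lines.append(json.dumps({"index": {"_index": index}}))
        # Document line: the event itself.
        lines.append(json.dumps(event))
    # The bulk API requires a trailing newline after the last line.
    return "\n".join(lines) + "\n"

payload = build_bulk_payload("app-logs", [
    {"level": "INFO", "message": "started"},
    {"level": "ERROR", "message": "boom"},
])
print(payload)
```

One HTTP request then carries all the events, instead of one request per log line.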

Thank you for your time! For performance reasons, would you suggest creating additional nodes? I currently run it all on a single node, which may be overkill, but disk space is limited too. I also thought of sending logs via rsyslog, but that implies the priority, severity, and facility fields, which I'd have to get rid of. I guess I'll stay with Logstash, then. I've read the Logstash documentation, but could you suggest some performance tweaks? I don't use any filter at all, just "codec => json" in the input section. I don't even know if that's necessary, because the input is already a JSON-formatted string; without the codec, the whole JSON is packed into the "message" field. The JVM heap for Logstash is set to the default 1 GB. I'm going to cut the most unimportant and unnecessary logs, but it will still be around 4-5 million logs per hour going to a single Logstash instance in a Docker container. Is it possible to configure Logstash to be really minimalistic when it comes to parsing logs?
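For what it's worth, a minimal pipeline along these lines might look like the following sketch (the port and hosts are assumptions; for newline-delimited JSON over TCP the `json_lines` codec does the parsing, so no filter block is needed at all):

```conf
input {
  tcp {
    port  => 5000        # assumed port
    codec => json_lines  # one JSON object per line, decoded straight into event fields
  }
}

# No filter block - the codec already produced structured events.

output {
  elasticsearch {
    hosts => ["localhost:9200"]  # assumed address
  }
}
```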


If you can write your logs to a file with one JSON object per line, you might be able to use Filebeat instead of Logstash. This also has the benefit that the files on disk will act as a buffer.
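A minimal `filebeat.yml` for that setup might look like this sketch (assuming Filebeat 5.x and a hypothetical log path):

```yaml
filebeat.prospectors:            # called filebeat.inputs in later versions
  - input_type: log
    paths:
      - /var/log/app/*.json      # hypothetical path to the JSON-per-line files
    json.keys_under_root: true   # lift the decoded keys to the top level
    json.overwrite_keys: true    # let your fields win over Filebeat's own

output.elasticsearch:
  hosts: ["localhost:9200"]      # assumed address
```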

Shame on me for triple posting, but I managed to do this with only a single Filebeat.

json.keys_under_root: true breaks it, though. I just can't set it properly, so when I get logs I end up with json.message, json.whatever, and so on. I know this setting is meant exactly for that; overwrite_keys doesn't break my config, but keys_under_root does. I'd also like to remove all beat.* fields from the logs; is that possible? I can see they are declared inside filebeat.template.json, but simply commenting them out there also breaks Filebeat's sending of logs.
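If the goal is just to get rid of the beat.* metadata, editing the index template may not be needed; the `drop_fields` processor in `filebeat.yml` might be a cleaner route. A sketch, using the usual Filebeat 5.x field names (note that `@timestamp` and `type` cannot be dropped):

```yaml
processors:
  - drop_fields:
      fields: ["beat.name", "beat.hostname", "beat.version", "input_type", "offset"]
```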


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.