Filebeat compress output to file not working

Hello,
I am reading logs from a log file (as below) and then writing the output into another file. Note: I am not sending the output to Elasticsearch, Kafka, etc., but to another file in a different location.

I have been able to get the normal flow working, but I am unable to get any compression. As per the documentation, we can set the compression_level field, but it's not working.

Can someone help me out?

My Filebeat config file:

#=========================== Filebeat prospectors =============================
filebeat.prospectors:

- type: log
  enabled: true
  paths:
    - C:/path/logfile.out

and then I am writing the output to another file like this:
#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

output.file:
  path: "C:/someRemoteLocation/"
  compression_level: 10
  gzip: true
  filename: filebeat

As you can see, I have specified various settings like compression_level and gzip, but none of them seems to be working.

Hi Omair,

Unfortunately, the file output doesn't support compression. This is only available for the elasticsearch and logstash outputs.

Although the file output is currently intended for diagnostics, it might make sense to add support for this. If you're interested, please open a GitHub issue with the feature request here, so we can discuss it further.
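For reference, a minimal sketch of where compression_level does take effect today (the hosts below are placeholders, not from the original config):

output.elasticsearch:
  hosts: ["localhost:9200"]
  compression_level: 1     # gzip level 0-9; compresses the requests sent over the network only

# or, when shipping to Logstash instead:
#output.logstash:
#  hosts: ["localhost:5044"]
#  compression_level: 3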


Hi,
Thanks for your reply.
A couple of quick questions here.
1 - If we compress the output and send it to Logstash, then who would decompress it? Elasticsearch is just a storage place; it's not intelligent enough to decompress it. Or do we need another application to fetch the data from Elasticsearch again and decompress it?
2 - Any idea how much extra CPU is consumed, compared to normal processing, if we do compression at Filebeat? What is the effect on CPU if, let's say, the compression level is 9?

Compression is used only to send requests over the network. The events themselves are stored uncompressed, as if no compression had been used.

About the CPU impact, you can have a look at these benchmarks, taking into account only the values for gzip, which is the compression used in the outputs:


https://tukaani.org/lzma/benchmarks.html

However, these benchmarks are performed with large files. In the case of Beats, where the events are usually small, I think a compression level of 1 is enough and will have little impact on the CPU.
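As a sketch, sticking with the placeholder Logstash output from above, that would just be:

output.logstash:
  hosts: ["localhost:5044"]   # placeholder host
  compression_level: 1        # cheap on CPU, still saves bandwidth on small JSON events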

Beats sends batches of events to Logstash and Elasticsearch. Compression happens at the application network layer only; the events themselves are not compressed. Beats uses gzip compression, which operates on 32KB blocks I think. Events are JSON encoded. Adding compression really reduces the bytes being sent over the network (at the cost of higher CPU usage). The default compression level of 3 is a quite good default I think (higher values don't add much more compression benefit, but increase CPU usage).
