Hello!
I transfer Postgres logs with the Filebeat PostgreSQL module directly to Elasticsearch 6.1, but my bandwidth is limited. How can I tune my config for maximum performance?
My log: 2018-02-13T12:51:33Z INFO Non-zero metrics in the last 30s: beat.info.uptime.ms=30000 beat.memstats.gc_next=967973984 beat.memstats.memory_alloc=812186752 beat.memstats.memory_total=5431355861888 filebeat.events.added=231424 filebeat.events.done=231424 filebeat.harvester.open_files=1 filebeat.harvester.running=1 libbeat.config.module.running=1 libbeat.output.read.bytes=1658725 libbeat.output.write.bytes=234873621 libbeat.pipeline.clients=2 libbeat.pipeline.events.active=4117 libbeat.pipeline.events.published=231424 libbeat.pipeline.events.total=231424 libbeat.pipeline.queue.acked=231424 registrar.states.current=1 registrar.states.update=231424 registrar.writes=99
If I understand correctly, filebeat.events.done=231424 is the number of events transferred to ES in the last 30 seconds.
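You can turn that metrics line into a rough throughput estimate. A quick calculation, assuming the 30 s window stated in the log and taking libbeat.output.write.bytes as the bytes sent to the output:

```python
# Rough throughput estimate from the 30 s metrics window in the log above.
# Numbers are copied from the metrics line; treating the whole window as
# steady-state is an approximation.
window_s = 30
events_done = 231424       # filebeat.events.done
output_bytes = 234873621   # libbeat.output.write.bytes

events_per_sec = events_done / window_s
mbit_per_sec = output_bytes * 8 / window_s / 1_000_000

print(f"~{events_per_sec:.0f} events/s, ~{mbit_per_sec:.1f} Mbit/s on the wire")
```

So this sample works out to roughly 7,700 events/s at about 63 Mbit/s of output traffic, which tells you how close you are to saturating the link.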
If you are trying to optimize for a bandwidth-limited link, then perhaps you should test the impact of data compression by setting the output.elasticsearch.compression_level option to enable gzip.
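A minimal sketch of that in filebeat.yml — compression_level is the Elasticsearch-output option (0 disables gzip, 1–9 trade CPU for smaller payloads); the host and the value 5 here are just placeholders to start testing from, not recommendations:

```yaml
output.elasticsearch:
  hosts: ["localhost:9200"]   # replace with your Elasticsearch host
  # gzip compression level: 0 = off (default), 1 = fastest, 9 = smallest payload.
  # Start low and measure — higher levels cost CPU on the Filebeat host.
  compression_level: 5
```

Postgres log lines are repetitive text, so they usually compress well; compare libbeat.output.write.bytes in the metrics before and after enabling it to see the actual saving.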