Filebeat 6.1 performance

Hello!
I ship PostgreSQL logs with the Filebeat PostgreSQL module directly to Elasticsearch 6.1, but throughput appears to be limited. How can I tune my config for maximum performance?
My log:
2018-02-13T12:51:33Z INFO Non-zero metrics in the last 30s: beat.info.uptime.ms=30000 beat.memstats.gc_next=967973984 beat.memstats.memory_alloc=812186752 beat.memstats.memory_total=5431355861888 filebeat.events.added=231424 filebeat.events.done=231424 filebeat.harvester.open_files=1 filebeat.harvester.running=1 libbeat.config.module.running=1 libbeat.output.read.bytes=1658725 libbeat.output.write.bytes=234873621 libbeat.pipeline.clients=2 libbeat.pipeline.events.active=4117 libbeat.pipeline.events.published=231424 libbeat.pipeline.events.total=231424 libbeat.pipeline.queue.acked=231424 registrar.states.current=1 registrar.states.update=231424 registrar.writes=99

If I understand correctly, filebeat.events.done=231424 is how many events were transferred to ES in the last 30 seconds.
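That metric works out to roughly the following sustained event rate (just dividing the counter from the log line above by the 30-second window):

```python
# Event throughput implied by the 30-second metrics window above.
events_done = 231424            # filebeat.events.done from the log line
window_s = 30
rate = events_done / window_s
print(rate)                     # ~7714 events per second
```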

This is my config /etc/filebeat/filebeat.yml
filebeat.prospectors:

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.name: "filebeat-%{[beat.version]}"
setup.template.pattern: "filebeat-*"

setup.kibana:
  host: "kibana.somehost.com:5601"

output.elasticsearch:
  hosts: ["elastic1.somehost.com:9200", "elastic2.somehost.com:9200", "elastic3.somehost.com:9200"]
  index: "filebeat-pg_postgresql-9.5-main.log-%{+yyyy.MM.dd}"
  worker: 6
  bulk_max_size: 4096
  loadbalance: true
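One thing worth noting: libbeat.pipeline.events.active=4117 in the metrics hovers right around the default in-memory queue size (4096), so the internal queue may be saturated. A hedged sketch of enlarging it in filebeat.yml (the values below are illustrative assumptions, not tested recommendations):

```
# Enlarge the in-memory queue so the 6 workers can keep larger
# batches in flight; values are illustrative, tune to your host.
queue.mem:
  events: 16384            # default is 4096
  flush.min_events: 4096   # match bulk_max_size
  flush.timeout: 1s
```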

Module config /etc/filebeat/modules.d/postgresql.yml
- module: postgresql
  log:
    enabled: true
    var.paths: ["/var/log/postgresql/postgresql-9.5-main.log"]

If you are trying to optimize for a bandwidth-limited link, then perhaps you should test the impact of data compression by setting the output.elasticsearch.compression_level option to enable gzip.

See https://www.elastic.co/guide/en/beats/filebeat/current/elasticsearch-output.html#_literal_compression_level_literal
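For example, in the output.elasticsearch section of filebeat.yml (level 4 here is just an arbitrary starting point; 0 disables compression, 9 compresses hardest at the highest CPU cost):

```
output.elasticsearch:
  # gzip level 0-9; higher saves bandwidth at the cost of CPU.
  compression_level: 4
```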

I have a 1 Gbps link, and Filebeat uses only about 70 Mbps.
I tried compression_level: 4, but I don't see any change in event throughput in the log:

2018-02-16T07:43:38Z INFO Non-zero metrics in the last 30s: beat.info.uptime.ms=30000 beat.memstats.gc_next=100961584 beat.memstats.memory_alloc=89754624 beat.memstats.memory_total=19207924664 filebeat.events.active=-4096 filebeat.events.added=239616 filebeat.events.done=243712 filebeat.harvester.open_files=1 filebeat.harvester.running=1 libbeat.config.module.running=1 libbeat.output.read.bytes=1728145 libbeat.output.write.bytes=21602486 libbeat.pipeline.clients=2 libbeat.pipeline.events.active=4117 libbeat.pipeline.events.published=239616 libbeat.pipeline.events.total=239616 libbeat.pipeline.queue.acked=239616 registrar.states.current=1 registrar.states.update=241664 registrar.writes=113
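For what it's worth, comparing the two metric windows shows the compression itself is working: the event rate is roughly the same, but the bytes written to the output per acknowledged event dropped by an order of magnitude:

```python
# Bytes written to the output per acknowledged event, taken from the
# two 30-second metric windows above (before and after compression).
before = 234873621 / 231424   # ~1015 bytes/event, compression off
after = 21602486 / 239616     # ~90 bytes/event, compression_level: 4
print(before / after)         # ~11x less bandwidth per event
```

So bandwidth is no longer the bottleneck; the ceiling on events/s is elsewhere.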

Maybe I need to tune my ES config?
/etc/elasticsearch/elasticsearch.yml

cluster.name: mycluster
node.name: ${HOSTNAME}

path.data: /var/lib/elasticsearch

path.logs: /var/log/elasticsearch

network.bind_host: 0
network.host: 0.0.0.0
http.port: 9200

transport.host: 10.0.13.1
transport.tcp.port: 9300

node.master: false
node.data: true
node.ingest: true

discovery.zen.minimum_master_nodes: 2
transport.profiles.default.port: 9300-9400
discovery.zen.ping.unicast.hosts:
   - 10.0.13.91
   - 10.0.13.92
   - 10.0.13.93
   - 10.0.13.1
   - 10.0.13.2
   - 10.0.13.3
   - 10.0.13.11
   - 10.0.13.12
   - 10.0.13.13

indices.requests.cache.size: 10%

# HOT node
node.attr.box_type: hot
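If indexing speed on the ES side is the suspect, one commonly adjusted knob is the index refresh interval rather than anything in elasticsearch.yml. A hypothetical sketch (the host and the "30s" value are illustrative assumptions, not recommendations):

```
PUT filebeat-*/_settings
{
  "index": {
    "refresh_interval": "30s"
  }
}
```

Relaxing the refresh interval lets Elasticsearch spend less time making documents searchable during heavy bulk ingest.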

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.