Why does Filebeat write to disk at such a high rate?

Hi, my Filebeat is affecting my system's performance.
I collect logs with Filebeat version 6.2.2, and my filebeat.yml is below:

name: general
filebeat.prospectors:
- type: log
  paths:
    - /logs/log_
  include_lines: ['[^A]*ANALYZE.*']
  tail_files: true
  scan_frequency: 2s
  fields:
    service: general
    ip: 10.117.79.92
output.kafka:
  hosts: ["kafka01.qunhequnhe.com:9094"]
  topic: 'topic'
  codec.format:
    string: '%{[fields]}|%{[source]}|%{[message]}'
  version: '0.11.0.0'
  bulk_max_size: 2048
  compression: snappy
  required_acks: 1
  channel_buffer_size: 2048
xpack.monitoring:
  enabled: true
  elasticsearch:
    hosts: ["eshost"]
    username: "username"
    password: "password"

If the YAML file contains the include_lines setting, Filebeat writes to disk at a high rate, and the write volume is proportional to the number of lines that get filtered out.
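
One way to observe the write rate is to sample /proc/<pid>/io for the Filebeat process. A minimal sketch, Linux only; the PID and sampling interval below are placeholders:

import time

PID = 12345        # placeholder: replace with the actual Filebeat PID
INTERVAL = 5       # seconds between samples

def write_bytes(pid):
    # /proc/<pid>/io is a standard Linux interface; write_bytes counts the
    # bytes this process has caused to be written to the block layer.
    with open(f"/proc/{pid}/io") as f:
        for line in f:
            if line.startswith("write_bytes:"):
                return int(line.split()[1])
    return 0

prev = write_bytes(PID)
while True:
    time.sleep(INTERVAL)
    cur = write_bytes(PID)
    print(f"{(cur - prev) / INTERVAL / 1024:.1f} KiB/s written")
    prev = cur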
Can somebody help me solve this problem? Thanks.

Is there anything in the Filebeat logs?

Nothing. Here is the log:
2018-05-07T23:06:38.436+0800 INFO instance/beat.go:468 Home path: [/nova/env/_env/FileBeat/1.0.2] Config path: [/nova/env/_env/FileBeat/1.0.2] Data path: [/nova/env/_env/FileBeat/1.0.2/data] Logs path: [/nova/env/_env/FileBeat/1.0.2/logs]
2018-05-07T23:06:38.441+0800 INFO instance/beat.go:475 Beat UUID: cb6de039-bd66-4263-ad81-0619c5761774
2018-05-07T23:06:38.441+0800 INFO instance/beat.go:213 Setup Beat: filebeat; Version: 6.2.2
2018-05-07T23:06:38.443+0800 INFO pipeline/module.go:76 Beat name: general-10.117.79.92
2018-05-07T23:06:38.444+0800 INFO elasticsearch/client.go:145 Elasticsearch url: http://10.80.112.128:9200
2018-05-07T23:06:38.444+0800 INFO elasticsearch/client.go:145 Elasticsearch url: http://10.80.100.56:9200
2018-05-07T23:06:38.444+0800 INFO elasticsearch/client.go:145 Elasticsearch url: http://10.80.220.74:9200
2018-05-07T23:06:38.445+0800 INFO instance/beat.go:301 filebeat start running.
2018-05-07T23:06:38.445+0800 INFO elasticsearch/elasticsearch.go:154 Start monitoring endpoint init loop.
2018-05-07T23:06:38.445+0800 INFO registrar/registrar.go:71 No registry file found under: /nova/env/_env/FileBeat/1.0.2/data/registry. Creating a new registry file.
2018-05-07T23:06:38.445+0800 INFO [monitoring] log/log.go:97 Starting metrics logging every 30s
2018-05-07T23:06:38.473+0800 INFO registrar/registrar.go:108 Loading registrar data from /nova/env/_env/FileBeat/1.0.2/data/registry
2018-05-07T23:06:38.473+0800 INFO registrar/registrar.go:119 States Loaded from registrar: 0
2018-05-07T23:06:38.474+0800 WARN beater/filebeat.go:261 Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2018-05-07T23:06:38.474+0800 INFO crawler/crawler.go:48 Loading Prospectors: 1
2018-05-07T23:06:38.475+0800 INFO elasticsearch/elasticsearch.go:177 Stop monitoring endpoint init loop.
2018-05-07T23:06:38.475+0800 INFO elasticsearch/elasticsearch.go:183 Start monitoring metrics snapshot loop.
2018-05-07T23:06:38.474+0800 INFO log/prospector.go:111 Configured paths: [/opt/jetty/logs/log]
2018-05-07T23:06:38.482+0800 INFO crawler/crawler.go:82 Loading and starting Prospectors completed. Enabled prospectors: 1
2018-05-07T23:06:40.482+0800 INFO log/harvester.go:216 Harvester started for file: /opt/jetty/logs/log

2018-05-07T23:06:58.459+0800 INFO beater/filebeat.go:323 Stopping filebeat
2018-05-07T23:06:58.459+0800 INFO crawler/crawler.go:109 Stopping Crawler
2018-05-07T23:06:58.459+0800 INFO crawler/crawler.go:119 Stopping 1 prospectors
2018-05-07T23:06:58.459+0800 INFO prospector/prospector.go:121 Prospector ticker stopped
2018-05-07T23:06:58.459+0800 INFO prospector/prospector.go:138 Stopping Prospector: 5426548190830938784
2018-05-07T23:06:58.459+0800 INFO crawler/crawler.go:135 Crawler stopped
2018-05-07T23:06:58.459+0800 INFO registrar/registrar.go:210 Stopping Registrar
2018-05-07T23:06:58.473+0800 INFO registrar/registrar.go:165 Ending Registrar
2018-05-07T23:06:58.486+0800 INFO instance/beat.go:308 filebeat stopped.
2018-05-07T23:06:58.487+0800 INFO [monitoring] log/log.go:132 Total non-zero metrics {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":560,"time":568},"total":{"ticks":930,"time":940,"value":930},"user":{"ticks":370,"time":372}},"info":{"ephemeral_id":"9e5677f5-5315-4b50-9c67-e0b992feeb39","uptime":{"ms":20083}},"memstats":{"gc_next":4194304,"memory_alloc":2074560,"memory_total":13755552,"rss":14848000}},"filebeat":{"events":{"active":3,"added":2668,"done":2665},"harvester":{"closed":1,"open_files":0,"running":0,"started":1}},"libbeat":{"config":{"module":{"running":0}},"output":{"type":"kafka"},"pipeline":{"clients":0,"events":{"active":0,"filtered":2668,"total":2668}}},"registrar":{"states":{"current":1,"update":2665},"writes":2667},"system":{"cpu":{"cores":4},"load":{"1":0.76,"15":0.35,"5":0.43,"norm":{"1":0.19,"15":0.0875,"5":0.1075}}},"xpack":{"monitoring":{"pipeline":{"clients":1,"events":{"active":1,"published":2,"retry":3,"total":2},"queue":{"acked":1}}}}}}}
2018-05-07T23:06:58.491+0800 INFO [monitoring] log/log.go:133 Uptime: 20.088568903s
2018-05-07T23:06:58.491+0800 INFO [monitoring] log/log.go:110 Stopping metrics logging.
2018-05-07T23:06:58.492+0800 INFO elasticsearch/elasticsearch.go:191 Stop monitoring metrics snapshot loop.

The speed at which Filebeat writes to disk is proportional to the number of filtered lines.
Will it write the filtered-out (useless) lines to /dev/null or /dev/pts?
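
For what it's worth, the "Total non-zero metrics" line above already carries the two counters that seem worth comparing. A minimal sketch in Python, assuming the JSON payload of that log line has been copied into a file named metrics.json (the file name is arbitrary):

import json

with open("metrics.json") as f:
    metrics = json.load(f)["monitoring"]["metrics"]

# events dropped inside the Beat before they reach the output
filtered = metrics["libbeat"]["pipeline"]["events"]["filtered"]
# number of times the registrar wrote its registry file
registrar_writes = metrics["registrar"]["writes"]

print("events filtered by the pipeline:", filtered)
print("registry file writes:           ", registrar_writes)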

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.