Filebeat exclude and include lines

Hello Experts,

Before sending my logs to Logstash, I would like to remove some lines that are not needed. I also know exactly which lines I need to send to Logstash for further processing with grok.

Which option is better, exclude_lines or include_lines, or should I define both?

exclude_lines: ['Bad rsync return code', 'utilization', 'health_check', 'account-auditor:', 'account-reaper:', 'account-replicator:', 'container-auditor:', 'container-reconciler:', 'container-replicator:', 'container-sharder:', 'container-updater:', 'object-auditor:', 'object-expirer:', 'object-reconstructor:', 'object-replicator:', 'object-updater:', 'proxy-server:swiftstack_authen:', 'swift:']

include_lines: ['proxy-server:', 'object-server:', 'container-server:', 'account-server:']
  1. In my first attempt I tried enabling both, but I don't find anything written to the Elasticsearch index.
  2. I tried enabling only the exclude_lines parameter, but it is taking a lot of time to process; after waiting 30 minutes it had sent only 2 docs to Elasticsearch.
    [root@elk1 swiftlogs]# ll
    total 1757000
    -rw-r--r--. 1 elk elk 1799134000 May 14 10:18 all.log.1.dllabsw1
    -rw-r--r--. 1 root root 32512 May 16 09:05 test.log >>>> data processed from only this log.
    [root@elk1 swiftlogs]# cat * | awk '{ print $5}' | sort | uniq -c
    120 account-auditor:
    49 account-reaper:
    144 account-replicator:
    56992 account-server:
    82 container-auditor:
    5958 container-reconciler:
    138 container-replicator:
    2354922 container-server:
    486 container-sharder:
    107 container-updater:
    48 object-auditor:
    1347 object-expirer:
    8667 object-reconstructor:
    2205 object-replicator:
    1197460 object-server:
    30188 object-updater:
    398936 proxy-server:
    30 proxy-server:swiftstack_authen:
    230411 swift:

[root@elk1 swiftlogs]#

  3. With only the include_lines parameter, it's the same thing: nothing is written to Elasticsearch.

Thanks
Chandra

Hi,

In your case, since you already know the exact lines you want, it is easier to define only include_lines, as the exclude_lines definition is much longer.
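
For reference, a minimal sketch of how that could look in filebeat.yml (assuming a Filebeat 6.x log input; the path is only an example based on your listing, adjust it to your setup):

filebeat.inputs:
  - type: log
    paths:
      - /opt/swiftlogs/*.log
    # Only lines matching at least one of these regular expressions are kept;
    # everything else is dropped before the event is sent to the output.
    include_lines: ['proxy-server:', 'object-server:', 'container-server:', 'account-server:']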

About the problem you are seeing, I doubt it's a performance problem. Both your include and exclude definitions are short and simple enough that there shouldn't be a noticeable performance impact.

When you re-run Filebeat, it continues where it left off in the previous run, so I think what is happening is that during your testing you already consumed most of the file, and it is now at the end of the file waiting for new entries to be added. That explains why you got only a few documents in 30 minutes.

If you want to process the whole file again, you must remove the registry (/var/lib/filebeat/registry) so that Filebeat forgets what has already been processed (don't do this in production or you'll get a lot of duplicates).
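
On a test box, a rough sketch of that (assuming Filebeat runs as a systemd service and uses the default registry path mentioned above):

systemctl stop filebeat
# Remove the registry so Filebeat forgets the stored read offsets and re-reads the files from the beginning.
rm -rf /var/lib/filebeat/registry
systemctl start filebeat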

Thanks @adrisr for your response.

As you said, it may not be a performance issue, but I am pretty sure it is not processing all the included lines. Filebeat should process about 4M log lines (individual counts below), but in 30 minutes it processed only 2 lines, which surprised me.
398936 proxy-server:
1197460 object-server:
2354922 container-server:
56992 account-server:

I do see the log entries below in the Filebeat logs; can we tell from them how many events have been processed and how many are still in flight?

":387}},"total":{"ticks":1270,"time":{"ms":1281},"value":1270},"user":{"ticks":890,"time":{"ms":894}}},"handles":{"limit":{"hard":4096,"soft":1024},"open":10},"info":{"ephemeral_id":"c345ac40-0c0b-4799-b11c-a8dadea8af99","uptime":{"ms":30043}},"memstats":{"gc_next":30194336,"memory_alloc":25457200,"memory_total":47430720,"rss":41299968}},"filebeat":{"events":{"active":4199,"added":4206,"done":7},"harvester":{"open_files":2,"running":2,"started":2}},"libbeat":{"config":{"module":{"running":0},"reloads":3},"output":{"events":{"active":4096,"batches":2,"total":4096},"read":{"bytes":30},"type":"logstash","write":{"bytes":393078}},"pipeline":{"clients":1,"events":{"active":4117,"filtered":88,"published":4116,"retry":2048,"total":4205}}},"registrar":{"states":{"current":2,"update":7},"writes":{"success":8,"total":8}},"system":{"cpu":{"cores":10},"load":{"1":0.17,"15":0.23,"5":0.29,"norm":{"1":0.017,"15":0.023,"5":0.029}}}}}}
2019-05-16T09:54:18.393-0700 INFO [monitoring] log/log.go:144 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":390,"time":{"ms":3}},"total":{"ticks":1290,"time":{"ms":14},"value":1290},"user":{"ticks":900,"time":{"ms":11}}},"handles":{"limit":{"hard":4096,"soft":1024},"open":10},"info":{"ephemeral_id":"c345ac40-0c0b-4799-b11c-a8dadea8af99","uptime":{"ms":60039}},"memstats":{"gc_next":30194336,"memory_alloc":25890448,"memory_total":47863968,"rss":241664}},"filebeat":{"harvester":{"open_files":2,"running":2}},"libbeat":{"config":{"module":{"running":0},"reloads":2},"output":{"read":{"bytes":36}},"pipeline":{"clients":1,"events":{"active":4117}}},"registrar":{"states":{"current":2}},"system":{"load":{"1":0.1,"15":0.22,"5":0.26,"norm":{"1":0.01,"15":0.022,"5":0.026}}}}}}

2019-05-16T09:54:48.393-0700 INFO [monitoring] log/log.go:144 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":390,"time":{"ms":6}},"total":{"ticks":1300,"time":{"ms":13},"value":1300},"user":{"ticks":910,"time":{"ms":7}}},"handles":{"limit":{"hard":4096,"soft":1024},"open":10},"info":{"ephemeral_id":"c345ac40-0c0b-4799-b11c-a8dadea8af99","uptime":{"ms":90038}},"memstats":{"gc_next":30194336,"memory_alloc":26210912,"memory_total":48184432,"rss":229376}},"filebeat":{"harvester":{"open_files":2,"running":2}},"libbeat":{"config":{"module":{"running":0},"reloads":3},"output":{"read":{"bytes":36}},"pipeline":{"clients":1,"events":{"active":4117}}},"registrar":{"states":{"current":2}},"system":{"load":{"1":0.12,"15":0.22,"5":0.25,"norm":{"1":0.012,"15":0.022,"5":0.025}}}}}}

2019-05-16T09:59:41.488-0700 INFO [monitoring] log/log.go:152 Total non-zero metrics {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":440,"time":{"ms":444}},"total":{"ticks":1520,"time":{"ms":1529},"value":1520},"user":{"ticks":1080,"time":{"ms":1085}}},"handles":{"limit":{"hard":4096,"soft":1024},"open":8},"info":{"ephemeral_id":"c345ac40-0c0b-4799-b11c-a8dadea8af99","uptime":{"ms":383136}},"memstats":{"gc_next":35592592,"memory_alloc":17850816,"memory_total":51343096,"rss":42246144}},"filebeat":{"events":{"active":4197,"added":4206,"done":9},"harvester":{"closed":2,"open_files":0,"running":0,"started":2}},"libbeat":{"config":{"module":{"running":0},"reloads":38},"output":{"events":{"active":4096,"batches":2,"total":4096},"read":{"bytes":456},"type":"logstash","write":{"bytes":393078}},"pipeline":{"clients":0,"events":{"active":4116,"failed":2,"filtered":88,"published":4116,"retry":2048,"total":4206}}},"registrar":{"states":{"current":2,"update":7},"writes":{"success":9,"total":9}},"system":{"cpu":{"cores":10},"load":{"1":0,"15":0.17,"5":0.11,"norm":{"1":0,"15":0.017,"5":0.011}}}}}}

Small correction: Filebeat has been running for the last 5 hours, but nothing has been written to Elasticsearch.

Below are the latest log entries:

2019-05-16T15:19:31.172-0700    INFO    [monitoring]    log/log.go:144  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":2450},"total":{"ticks":11620,"time":{"ms":8},"value":11620},"user":{"ticks":9170,"time":{"ms":8}}},"handles":{"limit":{"hard":4096,"soft":1024},"open":10},"info":{"ephemeral_id":"fae4e6a7-8ef3-4dee-b751-cb01d2626e54","uptime":{"ms":18720021}},"memstats":{"gc_next":35553376,"memory_alloc":18843160,"memory_total":247228824}},"filebeat":{"harvester":{"open_files":2,"running":2}},"libbeat":{"config":{"module":{"running":0},"reloads":3},"output":{"read":{"bytes":36}},"pipeline":{"clients":1,"events":{"active":4117}}},"registrar":{"states":{"current":2}},"system":{"load":{"1":0.13,"15":0.1,"5":0.07,"norm":{"1":0.013,"15":0.01,"5":0.007}}}}}}
2019-05-16T15:20:01.170-0700    INFO    [monitoring]    log/log.go:144  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":2450,"time":{"ms":4}},"total":{"ticks":11670,"time":{"ms":53},"value":11670},"user":{"ticks":9220,"time":{"ms":49}}},"handles":{"limit":{"hard":4096,"soft":1024},"open":10},"info":{"ephemeral_id":"fae4e6a7-8ef3-4dee-b751-cb01d2626e54","uptime":{"ms":18750020}},"memstats":{"gc_next":35553152,"memory_alloc":17794856,"memory_total":247464048}},"filebeat":{"harvester":{"open_files":2,"running":2}},"libbeat":{"config":{"module":{"running":0},"reloads":3},"output":{"read":{"bytes":36}},"pipeline":{"clients":1,"events":{"active":4117}}},"registrar":{"states":{"current":2}},"system":{"load":{"1":0.08,"15":0.1,"5":0.06,"norm":{"1":0.008,"15":0.01,"5":0.006}}}}}}

Thanks
Chandra

I ran in debug mode to understand what is happening inside Filebeat and Logstash.

After processing a few log lines, Filebeat keeps saying "Start next scan" but it is not really processing any more logs; the debug output is given below.

Last log processed (it stopped processing after that):

Logstash debug:

Even though Logstash received messages from Filebeat, I am not sure why they are not being sent to Elasticsearch.

Logstash debug log:
[2019-05-17T07:40:35,436][DEBUG][org.logstash.beats.BeatsHandler] [local: 10.1.28.176:5044, remote: 10.1.28.175:55632] Sending a new message for the listener, sequence: 326
[2019-05-17T07:40:35,437][DEBUG][logstash.pipeline ] filter received {"event"=>{"host"=>{"name"=>"elk1"}, "source"=>"/opt/swiftlogs/all.log", "input"=>{}, "tags"=>["swift_paco_logs", "beats_input_codec_plain_applied"], "@timestamp"=>2019-05-17T14:40:33.804Z, "message"=>"May 13 03:30:34 dllabsw1 container-server: 172.24.7.107 - - [13/May/2019:03:30:34 +0000] "HEAD /d3/53652/ACC_VZ_9/CONT_VZ_6" 204 - "HEAD http://172.24.7.107/v1/ACC_VZ_9/CONT_VZ_6?states=listing&format=json" "tx349541d20afc4188bdac6-005cd8e4d9" "proxy-server 26939" 0.0185 "-" 36132 0", "log"=>{"file"=>{"path"=>"/opt/swiftlogs/all.log"}}, "prospector"=>{}, "@version"=>"1", "fields"=>{"app_id"=>"swift_paco_logs"}, "offset"=>110547}}
[2019-05-17T07:40:35,437][DEBUG][logstash.pipeline ] filter received {"event"=>{"host"=>{"name"=>"elk1"}, "source"=>"/opt/swiftlogs/all.log", "input"=>{}, "tags"=>["swift_paco_logs", "beats_input_codec_plain_applied"], "@timestamp"=>2019-05-17T14:40:33.803Z, "message"=>"May 13 03:30:33 dllabsw1 container-server: 172.24.7.109 - - [13/May/2019:03:30:33 +0000] "DELETE /d3/61714/.expiring_objects/1557791912/1557804522-ACC_VZ_4/CONT_VZ_3/thumbnail/NzYwMDA2NzA1M3wwMDAwMDAwLTAwMDAtMDE3Ny0wMDA3LTg0MTkxMnwyMDE5LTA1LTE0VDAzOjI4OjQxWg%3D%3D" 204 - "DELETE http://172.24.7.107/d41/60545/ACC_VZ_4/CONT_VZ_3/thumbnail/NzYwMDA2NzA1M3wwMDAwMDAwLTAwMDAtMDE3Ny0wMDA3LTg0MTkxMnwyMDE5LTA1LTE0VDAzOjI4OjQxWg%3D%3D" "txa1ddced372b94852a4f45-005cd8e46a" "object-updater 22421" 0.0018 "-" 35912 0", "log"=>{"file"=>{"path"=>"/opt/swiftlogs/all.log"}}, "fields"=>{"app_id"=>"swift_paco_logs"}, "@version"=>"1", "prospector"=>{}, "offset"=>109031}}
[2019-05-17T07:40:35,438][DEBUG][logstash.pipeline ] filter received {"event"=>{"host"=>{"name"=>"elk1"}, "source"=>"/opt/swiftlogs/all.log", "input"=>{}, "tags"=>["swift_paco_logs", "beats_input_codec_plain_applied"], "@timestamp"=>2019-05-17T14:40:33.804Z, "message"=>"May 13 03:30:34 dllabsw1 container-server: 172.24.7.107 - - [13/May/2019:03:30:34 +0000] "PUT /d3/7732/.expiring_objects/1557791907/1557804633-ACC_VZ_3/CONT_VZ_1/thumbnail/NzYwMDA2NzI3NnwwMDAwMDAwLTAwMDAtMDE3Ny0wMDA3LTYxOTgwNnwyMDE5LTA1LTE0VDAzOjMwOjMzWg%3D%3D" 201 - "PUT http://172.24.7.107/d28/43306/ACC_VZ_3/CONT_VZ_1/thumbnail/NzYwMDA2NzI3NnwwMDAwMDAwLTAwMDAtMDE3Ny0wMDA3LTYxOTgwNnwyMDE5LTA1LTE0VDAzOjMwOjMzWg%3D%3D" "tx526b3f47d49042b48b87b-005cd8e4d9" "object-server 26869" 0.0011 "-" 35878 0", "log"=>{"file"=>{"path"=>"/opt/swiftlogs/all.log"}}, "fields"=>{"app_id"=>"swift_paco_logs"}, "@version"=>"1", "prospector"=>{}, "offset"=>110047}}
[2019-05-17T07:40:35,438][DEBUG][org.logstash.beats.BeatsHandler] [local: 10.1.28.176:5044, remote: 10.1.28.175:55632] Sending a new message for the listener, sequence: 327
[2019-05-17T07:40:35,439][DEBUG][org.logstash.beats.BeatsHandler] [local: 10.1.28.176:5044, remote: 10.1.28.175:55632] Sending a new message for the listener, sequence: 328
[2019-05-17T07:40:35,440][DEBUG][org.logstash.beats.BeatsHandler] [local: 10.1.28.176:5044, remote: 10.1.28.175:55632] Sending a new message for the listener, sequence: 329

Thanks
Chandra

Filebeat debug log:
2019-05-17T07:40:37.307-0700 DEBUG [filter] pipeline/processor.go:175 fail to apply processor global{drop_fields=version, beat, beat.version, input.type, prospector.type, beat.hostname, beat.name}: key not found, key not found, key not found, key not found
2019-05-17T07:40:37.307-0700 DEBUG [publish] pipeline/processor.go:309 Publish event: {
"@timestamp": "2019-05-17T14:40:37.306Z",
"@metadata": {
"beat": "filebeat",
"type": "doc",
"version": "6.7.1"
},
"log": {
"file": {
"path": "/opt/swiftlogs/all.log"
}
},
"message": "May 13 03:31:13 dllabsw1 container-server: 172.24.7.109 - - [13/May/2019:03:31:13 +0000] "DELETE /d3/43397/ACC_VZ_3/CONT_VZ_6/1f876f46-6594-11e9-919d-9be49e7e7ad1/1557545379646%3ARGVmYXVsdC8zMzAwMDAzMTAy%3A024225e1-739d-11e9-bed6-679edf32393d" 204 - "DELETE http://localhost/d45/13194/ACC_VZ_3/CONT_VZ_6/1f876f46-6594-11e9-919d-9be49e7e7ad1/1557545379646%3ARGVmYXVsdC8zMzAwMDAzMTAy%3A024225e1-739d-11e9-bed6-679edf32393d" "txbb1d41ae62084f0b9e900-005cd8e501" "object-server 14320" 0.0007 "-" 36028 0",
"prospector": {},
"input": {},
"host": {
"name": "elk1"
},
"source": "/opt/swiftlogs/all.log",
"offset": 1815645,
"tags": [
"swift_paco_logs"
],
"fields": {
"app_id": "swift_paco_logs"
}
}
2019-05-17T07:40:43.431-0700 DEBUG [cfgfile] cfgfile/reload.go:176 Scan for new config files
2019-05-17T07:40:43.432-0700 DEBUG [cfgfile] cfgfile/reload.go:195 Number of module configs found: 0
2019-05-17T07:40:43.432-0700 DEBUG [reload] cfgfile/list.go:62 Starting reload procedure, current runners: 0
2019-05-17T07:40:43.432-0700 DEBUG [reload] cfgfile/list.go:80 Start list: 0, Stop list: 0
2019-05-17T07:40:43.433-0700 DEBUG [input] input/input.go:152 Run input
2019-05-17T07:40:43.433-0700 DEBUG [input] log/input.go:174 Start next scan
2019-05-17T07:40:43.433-0700 DEBUG [input] log/input.go:404 Check file for harvesting: /opt/swiftlogs/all.log
2019-05-17T07:40:43.433-0700 DEBUG [input] log/input.go:494 Update existing file for harvesting: /opt/swiftlogs/all.log, offset: 1816145
2019-05-17T07:40:43.433-0700 DEBUG [input] log/input.go:546 Harvester for file is still running: /opt/swiftlogs/all.log
2019-05-17T07:40:43.433-0700 DEBUG [input] log/input.go:195 input states cleaned up. Before: 1, After: 1, Pending: 0
2019-05-17T07:40:53.432-0700 DEBUG [cfgfile] cfgfile/reload.go:176 Scan for new config files
2019-05-17T07:40:53.432-0700 DEBUG [cfgfile] cfgfile/reload.go:195 Number of module configs found: 0
2019-05-17T07:40:53.432-0700 DEBUG [reload] cfgfile/list.go:62 Starting reload procedure, current runners: 0
2019-05-17T07:40:53.432-0700 DEBUG [reload] cfgfile/list.go:80 Start list: 0, Stop list: 0
2019-05-17T07:40:53.433-0700 DEBUG [input] input/input.go:152 Run input
2019-05-17T07:40:53.434-0700 DEBUG [input] log/input.go:174 Start next scan
2019-05-17T07:40:53.434-0700 DEBUG [input] log/input.go:404 Check file for harvesting: /opt/swiftlogs/all.log
2019-05-17T07:40:53.434-0700 DEBUG [input] log/input.go:494 Update existing file for harvesting: /opt/swiftlogs/all.log, offset: 1816145
2019-05-17T07:40:53.434-0700 DEBUG [input] log/input.go:546 Harvester for file is still running: /opt/swiftlogs/all.log
2019-05-17T07:40:53.434-0700 DEBUG [input] log/input.go:195 input states cleaned up. Before: 1, After: 1, Pending: 0
2019-05-17T07:41:03.430-0700 INFO [monitoring] log/log.go:144 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":780,"time":{"ms":782}},"total":{"ticks":2500,"time":{"ms":2504},"value":2500},"user":{"ticks":1720,"time":{"ms":1722}}},"handles":{"limit":{"hard":65535,"soft":65535},"open":8},"info":{"ephemeral_id":"86fa3b7d-db38-4836-90bd-bb23e9058a58","uptime":{"ms":30035}},"memstats":{"gc_next":33133680,"memory_alloc":20371192,"memory_total":72206584,"rss":46055424}},"filebeat":{"events":{"active":4150,"added":4152,"done":2},"harvester":{"open_files":1,"running":1,"started":1}},"libbeat":{"config":{"module":{"running":0},"reloads":2},"output":{"events":{"active":2942,"batches":3,"total":2942},"read":{"bytes":30},"type":"logstash","write":{"bytes":279459}},"pipeline":{"clients":1,"events":{"active":4117,"filtered":35,"published":4116,"retry":805,"total":4152}}},"registrar":{"states":{"current":2,"update":2},"writes":{"success":2,"total":2}},"system":{"cpu":{"cores":10},"load":{"1":0.24,"15":0.07,"5":0.09,"norm":{"1":0.024,"15":0.007,"5":0.009}}}}}}
2019-05-17T07:41:03.432-0700 DEBUG [cfgfile] cfgfile/reload.go:176 Scan for new config files
2019-05-17T07:41:03.433-0700 DEBUG [cfgfile] cfgfile/reload.go:195 Number of module configs found: 0
2019-05-17T07:41:03.433-0700 DEBUG [reload] cfgfile/list.go:62 Starting reload procedure, current runners: 0
2019-05-17T07:41:03.433-0700 DEBUG [reload] cfgfile/list.go:80 Start list: 0, Stop list: 0
2019-05-17T07:41:03.434-0700 DEBUG [input] input/input.go:152 Run input
2019-05-17T07:41:03.434-0700 DEBUG [input] log/input.go:174 Start next scan
2019-05-17T07:41:03.434-0700 DEBUG [input] log/input.go:404 Check file for harvesting: /opt/swiftlogs/all.log
2019-05-17T07:41:03.434-0700 DEBUG [input] log/input.go:494 Update existing file for harvesting: /opt/swiftlogs/all.log, offset: 1816145
2019-05-17T07:41:03.434-0700 DEBUG [input] log/input.go:546 Harvester for file is still running: /opt/swiftlogs/all.log
2019-05-17T07:41:03.434-0700 DEBUG [input] log/input.go:195 input states cleaned up. Before: 1, After: 1, Pending: 0
