Filebeat can't exclude HTTP 200

Hello,

I am trying to exclude HTTP 200 status codes from the logs that Filebeat sends to Elasticsearch.

Here is an example of the message:
IP- - [07/Jun/2018:22:39:23 +0000] "HEAD / HTTP/1.1" 200 - "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1985.67 Safari/537.36" vhost=acquia-sites.com host=www.sites.com hosting_site=sites pid=18611 request_time=609949 forwarded_for="IP, IP" request_id="v-a058f816-6aa3-11e8-a843-0a0c37e0cf8d"

There are two places where I try to exclude it:

processors:
- drop_event:
    when:
      equals:
        http.code: 200

AND

exclude_lines: ['^DBG','^DEBUG','HTTP\/1\.1\" 200']

But it doesn't work. I tested the regex and it seems to match. Any idea why it doesn't?
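
One thing that is easy to miss here: exclude_lines is an input-level (prospector-level) option, so it only filters lines read by the input it is defined on; anything harvested through filebeat.modules goes through the modules' own inputs and never sees it. A minimal sketch of where the option takes effect (the path is illustrative):

filebeat.prospectors:
- type: log
  paths:
    - /var/log/custom_log/*/*.log
  # Drops matching lines before they are shipped; applies to this input only.
  exclude_lines: ['HTTP/1\.1" 200 ']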

Thank you

Hi,

Could you please try it with the format below:

exclude_lines: ["^DBG"]

Or, if that's not working, try the example below.

# For example, you can use the following processors to keep the fields that
# contain CPU load percentages, but remove the fields that contain CPU ticks
# values:
#
#processors:
#- include_fields:
#    fields: ["cpu"]
#- drop_fields:
#    fields: ["cpu.user", "cpu.system"]
#
# The following example drops the events that have the HTTP response code 200:
#
#processors:
#- drop_event:
#    when:
#       equals:
#           http.code: 200
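
One caveat: the equals condition only matches if the event actually carries an http.code field. A plain log input puts the raw line into the message field without parsing it, so on unparsed access-log lines a regexp condition on message may be the closer fit. A minimal sketch, assuming the raw line lands in the message field:

processors:
- drop_event:
    when:
      regexp:
        message: 'HTTP/1\.1" 200 '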

Please let me know if you are still getting the error. If so, please share your filebeat.yml file; it will help me understand the issue.

Regards,

Hello,

Switching to double quotes instead of single quotes created more issues, because my exclude_lines patterns already contain a double quote and the escape characters stopped working. As for your second solution, I already use it together with exclude_lines, and neither works. My full filebeat.yml is below.
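
For what it's worth, the quoting rules do differ: in YAML single-quoted strings a backslash is literal, while in double-quoted strings a backslash starts an escape sequence, so every regex backslash has to be doubled and any embedded quote escaped. A small illustration with a simplified pattern (both lines should yield the same regexes):

# single quotes: backslashes pass through, an embedded " needs no escape
exclude_lines: ['^DBG', '^DEBUG', 'HTTP/1\.1" 200']
# double quotes: backslashes doubled, embedded " escaped
exclude_lines: ["^DBG", "^DEBUG", "HTTP/1\\.1\" 200"]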

###################### Filebeat Configuration Example #########################

#=========================== Filebeat prospectors =============================

filebeat.prospectors:

# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.

- type: log

  # Change to true to enable this prospector configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log
    - /var/log/custom_log/*/*.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  exclude_lines: ['^DBG','^DEBUG','HTTP\/1\.1\" 200']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  exclude_files: ['\.gz$', '\.dat$', 'filebeat', 'mslogprod.log']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
  # that was (not) matched before or after or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to next in Logstash
  #multiline.match: after


#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: true

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging


#============================== Dashboards =====================================

#============================= Elastic Cloud ==================================
cloud.auth: "Using This method"

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  # hosts: ["localhost:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

#============================== Xpack Monitoring ===============================
# filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
xpack.monitoring.enabled: false

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well. Any setting that is not set is
# automatically inherited from the Elasticsearch output configuration, so if you
# have the Elasticsearch output configured, you can simply uncomment the
# following line.
xpack.monitoring.elasticsearch:

#======================== Custom =======================
processors:
- drop_event:
    when:
      equals:
        http.code: 200
- drop_event:
    when:
      regexp:
        message: "^DBG:"
- drop_event:
    when:
      regexp:
        message: "^DEBUG:"
- add_cloud_metadata: ~

#===== module ======
filebeat.modules:
- module: apache2
- module: auditd
- module: mysql
- module: nginx
- module: system
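
As an aside, the three drop_event processors above could be collapsed into a single one using the or condition that Beats processors support; the effect should be the same, just tidier. A sketch, again assuming the raw line sits in the message field:

processors:
- drop_event:
    when:
      or:
        - regexp:
            message: '^DBG'
        - regexp:
            message: '^DEBUG'
        - regexp:
            message: 'HTTP/1\.1" 200 '
- add_cloud_metadata: ~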

OK, could you please share the Filebeat logs so that I can identify the issue?

Regards,

2018-06-12T15:28:57.243Z        INFO    instance/beat.go:468    Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2018-06-12T15:28:57.245Z        INFO    instance/beat.go:475    Beat UUID: 18fe4e9b-e654-42b0-8d71-d4d2660307fd
2018-06-12T15:28:57.245Z        INFO    instance/beat.go:213    Setup Beat: filebeat; Version: 6.2.4
2018-06-12T15:28:57.248Z        INFO    add_cloud_metadata/add_cloud_metadata.go:301    add_cloud_metadata: hosting provider type detected as ec2, metadata={"availability_zone":"us-west-2c","instance_id":"i-0c8788050e301b03f","machine_type":"t2.small","provider":"ec2","region":"us-west-2"}
2018-06-12T15:28:57.249Z        INFO    elasticsearch/client.go:145     Elasticsearch url: URL
2018-06-12T15:28:57.249Z        INFO    pipeline/module.go:76   Beat name: NAME
2018-06-12T15:28:57.252Z        INFO    beater/filebeat.go:62   Enabled modules/filesets: apache2 (access, error), auditd (log), mysql (slowlog, error), nginx (access, error), system (auth, syslog),  ()
2018-06-12T15:28:57.255Z        INFO    elasticsearch/client.go:145     Elasticsearch url: URL
2018-06-12T15:28:57.255Z        INFO    [monitoring]    log/log.go:97   Starting metrics logging every 30s
2018-06-12T15:28:57.623Z        INFO    elasticsearch/client.go:690     Connected to Elasticsearch version 6.2.4
2018-06-12T15:28:57.623Z        INFO    kibana/client.go:69     Kibana url: URL
2018-06-12T15:29:25.953Z        INFO    instance/beat.go:583    Kibana dashboards successfully loaded.
2018-06-12T15:29:25.953Z        INFO    instance/beat.go:301    filebeat start running.
2018-06-12T15:29:25.953Z        INFO    registrar/registrar.go:110      Loading registrar data from /var/lib/filebeat/registry
2018-06-12T15:29:25.954Z        INFO    registrar/registrar.go:121      States Loaded from registrar: 9
2018-06-12T15:29:25.954Z        INFO    crawler/crawler.go:48   Loading Prospectors: 10
2018-06-12T15:29:25.958Z        INFO    log/prospector.go:111   Configured paths: [/var/log/*.log /var/log/custom_log/*/*.log]
2018-06-12T15:29:25.958Z        INFO    log/prospector.go:111   Configured paths: [/var/log/apache2/access.log* /var/log/apache2/other_vhosts_access.log*]
2018-06-12T15:29:25.958Z        INFO    log/prospector.go:111   Configured paths: [/var/log/apache2/error.log*]
2018-06-12T15:29:25.959Z        INFO    log/prospector.go:111   Configured paths: [/var/log/audit/audit.log*]
2018-06-12T15:29:25.960Z        INFO    log/prospector.go:111   Configured paths: [/var/log/mysql/error.log* /var/log/mysqld.log*]
2018-06-12T15:29:25.960Z        INFO    log/prospector.go:111   Configured paths: [/var/log/mysql/mysql-slow.log* /var/lib/mysql/mssynclog-slow.log]
2018-06-12T15:29:25.960Z        INFO    log/prospector.go:111   Configured paths: [/var/log/nginx/access.log*]
2018-06-12T15:29:25.961Z        INFO    log/prospector.go:111   Configured paths: [/var/log/nginx/error.log*]
2018-06-12T15:29:25.964Z        INFO    log/prospector.go:111   Configured paths: [/var/log/auth.log* /var/log/secure*]
2018-06-12T15:29:25.965Z        INFO    log/prospector.go:111   Configured paths: [/var/log/messages* /var/log/syslog*]
2018-06-12T15:29:25.965Z        INFO    crawler/crawler.go:82   Loading and starting Prospectors completed. Enabled prospectors: 10
2018-06-12T15:29:25.965Z        INFO    cfgfile/reload.go:127   Config reloader started
2018-06-12T15:29:25.966Z        INFO    cfgfile/reload.go:219   Loading of config files completed.
2018-06-12T15:29:25.967Z        INFO    log/harvester.go:216    Harvester started for file: /var/log/custom_log/rkym/access.log
2018-06-12T15:29:25.968Z        INFO    log/harvester.go:216    Harvester started for file: /var/log/audit/audit.log
2018-06-12T15:29:25.994Z        INFO    log/harvester.go:216    Harvester started for file: /var/log/secure
2018-06-12T15:29:26.017Z        INFO    log/harvester.go:216    Harvester started for file: /var/log/messages
2018-06-12T15:29:26.075Z        INFO    log/harvester.go:216    Harvester started for file: /var/log/custom_log/rkym/error.log
2018-06-12T15:29:26.078Z        INFO    log/harvester.go:216    Harvester started for file: /var/log/secure-20180610
2018-06-12T15:29:26.079Z        INFO    log/harvester.go:216    Harvester started for file: /var/log/messages-20180610
2018-06-12T15:29:26.397Z        INFO    elasticsearch/client.go:690     Connected to Elasticsearch version 6.2.4
2018-06-12T15:29:26.484Z        INFO    template/load.go:73     Template already exists and will not be overwritten.
2018-06-12T15:29:27.257Z        INFO    [monitoring]    log/log.go:124  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":30,"time":34},"total":{"ticks":250,"time":254,"value":250},"user":{"ticks":220,"time":220}},"info":{"ephemeral_id":"2afd8c1a-c4fc-44b9-b78e-d8cedd3f70eb","uptime":{"ms":30018}},"memstats":{"gc_next":25618576,"memory_alloc":15300696,"memory_total":44044832,"rss":33746944}},"filebeat":{"events":{"active":4123,"added":4139,"done":16},"harvester":{"open_files":7,"running":7,"started":7},"prospector":{"log":{"files":{"truncated":2}}}},"libbeat":{"config":{"module":{"running":0},"reloads":1},"output":{"read":{"bytes":9979},"type":"elasticsearch","write":{"bytes":3156}},"pipeline":{"clients":10,"events":{"active":4120,"filtered":16,"published":4116,"total":4136}}},"registrar":{"states":{"current":11,"update":16},"writes":16},"system":{"cpu":{"cores":1},"load":{"1":0,"15":0.05,"5":0.01,"norm":{"1":0,"15":0.05,"5":0.01}}}}}}
2018-06-12T15:29:57.256Z        INFO    [monitoring]    log/log.go:124  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":170,"time":177},"total":{"ticks":1230,"time":1239,"value":1230},"user":{"ticks":1060,"time":1062}},"info":{"ephemeral_id":"2afd8c1a-c4fc-44b9-b78e-d8cedd3f70eb","uptime":{"ms":60018}},"memstats":{"gc_next":25811952,"memory_alloc":14634728,"memory_total":198267504,"rss":24649728}},"filebeat":{"events":{"active":-4123,"added":28641,"done":32764},"harvester":{"open_files":7,"running":7}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":12393,"batches":249,"total":12393},"read":{"bytes":230644},"write":{"bytes":9727859}},"pipeline":{"clients":10,"events":{"active":0,"filtered":20371,"published":8277,"retry":50,"total":28644},"queue":{"acked":12393}}},"registrar":{"states":{"current":11,"update":32764},"writes":248},"system":{"load":{"1":0.13,"15":0.05,"5":0.05,"norm":{"1":0.13,"15":0.05,"5":0.05}}}}}}

Thanks. After a restart it started working; I don't know why.
