Filebeat tail_files Logs 5.4 Version

Hi,

I have installed Filebeat on Ubuntu, and my ELK master is also on Ubuntu.

On my ELK server I have the following configuration:

#input.conf

input {
  beats {
    host => "0.0.0.0"
    port => 5443
    type => weblog
    #ssl => true
    #ssl_certificate => "/etc/logstash/logstash.crt"
    #ssl_key => "/etc/logstash/logstash.key"
  }
}

#filter.conf

filter {
  if [type] == "weblog" {
    grok {
      match => [ "message", "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:loglevel} (?<logger>(?:[a-zA-Z0-9]+\.)*[-A-Za-z0-9$]+) %{GREEDYDATA:message}"]
      overwrite => [ "message" ]
    }

  }
}

#output.conf

output {
  elasticsearch {
    hosts => ["10.141.127.145:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

I installed Filebeat on the Ubuntu client machine.

On the Filebeat machine, my web logs are under /var/log/web.log, as follows:

#web.log

2017-06-01 12:15:40 INFO  Utils-  DETAILS SUCCESSFULLY UPLOADED
2017-06-01 12:17:16 INFO  PortalUpload- Upload Structure Data webservice 
2017-06-01 12:15:39 DEBUG PortalUpload-  check
2017-06-01 12:15:39 ERROR PortalUpload-  investigate details

#filebeat.yml

filebeat.prospectors:

- input_type: log

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
     - /var/log/web.log

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]
  hosts: ["10.141.127.145:5443"]
  bulk_max_size: 2048
  #ssl.certificate_authorities: ["/etc/filebeat/logstash.crt"]
  template.name: "filebeat"
  template.path: "filebeat.template.json"
  template.overwrite: false

#Filebeat Logs, tail -f /var/log/filebeat

2017-06-26T17:48:18+05:30 ERR Failed to publish events caused by: write tcp 30.30.0.6:37661->10.141.127.145:5443: write: connection reset by peer
2017-06-26T17:48:18+05:30 INFO Error publishing events (retrying): write tcp 30.30.0.6:37661->10.141.127.145:5443: write: connection reset by peer
2017-06-26T17:48:38+05:30 INFO Non-zero metrics in the last 30s: filebeat.harvester.open_files=1 filebeat.harvester.running=1 filebeat.harvester.started=1 libbeat.logstash.call_count.PublishEvents=2 libbeat.logstash.publish.read_bytes=24 libbeat.logstash.publish.write_bytes=10749 libbeat.logstash.publish.write_errors=1 libbeat.logstash.published_and_acked_events=84 libbeat.logstash.published_but_not_acked_events=84 libbeat.publisher.published_events=84 publish.events=85 registrar.states.current=1 registrar.states.update=85 registrar.writes=1
2017-06-26T17:49:08+05:30 INFO No non-zero metrics in the last 30s
2017-06-26T17:49:38+05:30 INFO No non-zero metrics in the last 30s
2017-06-26T17:50:08+05:30 INFO No non-zero metrics in the last 30s
2017-06-26T17:50:38+05:30 INFO No non-zero metrics in the last 30s
2017-06-26T17:50:38+05:30 INFO Harvester started for file: /var/log/web.log
2017-06-26T17:50:38+05:30 ERR Failed to publish events caused by: write tcp 30.30.0.6:37662->10.141.127.145:5443: write: connection reset by peer
2017-06-26T17:50:38+05:30 INFO Error publishing events (retrying): write tcp 30.30.0.6:37662->10.141.127.145:5443: write: connection reset by peer
2017-06-26T17:51:08+05:30 INFO Non-zero metrics in the last 30s: filebeat.harvester.open_files=1 filebeat.harvester.running=1 filebeat.harvester.started=1 libbeat.logstash.call_count.PublishEvents=2 libbeat.logstash.publish.read_bytes=6 libbeat.logstash.publish.write_bytes=450 libbeat.logstash.publish.write_errors=1 libbeat.logstash.published_and_acked_events=3 libbeat.logstash.published_but_not_acked_events=3 libbeat.publisher.published_events=3 publish.events=5 registrar.states.update=5 registrar.writes=1

My questions are:

  1. How to resolve the "write: connection reset by peer" error?
  2. How to send only "ERROR" logs to the Logstash server by configuring filebeat.yml?

I read online that Filebeat 5.4 needs "tail_files: true"

and

logging:
  to_files: true
  files:
    path: /var/log/mybeat
    name: mybeat
    level: info

Is this the correct way in version 5.4 to filter the "ERROR" entries in my web log and send them to the Logstash server?

Any update?

  1. How to resolve the "write: connection reset by peer" error?

Which Logstash version are you using? Something, e.g. Logstash itself, a server, a firewall, or a network device, is closing the connection while Filebeat is publishing log messages to Logstash. Filebeat normally reconnects and sends the logs again. Try running Filebeat with debug logging (-d '*'). In Logstash you can increase client_inactivity_timeout, so Logstash is less likely to close an idle connection. Updating Logstash to the most recent 5.3.x or 5.4 release and increasing client_inactivity_timeout should help.
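As a minimal sketch, assuming the beats input from your input.conf, raising that timeout would look like this (the 300-second value is only an illustration, tune it for your environment):

input {
  beats {
    host => "0.0.0.0"
    port => 5443
    # let idle Filebeat connections stay open longer before
    # Logstash closes them (the default is 60 seconds)
    client_inactivity_timeout => 300
  }
}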

  2. How to send only "ERROR" logs to the Logstash server by configuring filebeat.yml?

Use the include_lines setting in the Filebeat prospector.
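A minimal sketch of such a prospector, assuming the log path from your filebeat.yml (the exact pattern is discussed further down in this thread):

filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/web.log
  # forward only lines containing the string ERROR
  include_lines: ['ERROR']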

I read online that Filebeat 5.4 needs "tail_files: true"

Huh, why?

logging:
  to_files: true
  files:
    path: /var/log/mybeat
    name: mybeat
    level: info

Is this the correct way in version 5.4 to filter the "ERROR" entries in my web log and send them to the Logstash server?

Sorry? That snippet configures Filebeat's own internal logging. Do you want to send errors from Filebeat itself, or from the monitored services, to Logstash?

My logs are as follows:

2017-06-27 19:16:55 INFO PortalUpload- Pandy Sample Logs
2017-06-27 19:16:55 ERROR PortalUpload- Pandy Sample Logs

I would like to parse logs where a field contains "ERROR". As I understand it, include_lines searches for lines that start with a pattern, but my use case is to find a FIELD that contains "ERROR". I found the following at https://www.elastic.co/guide/en/beats/filebeat/current/filtering-and-enhancing-data.html:

fields:
    app_id: query_engine_12

How do I use the above in my use case?

The include_lines setting accepts a regular expression. If all you want to forward is messages containing errors, you can use include_lines: ['ERROR']. You can make the regular expression as strict as you want. For example, the pattern '^\d{4}(-\d{2}){2} \d{2}(:\d{2}){2} ERROR' only includes lines starting with date + time + the "ERROR" string.
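In filebeat.yml, that stricter variant would look roughly like this (a sketch using the pattern quoted above and your log path):

filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/web.log
  # keep only lines starting with "YYYY-MM-DD HH:MM:SS ERROR"
  include_lines: ['^\d{4}(-\d{2}){2} \d{2}(:\d{2}){2} ERROR']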

Plus, instead of include_lines, you can also use the drop_event processor, like:

processors:
- drop_event:
    when.not.contains.message: 'ERROR'
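Note that processors is a top-level section in filebeat.yml, so next to the prospector the whole thing would look roughly like this (a sketch, assuming your log path):

filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/web.log

processors:
- drop_event:
    when.not.contains.message: 'ERROR'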

If add "include_lines: ['ERROR']" it is trying to find lines starting with ERROR, but for me ERROR is middle of row.. how to

Regarding when.not.contains.message: 'ERROR': I need to send lines that contain the word "ERROR". See the sample logs below:

2017-06-27 19:16:55 INFO PortalUpload- Pandy Sample Logs
2017-06-27 19:16:55 ERROR PortalUpload- Pandy Sample Logs

Here ERROR is in the third column, but sometimes it can be in any column, so it is better to use a syntax that matches the word "ERROR" anywhere in the line.

Please re-read my previous answer. include_lines takes a regular expression; it does not look for ERROR only at the beginning of the line. The drop_event + when.not combination removes all events that do not contain ERROR. It's a double negation -> only messages containing the string ERROR are sent.

Agreed with the answer:

when.not.contains.message: 'ERROR'

But I have tried include_lines: [^ERROR], and it is not working.

Also, by using "when.not.contains.message:", can I use two filters, like INFO and ERROR?

But I have tried include_lines: [^ERROR], and it is not working.

It's not working because you're using the ^ (beginning-of-line) anchor. Just use include_lines: ["INFO", "ERROR"].
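Put together in filebeat.yml, that would look something like this (a sketch, assuming the same prospector as above):

filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/web.log
  # each entry is an unanchored regular expression;
  # a line is forwarded if it matches any of them
  include_lines: ["INFO", "ERROR"]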
