Filebeat not sending logs to elasticsearch when pipeline is configured in filebeat.yml

Hi,
I have a scenario where I'm trying to upload product logs to Elasticsearch using Filebeat. I have created an ingest pipeline to convert the log date into the event date. Below is the ingest pipeline definition:

PUT _ingest/pipeline/mylogs
{
  "description" : "Extracting date from log line",
  "processors" : [
    {
      "date" : {
        "field" : "log.timestamp",
        "target_field" : "@timestamp",
        "formats" : [ "yyyy-MM-dd HH:mm:ss.SSSS", "ISO8601" ]
      }
    }
  ]
}
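To check that the pipeline parses a sample line as expected before wiring it into Filebeat, it can be run through the ingest simulate API. This is a sketch; the sample timestamp value below is made up to match the first format in the pipeline:

```
POST _ingest/pipeline/mylogs/_simulate
{
  "docs": [
    {
      "_source": {
        "log": {
          "timestamp": "2020-01-21 17:03:09.1510"
        }
      }
    }
  ]
}
```

If the date processor works, the response should show the document with `@timestamp` set; if not, it returns the processor error directly, which is much easier to debug than silent drops.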

Here is the filebeat.yml configuration pointing to the pipeline:

output.elasticsearch: 
  hosts: ["127.0.0.1:9200"] 
  pipeline: "mylogs"
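For reference, a minimal filebeat.yml along these lines might look as follows. This is only a sketch: the input path is a placeholder and would need to match the actual log location, and the `log.timestamp` field referenced by the pipeline must actually exist on the events (for example, set by an earlier processor or parsing step), otherwise the date processor will fail:

```
filebeat.inputs:
  - type: log
    paths:
      - "somepath/*.log"

output.elasticsearch:
  hosts: ["127.0.0.1:9200"]
  pipeline: "mylogs"
```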

(Sorry about the formatting; the forum wasn't letting me add line breaks, but the YAML above is indented properly.) The moment I add the pipeline to filebeat.yml, logs stop being indexed in Elasticsearch. Every time I restart Filebeat, this is what I see in the Filebeat logs:

2020-01-22T17:03:09.151+0530 INFO log/harvester.go:251 Harvester started for file: somepath\logfile-2020-01-21y - Copy (3).log

2020-01-22T17:03:09.152+0530 INFO log/harvester.go:251 Harvester started for file: somepath\logfile-2020-01-21y.log
2020-01-22T17:03:09.153+0530 INFO log/harvester.go:251 Harvester started for file: somepath\logfile-2020-01-21y - Copy (2) - Copy - Copy.log
2020-01-22T17:03:09.153+0530 INFO log/harvester.go:251 Harvester started for file: somepath\logfile-2020-01-21y - Copy (2) - Copy.log
2020-01-22T17:03:09.175+0530 INFO log/harvester.go:251 Harvester started for file: somepath\logfile-2020-01-21y - Copy (2).log
2020-01-22T17:03:09.187+0530 INFO log/harvester.go:251 Harvester started for file: somepath\logfile-2020-01-21y - Copy (3) - Copy.log
2020-01-22T17:03:38.352+0530 INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":828,"time":{"ms":78}},"total":{"ticks":1609,"time":{"ms":141},"value":1609},"user":{"ticks":781,"time":{"ms":63}}},"handles":{"open":276},"info":{"ephemeral_id":"02b57ff2-f216-4886-8c42-0e4c9916b8b7","uptime":{"ms":960239}},"memstats":{"gc_next":13469456,"memory_alloc":9853264,"memory_total":80321576,"rss":8192},"runtime":{"goroutines":55}},"filebeat":{"events":{"added":6,"done":6},"harvester":{"open_files":6,"running":6,"started":6}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0,"filtered":6,"total":6}}},"registrar":{"states":{"current":2369,"update":6},"writes":{"success":6,"total":6}}}}}

Not sure what is blocking the logs from the pipeline into the index; any help is greatly appreciated. We can't use Logstash, as that would add one more tool to maintain, so please suggest a solution without Logstash. Thanks in advance.

Hi @ramkms6666 :slight_smile:

You can format your code using common markdown syntax (triple backtick).

I suggest posting the entire config file and running Filebeat with `-e -d "*"` to see more detailed logs.

You may also add an `on_failure` handler to your pipeline (https://www.elastic.co/guide/en/elasticsearch/reference/master/handling-failure-in-pipelines.html):

  "on_failure" : [{
    "set" : {
      "field" : "error.log",
      "value" : "{{ _ingest.on_failure_message }}"
    }
  }]
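Attached to the pipeline from the question, the handler would look something like this. Documents that fail the date processor are then still indexed, with the failure message stored in `error.log`, so you can see why parsing failed instead of losing the events:

```
PUT _ingest/pipeline/mylogs
{
  "description" : "Extracting date from log line",
  "processors" : [
    {
      "date" : {
        "field" : "log.timestamp",
        "target_field" : "@timestamp",
        "formats" : [ "yyyy-MM-dd HH:mm:ss.SSSS", "ISO8601" ]
      }
    }
  ],
  "on_failure" : [
    {
      "set" : {
        "field" : "error.log",
        "value" : "{{ _ingest.on_failure_message }}"
      }
    }
  ]
}
```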

Check any of the Filebeat modules for examples.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.