Filebeat is not Seeing Log Traffic on Server

I am running into a strange issue with filebeat on one of our production transaction servers.

Background:

Filebeat on one of our production servers went down for some ‘unknown’ reason (still haven’t figured out why this happened). When I brought the service back up it started without reporting any errors, but it isn’t recognizing any file activity. I double-checked everything: traffic was coming in, but Filebeat wasn’t sending any information on to Elasticsearch.

I performed a complete uninstall of v7.9.2 (including the service) and then installed 7.10.2, with no change in behavior.

I’ve attached the relevant files from the server in a zip with the following directory structure:

Filebeat Issue

  • Config
    • Osp.yml
  • Logs
    • Filebeat.log
    • Filebeat_Debug.zip (debugging turned on)
  • Filebeat.yml
  • Install-service-filebeat.txt (Install-service-filebeat.ps1)

When I place Filebeat in debug mode I can see transactions being picked up by Filebeat but not being passed on to Elasticsearch. This can be seen in the stand-alone log file.
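For reference, debug mode was turned on with logging settings along these lines in filebeat.yml - the log path here is a placeholder, not the server's actual layout:

logging.level: debug
logging.to_files: true
logging.files:
  path: D:\Filebeat\logs   # placeholder path
  name: filebeat
  keepfiles: 7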

The service runs under an account that the infrastructure team uses to allow servers to talk to each other in production.

This was working up until recently, and the infrastructure team swears that no changes were made to the production server.

Thanks for any help you can provide,
Bill

File details (I thought I could upload the files):

Config (filebeat.yml):

Applicable settings:

filebeat.config.inputs:
  enabled: true
  path: config/*.yml

output.elasticsearch:
  hosts: ["http://xxxx:9200"]
  pipeline: geoip-info
  index: "filebeat-%{[fields.doc_type]}-%{+yyyy.MM.dd}"

setup.template.name: "filebeat-request"
setup.template.pattern: "filebeat-request-*"
setup.template.name: "filebeat-response"
setup.template.pattern: "filebeat-response-*"

setup.ilm.enabled: false

processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

OSP.yml:

- type: log
  paths:
    - D:\VTMS\Logs\VTMS.Utility.OSeriesProxy.Request-POSESB02.log
  #json.message_key: GUID
  json.keys_under_root: true
  json.add_error_key: true
  fields:
    doc_type: request

- type: log
  paths:
    - D:\VTMS\Logs\VTMS.Utility.OSeriesProxy.Response-POSESB02.log
  #json.message_key: GUID
  json.keys_under_root: true
  json.add_error_key: true
  fields:
    doc_type: response

If anyone wants to see my logs let me know and I'll get the content posted - unfortunately, including them puts me way over the character limit.

I am also working with Mike Mulcahy on this, and he says it might be a file that is hung up in Filebeat - he's seen the same behavior in Logstash. So my question becomes: how can I find out whether a file is hung up in Filebeat, and how do I clear it?
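For reference, the per-input close_* and clean_* options are what govern when Filebeat releases a file handle and forgets its registry state. A minimal sketch with illustrative values - the glob and durations here are assumptions, not our production settings:

- type: log
  paths:
    - D:\VTMS\Logs\*.log   # illustrative glob
  close_inactive: 5m       # release the handle after 5m with no new data
  close_removed: true      # close the handle when the file is deleted
  clean_removed: true      # drop registry state when the file is deleted
  ignore_older: 48h        # don't pick up files not updated in 48h
  clean_inactive: 72h      # purge registry state; must exceed ignore_older + scan_frequency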

I've tried deleting the data directory and restarting Filebeat, and that didn't clear the issue.

Thanks

Hi!

Please share the debug logs in a pastebin so we can get a better view. What you are seeing is still strange :thinking:. Can you confirm that you don't see any errors in the debug logs?

I have also been working with Mike Mulcahy from Elastic, who said that '...sometimes there is a file that holds a pointer and lets Filebeat know where it left off.'

So I tried deleting the data folder one more time, restarting Filebeat, and then just leaving it alone. When I came back this morning, Filebeat was indeed sending information to Elasticsearch. Not sure why that file pointer was so stubborn about clearing.

It's working, but now I need to do some deeper digging into how Filebeat works under the covers and get a deeper understanding of how it uses its file pointers.
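For reference, the 'file pointers' live in Filebeat's registry under the data path; on 7.10 the state ends up in files like data\registry\filebeat\log.json, which record each harvested file's source path and byte offset. The registry location can also be set explicitly - the path below is a hypothetical example:

filebeat.registry.path: D:\Filebeat\data\registry   # hypothetical location
filebeat.registry.flush: 1s                         # how often state is flushed to disk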

If anybody who reads this post has some recommended reading on the subject, it would be greatly appreciated. I will wait a few days before marking this as resolved.

Thanks,
Bill
