Filebeat not ingesting logs from s3

Hi there,

I've configured Filebeat following the guide for the s3 input (found here), but when I run it, both the input worker and Filebeat itself start and then stop almost immediately (all in less than a second).

I'm not sure what I've done wrong. The configuration in the yml file looks correct, and I can access both the SQS queue and the objects in the S3 bucket using the AWS CLI, so it doesn't appear to be a networking or permissions issue.
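For reference, the CLI checks I ran were along these lines (the bucket name and object key here are placeholders for my real ones):

# Confirm the SQS queue is reachable and messages can be read
aws sqs receive-message --queue-url https://sqs.eu-west-2.amazonaws.com/XXXXXXXXXXXX/FilebeatInputQueue

# Confirm the bucket is listable and objects are downloadable
aws s3 ls s3://my-log-bucket/
aws s3 cp s3://my-log-bucket/example.log.gz .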

Could I have some help troubleshooting this? Thanks 🙂

Hi @ben.sharp,

Could you share your Filebeat config file and logs? I'm wondering if there is something there telling us what happened.

Best regards

Hi Carlos, sorry for the delay in getting back to you. My config is below:

###################### Filebeat Configuration #########################

#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: s3

  enabled: true

  queue_url: https://sqs.eu-west-2.amazonaws.com/XXXXXXXXXXXX/FilebeatInputQueue
  access_key_id: XXXXXXXXXXXXXXXXXXXX
  secret_access_key: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

  # Paths that should be crawled and fetched. Glob based paths.
  #paths:
  #  - /var/log/*.log
  #  - c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["XXX.XXX.XXX.XXX:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

and I've uploaded a file with my logs here: https://drive.google.com/open?id=12W4CHVzrFevnHjDe9SZ7s4HsnsQiKun8

Thanks for all the info! I see you are using Filebeat 7.4.2. What kind of logs are in the S3 bucket?

Also, I know that in 7.4 we have a bug that causes the s3 input to fail silently. It is fixed in https://github.com/elastic/beats/pull/14113/files#diff-ede3f69796ae7b221405174f187a241dR260 for 7.5.

I will need to check which logs are in the bucket it's looking at and will get back to you.

I will also try updating the filebeat version at the same time and let you know if that works.

The logs are .log.gz ALB logs, which I believe are supported by Filebeat?

Also, would you be able to give me some assistance in getting Filebeat up and running from the Git repository? I've tried following the same "getting started" instructions, but I wasn't able to find the "filebeat" file to run in the git directory; I just want to check that I'm not missing a step.

Thanks for checking! .gz logs will actually only be supported starting in 7.5, so to try this out now you have to build the Filebeat binary locally.

What I do is go to beats/x-pack/filebeat and run mage update && mage build. Then you should see the filebeat binary under the same directory, beats/x-pack/filebeat. Running ./filebeat modules enable aws should enable the aws module, and then you can use the elb fileset there. Or you can just use the s3 input directly.
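Putting those steps together, the whole flow looks roughly like this (assuming a working Go toolchain, Mage, and a local clone of https://github.com/elastic/beats):

cd beats/x-pack/filebeat

# Regenerate generated files, then build the filebeat binary into this directory
mage update && mage build

# Optional: enable the aws module so you can use the elb fileset
./filebeat modules enable aws

# Run with your config; -e logs to stderr so you can watch what happens
./filebeat -e -c filebeat.yml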

Would that be using this Mage? https://github.com/magefile/mage (a Make/rake-like dev tool using Go)

Yes!
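If you don't already have Mage installed, its README (at the time of writing) suggests building it from source so version info gets embedded in the binary, roughly:

git clone https://github.com/magefile/mage
cd mage
go run bootstrap.go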

Great, thanks, I'll get back to you once I have that all set up (or if I have some more questions in the meantime!)

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.