The filebeat binary location declared in the service file might be wrong for the Debian platform. After moving the filebeat binary (located in /usr/share/filebeat/) into the /usr/share/filebeat/bin/ folder, everything works.
The service file states:
ExecStart=/usr/share/filebeat/bin/filebeat --environment systemd $BEAT_LOG_OPTS $BEAT_CONFIG_OPTS $BEAT_PATH_OPTS
which expects the binary to be in the bin folder, but the default installation places it one directory up from there.
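The workaround described above can be sketched as follows. This is only a sketch assuming the default Debian package layout; verify the actual binary location on your system before moving anything:

```shell
# Create the bin directory the systemd unit's ExecStart expects,
# then move the filebeat binary into it.
sudo mkdir -p /usr/share/filebeat/bin
sudo mv /usr/share/filebeat/filebeat /usr/share/filebeat/bin/filebeat

# Reload systemd and restart the service so the new path takes effect.
sudo systemctl daemon-reload
sudo systemctl restart filebeat
```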
When I look into the log files, the only error output is the line below:
ERROR fileset/modules.go:127 Not loading modules. Module directory not found: /usr/share/filebeat/module
No idea why it is not seeing the module folder even though it's there. The behaviour is the same when installing from the link you provided.
Hi Jamie, I think I might have been affected by the bug you mentioned; after applying the fix it's up and running now. My system is Ubuntu 20.04. But I'm still not able to pull data from the S3 bucket. I followed this instruction (https://www.elastic.co/blog/getting-aws-logs-from-s3-using-filebeat-and-the-elastic-stack) to set up the S3 bucket for CloudTrail logs and the SQS queue. Data gets pulled on my Mac, though. The error is:
ERROR [input.s3] s3/collector.go:107 SQS ReceiveMessageRequest failed: EC2RoleRequestError: no EC2 instance role found
Hi Jamie, I wanted to let you know that I was able to resolve the problem. For some reason, Filebeat was not picking up the AWS credentials from the .aws directory. I had to put the credentials in the aws.yml file directly to resolve the problem. Another way that worked was setting var.shared_credential_file to /home/{your-user-name}/.aws/credentials. It should look in that directory by default if nothing is specified, but for some reason it will not work unless the path is given explicitly.
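For reference, here is a minimal sketch of what the modules.d/aws.yml change might look like. The queue URL and profile name below are placeholders I've made up for illustration, not values from the original setup:

```yaml
# modules.d/aws.yml -- aws module, cloudtrail fileset
- module: aws
  cloudtrail:
    enabled: true
    # Placeholder SQS queue URL; substitute your own.
    var.queue_url: https://sqs.us-east-1.amazonaws.com/123456789012/cloudtrail-queue
    # Point Filebeat explicitly at the credentials file;
    # the default ~/.aws lookup did not work in this case.
    var.shared_credential_file: /home/{your-user-name}/.aws/credentials
    var.credential_profile_name: default
```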