Error when configuring Cloudflare LogPush integration for R2

No matter what configuration I try with the Cloudflare Logpush integration, the following error shows up in the log:

[elastic_agent][error] Unit state changed aws-s3-default-aws-s3-cloudflare-xxxxxx-xxxx-xxx-xxx-xxxxxxxxxxxx (CONFIGURING->FAILED): neither queue_url, bucket_arn, access_point_arn, nor non_aws_bucket_name were provided accessing config

I have set the access key and secret key (and verified both, along with the endpoint, using Postman) and have set the R2 bucket name in multiple places. I have placed values in both the global settings and the Spectrum section (only Spectrum is enabled), but no matter what configuration I try, I always get the same error.

On wider inspection, I also found the following log entries:

[elastic_agent.filebeat][error] Error creating runner from config: neither queue_url, bucket_arn, access_point_arn, nor non_aws_bucket_name were provided accessing config
[elastic_agent.filebeat][info] add_cloud_metadata: hosting provider type not detected.

The agent is 8.18.3, and the integration is 1.39.0.

Thanks in advance!

(Note: I pruned a couple of messages about giving up on this before, as I’m getting back on this horse. My agent is now 8.18.8 and the integration is 1.43.0. I figured reusing this thread would be better than starting a new one.)

I had a couple of things going on with this initial configuration. First off, I had values in too many places. You really only need four things: the endpoint, the S3-compatible bucket name, the access key ID, and the secret access key. I also added the region us-east-1 at some point while troubleshooting and just left it in there, though I don’t know whether that matters.
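
In case it helps anyone else, here is roughly what those four settings boil down to if you write the underlying Filebeat aws-s3 input by hand instead of going through the Fleet UI. This is a sketch only: the account ID and bucket name are placeholders, and the option names are taken from the Filebeat aws-s3 documentation.

filebeat.inputs:
  - type: aws-s3
    # R2's S3-compatible endpoint for the account
    endpoint: https://<account-id>.r2.cloudflarestorage.com
    # For non-AWS object stores the bucket goes here, not in bucket_arn.
    # This is the field the original "neither queue_url, bucket_arn,
    # access_point_arn, nor non_aws_bucket_name" error was complaining about.
    non_aws_bucket_name: <r2-bucket-name>
    access_key_id: <r2-access-key-id>
    secret_access_key: <r2-secret-access-key>
    # Left over from my troubleshooting; may or may not be needed for R2.
    default_region: us-east-1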

When I got down to what I thought was just those, I was still having problems, but they turned out to be caused by a session token I had set somewhere along the line. The error included the following:

Input 'aws-s3' failed with: failed to create S3 API: failed to get AWS region for bucket: operation error S3: GetBucketLocation, https response error StatusCode: 400, RequestID: , HostID: , api error InvalidArgument: X-Amz-Security-Token

Once I cleared the session token from the config, it was able to establish a connection.
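
Relative to the sketch above, the offending setting would have been something like the line below (hedged; as I understand it, session_token is only meant for temporary STS credentials, which static R2 keys are not):

    # Remove entirely for R2: setting it makes the SDK sign requests with an
    # X-Amz-Security-Token header, which R2 rejects with InvalidArgument.
    # session_token: <token>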

I had some more failures where it couldn’t find anything, until I removed the prefix under the Spectrum section: there are no directory prefixes in my bucket, just the time-based directories that Logpush creates.
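
If I have the mapping right, that prefix field corresponds to the input’s bucket_list_prefix option, which should simply stay unset when Logpush writes its date-based object keys at the bucket root:

    # Only set this if your Logpush destination path puts logs under a fixed
    # folder; otherwise leave it out so the date-based keys are picked up.
    # bucket_list_prefix: <folder>/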

But now I am getting the following error message in the log, numerous times, each time it attempts to load data, and the only thing that ends up in the index is metadata about the attempt rather than real records:

"message":"saving completed object state: can not executed store/set operation on closed store 'filebeat'","component":{"binary":"filebeat","dataset":"elastic_agent.filebeat","id":"aws-s3-default","type":"aws-s3"},"log":{"source":"aws-s3-default"},"log.logger":"input.aws-s3.s3","log.origin":{"file.line":189,"file.name":"awss3/s3_input.go","function":""}"github.com/elastic/beats/v7/x-pack/filebeat/input/awss3.(*s3PollerInput).workerLoop.func2"},"service.name":"filebeat","id":"aws-s3-cloudflare_logpush.spectrum_event-39fe6d60-38d7-417a-99ec-375bc0905fb1","ecs.version":"1.6.0","ecs.version":"1.6.0"}

Has anyone here ever seen the Cloudflare Logpush integration successfully pull from Cloudflare R2 buckets? I’m starting to feel like I’m hunting a unicorn :neutral_face: I may have to throw in the towel and have our MSP open a case with Elastic if I keep spinning my wheels and don’t make any more progress today.

Thanks!

Can you share a screenshot of your configuration?

Sure. I only have the global area configured and Spectrum enabled; all other types are disabled. I just captured the configured areas.

Thanks!

I’m cautiously optimistic: overnight, a bunch of historical records came in!

I had been checking things earlier with a “last n hours” time range; if I had been looking at “last n days,” I would have seen records. (I didn’t check the index count this morning and was relying on a Discover session I had left open.)

I had come to realize that Cloudflare Logpush had stopped pushing sometime yesterday: I had changed the API token permissions, which was causing Logpush to get a 403 from R2.

Those filebeat errors may have been transient. The integration has successfully pulled from R2, and Cloudflare is once again pushing to R2. I’ll give it some time, but it appears I may have resolved this yesterday without realizing it, because the Logpush pushes were failing.

Fingers crossed, as this has been a long saga, most likely complicated by that unneeded session token being in there all along.

Thanks,
Tim