Kibana AWS integration: CloudTrail collect-from-S3 warning

Hi all,

ELK version: 8.9
We tried to collect CloudTrail logs from an S3 bucket using the default AWS integration in Kibana.

All of the information the integration asked for is set.

But no logs show up in Kibana, and no data view appears either.

The agent logs show an error when paginating the listing.

Has anyone faced this issue before? Any help would be appreciated.

11:22:50.443
elastic_agent
[elastic_agent][info] Unit state changed aws-s3-default-aws-s3-cloudtrail-fd525543-1b3c-4deb-a9c7-c2d58abd8b58 (CONFIGURING->HEALTHY): Healthy
11:22:50.927
elastic_agent.filebeat
[elastic_agent.filebeat][info] Non-zero metrics in the last 30s
11:22:50.943
elastic_agent.filebeat
[elastic_agent.filebeat][info] Non-zero metrics in the last 30s
11:22:51.035
elastic_agent.filebeat
[elastic_agent.filebeat][info] Non-zero metrics in the last 30s
11:22:51.480
elastic_agent.filebeat
[elastic_agent.filebeat][info] number_of_workers is set to 5.
11:22:51.480
elastic_agent.filebeat
[elastic_agent.filebeat][info] bucket_list_interval is set to 1h0m0s.
11:22:51.480
elastic_agent.filebeat
[elastic_agent.filebeat][info] bucket_list_prefix is set to .
11:22:51.480
elastic_agent.filebeat
[elastic_agent.filebeat][info] AWS region is set to eu-west-1.
11:22:51.480
elastic_agent.filebeat
[elastic_agent.filebeat][info] registering
11:22:52.013
elastic_agent.filebeat
[elastic_agent.filebeat][warn] Error when paginating listing.
11:22:52.081
elastic_agent.filebeat
[elastic_agent.filebeat][warn] Error when paginating listing.
11:22:52.143
elastic_agent.filebeat
[elastic_agent.filebeat][warn] Error when paginating listing.
11:22:52.218
elastic_agent.filebeat
[elastic_agent.filebeat][warn] Error when paginating listing.
11:22:52.285
elastic_agent.filebeat
[elastic_agent.filebeat][warn] Error when paginating listing.
11:22:52.285
elastic_agent.filebeat
[elastic_agent.filebeat][warn] 5 consecutive error when paginating listing, breaking the circuit.

Are you using the integration with SQS, or directly listing the S3 bucket?

We are listing the S3 bucket directly.

Hi,

Any idea why this is happening?

Regards

Hello,

CloudTrail buckets can contain thousands of files, and if you are not using SQS you may run into issues just listing them. I had a lot of trouble trying to consume CloudTrail logs in Logstash using the plain S3 input without SQS and was never able to solve it.

The recommendation is to use SQS notifications and configure the integration to consume those notifications instead of listing the bucket every time.

I'm not sure this is your exact issue, but I would say it is the most likely cause.

Can you enable SQS notifications on your bucket and change your integration to use them to know which files to download?

Hi,

Is there any documentation on the Elastic side related to SQS? How do I configure SQS with Elastic?

Regards

Also, I just want to collect CloudTrail logs into Elastic; how is that possible with SQS notifications? Regards

You need to configure your S3 bucket to send a notification to an SQS queue every time a new object is added to the bucket; this is an AWS configuration, not an Elastic one. Then, in the AWS CloudTrail integration, you add that SQS queue, and the integration will poll the queue to know which files to download.
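To make the AWS side of that concrete, here is a minimal sketch of the notification configuration an S3 bucket needs so that new objects generate SQS messages. The queue ARN, bucket name, and configuration ID are placeholders (your real values come from your AWS account); the dict below is the shape boto3's `put_bucket_notification_configuration` expects.

```python
# Sketch of the S3 -> SQS notification setup, with placeholder names.
# This dict is the NotificationConfiguration shape used by boto3's
# put_bucket_notification_configuration call.
notification_config = {
    "QueueConfigurations": [
        {
            "Id": "cloudtrail-new-object",  # hypothetical configuration ID
            # Placeholder queue ARN; also grant the bucket permission to
            # send messages to this queue in the queue's access policy.
            "QueueArn": "arn:aws:sqs:eu-west-1:123456789012:cloudtrail-queue",
            # CloudTrail only ever writes new objects, so object-created
            # events are the only ones the integration needs.
            "Events": ["s3:ObjectCreated:*"],
        }
    ]
}

# With AWS credentials configured, applying it would look like:
# import boto3
# boto3.client("s3").put_bucket_notification_configuration(
#     Bucket="my-cloudtrail-bucket",  # placeholder bucket name
#     NotificationConfiguration=notification_config,
# )
```

The actual API call is left commented out so the sketch does not require AWS credentials; the same configuration can equally be applied from the S3 console under the bucket's event-notification settings.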

The documentation for this is not great, but basically when using SQS you configure only the settings that start with [SQS].

For example, in the screenshot below you would configure only the settings that have [SQS] in the name and ignore the ones that have [S3].

I do not use this integration myself, so I cannot provide more help with it.

Thanks for your interest and your response.

The S3 bucket will notify SQS when there is a new file in the bucket, right? Or can I choose which events notify the SQS queue, e.g. replicate, delete, put, create, etc.?

After that, I put the SQS URL in the integration. How does the integration fetch CloudTrail logs from the SQS URL, when the queue only contains a message saying that some file was added to the bucket?

This is the part I do not understand clearly.

Regards

You should configure your S3 bucket to notify SQS every time a new object is created in the bucket. CloudTrail logs are never updated or deleted, so you only need to notify on object creation.

The integration will consume the SQS queue. The messages in the queue contain the path of the S3 object; when a new message arrives, the integration consumes it, reads the path, and downloads the object from S3.

Instead of listing the S3 bucket to discover the available objects, the integration gets the object paths from the SQS queue.

It will still fetch the objects from S3, but it will not need to list the bucket every time, which can be very expensive in both resources and API calls for CloudTrail buckets.
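The flow above can be sketched in a few lines. The message body below is a trimmed, hypothetical example of the S3 event notification JSON that lands on the queue (real messages carry more fields, and the bucket name and object key here are placeholders); the function shows the bucket/key extraction step that lets a consumer know which object to download.

```python
import json

# Trimmed, hypothetical example of an S3 event notification message body
# as delivered to the SQS queue; bucket name and key are placeholders.
sample_message_body = json.dumps({
    "Records": [
        {
            "eventName": "ObjectCreated:Put",
            "s3": {
                "bucket": {"name": "my-cloudtrail-bucket"},
                "object": {"key": "AWSLogs/123456789012/CloudTrail/example.json.gz"},
            },
        }
    ]
})

def objects_from_notification(body: str) -> list:
    """Return (bucket, key) pairs referenced by one SQS notification message."""
    event = json.loads(body)
    return [
        (rec["s3"]["bucket"]["name"], rec["s3"]["object"]["key"])
        for rec in event.get("Records", [])
    ]

# Each pair tells the consumer exactly which object to fetch from S3,
# so no ListObjects call over the whole bucket is ever needed.
print(objects_from_notification(sample_message_body))
```

This is only an illustration of the mechanism; the Elastic integration does this parsing (plus download, decompression, and event publishing) for you once the queue URL is configured.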

Thanks for all the information you have provided so far.

It's working now, thanks so much!
