AWS fleet integration fails with API key error on ingest

Hi

I'm in the process of setting up AWS log ingestion into Elastic Cloud via a Fleet-managed Elastic Agent. I followed the instructions in Install Fleet-managed Elastic Agents | Fleet and Elastic Agent Guide [8.2] | Elastic and the agent is running successfully.

However, upon ingestion of CloudWatch logs, I get the following error:

Cannot index event publisher.Event......
action [indices:admin/auto_create] is unauthorized for API key id [l1OHzYABJubmJXKg8ytJ] of user [elastic/fleet-server] on indices [logs-generic-aws], this action is granted by the index privileges [auto_configure,create_index,manage,all]"}, dropping event!

I've checked the Fleet service account permissions that the API key is tied to, and all seems OK:

{
    "elastic/fleet-server": {
        "role_descriptor": {
            "cluster": [
                "monitor",
                "manage_own_api_key"
            ],
            "indices": [
                {
                    "names": [
                        "logs-*",
                        "metrics-*",
                        "traces-*",
                        "synthetics-*",
                        ".logs-endpoint.diagnostic.collection-*",
                        ".logs-endpoint.action.responses-*"
                    ],
                    "privileges": [
                        "write",
                        "create_index",
                        "auto_configure"
                    ],
                    "allow_restricted_indices": false

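(For reference, this is roughly how that role descriptor can be retrieved. A minimal Python sketch; the Elasticsearch endpoint and credentials below are placeholders for your own deployment:)

import json
import requests

# Placeholders: point this at your own Elasticsearch endpoint and use a user
# that is allowed to call the security APIs (e.g. the elastic superuser).
ES_URL = "https://<your-deployment>.es.io:9243"
AUTH = ("elastic", "<password>")

# GET /_security/service/elastic/fleet-server returns the service account's
# role descriptor, which is what the snippet above is taken from.
resp = requests.get(f"{ES_URL}/_security/service/elastic/fleet-server", auth=AUTH)
resp.raise_for_status()
print(json.dumps(resp.json(), indent=4))
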
I have also upgraded to 8.2 on both the Fleet side in Elastic Cloud and on the agent, and still have the same problem.

I have also tried both the default namespace and the aws namespace (as above) to no avail.

Any guidance would be helpful, thank you in advance!

Hi @diogof! Did you manage to fix this error?
I'm in the same situation with the CloudWatch integration and have no idea what causes it.

I found a similar issue on [AWS] CloudWatch logs integration fails with custom namespace and dataset · Issue #3112 · elastic/integrations · GitHub

Are you overriding the namespace and dataset?

If you're able to post your agent policy that might help with debugging. Please make sure to redact any sensitive information (aws account ids, access keys, secret keys, etc) before posting.

Yes, I was using a custom namespace, and I tried with both the default and a custom dataset (aws.eks). Oddly, when I use a custom dataset, the error message still refers to the default generic dataset:

action [indices:admin/auto_create] is unauthorized for API key id [redacted] of user [elastic/fleet-server] on indices [logs-generic-custom_namespace]

Agent Policy:

  monitoring:
    enabled: true
    logs: true
    metrics: true
    namespace: custom_namespace
    use_output: default
fleet:
  hosts:
  - https://fleet:8220
id: <id>
inputs:
- data_stream:
    namespace: custom_namespace
  id: aws-s3-cloudtrail-<id>
  meta:
    package:
      name: aws
      version: 1.16.0
  name: AWS
  revision: 19
  streams:
  - access_key_id: REDACTED
    data_stream:
      dataset: aws.cloudtrail
      type: logs
    endpoint: amazonaws.com
    expand_event_list_from_field: Records
    file_selectors:
    - expand_event_list_from_field: Records
      regex: /CloudTrail/
    - regex: /CloudTrail-Digest/
    - expand_event_list_from_field: Records
      regex: /CloudTrail-Insight/
    id: aws-s3-aws.cloudtrail-<id>
    max_number_of_messages: 5
    publisher_pipeline.disable_host: true
    queue_url: https://sqs.us-redacted.amazonaws.com/redacted/redacted
    secret_access_key: REDACTED
    tags:
    - forwarded
    - aws-cloudtrail
  type: aws-s3
  use_output: default
- data_stream:
    namespace: custom_namespace
  id: aws-cloudwatch-cloudwatch-<id>
  meta:
    package:
      name: aws
      version: 1.16.0
  name: AWS
  revision: 19
  streams:
  - access_key_id: REDACTED
    api_sleep: 200ms
    data_stream: null
    dataset: aws.eks
    endpoint: amazonaws.com
    id: aws-cloudwatch-aws.cloudwatch_logs-<id>
    log_group_arn: arn:aws:logs:us-redacted:redacted:log-group:/aws/eks/redacted/cluster:*
    publisher_pipeline.disable_host: true
    scan_frequency: 1m
    secret_access_key: REDACTED
    start_position: beginning
    tags:
    - forwarded
    - aws-cloudwatch-logs
  type: aws-cloudwatch
  use_output: default
output_permissions:
  default:
    _elastic_agent_checks:
      cluster:
      - monitor
    _elastic_agent_monitoring:
      indices:
      - names:
        - logs-elastic_agent.apm_server-custom_namespace
        privileges:
        - auto_configure
        - create_doc
      - names:
        - metrics-elastic_agent.apm_server-custom_namespace
        privileges:
        - auto_configure
        - create_doc
      - names:
        - logs-elastic_agent.auditbeat-custom_namespace
        privileges:
        - auto_configure
        - create_doc
      - names:
        - metrics-elastic_agent.auditbeat-custom_namespace
        privileges:
        - auto_configure
        - create_doc
      - names:
        - logs-elastic_agent-custom_namespace
        privileges:
        - auto_configure
        - create_doc
      - names:
        - metrics-elastic_agent.elastic_agent-custom_namespace
        privileges:
        - auto_configure
        - create_doc
      - names:
        - metrics-elastic_agent.endpoint_security-custom_namespace
        privileges:
        - auto_configure
        - create_doc
      - names:
        - logs-elastic_agent.endpoint_security-custom_namespace
        privileges:
        - auto_configure
        - create_doc
      - names:
        - logs-elastic_agent.filebeat-custom_namespace
        privileges:
        - auto_configure
        - create_doc
      - names:
        - metrics-elastic_agent.filebeat-custom_namespace
        privileges:
        - auto_configure
        - create_doc
      - names:
        - logs-elastic_agent.fleet_server-custom_namespace
        privileges:
        - auto_configure
        - create_doc
      - names:
        - metrics-elastic_agent.fleet_server-custom_namespace
        privileges:
        - auto_configure
        - create_doc
      - names:
        - logs-elastic_agent.heartbeat-custom_namespace
        privileges:
        - auto_configure
        - create_doc
      - names:
        - metrics-elastic_agent.heartbeat-custom_namespace
        privileges:
        - auto_configure
        - create_doc
      - names:
        - logs-elastic_agent.metricbeat-custom_namespace
        privileges:
        - auto_configure
        - create_doc
      - names:
        - metrics-elastic_agent.metricbeat-custom_namespace
        privileges:
        - auto_configure
        - create_doc
      - names:
        - logs-elastic_agent.osquerybeat-custom_namespace
        privileges:
        - auto_configure
        - create_doc
      - names:
        - metrics-elastic_agent.osquerybeat-custom_namespace
        privileges:
        - auto_configure
        - create_doc
      - names:
        - logs-elastic_agent.packetbeat-custom_namespace
        privileges:
        - auto_configure
        - create_doc
      - names:
        - metrics-elastic_agent.packetbeat-custom_namespace
        privileges:
        - auto_configure
        - create_doc
    AWS:
      indices:
      - names:
        - logs-aws.cloudtrail-custom_namespace
        privileges:
        - auto_configure
        - create_doc
      - names:
        - logs-aws.cloudwatch_logs-custom_namespace
        privileges:
        - auto_configure
        - create_doc
outputs:
  default:
    api_key: REDACTED
    hosts:
    - https://elastic:443
    type: elasticsearch
revision: 22

Thanks @AndreiRD! I'll mention this thread on the issue. Feel free to subscribe there to stay updated on any fixes.

I managed to get the Custom AWS Logs - CloudWatch integration working and I'm successfully pulling logs now.
However, I have another issue that I haven't been able to solve yet: the log stream prefix setting in the policy simply doesn't work. The agent pulls logs from all log streams and just ignores the prefix in the config.

I noticed the same thing with the AWS - CloudWatch Integration. When I got the errors mentioned in my previous post, they were actually triggered for log streams that did not match my prefix.

It looks like that is probably handled by beats/cloudwatch.go at db73681d67be859f5c048d07da985eadaa780e21 · elastic/beats · GitHub in the end.

It might be worth trying the same prefix via filter-log-events — AWS CLI 1.24.6 Command Reference (or directly on the API) to see if it's working as expected.
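
For example, something along these lines with boto3 (a rough sketch; the region, log group, and prefix are placeholders for your setup, and credentials come from the usual boto3 credential chain):

import boto3

# Placeholders: use your own region, log group, and the same prefix you set in the policy.
logs = boto3.client("logs", region_name="us-east-1")

resp = logs.filter_log_events(
    logGroupName="/aws/eks/<cluster>/cluster",
    logStreamNamePrefix="kube-apiserver",
    limit=10,
)
for event in resp.get("events", []):
    print(event["logStreamName"], event["message"][:120])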

If you can get it working via the AWS API but not via agent or filebeat, definitely open a beats issue with the details.

Hi @AndreiRD, are you able to give some insight into how you got it working? I've tried with both a custom namespace and dataset, to no avail.

Thanks in advance!

I used the "Custom AWS Logs" Integration.

Thank you! Were you able to make this work without specifying log group ARNs?

Hmm, I only used the Log Group ARN option. Didn't try the "advanced options".
