Change in aws.cloudwatch_logs.log_group Field Behavior in Elastic Agent 8.19

Impacted Component: Elastic Agent – AWS Integration (CloudWatch Logs)

Issue Description:
After upgrading the Elastic Agent from version 8.15 to 8.19, we observed a change in how the output field aws.cloudwatch_logs.log_group is populated. Previously, this field contained the name of the CloudWatch log group. In version 8.19, it now contains the ARN of the log group instead.

While we understand the value of having the ARN for traceability and uniqueness, the log group name is significantly more user-friendly for filtering and dashboarding purposes.

Suggestion:
We propose that the integration expose both values:

- Add a new field for the log group name (e.g., aws.cloudwatch_logs.log_group_name)
- Retain the current field for the ARN (e.g., aws.cloudwatch_logs.log_group_arn)

This would preserve backward compatibility and improve usability for filtering and visualization.

Question:
Is this change in field behavior intentional? If so, is there a recommended workaround or plan to support both values in future versions?

Below are the log output samples obtained on each version:

version 8.15

```json
{
  "_index": ".ds-logs-aws.cloudwatch_logs-clrstaging-2025.08.05-000025",
  "_id": "39137057676886253665224720494858672754866189070509015041",
  "_version": 1,
  "_source": {
    "awscloudwatch": {
      "log_group": "/aws-glue/crawlers",
      "ingestion_time": "2025-08-12T02:39:23.000Z",
      "log_stream": "cloud-reporting-uat-smsliverel32-crawler-raw"
    },
    "agent": {
      "name": "elastic-agent-aws-clr-agent-5c6b6df766-tvvbk",
      "id": "155bdbc1-4f63-45e3-9a06-f85b4e7fee4c",
      "type": "filebeat",
      "ephemeral_id": "b5677822-ec10-435f-b94f-9ebc148d5dbe",
      "version": "8.15.3"
    },
    "log": {
      "file": {
        "path": "/aws-glue/crawlers/cloud-reporting-uat-smsliverel32-crawler-raw"
      },
      "level": "info"
    },
    "elastic_agent": {
      "id": "155bdbc1-4f63-45e3-9a06-f85b4e7fee4c",
      "version": "8.15.3",
      "snapshot": false
    },
    "message": "The crawl is running by consuming Amazon S3 events.",
    "tags": [
      "forwarded",
      "aws-cloudwatch-logs",
      "preserve_original_event"
    ],
    "labels": {
      "tmp": "test",
      "project": "cloud-reporting"
    },
    "cloud": {
      "provider": "aws",
      "region": "ap-southeast-2"
    },
    "input": {
      "type": "aws-cloudwatch"
    },
    "@timestamp": "2025-08-12T02:39:22.000Z",
    "ecs": {
      "version": "8.11.0"
    },
    "data_stream": {
      "namespace": "clrstaging",
      "type": "logs",
      "dataset": "aws.cloudwatch_logs"
    },
    "event": {
      "agent_id_status": "auth_metadata_missing",
      "ingested": "2025-08-12T02:45:00Z",
      "original": "[1bb71768-7623-3c43-a84f-adad01fb93b6] INFO : The crawl is running by consuming Amazon S3 events.",
      "kind": "event",
      "id": "39137057676886253665224720494858672754866189070509015041",
      "dataset": "aws.cloudwatch_logs"
    },
    "aws.cloudwatch": {
      "log_group": "/aws-glue/crawlers",
      "ingestion_time": "2025-08-12T02:39:23.000Z",
      "log_stream": "cloud-reporting-uat-smsliverel32-crawler-raw"
    }
  }
}
```

version 8.19

```json
{"_index":".ds-logs-aws.cloudwatch_logs-clrstaging-2025.08.05-000025","_id":"39137020112061492822338608354819978800775375453653696512","_version":1,"_source":{"awscloudwatch":{"log_group":"arn:aws:logs:ap-southeast-2:339712804644:log-group:/aws-glue/crawlers","ingestion_time":"2025-08-12T02:11:21.000Z","log_stream":"cloud-reporting-platform-uat-crawler-analytics"},"agent":{"name":"elastic-agent-aws-clr-agent-d56948f8f-lxjsf","id":"155bdbc1-4f63-45e3-9a06-f85b4e7fee4c","type":"filebeat","ephemeral_id":"ad808525-66ce-4ac9-b96e-f3c23ba0b2f6","version":"8.19.0"},"log":{"file":{"path":"arn:aws:logs:ap-southeast-2:339712804644:log-group:/aws-glue/crawlers/cloud-reporting-platform-uat-crawler-analytics"},"level":"info"},"elastic_agent":{"id":"155bdbc1-4f63-45e3-9a06-f85b4e7fee4c","version":"8.19.0","snapshot":false},"message":"Crawler has finished running and is in state READY","tags":["forwarded","aws-cloudwatch-logs","preserve_original_event"],"labels":{"tmp":"test","project":"cloud-reporting"},"cloud":{"provider":"aws","region":"ap-southeast-2"},"input":{"type":"aws-cloudwatch"},"@timestamp":"2025-08-12T02:11:17.801Z","ecs":{"version":"8.11.0"},"data_stream":{"namespace":"clrstaging","type":"logs","dataset":"aws.cloudwatch_logs"},"event":{"agent_id_status":"auth_metadata_missing","ingested":"2025-08-12T02:17:29Z","original":"[665fd756-7f25-455b-b7d4-7148f454f5eb] BENCHMARK : Crawler has finished running and is in state READY","kind":"event","id":"39137020112061492822338608354819978800775375453653696512","dataset":"aws.cloudwatch_logs"},"aws.cloudwatch":{"log_group":"arn:aws:logs:ap-southeast-2:xxxxxxxxxxxx:log-group:/aws-glue/crawlers","ingestion_time":"2025-08-12T02:11:21.625Z","log_stream":"cloud-reporting-platform-uat-crawler-analytics"},"aws":{"tags":{"eks:cluster-name":"overwritten"}}}}
```

This was changed in 8.16, if I'm not wrong. Here is more context: [AWS] Support linked accounts when using log_group_name_prefix to select log groups · Issue #11457 · elastic/integrations · GitHub

It was a change to support this: [Filebeat] [AWS] Support getting cloudwatch logs from linked cross-account monitoring source accounts · Issue #36642 · elastic/beats · GitHub

You would need to use a custom ingest pipeline to parse the log group name out of the log group identifier (the ARN) and replace the value of log_group.

This can be done using the dissect processor, for example.

The pattern would be something like this, I think:

"arn:%{}:log-group:%{aws.cloudwatch.log_group}"

But there is another long-standing issue: this field is not ECS compliant, so to use it in ingest pipelines you first need to run the dot_expander processor on it before any other processor.
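
Putting those two things together, an untested sketch of the @custom pipeline could look like the following. I'm assuming the usual Fleet naming of logs-aws.cloudwatch_logs@custom for this data stream, and the set processor that keeps the ARN in aws.cloudwatch.log_group_arn is only an illustration (that field name is not defined by the integration):

```json
PUT _ingest/pipeline/logs-aws.cloudwatch_logs@custom
{
  "processors": [
    {
      "dot_expander": {
        "description": "Expand the non-ECS 'aws.cloudwatch' dotted key into nested objects so later processors can reference it",
        "field": "aws.cloudwatch",
        "ignore_failure": true
      }
    },
    {
      "set": {
        "description": "Keep the original ARN in a separate (illustrative) field before overwriting log_group",
        "field": "aws.cloudwatch.log_group_arn",
        "copy_from": "aws.cloudwatch.log_group",
        "ignore_failure": true
      }
    },
    {
      "dissect": {
        "description": "Replace the ARN in log_group with just the log group name",
        "field": "aws.cloudwatch.log_group",
        "pattern": "arn:%{}:log-group:%{aws.cloudwatch.log_group}",
        "ignore_missing": true,
        "ignore_failure": true
      }
    }
  ]
}
```

With that in place, aws.cloudwatch.log_group should contain the plain name again, as it did in 8.15.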

Hi Leandro,

Thanks for taking the time to go through my case; really appreciate it!

It's definitely doable to extract the log group name from the full ARN using an ingest pipeline. No dramas there.

The only pity is that ECS fields for exported CloudWatch log groups aren't documented anywhere. I'm not sure what the formal process is to get endorsement for newly proposed/exported ECS field names, but here's what I had in mind:

- aws.cloudwatch_logs.log_group_name
- aws.cloudwatch_logs.log_group_arn

If there's no consensus from the Elastic community yet, I'd prefer to populate a custom label field for now, something like "labels.cloudwatch_logs.log_group_name", as sketched below.
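
As a rough sketch, and assuming the aws.cloudwatch dotted key has already been expanded with dot_expander as you described, a single dissect processor in the same @custom pipeline could fill that label (the target field name is just my own working name, not an established ECS field):

```json
{
  "dissect": {
    "description": "Copy the log group name out of the ARN into a custom label (proposed field name, not ECS)",
    "field": "aws.cloudwatch.log_group",
    "pattern": "arn:%{}:log-group:%{labels.cloudwatch_logs.log_group_name}",
    "ignore_missing": true,
    "ignore_failure": true
  }
}
```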

Thanks again,
Vinicius