I use Functionbeat to ship AWS Lambda function logs to Elasticsearch. I added a new log group today, and when I attempt to update Functionbeat I get the following error:
The final policy size (20576) is bigger than the limit (20480). (Service: AWSLambdaInternal; Status Code: 400; Error Code: PolicyLengthExceededException; Request ID: 4815dab8-f579-439b-977a-dd71810d168a; Proxy: null)
I assume this policy is something Functionbeat is updating automatically, since it is not something I am handling in my configuration.
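As a quick check that it really is the function's resource-based policy hitting the limit, I pulled it with boto3. This is only a rough sketch; the function name is a placeholder for whatever the Functionbeat-deployed Lambda is actually called in my account:

import json
import boto3

# Placeholder: whatever the Functionbeat-deployed Lambda is called in your account.
FUNCTION_NAME = "cloudwatch"

client = boto3.client("lambda")

# get_policy returns the function's resource-based policy as a JSON string;
# the 20480-byte limit from the error applies to this document.
raw_policy = client.get_policy(FunctionName=FUNCTION_NAME)["Policy"]
policy = json.loads(raw_policy)

print("approx. policy size (bytes):", len(raw_policy))
print("number of statements:", len(policy["Statement"]))

I expect the statement count to track the number of log_group_name triggers in the config below.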
functionbeat: 8.2.2
functionbeat.yml
- name: cloudwatch
  enabled: true
  type: cloudwatch_logs

  # Description of the method to help identify them when you run multiple functions.
  description: "lambda function for cloudwatch logs"

  # Concurrency is the reserved number of instances for that function.
  # Default is 5.
  #
  # Note: There is a hard limit of 1000 functions of any kind per account.
  #concurrency: 5

  # The maximum memory allocated for this function; the configured size must be a factor of 64.
  # There is a hard limit of 3008MiB for each function. Default is 128MiB.
  #memory_size: 128MiB

  # Dead letter queue configuration; this must be set to an ARN pointing to an SQS queue.
  #dead_letter_config.target_arn:

  # Execution role of the function.
  #role: arn:aws:iam::123456789012:role/MyFunction

  # Connect to private resources in an Amazon VPC.
  #virtual_private_cloud:
  #  security_group_ids: []
  #  subnet_ids: []

  # Optional fields that you can specify to add additional information to the
  # output. Fields can be scalar values, arrays, dictionaries, or any nested
  # combination of these.
  #fields:
  #  env: staging

  # List of CloudWatch log groups registered to that function.
  triggers:
    # - log_group_name: /aws/eks/<redacted>-prod-eks/cluster   # <- it broke when I added this latest one
    - log_group_name: /aws/lambda/<redacted>
    - log_group_name: /aws/lambda<redacted>
    - log_group_name: /aws/lambda/<redacted>
    - log_group_name: /aws/lambda/<redacted>
    - log_group_name: /aws/lambda/<redacted>
    - log_group_name: /aws/lambda/<redacted>
    - log_group_name: /aws/lambda/<redacted>
    - log_group_name: /aws/lambda/<redacted>
    - log_group_name: /aws/lambda/<redacted>
    - log_group_name: /aws/lambda/<redacted>
    - log_group_name: /aws/lambda/<redacted>
    - log_group_name: /aws/lambda/<redacted>
    - log_group_name: /aws/lambda/<redacted>
    - log_group_name: /aws/lambda/<redacted>
    - log_group_name: /aws/lambda/<redacted>
    - log_group_name: /aws/lambda/<redacted>
    - log_group_name: /aws/lambda/<redacted>
    - log_group_name: /aws/lambda/<redacted>
    - log_group_name: /aws/lambda/<redacted>
    - log_group_name: /aws/lambda/<redacted>
    - log_group_name: /aws/lambda/<redacted>
    - log_group_name: /aws/lambda/<redacted>
    - log_group_name: /aws/lambda/<redacted>
    - log_group_name: /aws/lambda/<redacted>
    - log_group_name: /aws/lambda/<redacted>
    - log_group_name: /aws/lambda/<redacted>
    - log_group_name: /aws/lambda/<redacted>
    - log_group_name: /aws/lambda/<redacted>
    - log_group_name: /aws/lambda/<redacted>
    - log_group_name: /aws/lambda/<redacted>
    - log_group_name: /aws/lambda/<redacted>
    - log_group_name: /aws/lambda/<redacted>
    - log_group_name: /aws/lambda/<redacted>
    - log_group_name: /aws/lambda/<redacted>
    - log_group_name: /aws/lambda/<redacted>
    - log_group_name: /aws/lambda/<redacted>
    - log_group_name: /aws/lambda/<redacted>
    - log_group_name: /aws/lambda/<redacted>
    - log_group_name: /aws/lambda<redacted>
    - log_group_name: /aws/lambda/<redacted>
    - log_group_name: /aws/lambda/<redacted>
    - log_group_name: /aws/lambda/<redacted>
  # Define custom processors for this function.
  processors:
    - decode_json_fields:
        fields: ["message"]
        process_array: false
        max_depth: 5
        target: "<redacted>"
        overwrite_keys: true
        add_error_key: true
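For context on why the policy keeps growing: as far as I can tell, each log_group_name trigger ends up as its own lambda:InvokeFunction permission statement on the function's resource-based policy. The sketch below is my understanding of what gets created per trigger, written by hand with boto3; all names and ARNs are placeholders, not what Functionbeat actually names things:

import boto3

# All names/ARNs below are placeholders for illustration only.
REGION = "us-east-1"
ACCOUNT_ID = "123456789012"
FUNCTION_ARN = f"arn:aws:lambda:{REGION}:{ACCOUNT_ID}:function:cloudwatch"
LOG_GROUP = "/aws/eks/example-prod-eks/cluster"

lambda_client = boto3.client("lambda", region_name=REGION)
logs_client = boto3.client("logs", region_name=REGION)

# One resource-based permission per log group; this is what grows the
# function policy toward the 20480-byte limit as triggers are added.
lambda_client.add_permission(
    FunctionName=FUNCTION_ARN,
    StatementId="allow-cloudwatch-logs-example",  # must be unique per statement
    Action="lambda:InvokeFunction",
    Principal="logs.amazonaws.com",
    SourceArn=f"arn:aws:logs:{REGION}:{ACCOUNT_ID}:log-group:{LOG_GROUP}:*",
    SourceAccount=ACCOUNT_ID,
)

# The subscription filter that actually forwards log events to the function.
logs_client.put_subscription_filter(
    logGroupName=LOG_GROUP,
    filterName="functionbeat-example",
    filterPattern="",  # empty pattern = forward everything
    destinationArn=FUNCTION_ARN,
)

If that is right, each of the ~40 log groups above contributes a few hundred bytes of JSON to the policy, which would be consistent with the 20576-byte figure in the error.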