Hello all,
I am currently trying to set up Filebeat on an EC2 instance to ship logs from that instance to another EC2 instance, which hosts Kafka.
ContainerDefinitions:
  - Name: !Ref service
    Image: !Ref repoPath
    Memory: !Ref memoryApp
    PortMappings:
      - ContainerPort: !Ref port
    Environment:
      - Name: JAVA_GC_OPTS
        Value: !Ref jvmFlags
      - Name: JAVA_XMX
        Value: !Ref javaXmx
      - Name: JAVA_XMS
        Value: !Ref javaXms
    LogConfiguration:
      LogDriver: awslogs
      Options:
        awslogs-group: !Ref logGroup
        awslogs-region: !Ref AWS::Region
  - !If
    - needLogger
    - Name: !Sub ${service}-filebeat
      Image: !Ref pathToLogger
      Memory: 512
      Environment:
        - Name: output.kafka.hosts
          Value: '["12.123.123.123:2181","12.123.123.124:2181","12.123.123.125:2181"]'
        - Name: output.kafka.topic
          Value: !Sub preprocess.splunk.${service}
        - Name: filebeats.inputs.paths
          Value: /app/logs/*
      VolumesFrom:
        - ReadOnly: true
          SourceContainer: !Ref service
      LogConfiguration:
        LogDriver: awslogs
        Options:
          awslogs-group: !Ref logGroup
          awslogs-region: !Ref AWS::Region
    - !Ref AWS::NoValue
Are there alternative ways to specify the log locations and outputs for Filebeat? I have read in the documentation that we should use filebeat.yml, but I am looking for a more generic way to pass these settings in, instead of having to maintain a separate filebeat.yml for each of our services that uses this template.
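For example, what I had in mind is a single shared filebeat.yml baked into the logger image that reads everything from environment variables via Filebeat's `${VAR}` expansion, so the task definition above only needs to set the variables. A sketch of what I mean (the variable names KAFKA_HOSTS, KAFKA_TOPIC, and LOG_PATHS are just my own placeholders, not anything Filebeat defines):

```yaml
# filebeat.yml - one generic config shared by all services;
# Filebeat expands ${VAR} references from the container environment.
filebeat.inputs:
  - type: log
    paths:
      - ${LOG_PATHS:/app/logs/*}   # default path if the variable is unset

output.kafka:
  hosts: '${KAFKA_HOSTS}'          # e.g. ["broker1:9092","broker2:9092"]
  topic: '${KAFKA_TOPIC}'
```

Is this kind of variable expansion (or something like `filebeat -E output.kafka.topic=...` overrides at container start) the intended approach, or is there a better pattern for ECS?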