We currently have Filebeat running in parallel with other ECS tasks in our ECS cluster. Each Filebeat instance reads logs from /var/lib/docker/containers/*/*.log and ships them to our Elasticsearch, where they ultimately show up in Kibana. All the data is there; however, among the fields, 'container.name' in particular only ever shows "ecs-agent" and never the name of the container of the actual task running on that instance. Has anyone else hit this issue, or found a work-around to get the correct container/image names?
Hi again @edmond.qiu, I was not able to reproduce this with the latest master of the upstream project. Could you provide a more specific configuration and some environment information so I can try to reproduce it?
filebeat.inputs:
- type: container
  paths:
    - '/var/lib/docker/containers/*/*.log'

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

# ======================= Elasticsearch template setting =======================
setup.template.settings:
  index.number_of_shards: 1

output.console:
  pretty: true

processors:
  - add_docker_metadata: ~
Then I start Filebeat like:

sudo -E ./filebeat --strict.perms=false | jq '.container.name'
and I see proper names or null (when the container name is not added), but nothing similar to what you describe.
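If it helps to narrow things down on the ECS instance itself, the same console setup can dump the whole container object, so you can see exactly which names and labels Filebeat attaches to the events. This is just the command above with a broader jq filter:

sudo -E ./filebeat --strict.perms=false | jq '.container'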
Yeah, when I test in my local environment it works perfectly, showing the correct container name and all. But when the logs come from the AWS ECS service, it starts displaying the odd behaviour.
@edmond.qiu It might be that this is simply how metadata is provided by this service. Do you happen to know where ecs-agent comes from? It might be worth opening a GitHub issue for this for further investigation.
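In the meantime, one possible work-around to try is an untested sketch that assumes two things: that the ECS agent attaches its standard Docker labels (such as com.amazonaws.ecs.container-name) to the task containers, and that add_docker_metadata's default dedot behaviour rewrites the dots in those label keys to underscores. Under those assumptions, you could drop the misleading value and copy the task's container name from the labels instead:

processors:
  - add_docker_metadata: ~
  # Remove the incorrect "ecs-agent" value so the copy below can set it.
  - drop_fields:
      fields: ["container.name"]
      ignore_missing: true
  # Copy the ECS task's container name out of the Docker labels.
  # The exact label key below is an assumption; it depends on how
  # add_docker_metadata dedots the label names in your events.
  - copy_fields:
      fields:
        - from: container.labels.com_amazonaws_ecs_container-name
          to: container.name
      fail_on_error: false
      ignore_missing: true

If the labels show up under a different key in your events (check the console/jq output above), the from: path would just need adjusting accordingly.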