Configuration with Filebeat and S3

I currently have a fairly simple Logstash configuration running with Filebeat, using pipelines. Logstash is load balanced across two instances in the Filebeat configuration. The Logstash pipelines simply take input on three ports, one per Filebeat input, and Logstash routes the output to different S3 buckets and associated subdirectories based on the input port. Logs are also sent to Kibana via Elasticsearch.

My problem/question is as follows. At our site, Filebeat runs across a fairly large EC2 infrastructure (some Docker instances, some plain EC2 instances), all with the same configuration. I would like to use Filebeat fields defined in fields.yml, such as the Docker container.image field, as input to Logstash to decide where the output goes, e.g. an S3 path like bucket/container.image/year/date/..., or to associate that same output with an index in Kibana.

How do I reference Filebeat fields in Logstash? I can currently reference the fields that are set in filebeat.yml (i.e. under the inputs/prospectors), but if I reference fields in Logstash that are not in filebeat.yml but are listed in fields.yml, I see no output. A sample Logstash configuration showing how I might do this better would be most appreciated. Thanks
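To make the intent concrete, here is a rough sketch of the kind of pipeline I have in mind. The port, region, bucket name, index name, and the [container][image][name] field path are placeholders for illustration, and it assumes the Docker image field is actually present on the events Filebeat ships:

```
input {
  beats {
    port => 5044                         # one of the three Filebeat input ports
  }
}

filter {
  # Copy the Docker image name into a metadata field, with a fallback
  # for events coming from plain EC2 hosts without container metadata.
  if [container][image][name] {
    mutate {
      add_field => { "[@metadata][image]" => "%{[container][image][name]}" }
    }
  } else {
    mutate {
      add_field => { "[@metadata][image]" => "no-container" }
    }
  }
}

output {
  s3 {
    region => "us-east-1"
    bucket => "my-log-bucket"
    # prefix is interpolated per event here; recent versions of the S3 output
    # accept field references and date patterns in prefix
    prefix => "%{[@metadata][image]}/%{+YYYY}/%{+MM}/%{+dd}"
    codec  => "json_lines"
  }
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    # index names must be lowercase, so the image value may need sanitizing
    index => "filebeat-%{[@metadata][image]}-%{+YYYY.MM.dd}"
  }
}
```

This is roughly what I want per input port; the piece I am missing is how to get the container/image fields onto the events so that the field references above resolve instead of being emitted literally.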
