Hi All
I have a pipeline in which I would like to back up events to S3 in one of the Logstash outputs.
Among others, my source event has "role" and "type" fields. In the "filter" part of my config I have this:
filter {
  mutate {
    add_field => { "role" => "%{[fields][role]}" }
  }
  mutate {
    add_tag => [ "matched" ]
  }
  prune {
    whitelist_names => [ "role", "type", "metric" ]
  }
}
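If I understand prune with whitelist_names correctly, everything outside the whitelist is dropped, including the whole nested [fields] object. Roughly (the field values here are made up for illustration):

```
# event before prune (simplified):
#   { "fields" => { "role" => "app", "type" => "cpu" },
#     "role" => "app", "type" => "cpu", "metric" => "load", ... }
# event after prune with whitelist_names => [ "role", "type", "metric" ]:
#   { "role" => "app", "type" => "cpu", "metric" => "load" }
```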
so the "role" and "type" fields are not pruned. In the "output" section:
output {
  if "timeseries" in [metric_type] {
    influxdb {
      host => "host"
      db => "db"
      measurement => "%{[metric]}" # aka "table"
      allow_time_override => true
      time_precision => "n"
      use_event_fields_for_data_points => true
      exclude_fields => [ "type", "metric" ]
      send_as_tags => [ "role", "type" ]
    }
  }
  s3 {
    access_key_id => "${AWS_ACCESS_KEY_ID}"
    secret_access_key => "${AWS_SECRET_ACCESS_KEY}"
    region => "region"
    bucket => "${S3_ARCHIVE_BUCKET}"
    size_file => 10485760
    prefix => "metrics/%{fields[role]}/%{fields[type]}/%{+YYYY.MM.dd}/"
    time_file => 10
    rotation_strategy => "size_and_time"
    codec => "json"
    canned_acl => "private"
    server_side_encryption => true
    storage_class => "STANDARD_IA"
    temporary_directory => "/usr/share/logstash/tmp_archive/pipeline01"
  }
}
The problem is that, as a result of the second output, I get paths in /usr/share/logstash/tmp_archive/pipeline01 such as:
$ ls -l %\{fields\[role\]\}/%\{fields\[type\]\}/2019.10.23/ls.s3.dbd2b9f4-8985-4848-a4f7-16896b732fa0.2019-10-23T13.16.part1.txt
-rw-r--r-- 1 logstash logstash 0 Oct 23 13:16 %{fields[role]}/%{fields[type]}/2019.10.23/ls.s3.dbd2b9f4-8985-4848-a4f7-16896b732fa0.2019-10-23T13.16.part1.txt
Additionally, the paths in S3 also contain the literal strings "%{fields[role]}" and "%{fields[type]}" instead of the values of these fields.
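One thing I am not sure about is the sprintf syntax itself. As far as I understand the field-reference docs, each level of a nested field needs its own brackets, so the prefix would have to look something like this (a sketch based on my config, not tested):

```
# bracketed sprintf syntax for the nested fields:
prefix => "metrics/%{[fields][role]}/%{[fields][type]}/%{+YYYY.MM.dd}/"
# or, since prune keeps only the top-level "role"/"type"/"metric" copies:
# prefix => "metrics/%{role}/%{type}/%{+YYYY.MM.dd}/"
```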
From my perspective it looks as if the first output removes the "type" field, and then, after the event is pushed to InfluxDB, the second output processes the modified event without the "type" field. Do outputs work this way? My initial assumption was that every output gets an unmodified copy of the event and processes it separately.
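To check what the s3 output actually receives, I was planning to temporarily add a stdout output next to it and compare the printed event with what ends up in S3 (a debugging sketch, not part of my real config):

```
output {
  # temporary: dump the full event to confirm whether "type" and the
  # nested [fields] object are still present when the outputs run
  stdout { codec => rubydebug }
}
```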
Best Regards,
Rafal.