I don't think so. The reason is simple: multiple sources, but a single destination. It should be used 1:1. Otherwise you would end up with multiple destinations, and the destination can only be a string, not an array.
Why would the configuration be messy? If it is long, you can split it into three or more files, e.g. input.conf, filter.conf, output.conf, or filter01.conf, filter02.conf, ..., which Logstash will combine into a single configuration at runtime.
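As a sketch of that layout (the paths here are just examples, adjust them to your install):

```
# pipelines.yml (commonly /etc/logstash/pipelines.yml)
- pipeline.id: main
  # All *.conf files matching this glob are concatenated in
  # lexicographic order and treated as one configuration.
  path.config: "/etc/logstash/conf.d/*.conf"
```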
As @Rios says, you cannot do this. If you look at the code, it pulls the first entry of the match array out as the "source", or string to parsed. Everything else in the array is a pattern to match it against.
Specifying a syntax where the match option could be an array of arrays would be hard. What would tag_on_failure even mean? Should it support the timezone option being an array of the same length?
There is a related issue open about supporting arrays of strings as the input, which has been ignored for several years. I am pretty sure nothing will change.
BTW, are you sure you want at.date rather than [at][date]? Logstash does not use the same syntax for fields nested within objects that other parts of the stack do.
If you have five fields that might contain the timestamp then you need five date filters. Whether you need if conditions around each would depend on how much meaning you assign to a _dateparsefailure tag.
Whether you can do them all with a single list of date formats even when you know most of them don't apply would depend on your tolerance for overhead caused by attempting to parse date formats that do not apply. I wouldn't do it, because I think it makes the configuration harder to understand, not easier.
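A sketch of the one-filter-per-field approach; the field names, formats, and failure tags below are illustrative assumptions, not from the original post:

```
filter {
  # Guard each date filter with an if so _dateparsefailure is only
  # tagged when the field was actually present but unparseable.
  if [at][date] {
    date {
      match => [ "[at][date]", "ISO8601" ]
      tag_on_failure => [ "_dateparsefailure_at" ]
    }
  }
  if [created][ts] {
    date {
      match => [ "[created][ts]", "UNIX_MS" ]
      tag_on_failure => [ "_dateparsefailure_created" ]
    }
  }
  # ...repeat for the remaining candidate fields
}
```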
Can this notation for the if condition be simplified so there aren't so many or's? Some kind of array?
if ("3ds" in [tags] or "posy" in [tags] or "rabbitmq" in [tags] or "app01" in [tags] or "patp01" in [tags] or "hand" in [tags] or "app03" in [tags] or "app04" in [tags] or "app02" in [tags] or "app01" in [tags] or "bitcash" in [tags] or "app09" in [tags]) {
...
}
The main issue is that the tags field in Logstash is an array, and you cannot compare it against another array of tags; something like [tags] in ["tag1", "tag2", "tagN"] will not work.
So there are two tricks here: one is to use a ruby filter to set a flag in a second field, the other is a trick with the translate filter.
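A minimal sketch of the ruby-filter variant; the field name [@metadata][tag_match] and the tag list are assumptions for illustration:

```
filter {
  ruby {
    code => '
      wanted = ["3ds", "posy", "rabbitmq", "app01"]  # extend as needed
      tags = event.get("tags") || []
      # Set a boolean flag if the two arrays intersect
      event.set("[@metadata][tag_match]", (tags & wanted).any?)
    '
  }
  if [@metadata][tag_match] {
    # filters to apply when any tag matches
  }
}
```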
I had a pipeline using the translate filter in this way:
filter {
  mutate {
    add_field => {
      "[@metadata][temp_tags]" => "%{tags}"
    }
  }
  translate {
    source => "[@metadata][temp_tags]"
    target => "[@metadata][validate_tag]"
    dictionary => {
      "tag01" => "tag_match"
      "tag02" => "tag_match"
      "tagN" => "tag_match"
    }
    fallback => "no_match"
    regex => true
  }
  if [@metadata][validate_tag] == "tag_match" {
    # filters to apply if any of the tags match
  }
  if [@metadata][validate_tag] == "no_match" {
    # filters to apply if no tag matches
  }
}
The mutate filter creates a temporary metadata field with the value of the tags field; if there is more than one tag, the value will be a comma-joined string such as tag01,tag02.
The translate filter then uses regex matching to check whether any key in the dictionary is present in that source string. If there is a match, it populates another temporary field with the value tag_match; if not, the field is set to the fallback value no_match.
Then you can use the if conditionals to apply other filters.
If you have a lot of tags, you can keep this dictionary in an external file as well.
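For example, using the translate filter's dictionary_path option (the file path here is an assumption):

```
translate {
  source => "[@metadata][temp_tags]"
  target => "[@metadata][validate_tag]"
  # YAML file mapping each tag to "tag_match", e.g.:
  #   "tag01": "tag_match"
  #   "tag02": "tag_match"
  dictionary_path => "/etc/logstash/dictionaries/tags.yml"
  fallback => "no_match"
  regex => true
}
```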