We chose an ignore (exclusion) list rather than an inclusion list because every new Postgres version introduces new log strings, some of which may be crucial. So we want to monitor everything unless an Admin has confirmed it is not required.
So we want to read the Postgres logfile and filter out anything that matches a string in ignore_alert.txt.
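The exclusion logic described above can be sketched in a few lines of Python (the function names and sample patterns here are mine, for illustration; only the ignore_alert.txt format — one alert ID or literal string per line — comes from the thread):

```python
# Sketch of the exclusion approach: alert on everything unless an
# Admin-confirmed pattern from ignore_alert.txt matches the log line.

def load_ignore_patterns(path):
    """Read one pattern per line, skipping blanks."""
    with open(path) as f:
        # Patterns may contain spaces (e.g. "wrong statement"), so keep
        # each non-empty line verbatim after stripping the newline.
        return [line.strip() for line in f if line.strip()]

def should_alert(log_line, patterns):
    # Substring match: drop the line if any ignore pattern occurs in it.
    return not any(p in log_line for p in patterns)

# Example patterns (hypothetical values, standing in for the file contents):
patterns = ["ERROR 42P01", "wrong statement"]
print(should_alert("LOG: wrong statement near SELECT", patterns))  # False
print(should_alert("FATAL: out of memory", patterns))              # True
```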
If you could get your "ignore_alert" info onto one single line and into an environment variable on your Logstash system, then you could just use it in a filter:
[...]
mutate {
  add_field => {
    "IGNORE" => "${<your_environment_variable_here>:<default_value__just_in_case__not_required>}"
  }
}
if [ALERTCODE] in [IGNORE] {
  drop {}
}
[...]
Well, the ignore_alert.txt file is a few hundred lines. It has alert IDs as shown above, and sometimes strings like "wrong statement" with spaces in between, etc. It is just too big to fit on one single line. Also, Admins are expected to add new patterns at the end of ignore_alert.txt if they don't want to be alerted on some newly discovered pattern. Just wondering if this is even a feasible solution, considering I cannot hog a lot of CPU/memory just to process this for every line in the logs.
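One way to keep the per-line cost low with a few-hundred-entry ignore file is to compile all patterns into a single regex alternation once, and only rebuild it when the Admin appends to the file. A hedged Python sketch of that idea (the function names and sample patterns are mine, not from the thread; `re.escape` keeps entries like "wrong statement" literal):

```python
import re

def compile_ignore(patterns):
    """Build one compiled alternation from all ignore patterns.

    A single regex scan per log line is typically much cheaper than
    testing hundreds of patterns individually on every line.
    """
    # re.escape makes each entry match as literal text, including spaces.
    return re.compile("|".join(re.escape(p) for p in patterns))

# Example patterns (hypothetical stand-ins for the ignore_alert.txt contents):
ignore_re = compile_ignore(["ERROR 42P01", "wrong statement"])

def should_alert(log_line):
    # Alert only when no Admin-confirmed ignore pattern matches.
    return ignore_re.search(log_line) is None
```

The compiled pattern can be refreshed cheaply by watching the file's mtime, so new entries added at the end of ignore_alert.txt take effect without restarting the pipeline.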