The dissect{} filter has to come after the csv{} filter; otherwise the threadName field does not exist yet. Filters are executed in the order they appear in the configuration.
Don't guess, test it. Run logstash with a config like this, then type something like "CMS_MOD 15-3" into stdin.
input { stdin {} }
output { stdout { codec => rubydebug } }
filter {
# So we can inject stuff like "PME_MOC 15-1" on stdin instead of needing a csv
mutate { add_field => { "threadName" => "%{message}" } }
# Split into 2 fields with space as separator
dissect { mapping => { "threadName" => "%{part1} %{part2}" } }
# No separator, so it grabs the whole thing
dissect { mapping => { "threadName" => "%{part3}" } }
# Match the first [a-zA-Z0-9._-]+ in the field and throw it away
grok { match => ["threadName", "%{USERNAME}"] }
# Match the first [a-zA-Z0-9._-]+ in the field and put it in the username field
grok { match => ["threadName", "%{USERNAME:username}"] }
# Match the first [a-zA-Z0-9._-]+ in the field, anchored to optimize performance
grok { match => ["threadName", "^%{USERNAME:username2}"] }
}
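To see what each filter should produce without running logstash at all, here is a rough Python stand-in (purely illustrative, not how logstash is implemented): it applies the same splits and the same USERNAME pattern ([a-zA-Z0-9._-]+, as noted in the comments above) to a sample threadName.

```python
import re

# grok's built-in USERNAME pattern, per the comments in the config above
USERNAME = r"[a-zA-Z0-9._-]+"

thread_name = "CMS_MOD 15-3"

# dissect "%{part1} %{part2}": split on the single space separator
part1, part2 = thread_name.split(" ", 1)

# dissect "%{part3}": no separator, so the whole value is captured
part3 = thread_name

# grok "%{USERNAME:username}": unanchored, first match anywhere in the field
username = re.search(USERNAME, thread_name).group()

# grok "^%{USERNAME:username2}": anchored, must match from the very start
username2 = re.match(USERNAME, thread_name).group()

print(part1, part2, part3, username, username2)
```

For "CMS_MOD 15-3" both the anchored and unanchored grok patterns happen to extract the same "CMS_MOD"; the anchored version just lets the regex engine fail fast on non-matching input instead of retrying at every offset.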
If you save that as /tmp/test.conf then you can probably run logstash with something like "bin/logstash -f /tmp/test.conf" (the exact path depends on how logstash is installed).