Hi,
I have an ELK stack receiving logs from Winlogbeat, but I have a problem: the standard analyzer is used to process the text field 'event_data.TargetUserName', which sometimes contains a machine name ending in $ that I want to filter out. The standard analyzer strips the $, so I can't filter on it in Kibana.
As this ELK stack only receives logs from Winlogbeat, I am trying to create a custom analyzer, in a template applied to all indices, that leaves the text in the field untouched and passes it through whole, allowing me to filter on $.
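To illustrate the behaviour (the sample value below is a made-up machine account name), the difference between the two tokenizers can be seen with the _analyze API: the standard tokenizer drops the trailing $ as punctuation, while the keyword tokenizer emits the whole value as a single token:

```json
POST _analyze
{
  "tokenizer": "standard",
  "text": "WORKSTATION01$"
}

POST _analyze
{
  "tokenizer": "keyword",
  "text": "WORKSTATION01$"
}
```

The first call returns the token WORKSTATION01 with the $ stripped; the second returns WORKSTATION01$ intact.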
When I create the template with just the custom analyzer in it like this:
{
  "template": "*",
  "settings": {
    "number_of_shards": 2,
    "number_of_replicas": 1,
    "analysis": {
      "analyzer": {
        "TargetUserNameAnalyzer": {
          "type": "custom",
          "tokenizer": "keyword"
        }
      }
    }
  }
}
it obviously doesn't change anything, but the template is accepted and I still receive data from Winlogbeat.
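For completeness, the installed template can be fetched back to confirm it was stored (the template name here is hypothetical, whatever name it was PUT under):

```json
GET _template/winlogbeat_custom
```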
If I then add a custom mapping to apply that analyzer to the field, the indices are created in Elasticsearch but they are all empty:
green open winlogbeat-6.2.4-2015.08 mUyYiNtFRZqItBVeiC7l0Q 2 1 0 0 920b 460b
green open winlogbeat-6.2.4-2014.08 dHcEBwiIQ32H8pNkNL3N9Q 2 1 0 0 920b 460b
green open winlogbeat-6.2.4-2017.01 5SM0gqNlQUWVaPUOspCP5A 2 1 0 0 920b 460b
green open winlogbeat-6.2.4-2016.09 jwzvvQjpTaKMugAIOyU_xg 2 1 0 0 920b 460b
green open winlogbeat-6.2.4-2012.11 fzQE_Sv4Te2b9jv25lbB3A 2 1 0 0 920b 460b
green open winlogbeat-6.2.4-2011.10 V4Nj7QsYSgiWflMCAED3_Q 2 1 0 0 920b 460b
green open winlogbeat-6.2.4-2014.06 ZPzx5FsiTXKqgH3Zlzd1Ag 2 1 0 0 920b 460b
green open winlogbeat-6.2.4-2015.12 M0McqzKERDyPw1QwdC7PZA 2 1 0 0 920b 460b
green open winlogbeat-6.2.4-2013.03 SUSpwm0rRn-0eJGxWKPdKw 2 1 0 0 920b 460b
I have tried every permutation of mapping I can find, and it behaves exactly the same. This is my latest template:
{
  "template": "*",
  "settings": {
    "number_of_shards": 2,
    "number_of_replicas": 1,
    "analysis": {
      "analyzer": {
        "TargetUserNameAnalyzer": {
          "type": "custom",
          "tokenizer": "keyword"
        }
      }
    }
  },
  "mappings": {
    "_doc": {
      "dynamic_templates": [
        {
          "TargetUserNameField": {
            "match_mapping_type": "string",
            "match": "event_data.TargetUserName",
            "mapping": {
              "type": "text",
              "analyzer": "TargetUserNameAnalyzer"
            }
          }
        }
      ]
    }
  }
}
I have tried this without the match_mapping_type and also with the whitespace tokenizer, all to no avail.
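In case it helps anyone reproduce, the analyzer itself can be tested in isolation against one of the indices listed above (index name taken from that list, sample value made up), which should return the text as a single token with the $ intact:

```json
POST winlogbeat-6.2.4-2017.01/_analyze
{
  "analyzer": "TargetUserNameAnalyzer",
  "text": "WORKSTATION01$"
}
```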
Can anyone see what the problem is?
Thanks.