Hi, I have a similar use case to this one: Compare two datasets
However, we are an e-commerce platform looking to flag data from bad sellers/buyers rather than IP addresses, so we have a list of banned addresses, business names, etc.
We ingest a large volume of logs (submissions, shipments, payments, and so on) and were hoping to enrich that data so we can flag and visualize the suspicious entries.
I was thinking of breaking each column out into its own dictionary file and then adding a new flag for each field we check (e.g. banned_address: true, banned_business: true); I've put a rough sketch of what I mean below.
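Roughly what I had in mind, as a sketch only: one lookup index per dictionary file, an enrich policy per lookup index, and an ingest pipeline that sets a boolean flag on a match. All the names here (banned_addresses, shipping_address, banned-address-policy, flag-banned-fields) are placeholders I made up, and I'm assuming the enrich processor is the right tool for this:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Enrich policy over a hypothetical "banned_addresses" lookup index
# (the idea being one lookup index per dictionary file).
es.enrich.put_policy(
    name="banned-address-policy",
    match={
        "indices": "banned_addresses",   # lookup index built from the dictionary file
        "match_field": "address",        # field to match incoming events against
        "enrich_fields": ["address"],    # fields copied onto the matched event
    },
)
es.enrich.execute_policy(name="banned-address-policy")

# Ingest pipeline: look up the event's address and set a boolean flag on a hit.
es.ingest.put_pipeline(
    id="flag-banned-fields",
    processors=[
        {
            "enrich": {
                "policy_name": "banned-address-policy",
                "field": "shipping_address",      # field name in our log events (placeholder)
                "target_field": "banned_address_match",
                "ignore_missing": True,
            }
        },
        {
            "set": {
                "field": "banned_address",
                "value": True,
                "if": "ctx.containsKey('banned_address_match')",
            }
        },
    ],
)
```

The idea would be to repeat the policy + enrich-processor pair for business names and the other columns, and point the ingest of the log indices at this pipeline.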
Would this be the best way to do it, or is there a better approach? Also, as we update the dictionary files, would old data be re-indexed automatically, or would we have to trigger that ourselves somehow? (I've put my guess below.)
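For the second part, my guess (and it really is just a guess) is that nothing gets re-indexed automatically, and we'd have to re-run the policy and the pipeline over the existing indices ourselves, something like this (orders-* is a placeholder for our log indices):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Rebuild the enrich index from the updated dictionary, then re-run the
# pipeline over documents that were already indexed ("orders-*" is a placeholder).
es.enrich.execute_policy(name="banned-address-policy")
es.update_by_query(
    index="orders-*",
    pipeline="flag-banned-fields",
    conflicts="proceed",
)
```

Is that the expected way to handle it, or is there something more automatic?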
I'm pretty new to advanced usage of ELK, so I'm just looking for some light suggestions to get me started. Thanks!