Hi,
I've tried searching but can't find a way around what seems to be a rather basic use case, so please point me to a suitable thread if there is one. If not, here is my problem:
I want to use the ELK stack to visualize data from XML files. In particular, some fields (XML attributes) consist of comma-separated lists that I would like to use for categorizing the data.
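To make this concrete, by the time a record reaches Elasticsearch it looks roughly like this (field names are made up for illustration; my real data has the same shape):

```json
PUT my-index/_doc/1
{
  "title": "Some record",
  "categories": "billing,export,legacy"
}
```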
I have the whole thing set up and the data flows all the way through, but those fields only end up in ES as either a text ("searchable") or keyword ("aggregatable") field. So I can either search on the parts of the comma-separated list (not really what I need: I want terms in Kibana, not filters), or aggregate on the whole list rather than on the individual comma-separated parts, which isn't what I'm after either.
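For reference, the mapping I currently end up with is essentially the default multi-field that dynamic mapping produces (a sketch, using the same hypothetical field name as above):

```json
PUT my-index
{
  "mappings": {
    "properties": {
      "categories": {
        "type": "text",
        "fields": {
          "keyword": { "type": "keyword", "ignore_above": 256 }
        }
      }
    }
  }
}
```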
I have created a custom comma-separation analyzer, but that only affects the text part, and for the keyword part I can't specify an analyzer, of course. (There is the normalizer, but the docs say a normalizer may only emit a single token, so it can't split the list.)
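This is roughly what that attempt looks like (a sketch of my settings, with hypothetical index and field names): a `pattern` tokenizer splitting on commas, applied to the text part of the field:

```json
PUT my-index
{
  "settings": {
    "analysis": {
      "tokenizer": {
        "comma_tokenizer": { "type": "pattern", "pattern": "," }
      },
      "analyzer": {
        "comma_analyzer": {
          "type": "custom",
          "tokenizer": "comma_tokenizer",
          "filter": ["trim", "lowercase"]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "categories": {
        "type": "text",
        "analyzer": "comma_analyzer",
        "fields": {
          "keyword": { "type": "keyword" }
        }
      }
    }
  }
}
```

Testing with `_analyze` shows the splitting does work, but only on the text side:

```json
POST my-index/_analyze
{
  "analyzer": "comma_analyzer",
  "text": "billing,export,legacy"
}
```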
It seems possible to split the field into several separate fields, or into separate documents, but neither suits this use case: separate documents would make a lot of the other statistics much more complex, while separate fields don't allow the same kind of easy filtering in the Kibana UI.
What I'm after is this: given a comma-separated list in a field in the input data, containing say between 0 and 100 values that I don't know beforehand (where both the values and the number of values vary between rows/documents), how can I make those values show up in a Kibana "terms" list when creating bar charts or data tables (for easy click-filtering)?
Thank you for any advice!