I have a large text field (~3000 characters) that stores the flow/path a user takes through my application.
Doc1:
flow_path: "ABCD12345678,ABCD098765432,PQRS56789043,EFG321987309,ABCD12345678,ABCD098765432,PQRS56789043,EFG321987309,ABCD12345678,ABCD098765432,PQRS56789043,EFG321987309,ABCD12345678,ABCD098765432,PQRS56789043,EFG321987309,ABCD12345678,ABCD098765432,PQRS56789043,EFG321987309"
Doc2:
flow_path: "ABCD12345678,ABCD098765432,PQRS56789043,EFG321987309"
So, the user's steps are recorded in this comma-separated string.
I want to be able to both search and aggregate on this field. By default, Elasticsearch created one analyzed and one non-analyzed (keyword) version of the string, and I expected that to cover my use case.
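If I understand correctly, the dynamically generated mapping for this field looks roughly like the snippet below (this should be the standard default for string fields in 5.x; note the keyword sub-field and its default ignore_above of 256):

```json
{
  "flow_path": {
    "type": "text",
    "fields": {
      "keyword": {
        "type": "keyword",
        "ignore_above": 256
      }
    }
  }
}
```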
I am able to search on the analyzed flow_path field as expected, but when I aggregate on the flow_path.keyword field, Kibana does not show all the terms I expect.
Ex: (based on above example)
Term : Count
ABCD12345678,ABCD098765432,PQRS56789043,EFG321987309 : 1
Doc1's term is completely missing. I increased the size parameter to a huge value (20000), but the issue persists.
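For reference, the aggregation Kibana runs should be equivalent to the following terms aggregation (the aggregation name "paths" is just a placeholder):

```json
{
  "size": 0,
  "aggs": {
    "paths": {
      "terms": {
        "field": "flow_path.keyword",
        "size": 20000
      }
    }
  }
}
```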
Version: Elasticsearch and Kibana 5.6.3
Please advise how I can aggregate on such a big text field and get all the terms.