Hey,
This is a follow-up question to the following discussion:
Since that topic is roughly two years old, I wonder whether there are any new resources or features that address this problem. Like the previous author, I'm planning to feed encoded feature vectors such as this into ES:
[word1: 10, word20: 3, word32: 4, word200: 11]
which basically encodes the term frequency (TF) of each word occurring in the current document. To test whether the concept works at all, I skipped the frequency encoding and passed my feature vectors in plain as:
[word1 word1 word2 word3 word3 word3 ...]
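For context, the indexing step of my test currently looks roughly like this. This is only a minimal sketch: the index name "images", the field name "visual_words", the document id, and the 8.x-style Python client arguments are placeholders, not my actual setup.

```python
from elasticsearch import Elasticsearch

# Placeholder connection details, not my real cluster
es = Elasticsearch("http://localhost:9200")

def expand_tf_vector(tf_vector: dict[str, int]) -> str:
    """Expand {"word1": 10, "word20": 3, ...} into a plain
    whitespace-separated string where each visual word is
    repeated TF times, e.g. "word1 word1 ... word20 word20 word20"."""
    return " ".join(word for word, tf in tf_vector.items() for _ in range(tf))

tf_vector = {"word1": 10, "word20": 3, "word32": 4, "word200": 11}

# Index one image's expanded visual-word string into a plain text field
es.index(
    index="images",
    id="img_001",
    document={"visual_words": expand_tf_vector(tf_vector)},
)
```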
However, the matching performance of this approach in ES is already significantly worse than that of a standalone application I wrote as a proof of concept. There I basically just build an inverted index over all my indexed documents and then query it with a bunch of test images.
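The standalone proof of concept is essentially nothing more than the following. Again a simplified sketch: scoring is reduced here to summing the matched term frequencies, and my real code differs in details.

```python
from collections import defaultdict

# inverted index: visual word -> {doc_id: term frequency in that document}
inverted_index: dict[str, dict[str, int]] = defaultdict(dict)

def index_document(doc_id: str, tf_vector: dict[str, int]) -> None:
    """Add one image's TF vector to the inverted index."""
    for word, tf in tf_vector.items():
        inverted_index[word][doc_id] = tf

def query(query_words: list[str], top_k: int = 10) -> list[tuple[str, int]]:
    """Score each indexed image by the summed term frequencies of the
    query image's visual words (plain TF overlap, nothing fancier)."""
    scores: dict[str, int] = defaultdict(int)
    for word in query_words:
        for doc_id, tf in inverted_index.get(word, {}).items():
            scores[doc_id] += tf
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)[:top_k]
```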
This is exactly what I expected ES to do internally, yet it proves me wrong by returning significantly worse results. Is there an error in my reasoning, or is there a setting in ES that behaves differently from what I'm expecting?
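For completeness, the query side of my ES test looks roughly like this, reusing the es client and expand_tf_vector helper from the indexing sketch above. I'm assuming a plain match query over the same field here; if that assumption is already where things go wrong, that would be good to know.

```python
# query_tf_vector holds the TF-encoded visual words of one test image
query_tf_vector = {"word1": 2, "word32": 1, "word200": 5}

response = es.search(
    index="images",
    query={"match": {"visual_words": expand_tf_vector(query_tf_vector)}},
    size=10,
)
for hit in response["hits"]["hits"]:
    print(hit["_id"], hit["_score"])
```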