What do you generally do to evaluate your search system's performance? Do
you use a metrics-based approach where you can compare how changes to
scoring, analysis, or similarities affect hits in a quantitative way? Or
something more manual?
Going through Intro to Information Retrieval (http://nlp.stanford.edu/IR-book/html/htmledition/evaluation-of-ranked-retrieval-results-1.html),
I see they pay a lot of attention to this and I can see the advantage of
having that kind of feedback loop, but I haven't heard too many cases of
this being used in practice.
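For anyone curious what that feedback loop looks like concretely, here is a minimal sketch of one standard ranked-retrieval metric from that chapter, average precision, in plain Python. It assumes you already have the ranked list of document IDs from your search and a judged set of relevant IDs; the function names are just illustrative.

```python
def average_precision(ranked_ids, relevant_ids):
    """Average precision for one query: mean of precision@k at each
    rank k where a relevant document appears. Assumes ranked_ids is
    the ordered hit list and relevant_ids the judged-relevant set."""
    relevant = set(relevant_ids)
    if not relevant:
        return 0.0
    hits = 0
    precision_sum = 0.0
    for rank, doc_id in enumerate(ranked_ids, start=1):
        if doc_id in relevant:
            hits += 1
            precision_sum += hits / rank  # precision@rank
    return precision_sum / len(relevant)
```

Averaging this over a set of queries gives MAP, which makes before/after comparisons of a scoring change a single number.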
For my own system, I've been looking to implement bpref (see Chapter
3.1 of the TREC measures appendix, PDF: http://trec.nist.gov/pubs/trec16/appendices/measures.pdf), since I
have fairly incomplete knowledge of which documents are relevant/irrelevant
for my queries. I think it would be helpful to be able to run a query,
pass some expected documents as parameters, and just get back the ranks
of those (I suppose I could implement this as a scan, but it would be nice
to avoid the traffic). Any similar experiences?
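To make the idea concrete, here is a rough client-side sketch of both pieces: bpref over a ranked hit list, and the "give me the ranks of these expected documents" helper done as a scan over the returned IDs. The bpref formula follows the TREC appendix linked above (penalizing each relevant document by the fraction of judged nonrelevant documents ranked above it); treat it as my reading of that definition, not a reference implementation, and the function/variable names are made up.

```python
def bpref(ranked_ids, relevant_ids, nonrelevant_ids):
    """bpref for one query, per the TREC definition:
    (1/R) * sum over retrieved relevant docs of
    1 - (judged nonrelevant ranked above it) / min(R, N)."""
    relevant = set(relevant_ids)
    nonrelevant = set(nonrelevant_ids)
    R, N = len(relevant), len(nonrelevant)
    if R == 0:
        return 0.0
    denom = min(R, N)
    nonrel_seen = 0  # judged-nonrelevant docs encountered so far
    score = 0.0
    for doc_id in ranked_ids:
        if doc_id in nonrelevant:
            nonrel_seen += 1
        elif doc_id in relevant:
            if denom == 0:
                score += 1.0  # no judged nonrelevant docs to penalize with
            else:
                score += 1.0 - min(nonrel_seen, denom) / denom
    return score / R


def ranks_of(ranked_ids, expected_ids):
    """Client-side scan: map each expected doc ID to its 1-based rank
    in the hit list, or None if it wasn't retrieved. This is the
    traffic-heavy version I'd like the server to do for me."""
    position = {doc_id: i + 1 for i, doc_id in enumerate(ranked_ids)}
    return {doc_id: position.get(doc_id) for doc_id in expected_ids}
```

With a real Elasticsearch query you would feed `ranked_ids` from the `_id` fields of the hits (paging deep enough to cover the judged pool), which is exactly the round-tripping I'd like to avoid.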
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/8c97dce1-1c77-47c8-8f2c-9488b2af4eaa%40googlegroups.com.