I am working on a system where I index multiple versions of objects by timestamp. I include the timestamp in the document ID so that each version gets its own document ID and is searchable on its own. This works fine. When I search over a time range, I get the matching documents and then deduplicate the result set, keeping only the latest version of each object.
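For concreteness, the indexing looks roughly like this (a sketch only; the index name "objects", the type "object", and the fields "object_id", "timestamp" and "status" are placeholders, against the 1.x REST API):

import json
import requests

ES = "http://localhost:9200"

def index_version(obj):
    # Each version gets its own document id of the form "<object_id>:<timestamp>".
    doc_id = "%s:%d" % (obj["object_id"], obj["timestamp"])
    requests.put("%s/objects/object/%s" % (ES, doc_id), data=json.dumps(obj))

# Two versions of the same object, indexed as two separate documents.
index_version({"object_id": "host-42", "timestamp": 1420070400, "status": "up"})
index_version({"object_id": "host-42", "timestamp": 1420074000, "status": "down"})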
Now I would like to use aggregations in Elasticsearch to build facets. But the facet calculation needs to happen after deduplication; otherwise the counts will be inaccurate (objects for which multiple versions matched would be counted multiple times). Is there a deduplication filter available in Elasticsearch? If not, how would I write one myself?
The only way I could get this to work was in multiple steps. First, run the query and collect the matching document IDs. Then deduplicate them based on timestamp. Finally, run another query with an ids filter listing all the IDs from the second step, just to compute the aggregations. But that last step takes even more time than the first query.
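The three steps look roughly like this (sketch only, same placeholder names as above):

import json
import requests

ES = "http://localhost:9200"

# Step 1: fetch the ids and timestamps of all versions matching the time range.
step1 = {
    "size": 10000,
    "_source": ["object_id", "timestamp"],
    "query": {"filtered": {"filter": {
        "range": {"timestamp": {"gte": 1420070400, "lt": 1420156800}}}}},
}
hits = requests.post("%s/objects/_search" % ES,
                     data=json.dumps(step1)).json()["hits"]["hits"]

# Step 2: deduplicate client-side, keeping only the latest version of each object.
latest = {}
for h in hits:
    oid, ts = h["_source"]["object_id"], h["_source"]["timestamp"]
    if oid not in latest or ts > latest[oid][1]:
        latest[oid] = (h["_id"], ts)
ids = [doc_id for doc_id, _ in latest.values()]

# Step 3: a second query restricted to the deduplicated ids, just for the facets.
step3 = {
    "size": 0,
    "query": {"filtered": {"filter": {"ids": {"values": ids}}}},
    "aggs": {"by_status": {"terms": {"field": "status"}}},
}
facets = requests.post("%s/objects/_search" % ES,
                       data=json.dumps(step3)).json()["aggregations"]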
Any suggestions? I would think that any system tracking multiple versions of an object runs into this issue.
On Fri, Jan 2, 2015 at 5:58 AM, Nikolas Everett nik9000@gmail.com wrote:

The simplest way might be to push an update to the old versions of the documents to mark them as old, and run the aggregations with a filter that excludes them. There isn't a great way to deduplicate, really.
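Something along these lines, for illustration (untested sketch; the boolean flag "current" and the facet field "status" are made up here):

import json
import requests

ES = "http://localhost:9200"

def supersede(old_doc_id):
    # Partial update: flip a flag on the previous version when a newer one is indexed.
    body = {"doc": {"current": False}}
    requests.post("%s/objects/object/%s/_update" % (ES, old_doc_id),
                  data=json.dumps(body))

# Facets then only count documents still marked current, so each object is counted once.
query = {
    "size": 0,
    "query": {"filtered": {"filter": {"term": {"current": True}}}},
    "aggs": {"by_status": {"terms": {"field": "status"}}},
}
resp = requests.post("%s/objects/_search" % ES, data=json.dumps(query)).json()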
The most scalable method is to index the documents into time-window-based indexes and query only the latest time-window index, so that no deduplication is required.

Jörg
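A minimal sketch of that layout, assuming one index per day (the naming scheme and field names are illustrative):

import datetime
import json
import requests

ES = "http://localhost:9200"

def window_index(ts):
    # One index per day, e.g. "objects-2015.01.02".
    day = datetime.datetime.utcfromtimestamp(ts).strftime("%Y.%m.%d")
    return "objects-%s" % day

def index_version(obj):
    doc_id = "%s:%d" % (obj["object_id"], obj["timestamp"])
    url = "%s/%s/object/%s" % (ES, window_index(obj["timestamp"]), doc_id)
    requests.put(url, data=json.dumps(obj))

# "Current state" facets query only the newest window, so no deduplication is needed.
newest = "objects-%s" % datetime.datetime.utcnow().strftime("%Y.%m.%d")
query = {"size": 0, "aggs": {"by_status": {"terms": {"field": "status"}}}}
resp = requests.post("%s/%s/_search" % (ES, newest), data=json.dumps(query)).json()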
Good suggestion, Nikolas, and we do that already. But we also allow querying over a time range that can lie in the past. There, multiple "old" versions of an object might match, and we need to pick the latest of those "old" versions.
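To make the past-range case concrete (illustrative only, same placeholder names as above): suppose every version of host-42 inside the queried window has already been superseded, i.e. carries current: false.

import json
import requests

ES = "http://localhost:9200"

# Filtering on current:true drops host-42 from the facet counts entirely, while
# dropping that filter counts it once per matching version. So for past windows
# the latest matching version of each object still has to be picked somehow.
query = {
    "size": 0,
    "query": {"filtered": {"filter": {"bool": {"must": [
        {"range": {"timestamp": {"gte": 1420070400, "lt": 1420156800}}},
        {"term": {"current": True}},
    ]}}}},
    "aggs": {"by_status": {"terms": {"field": "status"}}},
}
resp = requests.post("%s/objects/_search" % ES, data=json.dumps(query)).json()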