I'm new to Elasticsearch, so this might be a stupid question, but I'd love some
input before I get started creating my Elasticsearch cluster.
Basically, I will be indexing documents with a few fields (the documents are
pretty small in size). There are ~90 million documents total.
On the search side of things, each search will be limited to the small
subset of documents that the user doing the search owns.
My initial thought was to just have one large index for all documents, with a
multi-value field holding the user ids of each user that owns the document.
Then, when searching across the index, I would apply a filter query to limit
results to that user id. My only concern is that this might lead to slow query
times, since every query has to filter a large data set down to a very small
subset (on average a user probably owns fewer than 1k documents).
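Something like this is what I have in mind, just as a sketch with the official
Python client (the "documents" index name, the "owner_ids" field, and the
example values are placeholders I made up):

from elasticsearch import Elasticsearch

es = Elasticsearch()

def search_for_user(user_id, query_text):
    # Filter context: the term clause only restricts results to documents
    # the user owns; the match clause is what actually gets scored.
    return es.search(
        index="documents",
        body={
            "query": {
                "bool": {
                    "must": {"match": {"body": query_text}},
                    "filter": {"term": {"owner_ids": user_id}},
                }
            }
        },
    )

# e.g. hits = search_for_user(42, "quarterly report")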
The other option I considered is creating an index for each user and indexing
their documents into their own index, but this would duplicate a massive
amount of data and just seems hacky.
Look at routing. It will help by limiting searches to the shard that holds
the user's data. Beyond that, you can generally trust filter caching to make
this kind of use case fast; at least that is what I've seen on the mailing
list.
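Roughly, that means passing the same routing value at index time and at search
time. A sketch with the Python client (index and field names are placeholders,
and note this only works cleanly if each document has a single owning user):

from elasticsearch import Elasticsearch

es = Elasticsearch()

# Index the document onto the shard chosen by the owner's id.
es.index(
    index="documents",
    id="doc-1",
    routing="42",
    body={"owner_ids": [42], "body": "quarterly report for Q2"},
)

# Search with the same routing value so only that shard is queried.
# Keep the term filter too, since other users' docs can live on the same shard.
es.search(
    index="documents",
    routing="42",
    body={"query": {"bool": {"filter": {"term": {"owner_ids": 42}}}}},
)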
Try the filter approach first, and only look into other approaches if
performance isn't good enough. Lucene is quite fast at intersecting filters
with large postings lists these days...
A separate index per user is not only wasteful because of the duplicated
content, but it will also consume substantially more RAM, disk, and file
descriptors just because of the per-index overhead.
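If it helps, a minimal mapping for the single-index approach might look
something like this (index and field names are placeholders, and the exact
mapping syntax varies across Elasticsearch versions, so treat it as a sketch):

from elasticsearch import Elasticsearch

es = Elasticsearch()

es.indices.create(
    index="documents",
    body={
        "mappings": {
            "properties": {
                # The owning user ids; Elasticsearch fields are multi-valued
                # by default, so a list needs no special array type.
                "owner_ids": {"type": "integer"},
                # The searchable content of the document.
                "body": {"type": "text"},
            }
        }
    },
)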