Elasticsearch document access-based control and vector search

Following is my use case for storing data in Elasticsearch for a workspace search that connects different data sources.

  1. Text from a file is chunked and stored across multiple documents, which are used for vector and keyword search over the chunks.
  2. However, each file has a set of allowed users as well as allowed groups who can access the document. Users can belong to groups, and access should be enforced at search time. I want to support both keyword and semantic search.
  3. I want to avoid duplicating the permissions on every text chunk.
     What's the best way to index such data so that filtering also becomes easy? I want to filter the data while querying instead of applying a pre-filter/post-filter.
  4. For access-based control, each document can have a list of allowed users, a list of allowed groups, or be open to all users (see the sketch after this list).
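
For illustration, this is roughly the shape of permission metadata I have in mind per file (index, id, and field names are just placeholders):

PUT files/_doc/file-123
{
  "file_id": "file-123",
  "allowed_users": ["alice@example.com", "bob@example.com"],
  "allowed_groups": ["engineering"],
  "allow_all_users": false
}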

Consider that I have millions of files and their permissions to index, for example an organisation's Google Drive accessed through service accounts. What would be the ideal data storage strategy for optimal search?

Should a single-index strategy be used, or multiple indices with a terms lookup?

I'd suggest a single index for your document data.

I'd also suggest not storing each chunk in a separate document, but instead storing each chunk as a different value in the same array field on a single document. This way, you won't need to duplicate the permissions metadata on each chunk.
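
As a rough sketch of that shape (index and field names here are illustrative, not prescriptive), the permission fields live once at the top level and the chunks sit in a nested array:

PUT workspace-search
{
  "mappings": {
    "properties": {
      "file_id":         { "type": "keyword" },
      "allowed_users":   { "type": "keyword" },
      "allowed_groups":  { "type": "keyword" },
      "allow_all_users": { "type": "boolean" },
      "passages": {
        "type": "nested",
        "properties": {
          "text": { "type": "text" },
          "vector": {
            "type": "dense_vector",
            "index": true,
            "dims": 384,
            "similarity": "dot_product"
          }
        }
      }
    }
  }
}

Note that a dense_vector inside a nested field needs a recent Elasticsearch version, which comes up below.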

Then, you can set up an Elasticsearch Role Template for your index that filters results based on the permissions metadata.
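
For instance, a role with a templated document-level security query could look roughly like this, assuming the field names from the sketch above and that each user's groups are available in their role-mapping metadata:

POST /_security/role/workspace_search_user
{
  "indices": [
    {
      "names": [ "workspace-search" ],
      "privileges": [ "read" ],
      "query": {
        "template": {
          "source": "{\"bool\":{\"should\":[{\"term\":{\"allow_all_users\":true}},{\"term\":{\"allowed_users\":\"{{_user.username}}\"}},{\"terms\":{\"allowed_groups\":{{#toJson}}_user.metadata.groups{{/toJson}}}}],\"minimum_should_match\":1}}"
        }
      }
    }
  ]
}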

Prior art that you may want to read through:

You may not be using these exact same tools, but they should give you some ideas on how we've solved similar problems in a similar space.

When I try to create such an index, it gives me the following error:
[dense_vector] fields cannot be indexed if they're within [nested] mappings

What version of Elasticsearch are you on?

PUT chunker
{
  "mappings": {
    "dynamic": "true",
    "properties": {
      "passages": {
        "type": "nested",
        "properties": {
          "vector": {
            "properties": {
              "predicted_value": {
                "type": "dense_vector",
                "index": true,
                "dims": 384,
                "similarity": "dot_product"
              }
            }
          }
        }
      }
    }
  }
}

This (copied from the blog's mapping example) works fine for me:

{
  "acknowledged": true,
  "shards_acknowledged": true,
  "index": "chunker"
}

Got it. My version was 8.10 and I think ES started supporting this from 8.11.

Is it possible to get all passage vectors? I have a use case of finding the top-k passages irrespective of the top-level document.

Even if the top 2 passages are from the same document, both of those passages should be returned.

I'm referring to the documentation for ES 8.11.

In that case, perhaps you'd be better served indexing each chunk as a separate Elasticsearch document. Elasticsearch isn't intended to return the same _id across multiple hits in a single response.
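
As a sketch, with one document per chunk, a top-k passage search is then just a standard kNN search. Here I'm assuming an index called passages, a 384-dim vector field, and an embedding model deployed in Elasticsearch (otherwise pass a query_vector directly):

GET passages/_search
{
  "knn": {
    "field": "vector",
    "k": 5,
    "num_candidates": 50,
    "query_vector_builder": {
      "text_embedding": {
        "model_id": "sentence-transformers__all-minilm-l6-v2",
        "model_text": "quarterly revenue targets"
      }
    }
  },
  "_source": [ "file_id", "text" ]
}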

I'm curious what your use case is. I could see the desire for this if you're wanting to highlight which passages matched a given query, and these might span multiple chunks. However, highlighting isn't easy to do with vector search, short of highlighting the whole matched passage.

I'm doing some post-processing based on the chunks of a document, e.g. how the separate relevant chunks from the same document relate to each other.

What about using a terms lookup from a separate permission index and then searching the content index? How effective will that be in the case of millions of chunks and thousands of documents? Or is permission and metadata redundancy something that is currently unavoidable?

Because for me, permission updates and content changes are two different events.

Since on a content change I don't have control over the number of chunks, I have to reindex all chunks.
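
To make the terms lookup idea concrete, this is roughly what I'm imagining (all index, id, and field names are placeholders): a document per user in permission_index listing the file ids that user can access, and a terms lookup filter on the passage query:

# one document per user listing accessible files
PUT permission_index/_doc/alice@example.com
{
  "accessible_file_ids": ["file-123", "file-456"]
}

# filter passages by that user's accessible files via terms lookup
GET passage_index/_search
{
  "query": {
    "bool": {
      "must": [
        { "match": { "text": "quarterly revenue" } }
      ],
      "filter": [
        {
          "terms": {
            "file_id": {
              "index": "permission_index",
              "id": "alice@example.com",
              "path": "accessible_file_ids"
            }
          }
        }
      ]
    }
  }
}

Though I realise a terms lookup is capped by index.max_terms_count (65,536 by default), which could already be a problem with millions of files per user.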

Perhaps this blog that a colleague and I wrote might give you some additional thoughts.

The problem is that I need the most relevant passages, not documents, when searching.

Considering this, I think I cannot store them in a single document per file; I need a document per passage.

There are 2 options now:

  1. Either store permissions in a permission index and file chunks in a passage_index, and use a terms lookup for filtering.

  2. Store both in the same index at the chunk level, which leads to duplicated permissions.

Which would be better in the case of enterprise search at scale?

Store permissions in a permission index and file chunks in a passage_index, and use a terms lookup for filtering.

This is my recommendation, and is how we've implemented DLS for other Elastic products.

I'll suggest again that you should read through:

Note that the difficult part with this is ensuring at search time that you can accurately associate a given search request's origination with the right set of permissions. This will be something you'll have to implement in your backend code.

Actually, I think I misunderstood you. My mistake.

Store both in the same index at the chunk level, which leads to duplicated permissions.

Each Elasticsearch document should have the metadata fields on it necessary to filter that document for DLS. I think that's what you're meaning by "permissions". So I'd actually recommend this approach.

I understand that this is going to cause you to store identical permissions values for multiple Elasticsearch documents because the permissions are associated with source documents, and your use case necessitates that you'll need to store each chunk of a source document as a separate Elasticsearch document.

I don't think there's really a way around this, without changing your requirements. As I'd originally stated, it's typically better to put all of the chunks for a source document in one Elasticsearch document. You may want to re-evaluate if you really need/want to display a different hit for every single passage that matches. This is definitely an uncommon UX.

For example, if your dataset was "Books" and a query of "Alice" was issued, do you really want thousands of hits from "Alice In Wonderland"? Or do you want just one hit that previews a few of the relevant passages?
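
To sketch what that looks like (names and values are illustrative), every chunk document repeats its file's permission fields, and the permission filter is passed as a kNN filter so it's applied during the vector search rather than as a post-filter:

# each chunk document carries its file's permission fields
PUT passages/_doc/file-123_0
{
  "file_id": "file-123",
  "chunk_seq": 0,
  "text": "first passage of the file ...",
  "allowed_users": ["alice@example.com"],
  "allowed_groups": ["engineering"],
  "allow_all_users": false
}

GET passages/_search
{
  "knn": {
    "field": "vector",
    "k": 10,
    "num_candidates": 100,
    "query_vector_builder": {
      "text_embedding": {
        "model_id": "sentence-transformers__all-minilm-l6-v2",
        "model_text": "alice in wonderland"
      }
    },
    "filter": {
      "bool": {
        "minimum_should_match": 1,
        "should": [
          { "term": { "allow_all_users": true } },
          { "term": { "allowed_users": "alice@example.com" } },
          { "terms": { "allowed_groups": [ "engineering" ] } }
        ]
      }
    }
  }
}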


Yes. My use case requires that if, let's say, the top passages are passage 1 and passage 3, then I also need passage 2.

So, the final suggestion is to have one index with duplicates, right?

Let's forget about the permission use case for now and simplify the use case.

Each document in ES is a passage, chunked from a piece of text, and metadata is associated with that text.

Let's consider this:

  1. My only concern is concurrent updates to both the content and the content metadata, since they can be updated independently.
  2. Whenever the content changes, since it's chunked and the number of passages can change, I always have to re-index all passages belonging to the same document along with all the metadata for that content.
  3. For a content metadata change, I only want to update the content metadata (sketched with update_by_query below).

In concurrent situations, when using a message bus, this can lead to data inconsistency.
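
For point 3, what I'd be running is roughly an update_by_query over the file's chunks (index and field names are placeholders), which is why its ordering relative to a full re-index of those chunks matters:

POST passages/_update_by_query
{
  "query": { "term": { "file_id": "file-123" } },
  "script": {
    "source": "ctx._source.metadata = params.metadata",
    "params": {
      "metadata": { "title": "Q3 report (renamed)", "owner": "alice@example.com" }
    }
  }
}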