A way to compress large queries

If you have one set of ids per query and they are all in a filter that is
applied to the entire query, you can create a filtered alias for each id
list. Besides saving on network traffic, you will also save on parsing this
huge list of ids for every request.
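
A rough sketch (the index name "my_index", field "ids", and alias
"label_list_1" below are placeholders for your own names): register one
alias per id list once via the aliases API, with the terms filter baked in:

    curl -XPOST 'http://localhost:9200/_aliases' -d '{
      "actions": [
        { "add": {
            "index": "my_index",
            "alias": "label_list_1",
            "filter": { "terms": { "ids": [1, 2, 3] } }
        }}
      ]
    }'

After that, point your searches at "label_list_1" instead of "my_index"; the
filter is applied automatically, so the request body no longer has to carry
the 50k ids:

    curl -XPOST 'http://localhost:9200/label_list_1/_search' -d '{
      "query": { "match_all": {} }
    }'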

On Friday, November 2, 2012 12:32:43 PM UTC-4, revdev wrote:

Hi,
I am at a point where some of my search queries are close to 1MB, for
example queries where the "terms" filter has around 50k ids. I assume this
adds a lot of network overhead when transferring such big queries to ES. I
am wondering if there is a way to send a "macro" in a query that ES then
expands on its side.
Let me describe how it would work:

  1. Query sent to ES:
    filter: { terms : { ids: [macro_label_list_of_ids] }}
    where "macro_label_list_of_ids" is just a label which uniquely identifies
    a list of integer ids.

  2. Upon receiving this query, ES looks up "macro_label_list_of_ids" in
    another index (created by us) to fetch the actual id values, and then
    substitutes them wherever "macro_label_list_of_ids" appears in the
    query.

So, can you tell me three things:

  1. Is this currently possible in ES?
  2. If not, are there any plans to implement it?
  3. Is there any other workaround to shorten my queries? (I can't change
    the way I am storing data because of business requirements.)

Thanks!
