You'll get an output with one bucket per city, and the buckets will be unique. The doc_count key gives you the number of times that particular city appears, given your search criteria.
Thanks for the feedback, but the expected output is not the unique values of 'ameta.city' or any other field under 'ameta'; it is the unique key names across all 'ameta' fields.
The query or aggregation I'm looking for should produce something like
The whole issue is that we want to provide a list of possible keys for the user to choose from. We do not know beforehand how many different "keys" there are in the index under "ameta".
What we used to do is get all fields and their values and extrapolate from that. However, after upgrading our labs from 6.8.x to 7.6.x, we run into this error:
field expansion matches too many fields, limit: 1024, got: 1672
The Mapping API would solve this if the data covered the whole index: in that case I could look at the mapping and get the fields from there. This is roughly what we have been doing thus far by grabbing all the data, as mentioned in my previous reply.
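For the whole-index case, extracting key names from the mapping might look like the sketch below. The index name and the mapping shape here are assumptions for illustration; a real response would come from `GET /my-index/_mapping`.

```python
# Sketch: pull the key names under an "ameta" object field out of a
# mapping response. The sample mapping is hypothetical; in practice it
# would be fetched from the cluster via GET /my-index/_mapping.

sample_mapping = {
    "my-index": {
        "mappings": {
            "properties": {
                "aid": {"type": "long"},
                "ameta": {
                    "properties": {
                        "city": {"type": "keyword"},
                        "country": {"type": "keyword"},
                        "zip": {"type": "keyword"},
                    }
                },
            }
        }
    }
}

def ameta_keys_from_mapping(mapping, index, parent="ameta"):
    """Return the key names defined under the given object field."""
    props = mapping[index]["mappings"]["properties"]
    return sorted(props.get(parent, {}).get("properties", {}))

print(ameta_keys_from_mapping(sample_mapping, "my-index"))
# ['city', 'country', 'zip']
```

The limitation is exactly the one described above: the mapping is index-wide, so it cannot be narrowed to a subset of documents.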
The actual use case involves retrieving the list of "ameta" fields (key names) for a subset of records, such as aid between 100 and 200, or some other criteria.
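As a stopgap under the current approach, the key names for a subset can still be collected client-side from the matching documents. A minimal sketch, where the hard-coded hits stand in for the response of a search with a `range` query on `aid` (the query itself would run server-side):

```python
# Sketch: union of "ameta" key names across a subset of documents.
# In practice the hits would come from a search filtered on aid
# (e.g. 100-200); here they are hard-coded for illustration.

hits = [
    {"_source": {"aid": 120, "ameta": {"city": "Oslo", "zip": "0150"}}},
    {"_source": {"aid": 150, "ameta": {"city": "Bergen", "country": "NO"}}},
    {"_source": {"aid": 180, "ameta": {}}},
]

def ameta_key_names(hits):
    """Collect the distinct key names under 'ameta' across all hits."""
    keys = set()
    for hit in hits:
        keys.update(hit["_source"].get("ameta", {}))
    return sorted(keys)

print(ameta_key_names(hits))
# ['city', 'country', 'zip']
```

This avoids querying across `ameta.*` fields entirely (and so sidesteps the field expansion limit), at the cost of pulling the documents back to the client.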
I've looked around quite a bit and I'm starting to feel that such a use case is not supported in Elasticsearch.
Unfortunately, it is a hard requirement to run this aggregation over a subset of documents, as each targeted "segment" of data may have very different keys available to choose from.
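One pattern that does support per-subset aggregation (a suggestion on my part, not something confirmed in this thread) is to store the key names alongside each document in a keyword field, here hypothetically named `ameta_keys`, and then run an ordinary filtered terms aggregation on that field. A sketch of both halves, built client-side as plain dicts:

```python
# Sketch: maintain a hypothetical "ameta_keys" keyword field at index
# time, then aggregate distinct key names over any document subset.
# Field names and sizes are illustrative assumptions.

def with_ameta_keys(doc):
    """Add the list of 'ameta' key names before indexing the document."""
    doc = dict(doc)
    doc["ameta_keys"] = sorted(doc.get("ameta", {}))
    return doc

def keys_agg_request(min_aid, max_aid):
    """Search body: filter on aid, terms-aggregate over ameta_keys."""
    return {
        "size": 0,
        "query": {"range": {"aid": {"gte": min_aid, "lte": max_aid}}},
        "aggs": {"keys": {"terms": {"field": "ameta_keys", "size": 1000}}},
    }

doc = {"aid": 150, "ameta": {"city": "Bergen", "country": "NO"}}
print(with_ameta_keys(doc)["ameta_keys"])
# ['city', 'country']
print(keys_agg_request(100, 200)["query"])
```

Each bucket of the terms aggregation would then be a key name rather than a key value, scoped to whatever query you filter by.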