I am not able to fetch the data for the grandchild aggregation.
For example, I have used the parent/child relationship to create the index and its types.
The index name is "employeedetails", which has the following three types:
country -> employee -> department
country is parent of employee
employee is parent of department
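For reference, a minimal sketch of such a mapping (the `_parent` syntax from ES 1.x/2.x; field and type names as above) would look like:

```json
PUT /employeedetails
{
  "mappings": {
    "country": {},
    "employee": {
      "_parent": { "type": "country" }
    },
    "department": {
      "_parent": { "type": "employee" }
    }
  }
}
```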
Here is my query to fetch the department data from the REST client.
The aggregation looks good. I think if you remove the type from the URL you do get results. If not, can you share your entire search request here?
If you also use only one primary shard in production, you may consider just using the terms aggregation, because in your case the bucket counts will always be correct.
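For example, assuming the department documents have a field such as `dept_name` (just a placeholder here), a plain terms aggregation would be:

```json
POST /employeedetails/department/_search
{
  "size": 0,
  "aggs": {
    "departments": {
      "terms": { "field": "dept_name" }
    }
  }
}
```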
Thank you for responding. I tried the same query after removing the type from the URL, but it didn't work. However, my goal is to run aggregations on the grandchild, such as count, sum, avg, etc.
I am trying from both parent types, country and employee.
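Roughly, what I am after (field names here are just placeholders) is a nested children aggregation from country down to department, with a metric on the leaf level:

```json
POST /employeedetails/country/_search
{
  "size": 0,
  "aggs": {
    "to_employee": {
      "children": { "type": "employee" },
      "aggs": {
        "to_department": {
          "children": { "type": "department" },
          "aggs": {
            "avg_salary": { "avg": { "field": "salary" } }
          }
        }
      }
    }
  }
}
```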
I am able to get the result for this. But I have another index with a similar hierarchy, and it is throwing this exception in the console:
nested: SearchParseException[[children] no [_parent] field not configured that points to a parent type];
at org.elasticsearch.search.SearchService.parseSource(SearchService.java:853)
at org.elasticsearch.search.SearchService.createContext(SearchService.java:652)
at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:618)
at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:369)
at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:368)
at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:365)
at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:350)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: SearchParseException[[children] no [_parent] field not configured that points to a parent type]
at org.elasticsearch.search.aggregations.bucket.children.ChildrenParser.parse(ChildrenParser.java:82)
at org.elasticsearch.search.aggregations.AggregatorParsers.parseAggregators(AggregatorParsers.java:198)
at org.elasticsearch.search.aggregations.AggregatorParsers.parseAggregators(AggregatorParsers.java:103)
at org.elasticsearch.search.aggregations.AggregationParseElement.parse(AggregationParseElement.java:60)
at org.elasticsearch.search.SearchService.parseSource(SearchService.java:836)
... 10 more
However, my mapping for that type does have the "_parent" configuration.
In my use case, the parent and the 1st child share one unique key, while the 1st child and 2nd child relationship uses other keys called dept1 and dept2. Hence, when I search with the last-child aggregation, it returns 0 records because it doesn't find the key in any of the parents. Is there any way I can handle this?
The reason the children agg isn't working is that the department documents do not point to an employee that exists. dept1 and dept2 are not valid employee document ids. The only two ids you can use here are 1 and 2 (which are employee ids).
Also, the employee with id 2 does not point to an existing country. There is no country with id 2.
What is the purpose of the department_id, employee_id, dept_id and country_id fields? The ids of a department, employee and country are already in the _id field. The children aggregation doesn't use these fields.
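In other words, the `parent` value on a department document must be an existing employee `_id`, and a grandchild also needs explicit `routing` to the grandparent so all three levels land on the same shard. A sketch (ids and field values are illustrative):

```json
PUT /employeedetails/country/1
{ "country_name": "US" }

PUT /employeedetails/employee/1?parent=1
{ "employee_name": "John" }

PUT /employeedetails/department/10?parent=1&routing=1
{ "dept_name": "Sales" }
```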
Thank you so much for the response. I was able to solve this problem; it was an issue with the _parent routing. The ids are basically for the uniqueness of the documents.