Hi,
I'm trying to create a Kibana visualization with two buckets that produces an aggregated result, where I need to apply a factor to each bucket and then sum both.
Use case: dedicated and shared costs (the shared part being a building).
My documents in Elasticsearch:
doc 1: { "_id": 1, "cost": 10, "Department": "SALES", "_type": "cost" }
doc 2: { "_id": 2, "cost": 50, "Building": "MyTower", "_type": "cost" }
doc 3: { "_id": 3, "cost": 10, "Department": "SALES", "_type": "cost" }
doc 4: { "_id": 4, "cost": 30, "Building": "MyTower", "_type": "cost" }
My attempts:
1 - Use two buckets.
- Create a first sum metric aggregated by the term field Department
- Create a second sum metric aggregated by the term field Building
I was not able to use a script to sum both aggregation results.
2 - Simple sum metric with a bucket split using filters.
The restriction here was accessing the filtered results from a scripted field.
3 - Create a scripted field to calculate:
sum the documents that have Department.keyword = SALES together with the documents that have a Building.keyword value.
My impression is that scripted fields are the better fit here, but can someone point me to examples of scripted fields for this kind of calculation?
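To make it concrete, this is roughly what I'm trying to express directly in the query DSL (just a sketch typed into Dev Tools; the index name costs is an assumption, and the SALES / MyTower values come from my sample documents above): a single sum aggregation whose Painless script keeps dedicated SALES costs at full value and applies the 0.5 factor to the shared building costs.
POST costs/_search
{
  "size": 0,
  "aggs": {
    "sales_total_cost": {
      "sum": {
        "script": {
          "lang": "painless",
          "source": """
            // dedicated cost: count the full value
            if (doc['Department.keyword'].size() > 0 && doc['Department.keyword'].value == 'SALES') {
              return doc['cost'].value;
            }
            // shared building cost: apply the 0.5 factor
            if (doc['Building.keyword'].size() > 0 && doc['Building.keyword'].value == 'MyTower') {
              return doc['cost'].value * 0.5;
            }
            return 0;
          """
        }
      }
    }
  }
}
Applying the factor per document avoids having to combine two separate aggregation results afterwards; what I'm missing is how to get the same result out of a visualization.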
Hi Tim,
Thanks a lot for your time and attention.
My objective is to work with a sum metric and filter on two terms.
The first one is a dedicated cost; the second term, the Building field, is a shared resource.
The idea is to represent a single cost for Sales:
sum(Department filter values) + (sum(Building filter values) * 0.5)
My limitation is how to sum both results (Sales: 60).
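To illustrate with my sample documents:
dedicated (Department = SALES): 10 + 10 = 20
shared (Building = MyTower): (50 + 30) * 0.5 = 40
total for Sales: 20 + 40 = 60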
Hi, I've found an alternative. Changing the buckets from terms to filters and creating filters that match the exact conditions worked.
The only remaining issue is how to access the filter values using the Advanced JSON Input, adding JSON there with a Painless script (rough sketch below).
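What I have in mind for that JSON Input on the sum metric is roughly this (a sketch only; it assumes a stack version that still accepts a value script on a sum aggregation, where _value is the cost of each document, and Building.keyword comes from my documents above):
{
  "script": {
    "lang": "painless",
    "source": "doc['Building.keyword'].size() > 0 ? _value * 0.5 : _value"
  }
}
Kibana merges the JSON Input into the sum aggregation it builds for the metric, so shared building costs would be halved while dedicated costs keep their full value.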
Thanks