We run our analysis twice a week in PySpark. At the moment I write only the latest data to Elasticsearch, but now we want to append each run's data instead. Once we start appending, all the charts in Kibana will start averaging over all the available data. What we want is for the dashboard to show the data for the latest analysis date by default; the user can then switch to a different date if he/she wishes.
Is it possible to create a filter in Kibana using Query DSL that is applied by default when the user opens the dashboard? The user could later deselect max(Analysis Date) and select an Analysis Date of his/her choice.
I tried the query below under Kibana -> Filter -> Query DSL, but it didn't work. When I try the same query in Dev Tools, with the index name and _search, it works.
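For illustration, a max aggregation of roughly this shape (the index name sales_analysis and the field name analysis_date are placeholders, not our real names) returns the latest analysis date when run in Dev Tools:

```
GET sales_analysis/_search
{
  "size": 0,
  "aggs": {
    "latest_analysis_date": {
      "max": { "field": "analysis_date" }
    }
  }
}
```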
Ah ok, I thought you had a query with such a filter working in Dev Tools.
I'm afraid this won't be possible: you are trying to do aggregations inside your query, and a Kibana filter can only contain a query, not an aggregation.
I don't exactly understand what you are trying to achieve, but if you are adding new data to ES and you include a timestamp of when the data was added, couldn't you then just have a default time range of 'last week' or something similar?
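For example, a minimal PySpark sketch of that idea could look like the one below; the index name sales_analysis, the field names, and the connector settings are assumptions, and it needs the elasticsearch-hadoop Spark connector on the classpath:

```python
from pyspark.sql import SparkSession, functions as F

# assumes the elasticsearch-hadoop connector is available, e.g. started with
#   spark-submit --packages org.elasticsearch:elasticsearch-hadoop:<version> ...
spark = SparkSession.builder.appName("weekly-analysis").getOrCreate()

# stand-in for the real analysis output
results = spark.createDataFrame(
    [("product_a", 120.0), ("product_b", 95.5)],
    ["product", "sales"],
)

# tag every row with the analysis date and the time it was written to ES
results = (results
           .withColumn("analysis_date", F.current_date())
           .withColumn("ingested_at", F.current_timestamp()))

# append to the existing index instead of overwriting it
(results.write
    .format("org.elasticsearch.spark.sql")
    .option("es.nodes", "localhost")
    .option("es.port", "9200")
    .mode("append")
    .save("sales_analysis"))
```

With ingested_at mapped as a date, the dashboard's time picker (or a time range stored with the dashboard) could then be used to show only the most recent run.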
Thank you for replying. In our dashboard we actually use two dates. One is used for trends, say the trend of sales for various products and categories. The other is the analysis date, the date on which we run the analysis, e.g. comparing the sales of two or more products (ANOVA, Tukey, etc.).
So we have an analysis date, say today's date, and another date used for the trend, which covers the last one year of data counting back from today. When we run the analysis on Thursday, 20 Jun 2019, the analysis date will be 20 Jun 2019, but the trend will cover the year of data leading up to 20 Jun 2019. That trend date is our calendar date and it is tied to the time filter in the top right. The analysis date is the one we want to put in a filter, as sketched below.
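A static custom filter along these lines (assuming the field is named analysis_date; added via Edit filter -> Edit as Query DSL) is possible, and the user can disable or edit it, but it pins one fixed date rather than always picking the latest one:

```
{
  "query": {
    "range": {
      "analysis_date": {
        "gte": "2019-06-20",
        "lte": "2019-06-20",
        "format": "yyyy-MM-dd"
      }
    }
  }
}
```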