I have a few reports that need to be ingested into Elasticsearch as whole files, each one individually, with a specific index for all the reports, i.e. a dropdown for each individual report. How do I achieve this within ELK, and how would I write a script to create a dashboard based on the timeframe in which these reports are ingested, e.g. from 2 days ago to now, or a specific day or time?
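Kibana's dashboard time picker already covers the relative/absolute window part once each report carries a date field, but the same windows can be queried directly. A minimal sketch with the elasticsearch-py 8.x client, assuming a hypothetical index named `reports` and a date field named `ingested_at` (both names are only examples):

```python
from elasticsearch import Elasticsearch

# Assumed local cluster; adjust the URL/credentials for your setup.
es = Elasticsearch("http://localhost:9200")

# Reports ingested between 2 days ago and now, using Elasticsearch date math.
relative = es.search(
    index="reports",
    query={"range": {"ingested_at": {"gte": "now-2d/d", "lte": "now"}}},
)

# The same query pinned to one specific day instead of a relative window.
specific_day = es.search(
    index="reports",
    query={"range": {"ingested_at": {"gte": "2023-05-01", "lt": "2023-05-02"}}},
)

print(relative["hits"]["total"], specific_day["hits"]["total"])
```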
But that would ship as multiple events through Filebeat or Logstash's "file" input. In my use case, a tool generates some reports every day, so to avoid duplicating data I need to be able to search for a specific report and get it back as an entire file rather than as separate events; something like a dropdown that queries my index for any particular CSV report to get the desired results. Are there any options for mapping my software to Elasticsearch?
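For what it's worth, one way to keep an entire file as a single document (rather than one event per line) is to index it yourself outside of Filebeat/Logstash. A rough sketch under the same assumptions as above; the file path and field names are hypothetical:

```python
from datetime import datetime, timezone
from pathlib import Path

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

report_path = Path("daily_report.csv")  # example path

# The whole CSV becomes one document: its raw content plus some metadata.
doc = {
    "report_name": report_path.name,
    "content": report_path.read_text(),
    "ingested_at": datetime.now(timezone.utc).isoformat(),
}

es.index(index="reports", document=doc)
```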
So some reports might be updated day-to-day and they need to be overwritten? That's ok!
You can just take a few fields from the document and create your own Elasticsearch _id. Logstash can then use that to index the report, so when an update happens it will use those same fields, generate the same _id, and Elasticsearch will treat it as an update rather than a new document.
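In Logstash this is typically done with the fingerprint filter feeding the elasticsearch output's document_id option. The same idea in a small Python sketch (hypothetical index and field names): re-ingesting the same report produces the same _id, so it overwrites instead of duplicating.

```python
import hashlib
from datetime import datetime, timezone

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

def deterministic_id(report_name: str, report_date: str) -> str:
    # Hash the fields that uniquely identify a report; the same inputs
    # always produce the same _id, so re-indexing overwrites the old copy.
    return hashlib.sha256(f"{report_name}|{report_date}".encode()).hexdigest()

doc = {
    "report_name": "daily_report.csv",   # example values
    "report_date": "2023-05-01",
    "content": "...raw report content...",
    "ingested_at": datetime.now(timezone.utc).isoformat(),
}

# Indexing twice with the same name/date updates the document in place.
doc_id = deterministic_id(doc["report_name"], doc["report_date"])
es.index(index="reports", id=doc_id, document=doc)
```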
This will definitely help me out. But how would I view a whole report or search the previous reports, i.e. get a list of reports that I can look through for future reference while avoiding redundancy?
Or let me ask this: do we need to write a script to get the latest reports?
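If it helps, listing the distinct reports already in the index and fetching the most recently ingested one can both be done with plain queries rather than a separate script. A sketch under the same assumptions as above (index `reports`, fields `report_name` and `ingested_at`, with `report_name` having a default `.keyword` sub-field):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Distinct report names currently in the index (e.g. to feed a dropdown).
names = es.search(
    index="reports",
    size=0,
    aggs={"reports": {"terms": {"field": "report_name.keyword", "size": 100}}},
)
print([b["key"] for b in names["aggregations"]["reports"]["buckets"]])

# The single most recently ingested report.
latest = es.search(index="reports", size=1, sort=[{"ingested_at": {"order": "desc"}}])
print(latest["hits"]["hits"][0]["_source"]["report_name"])
```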