2019-03-07 04:35:57.421 19EC | INFO ROOT: (Netto2600-Brutto2608)AllocatedBlockCount=0
2019-03-07 04:35:57.421 19EC | INFO ROOT: (Netto2600-Brutto2608)ReservedAddressSpace=0
2019-03-07 04:35:57.421 19EC | INFO TIMEDATA.DLL: Capacity allocated: Hashtable entries 20648881, Heap mem 805306368
2019-03-07 04:35:57.421 19EC | INFO DPREAD: Data import started: 07.03.19 04:35:57
2019-03-07 04:35:57.421 19EC | INFO DPREAD: Importing file: C:\path\to\file1
2019-03-07 04:35:57.452 19EC | INFO DPREAD: Completed importing file: C:\path\to\file1
2019-03-07 04:35:57.452 19EC | INFO DPREAD: Importing file: C:\path\to\file2
2019-03-07 04:36:43.545 19EC | INFO DPREAD: Completed importing file: C:\path\to\file2
2019-03-07 05:38:55.332 19EC | INFO TMDPDATA-INIT: Datasupply info: vwdpm.dcsDefault.2.0
2019-03-07 05:38:55.520 19EC | DEBUG TMDPDATA-GETFILEDATE: MdpFileIdentifier=[+]
2019-03-07 05:38:55.520 19EC | DEBUG TMDPDATA-GETFILEDATE: FileDate=[06.03.19]
What I want to do is calculate the difference between the time of "Importing file" and "Completed importing file" for each file. Could someone help me with this? Should I use a Logstash filter (like aggregate or elapsed), or can this be done directly in Filebeat?
For Logstash, the elapsed filter plugin would be what you're looking for.
You need to add tags to your events (start/end) and a correlation ID (maybe your file name?) via some other filter, possibly grok or, better yet, dissect.
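A minimal sketch of what that could look like, based on the sample lines above (the grok patterns, field names, tags, and the timeout value are assumptions you'd adapt to your actual format, not a drop-in config):

filter {
  # Parse the common prefix: timestamp, thread id, level, component, message.
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:log_ts} %{WORD:thread} \| %{LOGLEVEL:level} %{DATA:component}: %{GREEDYDATA:msg}" }
  }

  # Use the log timestamp (not ingest time) so the elapsed calculation is accurate.
  date { match => [ "log_ts", "yyyy-MM-dd HH:mm:ss.SSS" ] }

  # Tag start/end events and pull out the file path to use as the correlation ID.
  if [msg] =~ /^Importing file: / {
    grok { match => { "msg" => "^Importing file: %{GREEDYDATA:import_file}$" } }
    mutate { add_tag => [ "import_started" ] }
  } else if [msg] =~ /^Completed importing file: / {
    grok { match => { "msg" => "^Completed importing file: %{GREEDYDATA:import_file}$" } }
    mutate { add_tag => [ "import_completed" ] }
  }

  # elapsed pairs the two events by unique_id_field and adds "elapsed_time"
  # (in seconds) to the event carrying the end tag.
  elapsed {
    start_tag       => "import_started"
    end_tag         => "import_completed"
    unique_id_field => "import_file"
    timeout         => 3600
  }
}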
If you just want to convert seconds to milliseconds, a ruby filter will do it:

if "time_elapsed" in [tags] {
  # "elapsed_time" is the field (in seconds) that the elapsed filter adds to the matched end event.
  ruby { code => 'event.set("elapsed_time_ms", 1000.0 * event.get("elapsed_time"))' }
}
In that Stack Overflow answer, the rewrite of the initial solution using the event API is missing an essential event.set, so the aggregate filters are not doing anything.
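For anyone comparing approaches: the general shape of an aggregate-based version with the event.set in place would be roughly the following. This is only a sketch reusing the assumed tags and fields from the elapsed example above, not the code from that answer:

filter {
  # Remember when the import of a given file started.
  if "import_started" in [tags] {
    aggregate {
      task_id    => "%{import_file}"
      code       => "map['started'] = event.get('@timestamp').to_f"
      map_action => "create"
    }
  }
  # On the end event, write the duration back onto the event; without this
  # event.set the aggregate code computes nothing visible.
  if "import_completed" in [tags] {
    aggregate {
      task_id     => "%{import_file}"
      code        => "event.set('import_duration_s', event.get('@timestamp').to_f - map['started'])"
      map_action  => "update"
      end_of_task => true
      timeout     => 3600
    }
  }
}

Note that the aggregate filter's documentation tells you to run the pipeline with a single worker (pipeline.workers: 1) so the start and end events share the same in-memory map.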