Monitor elapsed time on events for a processing plant

I am new to Elasticsearch and am currently evaluating whether it is the right system for us. I work for a processing plant and want to get more insight into our data. One of the features I want is to process the event log from valves that are opening and closing. They have a limit on maximum opening and closing time; if they go past the limit, we have to fix the valves. It would also be interesting to trend the change over time to detect problems before they happen.

Sample of the event log:

Timestamp,           Tag,       Event
30.01.2018 07:19:35, EV-10-001, commando close
30.01.2018 07:19:35, EV-10-002, commando close
30.01.2018 07:19:38, EV-10-002, closed
30.01.2018 07:19:48, EV-10-001, closed

I have looked at the elapsed filter plugin for Logstash, and it looks like I can use it to compute the time from a commando close event until the valve is closed. But how could I compare that against a defined max time? There are about a hundred valves, and each has an individual time limit.
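For reference, a minimal sketch of what an elapsed filter setup could look like for this log. The column names and date format are taken from the sample above, but the tag names, regex conditions, and timeout are assumptions, not a tested config:

```
filter {
  csv {
    columns   => ["Timestamp", "Tag", "Event"]
    separator => ","
  }
  date {
    match => ["Timestamp", "dd.MM.yyyy HH:mm:ss"]
  }
  # The elapsed filter pairs events by tags, so tag start/end events first.
  if [Event] =~ /commando close/ {
    mutate { add_tag => ["valve_close_start"] }
  } else if [Event] =~ /closed/ {
    mutate { add_tag => ["valve_close_end"] }
  }
  elapsed {
    start_tag       => "valve_close_start"
    end_tag         => "valve_close_end"
    unique_id_field => "Tag"   # pair the start/end events per valve
    timeout         => 600     # assumed: give up pairing after 10 minutes
  }
  # Matching end events get an "elapsed_time" field (seconds between the pair).
}
```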

For now this is a project I am doing in my spare time, so I have limited resources for developing new things.

Any advice and pointers in the right direction are much appreciated. I can dig into the details myself.

What about this?


There are a few ways to do the comparison:

  1. Have an index in Elasticsearch that contains the valve identifiers and their time limits, then use an elasticsearch filter lookup in your Logstash config to fetch each valve's limit and add it to the event.
  2. Do the same thing with a translate filter and an associated lookup table.
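To illustrate option 2, a sketch of a translate filter plus a ruby filter doing the comparison. The dictionary path and field names are hypothetical, and the `elapsed_time` field assumes the elapsed filter mentioned earlier has already run:

```
filter {
  # Look up this valve's max close time (seconds) from a YAML dictionary,
  # e.g. a file containing lines like "EV-10-001: 5" (hypothetical path).
  translate {
    source          => "Tag"              # "field" in older plugin versions
    target          => "max_close_time"   # "destination" in older versions
    dictionary_path => "/etc/logstash/valve_limits.yml"
    fallback        => "10"               # assumed default if a valve is missing
  }
  # Flag events where the measured close time exceeds this valve's limit.
  ruby {
    code => "
      if event.get('elapsed_time').to_f > event.get('max_close_time').to_f
        event.tag('close_time_exceeded')
      end
    "
  }
}
```

You could then alert on (or visualize) events carrying the `close_time_exceeded` tag.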

Thank you both for a quick answer. I will look into this.
