How to use the elapsed filter - Logstash

I am working with the elapsed filter. I read the elapsed filter guide for Logstash, then made a sample config file and CSV to test how the filter works, but it does not seem to be working: there is no change in the data uploaded to ES. I have attached the CSV file and the config code. Can you give some examples of how to use the elapsed filter?

Here's my CSV data:
sample csv data

Here's my config file:

input {
  file {
    path => "/home/paulsteven/log_cars/aggreagate.csv"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  csv {
    separator => ","
    quote_char => "%"
    columns => ["state","city","haps","ads","num_id","serial"]
  }
  elapsed {
    start_tag => "taskStarted"
    end_tag => "taskEnded"
    unique_id_field => "num_id"
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "el03"
    document_type => "details"
  }
  stdout {}
}

Output in ES:

{
          "city" => "tirunelveli",
          "path" => "/home/paulsteven/log_cars/aggreagate.csv",
        "num_id" => "2345-1002-4501",
       "message" => "tamil nadu,tirunelveli,hap0,ad1,2345-1002-4501,1",
      "@version" => "1",
        "serial" => "1",
          "haps" => "hap0",
         "state" => "tamil nadu",
          "host" => "smackcoders",
           "ads" => "ad1",
    "@timestamp" => 2019-05-06T10:03:51.443Z
}
{
          "city" => "chennai",
          "path" => "/home/paulsteven/log_cars/aggreagate.csv",
        "num_id" => "2345-1002-4501",
       "message" => "tamil nadu,chennai,hap0,ad1,2345-1002-4501,5",
      "@version" => "1",
        "serial" => "5",
          "haps" => "hap0",
         "state" => "tamil nadu",
          "host" => "smackcoders",
           "ads" => "ad1",
    "@timestamp" => 2019-05-06T10:03:51.447Z
}
{
          "city" => "kottayam",
          "path" => "/home/paulsteven/log_cars/aggreagate.csv",
        "num_id" => "2345-1002-4501",
       "message" => "kerala,kottayam,hap1,ad2,2345-1002-4501,9",
      "@version" => "1",
        "serial" => "9",
          "haps" => "hap1",
         "state" => "kerala",
          "host" => "smackcoders",
           "ads" => "ad2",
    "@timestamp" => 2019-05-06T10:03:51.449Z
}
{
          "city" => "Jalna",
          "path" => "/home/paulsteven/log_cars/aggreagate.csv",
        "num_id" => "2345-1002-4501",
       "message" => "mumbai,Jalna,hap2,ad3,2345-1002-4501,13",
      "@version" => "1",
        "serial" => "13",
          "haps" => "hap2",
         "state" => "mumbai",
          "host" => "smackcoders",
           "ads" => "ad3",
    "@timestamp" => 2019-05-06T10:03:51.452Z
}

I have never used the elapsed filter, but I was curious, so I opened the discussion.

It sounds like num_id should be a unique value for each row, or possibly for each event. In your example data it appears to be identical on every row.

What do you expect to happen?


I want to understand its usage through the above example or any other example. I read Stack Overflow questions and made a CSV file like theirs to check. Everyone used the same data in that column, yet claimed it was unique (num_id), so I did the same. From the Logstash guide I learned that the start tag and end tag calculate the elapsed time based on the timestamp, but how exactly does it calculate it? Can you provide some examples of its usage? I read the Logstash guide but was unable to figure out more.

Again, I have never used the elapsed filter but am curious...

From the documentation

The events managed by this filter must have some particular properties. The event describing the start of the task (the "start event") must contain a tag equal to start_tag. On the other side, the event describing the end of the task (the "end event") must contain a tag equal to end_tag. Both these two kinds of event need to own an ID field which identify uniquely that particular task. The name of this field is stored in unique_id_field.

I don't see a start_tag or an end_tag in your data.
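
Per that description, the filter looks for those tags on the events themselves, so one option is to derive them from the data. A rough sketch (untested; the marker column and its values are hypothetical, not something in your CSV):

filter {
  csv {
    separator => ","
    columns => ["state","city","haps","ads","num_id","serial","marker"]
  }
  # hypothetical marker column: "start" on the first event of a task,
  # "end" on the last one
  if [marker] == "start" {
    mutate { add_tag => ["taskStarted"] }
  } else if [marker] == "end" {
    mutate { add_tag => ["taskEnded"] }
  }
  elapsed {
    start_tag => "taskStarted"
    end_tag => "taskEnded"
    unique_id_field => "num_id"
  }
}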

OK, thanks for viewing my topic and replying with some suggestions. If I find out more about this filter, I will post it here.

In an attempt to be slightly more helpful than just quoting the documentation :smiley:

I would add a column to your CSV named tags and add taskStarted and taskEnded to the appropriate rows. I would also make sure the num_id field actually has unique values (one per transaction).

This means there should be only one line with num_id XXX where tags is taskStarted, and one line where tags is taskEnded, also with num_id XXX. The next transaction should have num_id XXY, and so on.

That is how I imagine it should work...
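
For example, the CSV could look something like this (hypothetical data, just extending your columns with a tags column):

state,city,haps,ads,num_id,serial,tags
tamil nadu,tirunelveli,hap0,ad1,2345-1002-4501,1,taskStarted
tamil nadu,chennai,hap0,ad1,2345-1002-4501,5,taskEnded
kerala,kottayam,hap1,ad2,2345-1002-4502,9,taskStarted
mumbai,Jalna,hap2,ad3,2345-1002-4502,13,taskEnded

with "tags" appended to the columns list of your csv filter. According to the docs, when the filter matches a start/end pair it adds an elapsed_time field (in seconds) and the tags elapsed and elapsed_match to the end event. One caveat: elapsed_time is computed from the @timestamp of the two events, and with a file input that is the ingest time, so replaying a CSV like this would measure the milliseconds between reading the two lines rather than anything recorded in the file.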
