Can we make Filebeat read files in a particular sequence?

Hi,
I am using Filebeat 1.2.0 to fetch my log files. I have pairs of log files: request and response log files.
In my Logstash configuration I do some aggregation between the request and response files. In my log directory the request file arrives first, and a few milliseconds later the corresponding response file arrives in the same path. So I want to read the request file first and the response file second, and I want to continue this process for all new log files arriving in the directory.

To do that I have used the ignore_older property in my Filebeat config file. Will that satisfy my scenario?

    -
      paths:
        - /path/*_request.xml
      document_type: req
      close_older: 5m
      multiline:
        pattern: '<\/.*:Error>'
        negate: true
        match: before
    -
      paths:
        - /path/*_response.xml
      document_type: resp
      ignore_older: 15s
      close_older: 5m
      multiline:
        pattern: '<\/.*:Error>'
        negate: true
        match: before

Is the above scenario possible in Filebeat?

You want to delay reading the response logs? Then ignore_older doesn't help here, because that option ignores files that weren't touched for 15s, but it will still read newly changed files immediately. IMHO, the best approach would be to adjust the correlation logic in Logstash to accept the response before the request, if possible.
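One way to make the correlation order-independent is to buffer whichever half arrives first with Logstash's aggregate filter. A minimal sketch, assuming both event types carry a shared correlation field (here called `transaction_id`, which is an assumption about your data) and the newer `event.get`/`event.set` Ruby API; note the aggregate filter also requires running Logstash with a single pipeline worker:

```
filter {
  aggregate {
    # `transaction_id` is a hypothetical field name; substitute whatever
    # correlation key actually exists in both request and response events.
    task_id => "%{transaction_id}"
    code => "
      map[event.get('type')] = event.get('message')
      # Emit the joined pair on whichever event arrives second,
      # regardless of whether that is the request or the response.
      if map['req'] && map['resp']
        event.set('request',  map['req'])
        event.set('response', map['resp'])
      end
    "
    map_action => "create_or_update"
    timeout => 120   # drop unmatched halves after two minutes
  }
}
```

Because both branches use create_or_update, it no longer matters which file Filebeat happens to read first.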

Can we use multiple instances of Filebeat to facilitate this?

As per my understanding, you are suggesting separate instances for request and response, right? Even if we create separate instances, both will be running at the same time and the same problem will continue: one instance will pick up the request files and the other the response files, in no guaranteed order. If the response-file instance sends its event to Logstash first, my correlation/aggregation will fail once again.

Please correct me if I am wrong.

Hi
We also had the same problem. What we did was set the backoff factor to 0.01s so that the request and response are picked up quickly, and in Logstash we used the aggregate filter with a specific task id.
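For reference, lowering the poll interval looks roughly like this in a Filebeat 1.x prospector (the path and values are illustrative). Note this only makes Filebeat check both files more often; it does not guarantee which file is read first:

```
    -
      paths:
        - /path/*_request.xml
      document_type: req
      backoff: 10ms        # re-check the file for new lines every 10ms (default 1s)
      backoff_factor: 1    # keep the interval constant instead of doubling it
```

The ordering itself still has to be handled by the aggregate filter's task id on the Logstash side.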

@Atul_Patel

Thanks for your kind response. I have used the backoff factor and set the time to 0.01s, but it is still picking up the response file first.

Can you please share your configuration file, so that I can review it properly?