Filebeat reading data from a rotated file not in the prospector

I am using two Filebeat instances to ship logs from my application servers. One ships the access log and the other ships the Tomcat logs. My pipeline is Filebeat - Logstash - Elasticsearch. The application logs have a rotation policy of 50 MB, and rotation happens roughly every 30 minutes to an hour. I am running exactly the same configuration on all 25 application servers, but a few of them are behaving strangely.

They are shipping logs from a rotated file. On deeper investigation of the registry file, I grepped for the inode of the live file and traced it to a rotated file reference:
/usr/local/tomcat/logs/2016-04/oAuthLogs-04-14-2016-13

Surprisingly, when I grepped for the same inode number in the registry file 5 minutes later, I saw a reference to
/usr/local/tomcat/logs/2016-04/oAuthLogs-04-14-2016-11

This is despite logs constantly being written to the oAuthLogs file. The behavior occurs frequently on some servers.
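For reference, this is roughly the check I ran, as a self-contained sketch: the temp directory, sample log name, and simulated registry entry are stand-ins for my real paths, and the entry shape mirrors what Filebeat 1.x writes (a JSON map keyed by source path, with the inode under FileStateOS).

```shell
#!/bin/sh
# Sketch of the inode trace. Demo uses a temp dir instead of the real
# /usr/local/tomcat/logs; the registry entry below is simulated.
set -e
LOGDIR=$(mktemp -d)
LOG="$LOGDIR/oAuthLogs.log"
echo "2016-04-14 13:00:00 sample line" > "$LOG"

# Inode of the live log file (GNU stat, falling back to BSD stat):
INODE=$(stat -c %i "$LOG" 2>/dev/null || stat -f %i "$LOG")

# Simulated Filebeat 1.x registry entry for that inode:
REGISTRY="$LOGDIR/registry2"
printf '{"%s":{"source":"%s","offset":0,"FileStateOS":{"inode":%s,"device":2049}}}\n' \
    "$LOG" "$LOG" "$INODE" > "$REGISTRY"

# 1) Is that inode tracked in the registry?
grep -o "\"inode\":$INODE" "$REGISTRY"

# 2) Which path on disk currently owns the inode? After rotation this
#    lookup can point at a rotated file rather than the live log,
#    which is how I found the stale references above.
find "$LOGDIR" -inum "$INODE"
```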

filebeat version 1.1.0 (amd64)

filebeat:
  # List of prospectors to fetch data.
  prospectors:
    -
      paths:
        - /usr/local/tomcat/logs/oAuthEventLogs.log
        - /usr/local/tomcat/logs/roleCheck.log
        - /usr/local/tomcat/logs/oAuthLogs.log
        - /usr/local/tomcat/logs/oAuthBaseListenerLogs.log
      input_type: log
      ignore_older: 10h
      scan_frequency: 10s
      harvester_buffer_size: 32384
      multiline:
        pattern: "201*"
        negate: true
        match: after
  spool_size: 2000
  registry_file: /var/lib/filebeat/registry2

output:
  logstash:
    hosts: ["ipAddress:5043"]
    worker: 1
    index: index

Filebeat configuration for the second instance:

filebeat:
  prospectors:
    -
      paths:
        - /usr/local/tomcat/logs/localhost_access_log.txt
      input_type: log
      ignore_older: 10h
      scan_frequency: 10s
      harvester_buffer_size: 16384
  spool_size: 1000
  registry_file: /var/lib/filebeat/registry3

output:
  logstash:
    hosts: ["ipAddress:5042"]
    worker: 1

Can someone guide me on what is going wrong here?

This could be related to the issue here:

Thanks for the prompt reply.
I will update my version and let you know.

@Rachit_Puri It is not merged yet ...