Logstash 2.4 "Flushing buffer at interval" after write a few records to es after start/restart

Hi there,
I'm using Logstash 2.4.0 to read a log file and write it to Elasticsearch 2.3.3.
Logstash was installed with:
sudo yum install logstash
My ES instance was installed on another server and runs on port 9200.
My Logstash config is below:

input {
  file {
    path => ["mylog_path"]
    start_position => "beginning"
    sincedb_path => "/dev/null"
    codec => plain {
      charset => "UTF-8"
    }
  }
}
filter {
  grok {
    match => { "message" => "%{DATA:timestamp}\^%{DATA:uniqueAppId}\^%{DATA:id}" }
  }
}
output {
  elasticsearch {
    hosts => ["myip:9200"]
    index => "myindex"
    document_id => "%{id}"
  }
}

I run Logstash like this:
sudo nohup bin/logstash -f myconfig.conf -l logstash.log --debug &
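
(Before starting, Logstash 2.x can also sanity-check the config file; a quick sketch:)

# validate the pipeline config without starting Logstash
bin/logstash -f myconfig.conf --configtest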

At the same time, my server writes its log to the file "mylog_path" in real time; new lines are appended like the one below, at hundreds of records per second:
2016-09-30 11:08:49^1287621@android^9006251178
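
(Side note on the grok pattern above: %{DATA} is non-greedy, so a trailing %{DATA:id} can legitimately match an empty string and leave document_id => "%{id}" empty. A common variant, sketched here, uses GREEDYDATA for the final field:)

filter {
  grok {
    # GREEDYDATA (.*) instead of DATA (.*?) for the last field, so id
    # captures through to the end of the line instead of matching nothing
    match => { "message" => "%{DATA:timestamp}\^%{DATA:uniqueAppId}\^%{GREEDYDATA:id}" }
  }
}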

The Logstash debug log looks like this:

.....
{:timestamp=>"2016-09-30T11:15:58.963000+0800", :message=>"starting agent", :level=>:info, :file=>"logstash/agent.rb", :line=>"213", :method=>"execute"}
{:timestamp=>"2016-09-30T11:15:58.965000+0800", :message=>"starting pipeline", :id=>"main", :level=>:info, :file=>"logstash/agent.rb", :line=>"487", :method=>"start_pipeline"}
{:timestamp=>"2016-09-30T11:15:59.246000+0800", :message=>"Registering file input", :path=>["my_log"], :level=>:info, :file=>"logstash/inputs/file.rb", :line=>"171", :method=>"register"}
....
{:timestamp=>"2016-09-30T11:15:59.281000+0800", :message=>"my_log: initial create, no sincedb, seeking to beginning of file", :level=>:debug, :file=>"filewatch/tail_base.rb", :line=>"144", :method=>"_add_to_sincedb"}
 .... 
{:timestamp=>"2016-09-30T11:16:00.534000+0800", :message=>"Starting pipeline", :id=>"main", :pipeline_workers=>4, :batch_size=>125, :batch_delay=>5, :max_inflight=>500, :level=>:info, :file=>"logstash/pipeline.rb", :line=>"188", :method=>"start_workers"}
{:timestamp=>"2016-09-30T11:16:00.544000+0800", :message=>"Pipeline main started", :file=>"logstash/agent.rb", :line=>"491", :method=>"start_pipeline"}
{:timestamp=>"2016-09-30T11:16:00.545000+0800", :message=>"Received line", :path=>"my_log", :text=>"2016-09-30 00:00:08^234252^9006215562", :level=>:debug, :file=>"logstash/inputs/file.rb", :line=>"306", :method=>"log_line_received"}
{:timestamp=>"2016-09-30T11:16:01.569000+0800", :message=>"Regexp match object", :names=>["DATA:timestamp", "DATA:uniqueAppId", "DATA:id"], :level=>:debug, :file=>"grok-pure.rb", :line=>"192", :method=>"match_and_capture"}
... 
{:timestamp=>"2016-09-30T11:16:05.553000+0800", :message=>"Pushing flush onto pipeline", :level=>:debug, :file=>"logstash/pipeline.rb", :line=>"458", :method=>"flush"}
{:timestamp=>"2016-09-30T11:16:06.036000+0800", :message=>"Flushing buffer at interval", :instance=>"#<LogStash::Outputs::ElasticSearch::Buffer:0x216ecd07 @stopping=#<Concurrent::AtomicBoolean:0x715697b6>, @last_flush=2016-09-30 11:16:05 +0800, @flush_thread=#<Thread:0x64633684 run>, @max_size=500, @operations_lock=#<Java::JavaUtilConcurrentLocks::ReentrantLock:0x2d904799>, @submit_proc=#<Proc:0x4f05c2f@/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/common.rb:57>, @flush_interval=1, @logger=#<Cabin::Channel:0x40341431 @subscriber_lock=#<Mutex:0x1b19bde5>, @data={}, @metrics=#<Cabin::Metrics:0x37021349 @channel=#<Cabin::Channel:0x40341431 ...>, @metrics={}, @metrics_lock=#<Mutex:0x694039d5>>, @subscribers={13206=>#<Cabin::Subscriber:0x5876e297 @options={}, @output=#<Cabin::Outputs::IO:0x1d1e71f9 @lock=#<Mutex:0x39cca15e>, @io=#<File:logstash2.log>>>, 13208=>#<Cabin::Subscriber:0x40be5a50 @options={:level=>:fatal}, @output=#<Cabin::Outputs::IO:0x623f91f @lock=#<Mutex:0x1f07524b>, @io=#<IO:fd 1>>>}, @level=:debug>, @buffer=[], @operations_mutex=#<Mutex:0x19abc9d7>>", :interval=>1, :level=>:debug, :file=>"logstash/outputs/elasticsearch/buffer.rb", :line=>"90", :method=>"interval_flush"}

As you can see, after some lines are received, filtered, and flushed to ES, the debug log just keeps printing "Flushing buffer at interval".
And when I restart Logstash (kill the PID and run it again), the same thing happens.
By the way, a few days ago I could still see the .sincedb file in my home path. I then came across suggested fixes for this kind of problem like "delete the sincedb file", so I deleted it. Since then I can't find a .sincedb anywhere.
This time I set
sincedb_path => "/dev/null"
in my config file, but nothing changed.
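
(For reference, the usual alternative to /dev/null is a fixed, writable sincedb file, so Logstash remembers its read position across restarts; the path below is only an assumption, pick a directory the logstash user can write to:)

file {
  path => ["mylog_path"]
  start_position => "beginning"
  # assumed writable location for the logstash user
  sincedb_path => "/var/lib/logstash/.sincedb_mylog"
}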

So, what's wrong with my Logstash (or ES, or the Logstash config, or anything else)?

Thanks

As you can see, after some lines are received, filtered, and flushed to ES, the debug log just keeps printing "Flushing buffer at interval".

Yes, Logstash periodically sends a flush message into the event pipeline and that's what you're seeing. Nothing to worry about.
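
Those messages come from the output buffer's timer rather than from new events; your debug output shows @flush_interval=1 and @max_size=500 on the buffer. As a sketch, these correspond to the idle_flush_time and flush_size options of the 2.x elasticsearch output (defaults shown):

output {
  elasticsearch {
    hosts => ["myip:9200"]
    index => "myindex"
    document_id => "%{id}"
    flush_size => 500      # flush once 500 events are buffered (default)
    idle_flush_time => 1   # ...or after 1 second without a flush (default)
  }
}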

I'd start by temporarily replacing the elasticsearch output with a stdout { codec => rubydebug } output. Are you still seeing the behavior where Logstash seems to stop after processing just a few events? If yes, focus on the log messages related to reading the input file. What's it doing?

Thanks for the reply, @magnusbaeck.
I just modified the output:

output {
  stdout { codec => rubydebug }
}

and restarted with:

sudo nohup bin/logstash -f xxx.conf -l logstash2.log --debug -vv &

After a few records were received, the end of logstash2.log showed:

{:timestamp=>"2016-09-30T15:23:48.138000+0800", :message=>"Pushing flush onto pipeline", :level=>:debug, :file=>"logstash/pipeline.rb", :line=>"458", :method=>"flush"}

and after that, nothing else was output.