I found some logs on the Logstash side. They show that only 3 of the 8 pipeline workers were still running (the other 5 were dead), and each of these running workers was still working on the job specified by the last plugin, date.
{:timestamp=>"2016-10-18T09:42:15.320000+0200", :message=>#<LogStash::PipelineReporter::Snapshot:0x7241d839 @data={:events_filtered=>58098, :events_consumed=>58098, :worker_count=>8, :inflight_count=>179, :worker_states=>[{:status=>"dead", :alive=>false, :index=>0, :inflight_count=>0}, {:status=>"run", :alive=>true, :index=>1, :inflight_count=>36}, {:status=>"dead", :alive=>false, :index=>2, :inflight_count=>0}, {:status=>"dead", :alive=>false, :index=>3, :inflight_count=>0}, {:status=>"dead", :alive=>false, :index=>4, :inflight_count=>0}, {:status=>"run", :alive=>true, :index=>5, :inflight_count=>88}, {:status=>"dead", :alive=>false, :index=>6, :inflight_count=>0}, {:status=>"run", :alive=>true, :index=>7, :inflight_count=>55}], :output_info=>[{:type=>"elasticsearch", :config=>{"hosts"=>"146.89.179.204", "index"=>"logstash-site-%{+YYYY.MM.dd}", "workers"=>8, "flush_size"=>1000, "ALLOW_ENV"=>false}, :is_multi_worker=>true, :events_received=>58098, :workers=><Java::JavaUtilConcurrent::CopyOnWriteArrayList:-1980582399 [<LogStash::Outputs::Elasticsearch hosts=>["146.89.179.204"], index=>"logstash-site-%{+YYYY.MM.dd}", workers=>8, flush_size=>1000, codec=><LogStash::Codecs::Plain charset=>"UTF-8">, manage_template=>true, template_name=>"logstash", template_overwrite=>false, idle_flush_time=>1, doc_as_upsert=>false, max_retries=>3, script_type=>"inline", script_var_name=>"event", scripted_upsert=>false, retry_max_interval=>2, retry_max_items=>500, retry_on_conflict=>1, action=>"index", ssl_certificate_verification=>true, sniffing=>false, sniffing_delay=>5>, <LogStash::Outputs::Elasticsearch hosts=>["146.89.179.204"], index=>"logstash-site-%{+YYYY.MM.dd}", workers=>8, flush_size=>1000, codec=><LogStash::Codecs::Plain charset=>"UTF-8">, manage_template=>true, template_name=>"logstash", template_overwrite=>false, idle_flush_time=>1, doc_as_upsert=>false, max_retries=>3, script_type=>"inline", script_var_name=>"event", scripted_upsert=>false, retry_max_interval=>2, retry_max_items=>500, retry_on_conflict=>1, action=>"index", ssl_certificate_verification=>true, sniffing=>false, sniffing_delay=>5>, <LogStash::Outputs::Elasticsearch hosts=>["146.89.179.204"], index=>"logstash-site-%{+YYYY.MM.dd}", workers=>8, flush_size=>1000, codec=><LogStash::Codecs::Plain charset=>"UTF-8">, manage_template=>true, template_name=>"logstash", template_overwrite=>false, idle_flush_time=>1, doc_as_upsert=>false, max_retries=>3, script_type=>"inline", script_var_name=>"event", scripted_upsert=>false, retry_max_interval=>2, retry_max_items=>500, retry_on_conflict=>1, action=>"index", ssl_certificate_verification=>true, sniffing=>false, sniffing_delay=>5>, <LogStash::Outputs::Elasticsearch hosts=>["146.89.179.204"], index=>"logstash-site-%{+YYYY.MM.dd}", workers=>8, flush_size=>1000, codec=><LogStash::Codecs::Plain charset=>"UTF-8">, manage_template=>true, template_name=>"logstash", template_overwrite=>false, idle_flush_time=>1, doc_as_upsert=>false, max_retries=>3, script_type=>"inline", script_var_name=>"event", scripted_upsert=>false, retry_max_interval=>2, retry_max_items=>500, retry_on_conflict=>1, action=>"index", ssl_certificate_verification=>true, sniffing=>false, sniffing_delay=>5>,
This log line is too long to paste in full here; the rest is in another response below.
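For context, the elasticsearch output section of the pipeline config that produces these settings should look roughly like the sketch below. This is reconstructed from the :config hash in the snapshot above; the other settings printed in the worker dump (idle_flush_time, max_retries, retry_max_items, and so on) appear to be plugin defaults rather than anything set explicitly, so treat this as a sketch, not the exact file.

  output {
    elasticsearch {
      # values taken from the :config hash in the snapshot
      hosts => "146.89.179.204"
      index => "logstash-site-%{+YYYY.MM.dd}"
      workers => 8
      flush_size => 1000
    }
  }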