Java process sleeping (redundant “Flushing buffer at interval” logging)

I'm using Logstash 2.3.2 (jdbc input / elasticsearch output) and Elasticsearch 2.3.3 on CentOS 7.

Process status of the Logstash java process (/proc/4486/status):

Name: java
State: S (sleeping)
Tgid: 4486
Ngid: 0
Pid: 4486
PPid: 4485
TracerPid: 0
Uid: 1000 1000 1000 1000
Gid: 1000 1000 1000 1000
FDSize: 256
Groups: 10 1000
VmPeak: 7406444 kB
VmSize: 7305992 kB
VmLck: 0 kB
VmPin: 0 kB
VmHWM: 697632 kB
VmRSS: 697632 kB
VmData: 7094780 kB
VmStk: 136 kB
VmExe: 4 kB
VmLib: 22752 kB
VmPTE: 2032 kB
VmSwap: 0 kB
Threads: 35
SigQ: 1/256556
SigPnd: 0000000000000000
ShdPnd: 0000000000000000
SigBlk: 0000000000000000
SigIgn: 0000000000001000
SigCgt: 2000000181004ccf
CapInh: 0000000000000000
CapPrm: 0000000000000000
CapEff: 0000000000000000
CapBnd: 0000001fffffffff
Seccomp: 0
Cpus_allowed: fff
Cpus_allowed_list: 0-11
Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000001
Mems_allowed_list: 0
voluntary_ctxt_switches: 26
nonvoluntary_ctxt_switches: 5

My Logstash log (debug level):

{:timestamp=>"2016-12-05T13:34:36.911000+0900", :message=>"closing output delegator", :klass=>LogStash::Outputs::Elasticsearch, :level=>:debug, :file=>"logstash/output_delegator.rb", :line=>"137", :method=>"do_close"}
{:timestamp=>"2016-12-05T13:34:36.911000+0900", :message=>"closing", :plugin=><LogStash::Outputs::Elasticsearch hosts=>["localhost:29200"], index=>"sz_20161205133401", document_type=>"%{type}", document_id=>"%{cs_sn}", routing=>"%{hb_sn}", parent=>"%{cs_sn}", codec=><LogStash::Codecs::Plain charset=>"UTF-8">, workers=>1, manage_template=>true, template_name=>"logstash", template_overwrite=>false, flush_size=>500, idle_flush_time=>1, doc_as_upsert=>false, max_retries=>3, script_type=>"inline", script_var_name=>"event", scripted_upsert=>false, retry_max_interval=>2, retry_max_items=>500, retry_on_conflict=>1, action=>"index", path=>"/", ssl_certificate_verification=>true, sniffing=>false, sniffing_delay=>5>, :level=>:debug, :file=>"logstash/plugin.rb", :line=>"35", :method=>"do_close"}
{:timestamp=>"2016-12-05T13:34:37.512000+0900", :message=>"Flushing buffer at interval", :instance=>"#<LogStash::Outputs::Elasticsearch::Buffer:0x22da17b4 @stopping=#Concurrent::AtomicBoolean:0x1c4a3c4c, @last_flush=2016-12-05 13:34:36 +0900, @flush_thread=#<Thread:0x1ecc9c39 run>, @max_size=500, @operations_lock=#Java::JavaUtilConcurrentLocks::ReentrantLock:0x5b71b74, @submit_proc=#Proc:0x11519466@/home/admin/logstash-2.3.2/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.6.2-java/lib/logstash/outputs/elasticsearch/common.rb:57, @flush_interval=1, @logger=#<Cabin::Channel:0xeb56a28 @subscriber_lock=#Mutex:0x284c6fa3, @data={}, @metrics=#<Cabin::Metrics:0x32354b00 @channel=#<Cabin::Channel:0xeb56a28 ...>, @metrics={}, @metrics_lock=#Mutex:0xfc44c98>, @subscribers={12378=>#<Cabin::Subscriber:0x5a49781d @options={}, @output=#<Cabin::Outputs::IO:0x65edf5a0 @lock=#Mutex:0x69990fa7, @io=#<File:logstash_sz_start.out>>>}, @level=:debug>, @buffer=, @operations_mutex=#Mutex:0x3a6696fb>", :interval=>1, :level=>:debug, :file=>"logstash/outputs/elasticsearch/buffer.rb", :line=>"90", :method=>"interval_flush"}
{:timestamp=>"2016-12-05T13:34:37.512000+0900", :message=>"Flushing buffer at interval", :instance=>"#<LogStash::Outputs::Elasticsearch::Buffer:0x22da17b4 @stopping=#Concurrent::AtomicBoolean:0x1c4a3c4c, @last_flush=2016-12-05 13:34:36 +0900, @flush_thread=#<Thread:0x1ecc9c39 run>, @max_size=500, @operations_lock=#Java::JavaUtilConcurrentLocks::ReentrantLock:0x5b71b74, @submit_proc=#Proc:0x11519466@/home/admin/logstash-2.3.2/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.6.2-java/lib/logstash/outputs/elasticsearch/common.rb:57, @flush_interval=1, @logger=#<Cabin::Channel:0xeb56a28 @subscriber_lock=#Mutex:0x284c6fa3, @data={}, @metrics=#<Cabin::Metrics:0x32354b00 @channel=#<Cabin::Channel:0xeb56a28 ...>, @metrics={}, @metrics_lock=#Mutex:0xfc44c98>, @subscribers={12378=>#<Cabin::Subscriber:0x5a49781d @options={}, @output=#<Cabin::Outputs::IO:0x65edf5a0 @lock=#Mutex:0x69990fa7, @io=#<File:logstash_sz_start.out>>>}, @level=:debug>, @buffer=, @operations_mutex=#Mutex:0x3a6696fb>", :interval=>1, :level=>:debug, :file=>"logstash/outputs/elasticsearch/buffer.rb", :line=>"90", :method=>"interval_flush"}
...

I want to know why Logstash is sleeping.

Logstash runs with many, many threads. The output you quoted above shows only a single thread, and that thread happens to be sleeping at the moment you observe it (waiting on some external condition, perhaps; I don't know). It doesn't tell you what the whole process (many threads) is doing at that time.
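
If you want to see what all of the threads are doing, something like this gives a much more complete picture (a rough sketch; pid 4486 is taken from your status output above):

# Kernel view: the state of every thread in the process, not just the main one
grep '^State' /proc/4486/task/*/status

# Same idea via ps: one row per thread
ps -T -p 4486

# JVM view: a stack dump of every Logstash thread
jstack 4486

In an idle or waiting Logstash, most of those threads will show as sleeping, which by itself is not a problem.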

Based on what I see here, something has told Logstash to terminate (indicated by the "closing output delegator" text; outputs only close when Logstash is shutting down or the pipeline is reloaded, I think?).
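
One thing worth checking (just a guess on my part): whether Logstash was started with config auto-reload (-r / --auto-reload, added in 2.3), because a changed config file would also close and re-open the outputs:

# Show the full command line of the Logstash JVM (pid from your status output)
# and look for -r / --auto-reload.
ps -p 4486 -o args=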

Anyway, to answer your question about "Flushing buffer at interval": this is a normal message from Logstash.
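
It only shows up so often because these entries are logged at debug level (note the :level=>:debug in each line), and with a one-second flush interval you get roughly one per second. A sketch, with the config path as a placeholder:

# Started with debug logging: "Flushing buffer at interval" is printed about once per second
bin/logstash --debug -f /path/to/your/pipeline.conf

# Started without --debug: the periodic flush still runs, it just isn't logged
bin/logstash -f /path/to/your/pipeline.conf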

What behavior led you to investigate this? It's fine to dig into these things, but I wonder what problem you are actually having; with more information we may be able to help better.

Dear jordansissel,

Thanks for your reply.

I'm batch-loading small documents (about 60,000 docs, roughly 60 MiB) once per minute with a crontab job that replaces an index alias.
Sometimes the same phenomenon occurs.
Here is a part of the elasticsearch debug level log:

[2016-12-12 09:13:20,009][DEBUG][indices.memory ] [N4] recalculating shard indexing buffer, total is [810.8mb] with [3] active shards, each shard set to indexing=[270.2mb], translog=[64kb]
[2016-12-12 09:13:44,012][DEBUG][cluster.service ] [N4] processing [index-aliases]: execute
[2016-12-12 09:13:44,013][DEBUG][cluster.service ] [N4] cluster state updated, version [304], source [index-aliases]
[2016-12-12 09:13:44,013][DEBUG][cluster.service ] [N4] publishing cluster state version [304]
[2016-12-12 09:13:44,014][DEBUG][cluster.service ] [N4] set local cluster state to version 304
[2016-12-12 09:13:44,029][DEBUG][cluster.service ] [N4] processing [index-aliases]: took 16ms done applying updated cluster_state (version: 304, uuid: 0qKe9b4oRKiY0HyDTNgyVg)
[2016-12-12 09:13:44,038][DEBUG][cluster.service ] [N4] processing [delete-index [sz_20161212091101]]: execute
[2016-12-12 09:13:44,038][DEBUG][cluster.metadata ] [N4] [sz_20161212091101] deleting index
[2016-12-12 09:13:44,039][DEBUG][cluster.service ] [N4] cluster state updated, version [305], source [delete-index [sz_20161212091101]]
[2016-12-12 09:13:44,039][DEBUG][cluster.service ] [N4] publishing cluster state version [305]
[2016-12-12 09:13:44,039][DEBUG][cluster.service ] [N4] set local cluster state to version 305
[2016-12-12 09:13:44,040][DEBUG][indices.cluster ] [N4] [sz_20161212091101] cleaning index, no longer part of the metadata
[2016-12-12 09:13:44,040][DEBUG][indices ] [N4] [sz_20161212091101] closing ... (reason [index no longer part of the metadata])
[2016-12-12 09:13:44,040][DEBUG][indices ] [N4] [sz_20161212091101] closing index service (reason [index no longer part of the metadata])
[2016-12-12 09:13:44,040][DEBUG][index ] [N4] [sz_20161212091101] [0] closing... (reason: [index no longer part of the metadata])
[2016-12-12 09:13:44,040][DEBUG][index.shard ] [N4] [sz_20161212091101][0] state: [STARTED]->[CLOSED], reason [index no longer part of the metadata]
[2016-12-12 09:13:44,040][DEBUG][index.shard ] [N4] [sz_20161212091101][0] operations counter reached 0, will not accept any further writes
[2016-12-12 09:13:44,040][DEBUG][index.engine ] [N4] [sz_20161212091101][0] close now acquiring writeLock
[2016-12-12 09:35:20,164][DEBUG][indices.memory ] [N4] recalculating shard indexing buffer, total is [810.8mb] with [1] active shards, each shard set to indexing=[512mb], translog=[64kb]
[2016-12-12 09:35:50,164][DEBUG][index.shard ] [N4] [sz_20161212093001][0] updating index_buffer_size from [512mb] to [500kb]; IndexWriter now using [0] bytes
[2016-12-12 09:35:50,165][DEBUG][index.shard ] [N4] [sz_20161212093001][0] marking shard as inactive (inactive_time=[300000000000]) indexing wise
[2016-12-12 09:35:50,231][DEBUG][index.engine ] [N4] [sz_20161212093001][0] successfully sync committed. sync id [AVjwdQNKeDHUNSVSe1SS].
[2016-12-12 09:36:05,950][DEBUG][index.fielddata.plain ] [N4] [sz_20161212092901] Global-ordinals[fe_ext][8] took 0 ms
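
For reference, the crontab step that produces the index-aliases and delete-index entries above is roughly this (a simplified sketch of my job; the alias name "sz" and the exact index names are placeholders):

#!/bin/bash
# Runs once per minute from cron: point the alias at the freshly loaded index,
# then delete the previous one.
NEW_INDEX="sz_20161212091201"
OLD_INDEX="sz_20161212091101"

curl -s -XPOST 'http://localhost:29200/_aliases' -d '{
  "actions": [
    { "remove": { "index": "'"$OLD_INDEX"'", "alias": "sz" } },
    { "add":    { "index": "'"$NEW_INDEX"'", "alias": "sz" } }
  ]
}'

curl -s -XDELETE "http://localhost:29200/$OLD_INDEX"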

Regards, jordansissel

When I read your Elasticsearch log, I don't see anything problematic :\

Sorry, I feel the same way :`(
