Thread leaks in logstash-output-s3 when wrong access keys are passed

Hi There,

I've observed thread leaks ("S3, Stale factory sweeper" threads) from the logstash-output-s3 plugin when wrong access keys are passed in the s3 output configuration. I tried the same thing with different versions of Logstash, and the behavior is identical in all of them, including the latest, 7.9.0.
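For reference, this is roughly the output configuration I'm testing with (the bucket, region, and keys below are placeholders, not my real values; the point is that the access keys are deliberately invalid):

output {
  s3 {
    access_key_id     => "WRONG_ACCESS_KEY"
    secret_access_key => "WRONG_SECRET_KEY"
    region            => "us-east-1"
    bucket            => "my-example-bucket"
  }
}

With that configuration in place, the sweeper threads pile up. Two snapshots of the same ps command, taken some time apart: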

sh-4.2$ ps -eT | grep -i stale
   68   108 ?        00:00:01 S3, Stale facto
   68   303 ?        00:00:00 S3, Stale facto
   68   308 ?        00:00:00 S3, Stale facto
   68   316 ?        00:00:00 S3, Stale facto
   68   318 ?        00:00:00 S3, Stale facto
   68   320 ?        00:00:00 S3, Stale facto
   68   329 ?        00:00:00 S3, Stale facto
   68   331 ?        00:00:00 S3, Stale facto
   68   332 ?        00:00:00 S3, Stale facto
   68   333 ?        00:00:00 S3, Stale facto
   68   334 ?        00:00:00 S3, Stale facto
Running the same command again a little later, the list has grown:

sh-4.2$ ps -eT | grep -i stale
   68   108 ?        00:00:01 S3, Stale facto
   68   303 ?        00:00:00 S3, Stale facto
   68   308 ?        00:00:00 S3, Stale facto
   68   316 ?        00:00:00 S3, Stale facto
   68   318 ?        00:00:00 S3, Stale facto
   68   320 ?        00:00:00 S3, Stale facto
   68   329 ?        00:00:00 S3, Stale facto
   68   331 ?        00:00:00 S3, Stale facto
   68   332 ?        00:00:00 S3, Stale facto
   68   333 ?        00:00:00 S3, Stale facto
   68   334 ?        00:00:00 S3, Stale facto
   68   353 ?        00:00:00 S3, Stale facto
   68   354 ?        00:00:00 S3, Stale facto
   68   365 ?        00:00:00 S3, Stale facto
   68   367 ?        00:00:00 S3, Stale facto
   68   368 ?        00:00:00 S3, Stale facto
   68   377 ?        00:00:00 S3, Stale facto
   68   379 ?        00:00:00 S3, Stale facto
   68   381 ?        00:00:00 S3, Stale facto
   68   382 ?        00:00:00 S3, Stale facto
   68   392 ?        00:00:00 S3, Stale facto
   68   393 ?        00:00:00 S3, Stale facto
   68   394 ?        00:00:00 S3, Stale facto
   68   404 ?        00:00:00 S3, Stale facto
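Counting the matching lines makes the growth easier to track; the two snapshots above correspond to 11 and 24 sweeper threads:

sh-4.2$ ps -eT | grep -ci "stale facto"
11
sh-4.2$ ps -eT | grep -ci "stale facto"
24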

When I started to look into the plugin code, the issue looks likely to be related to the concurrent-ruby class Concurrent::TimerTask. If I understand the code correctly, the plugin sets up a timer task that fires on an interval to remove inactive files from its map of temporary files, and for some reason the background threads behind those timer tasks are never killed.
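To illustrate the pattern I mean, here is a simplified, self-contained sketch of how I read the sweeper logic (class and method names are mine, not the plugin's; it only mimics the structure, it is not the actual plugin code):

require 'concurrent'

# A repository that sweeps stale entries on a timer, loosely analogous
# to the plugin's "S3, Stale factory sweeper" thread.
class FileRepositorySketch
  def initialize(sweep_interval = 1)
    @files = Concurrent::Map.new
    # TimerTask runs the block on a background thread every
    # `execution_interval` seconds, until `shutdown` is called.
    @stale_sweeper = Concurrent::TimerTask.new(execution_interval: sweep_interval) do
      remove_stale_files
    end
    @stale_sweeper.execute # start the background timer
  end

  def remove_stale_files
    @files.each_pair { |key, file| @files.delete(key) if file[:stale] }
  end

  # Unless this is called, the timer keeps running (and keeps a
  # background thread busy) forever.
  def shutdown
    @stale_sweeper.shutdown
  end
end

# If a new repository is created on every failed registration attempt
# (e.g. bad credentials raise an error) and `shutdown` is never called
# on the old one, the timer threads accumulate:
5.times do
  repo = FileRepositorySketch.new
  begin
    raise 'simulated Aws::S3::Errors::InvalidAccessKeyId'
  rescue StandardError
    # retry path: repo.shutdown is never reached, so its timer leaks
  end
end

sleep 2
puts Thread.list.size # stays elevated because of the leaked timers

If that reading is right, the leak would happen whenever registration fails and is retried, which matches the bad-credentials case above.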

I also seem to have found a matching issue already reported against concurrent-ruby.

I'm not a Ruby programmer, so any help avoiding these thread leaks would be appreciated.

If anyone else is facing this kind of issue and knows of a workaround, please reply.

Thanks,
Vinay
