Logstash stops with error: No space left on device - /usr/share/logstash/data/plugins/inputs/google_cloud_storage/d

Hi

My Logstash pipeline, which takes input from google_cloud_storage and indexes into an Elasticsearch index, consistently stops with the error below:

[2024-09-12T07:09:16,306][ERROR][logstash.javapipeline    ] A plugin had an unrecoverable error. Will restart this plugin.
  Pipeline_id:main
  Plugin: <LogStash::Inputs::GoogleCloudStorage bucket_id=>"lv-gcs-prd-twist-observability", interval=>60, id=>"886d3a389f43f348bb8794e8a9c62fff78c21a4635f08b21ce6c9c7597a851d8", file_matches=>".*sfcc_prd_as.*", tags=>["b2c", "sfcc_prd_as"], enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_67c768b3-3d4c-41a5-83f5-29fd0727d37e", enable_metric=>true, charset=>"UTF-8">, file_exclude=>"^$", metadata_key=>"x-goog-meta-ls-gcs-input", delete=>false, unpack_gzip=>true, temp_directory=>"/tmp/ls-in-gcs">
  Error: No space left on device - /usr/share/logstash/data/plugins/inputs/google_cloud_storage/db/8ca/93b3e480b910c25359077afa5c2a9f59a44a8
  Exception: Errno::ENOSPC
  Stack: org/jruby/RubyDir.java:616:in `mkdir'
/usr/share/logstash/vendor/jruby/lib/ruby/stdlib/fileutils.rb:250:in `fu_mkdir'
/usr/share/logstash/vendor/jruby/lib/ruby/stdlib/fileutils.rb:228:in `block in mkdir_p'
org/jruby/RubyArray.java:1947:in `reverse_each'
/usr/share/logstash/vendor/jruby/lib/ruby/stdlib/fileutils.rb:226:in `block in mkdir_p'
org/jruby/RubyArray.java:1865:in `each'
/usr/share/logstash/vendor/jruby/lib/ruby/stdlib/fileutils.rb:211:in `mkdir_p'
/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-input-google_cloud_storage-0.15.0-java/lib/logstash/inputs/cloud_storage/processed_db.rb:26:in `mark_processed'

On restarting the process, it starts working again.
The cluster is deployed on ECK.

Below are the input and output sections. The code is deployed using a Helm chart.

 logstashPipeline:
    googleCloudStorage.conf: |
      input {
        google_cloud_storage {
          file_matches => ".*sfcc_prd_as.*"
          interval     => 60
          bucket_id    => "lv-gcs-prd-twist-observability"
          tags         => ["b2c", "sfcc_prd_as"]
        }
      }
      filter { ........ code inside }
      output {
        if "b2c" in [tags] {
          elasticsearch {
            hosts => <hosturl>
            user => <username>
            password => <passwrd>
            index => "logstash-b2c-%{+YYYY.MM.dd}"
            document_id => "%{fingerprint_id}"
            ssl => true
            ssl_certificate_verification => false
          }
        }
      }

Hello and welcome,

You need to increase the disk space available where you are running Logstash.

I do not use ECK, so I am not sure how you do that there, but "no space left on device" means that Logstash's data path does not have enough space to work with.

The google_cloud_storage input needs to download each file to a temporary directory before processing it, so it needs some free space.
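For reference, here is a sketch of where the two relevant paths show up. `temp_directory` is a real option of this plugin (it appears in your plugin dump), while the `db/...` path in your error lives under Logstash's `path.data`, so that is the volume that actually filled up:

```
input {
  google_cloud_storage {
    bucket_id      => "lv-gcs-prd-twist-observability"
    file_matches   => ".*sfcc_prd_as.*"
    interval       => 60
    # Downloaded objects are staged here before processing:
    temp_directory => "/tmp/ls-in-gcs"
    # Markers for already-processed files are written under
    # <path.data>/plugins/inputs/google_cloud_storage/db/ --
    # the path that ran out of space in your error.
  }
}
```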

How can I increase the space of the data path?

logstashJavaOpts: "-Xmx3000M -Xms3000M"

Is the above the size to be increased,

or one of the values below?
resources:
  requests:
    cpu: 4
    memory: "6Gi"
    ephemeral-storage: 10Gi
  limits:
    cpu: 4
    memory: "6Gi"
persistence:
  enabled: true
  storageClassName: "standard-rwo"
  accessMode: "ReadWriteOnce"
  size: "100Gi"

The logstashJavaOpts setting is the JVM heap memory, not the data path.

I do not use ECK or Kubernetes, so I do not know exactly how you should update this, but it seems to be one of these options:

ephemeral-storage: 10Gi

or this one here:

size: "100Gi"

You need to find where the Logstash data path is configured and whether it is backed by the ephemeral storage or by the persistent volume. The data path is configured in logstash.yml, but I do not know how this is done on ECK.
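As a sketch only (the `persistence` and `logstashConfig` keys are taken from your values and the Elastic Logstash Helm chart; verify against your chart's documentation): if `path.data` stays at its default of /usr/share/logstash/data and that directory is backed by the persistent volume rather than ephemeral storage, it would get the 100Gi:

```yaml
# values.yaml sketch -- assumed Elastic Logstash Helm chart keys; verify.
logstashConfig:
  logstash.yml: |
    # Default data path; the processed-files db from the error lives here.
    path.data: /usr/share/logstash/data

persistence:
  enabled: true                    # mounts a PersistentVolume at /usr/share/logstash/data
  storageClassName: "standard-rwo"
  size: "100Gi"                    # this, not the JVM heap, is what needs headroom
```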