No space left on device with large SQL imports despite plenty available

Hi,

I created a brand-new instance with a 50 GB SSD disk,
2 vCPUs and 12 GB of RAM.

I'm using Logstash to index one large table from a MySQL database on another instance.

df -h shows 27 GB available on the root filesystem:

root@el1:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/root        39G   12G   27G  31% /
devtmpfs        5.9G     0  5.9G   0% /dev
tmpfs           5.9G     0  5.9G   0% /dev/shm
tmpfs           1.2G  948K  1.2G   1% /run
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           5.9G     0  5.9G   0% /sys/fs/cgroup
/dev/loop0       56M   56M     0 100% /snap/core18/1944
/dev/loop1      151M  151M     0 100% /snap/google-cloud-sdk/163
/dev/loop2       68M   68M     0 100% /snap/lxd/18150
/dev/loop3       32M   32M     0 100% /snap/snapd/10707
/dev/sda15      105M  3.9M  101M   4% /boot/efi
tmpfs           1.2G     0  1.2G   0% /run/user/1001

Similarly, df -i shows plenty of free inodes:

Filesystem      Inodes  IUsed   IFree IUse% Mounted on
/dev/root      5160960 165189 4995771    4% /

There are no special mounts.

The table I'm importing is 9.2 GB on the MySQL instance.
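(For anyone who wants to check the figure: a standard information_schema query reports it, with db and str being my schema and table names:)

SELECT ROUND((data_length + index_length) / 1024 / 1024 / 1024, 1) AS size_gb
FROM information_schema.TABLES
WHERE table_schema = 'db' AND table_name = 'str';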

Here is my Logstash config:

input {
  jdbc {
    jdbc_driver_library => "/usr/share/java/mysql-connector-java-8.0.22.jar"
    jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://10.138.0.2:3306/db?zeroDateTimeBehavior=convertToNull"
    jdbc_user => "elastic1"
    jdbc_password => "xxxxxx"
    schedule => "*/45 * * * *"
    tags => [ "str" ]
    statement => "SELECT * FROM str s where s.date > :sql_last_value order by s.date"
    tracking_column => "date"
    tracking_column_type => "timestamp"
    last_run_metadata_path => "/etc/logstash/conf.d/lastrun/.logstash_jdbc_test_last_run2"
  }
}
output {
  elasticsearch {
    hosts => "http://127.0.0.1:9210"
    index => "str_id"
    document_type => "str"
    document_id => "%{str_id}"
    doc_as_upsert => true
    action => "update"
  }
}

However, when I run it, I get:

[ERROR] 2021-01-11 10:10:20.017 [Ruby-0-Thread-31: :1] jdbc - Java::JavaSql::SQLException: Error writing file '/tmp/MYfd=56' (OS errno 28 - No space left on device): SELECT * FROM str s where s.date > '2020-12-31 00:30:01' order by s.date

[WARN ] 2021-01-11 10:10:20.051 [Ruby-0-Thread-31: :1] jdbc - Exception when executing JDBC query {:exception=>"Java::JavaSql::SQLException: Error writing file '/tmp/MYfd=56' (OS errno 28 - No space left on device)"}

I tried increasing the JVM heap size, as I'd had trouble with it before, and raised it to 7 GB.
Same issue.
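For reference, this is the sort of change I mean, in /etc/logstash/jvm.options (path assumes a package install; the defaults were lower):

-Xms7g
-Xmx7g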

Then I tried preloading part of the data with a fixed query, then putting back the dynamic one with a starting point just after the preloaded range.
I managed to preload 3 GB of data.
Still failing.
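To illustrate (the cut-off date here is made up), the preload used a bounded statement along these lines before I switched back to the :sql_last_value version:

        statement => "SELECT * FROM str s where s.date <= '2020-12-31 00:00:00' order by s.date"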

I triple-checked, and /tmp is not mounted anywhere special.
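For completeness, a quick way to confirm this:

root@el1:~# findmnt /tmp    # prints nothing, so /tmp is not a separate mount
root@el1:~# df -h /tmp      # reports the same /dev/root filesystem shown above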

I made another split where each chunk of data is around 4 GB, and this time it works.
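That is, each run uses a date-bounded statement roughly like this (the upper boundary is illustrative, chosen so the range covers about 4 GB; the lower one is where the failing run left off):

        statement => "SELECT * FROM str s where s.date > '2020-12-31 00:30:01' and s.date <= '2021-01-04 00:00:00' order by s.date"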

Assuming I only load the remaining 9.2 GB - 3 GB = 6.2 GB, I have enough disk space, enough RAM, and enough heap space.

What am I missing?

I have several other indices to load, some even bigger. The initial indexing will be a pain if I have to split each one into 4 GB blocks.
I'd appreciate any help! 🙂
