Indexing fails with java.io.IOException: No space left on device, but there is free space on the disk

I use Hadoop to ingest data into ES, and after a while I hit "java.io.IOException: No space left on device". But I did check my disk and more than 70% of the space is still free. Does anyone know what happened?

BTW, the path.data is /fdata/data1/2/3/4
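For reference, a multi-path path.data in elasticsearch.yml is normally written in one of the two forms below; the paths are just placeholders to show the syntax, not copied from my actual config:

path.data: /mnt/first,/mnt/second
# or, equivalently, as a YAML list:
path.data:
  - /mnt/first
  - /mnt/second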

ap-event-dev-data04:/home/cloud # df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/rootvg-rootlv
7.0G 3.0G 3.7G 45% /
tmpfs 32G 0 32G 0% /dev/shm
/dev/vda1 485M 68M 393M 15% /boot
/dev/mapper/rootvg-homelv
2.0G 68M 1.9G 4% /home
/dev/mapper/rootvg-perflv
485M 143M 317M 31% /perf
/dev/mapper/rootvg-tmplv
1008M 34M 924M 4% /tmp
/dev/mapper/rootvg-varlv
4.0G 412M 3.4G 11% /var
/dev/vdb1 337G 86G 235G 27% /fdata

ap-event-dev-data04:/home/cloud # ls /fdata/
data1 data1, data2 data3 data4 lost+found
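To double-check that these data directories really live on /dev/vdb1, and to see how much each one actually holds, commands like the following can be run (plain diagnostics; I did not capture their output at the time of the failure):

df -h /fdata/data1    # shows which filesystem the data path actually resolves to
du -sh /fdata/data*   # disk usage of each data directory under /fdata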

Below is part of the ES log.

[ap-event-dev-data04-01] [std_events_sem_010616][7] failed engine [merge exception]
org.apache.lucene.index.MergePolicy$MergeException: java.io.IOException: No space left on device
at org.elasticsearch.index.merge.scheduler.ConcurrentMergeSchedulerProvider$CustomConcurrentMergeScheduler.handleMergeException(ConcurrentMergeSchedulerProvider.java:133)
at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:522)
Caused by: java.io.IOException: No space left on device
at java.io.FileOutputStream.writeBytes(Native Method)
at java.io.FileOutputStream.write(FileOutputStream.java:326)
at org.apache.lucene.store.FSDirectory$FSIndexOutput$1.write(FSDirectory.java:390)
at java.util.zip.CheckedOutputStream.write(CheckedOutputStream.java:73)
at java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
at org.apache.lucene.store.OutputStreamIndexOutput.writeBytes(OutputStreamIndexOutput.java:51)
at org.apache.lucene.store.RateLimitedIndexOutput.writeBytes(RateLimitedIndexOutput.java:71)
at org.apache.lucene.store.CompoundFileWriter$DirectCFSIndexOutput.writeBytes(CompoundFileWriter.java:356)
at org.apache.lucene.store.DataOutput.copyBytes(DataOutput.java:281)
at org.apache.lucene.store.Directory.copy(Directory.java:194)
at org.apache.lucene.index.IndexWriter.createCompoundFile(IndexWriter.java:4785)
at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4266)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3811)
at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:409)
at org.apache.lucene.index.TrackingConcurrentMergeScheduler.doMerge(TrackingConcurrentMergeScheduler.java:107)
at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:486)
[2016-06-02 09:14:46,799][WARN ][indices.cluster ] [ap-event-dev-data04-01] [[std_events_sem_010616][7]] marking and sending shard failed due to [engine failure, reason [merge exception]]
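It may also be worth comparing the df numbers with what Elasticsearch itself reports per node, e.g. via the _cat allocation API (host and port below are the defaults, adjust to the actual node):

curl 'localhost:9200/_cat/allocation?v'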

Hi,

Could you be running out of inodes? (df -i?)

Below is the output of "df -i". Only 1% of the inodes are used.

ap-event-dev-data04:/home/cloud # df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/rootvg-rootlv
460560 54992 405568 12% /
tmpfs 8247399 1 8247398 1% /dev/shm
/dev/vda1 128016 45 127971 1% /boot
/dev/mapper/rootvg-homelv
131072 46 131026 1% /home
/dev/mapper/rootvg-perflv
128016 137 127879 1% /perf
/dev/mapper/rootvg-tmplv
65536 16 65520 1% /tmp
/dev/mapper/rootvg-varlv
262144 3433 258711 2% /var
/dev/vdb1 22413312 5900 22407412 1% /fdata

Has anyone else met this issue? So far it hasn't reproduced.
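Since it doesn't reproduce on demand, one way to catch the state at the moment of failure is a throwaway logging loop like the one below (interval and log path are arbitrary choices):

while true; do date; df -h /fdata; df -i /fdata; sleep 60; done >> /tmp/fdata-space.log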