File read chunk sizes

Hi,

I'm using NIOFS and would like to know the chunk size of the read
requests that Elasticsearch issues to the filesystem when a search
request is executed. Is there a configuration setting for this?

Thanks,
Anand

Lucene provides a BufferedIndexInput class, which manages chunks for
efficient reading and moves them over the heap (mostly with
System.arraycopy). This is a delicate task because many buffers exist
during segment merging.

With NIO, buffer creation happens in a newBuffer(byte[] buf) method,
which wraps the byte array in a ByteBuffer.

The setBufferSize(int newSize) method of BufferedIndexInput is not
exposed in the Elasticsearch Store API.

http://svn.apache.org/repos/asf/lucene/dev/trunk/lucene/core/src/java/org/apache/lucene/store/BufferedIndexInput.java
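
For illustration, a minimal standalone sketch of that wrapping step (the
class name and surrounding scaffolding here are mine, not Lucene's; only
the ByteBuffer.wrap call mirrors the linked source):

import java.nio.ByteBuffer;

public class WrapSketch {
    public static void main(String[] args) {
        // BufferedIndexInput's default buffer size is 1024 bytes
        byte[] buf = new byte[1024];
        // No copy is made: the ByteBuffer is backed by buf, so a
        // FileChannel.read(ByteBuffer) can fill buf directly via NIO
        ByteBuffer byteBuf = ByteBuffer.wrap(buf);
        System.out.println("capacity: " + byteBuf.capacity());
    }
}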

Do you have issues you could solve by making the buffer size configurable
in ES?

Jörg

Hi Jörg,

I'm just tuning the storage for ES. Does ES set the buffer size to
something other than 1 KB, and if so, where?

Thanks,
Anand

From the code in
http://svn.apache.org/repos/asf/lucene/dev/trunk/lucene/core/src/java/org/apache/lucene/store/NIOFSDirectory.java

and
http://svn.apache.org/repos/asf/lucene/dev/trunk/lucene/core/src/java/org/apache/lucene/store/FSDirectory.java

I learned that on a 64-bit JVM the limit for setReadChunkSize() is
Integer.MAX_VALUE and can't be modified, so it is effectively ignored (at
least I hope so; from reading the code it's not very clear). On a 32-bit
JVM it defaults to 100 MB.
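
A small probe along these lines (assuming Lucene 4.x, where FSDirectory
still exposes getReadChunkSize()/setReadChunkSize(); the class name is
mine) would show the setter being ignored on a 64-bit JVM:

import java.io.File;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.store.NIOFSDirectory;

public class ChunkSizeProbe {
    public static void main(String[] args) throws Exception {
        FSDirectory dir = new NIOFSDirectory(new File(args[0]));
        // Integer.MAX_VALUE on a 64-bit JVM, 100 MB on a 32-bit JVM
        System.out.println("default: " + dir.getReadChunkSize());
        dir.setReadChunkSize(8 * 1024 * 1024);
        // On 64-bit JVMs the setter is silently ignored, so this
        // prints the same value as before
        System.out.println("after set: " + dir.getReadChunkSize());
        dir.close();
    }
}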

ES is using Lucene's directory implementations without modification.

My conclusion is that, to get access to the underlying BufferedIndexInput
buffer size of a NIOFSDirectory, a custom NIOFSDirectory implementation
would be required, with a modified internal NIOFSIndexInput class.
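
A lighter-weight variant of that idea, sketched below under the
assumption of Lucene 4.x (where BufferedIndexInput.setBufferSize(int) is
public), adjusts the buffer after opening instead of modifying
NIOFSIndexInput itself; the class name is hypothetical:

import java.io.File;
import java.io.IOException;
import org.apache.lucene.store.BufferedIndexInput;
import org.apache.lucene.store.IOContext;
import org.apache.lucene.store.IndexInput;
import org.apache.lucene.store.NIOFSDirectory;

public class TunedNIOFSDirectory extends NIOFSDirectory {
    private final int bufferSize;

    public TunedNIOFSDirectory(File path, int bufferSize) throws IOException {
        super(path);
        this.bufferSize = bufferSize;
    }

    @Override
    public IndexInput openInput(String name, IOContext context) throws IOException {
        IndexInput in = super.openInput(name, context);
        // NIOFSIndexInput extends BufferedIndexInput, so the public
        // setBufferSize(int) lets us replace the 1 KB default per input
        if (in instanceof BufferedIndexInput) {
            ((BufferedIndexInput) in).setBufferSize(bufferSize);
        }
        return in;
    }
}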

Jörg

Hi,

I tried tracing the block-level read requests on the disk containing the
ES indices during a query, using blktrace. Most of the reads are 256
blocks (4 KB blocks). Any idea how this is being controlled? I'm running
on a 64-bit JVM.

Thanks,
Anand

Do you have mmapfs enabled? It could be the OS virtual memory page size.
If your file is mapped into virtual memory, it can only be accessed page
by page.
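
For illustration, here is what page-granular, memory-mapped access looks
like at the plain java.nio level (this is a generic sketch, not ES's or
Lucene's mmapfs code):

import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MmapSketch {
    public static void main(String[] args) throws Exception {
        try (RandomAccessFile raf = new RandomAccessFile(args[0], "r");
             FileChannel ch = raf.getChannel()) {
            // Map (at most) the first page of the file, read-only
            MappedByteBuffer buf =
                ch.map(FileChannel.MapMode.READ_ONLY, 0, Math.min(ch.size(), 4096));
            // Touching a single byte still makes the kernel fault in the
            // whole page (typically 4 KB) that contains it
            System.out.println("first byte: " + buf.get(0));
        }
    }
}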

Jörg

mmapfs is disabled and the OS page size is 4 KB. I also ran the test case
after clearing the page cache and got the same results.

-Anand

I'm not sure where the 4 KB buffer is defined. blktrace looks at OS-level
calls; this is probably an optimization already implemented in the JVM.

I doubt there is much to gain from increasing the underlying buffer in
Lucene. The buffer size is the "block size" of the data the JVM hands
over to the control of the OS, and the OS has rich buffering capabilities
of its own. Increasing the JVM buffer always carries the risk of double
buffering.

The work in https://issues.apache.org/jira/browse/LUCENE-893 (it's
somewhat outdated, but I think still valid) shows that Lucene can keep up
even with a 2 KB buffer.

I think this post by Doug Cutting from 2004 still holds:

"To my thinking, the primary role of file buffering in Lucene is to
minimize the overhead of the system call itself, not to minimize physical
i/o operations. Once the overhead of the system call is made
insignificant, larger buffers offer little measurable improvement."

http://mail-archives.apache.org/mod_mbox/lucene-java-user/200403.mbox/<4069A856.70700@apache.org>

Just my 2 cents.

Jörg
