How to move the whole dataset to main memory

Hi,
I am trying to get maximum performance out of Elasticsearch by moving the whole data set into main memory. My data set and index together are about 2 GB, and I have a large amount of RAM, so I want to keep everything in memory.

My current settings are index.storage.type: memory and gateway.type: fs.

While indexing, everything is stored in main memory, but when I restart Elasticsearch, it does not load the index back into main memory (RAM).

I have the following settings in my elasticsearch.yml

cluster.name: elasticsearchtest
index.storage.type: memory
cache.memory.small_buffer_size: 1mb
cache.memory.large_buffer_size: 10mb
cache.memory.small_cache_size: 1000mb
cache.memory.large_cache_size: 2000mb
gateway.type: fs
gateway.fs.location: /home/algotree/deltaES
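
For what it's worth, the setting that ends up working later in this thread is store.type rather than storage.type. A sketch of setting it per index at creation time (hypothetical index name myindex; untested against this exact setup) might look like:

```shell
# Create an index whose Lucene store lives in memory (hypothetical index name).
# Note: later in this thread the working setting is "store.type", not "storage.type".
curl -XPUT 'http://localhost:9200/myindex' -d '{
  "settings": {
    "index.store.type": "memory"
  }
}'
```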

--

Hi,

If index.storage.type: memory doesn't do it, you could always run something like a match-all query (*:*) and force the index files to be cached by the OS.

Otis

Search Analytics - Cloud Monitoring Tools & Services | Sematext
Scalable Performance Monitoring - Sematext Monitoring | Infrastructure Monitoring Service
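
A warm-up along those lines might be a simple match-all search, which only serves to make the OS read the index files into its page cache (hypothetical index name myindex; a sketch, not a tested recipe):

```shell
# Match-all query; the point is only to touch the index files so the OS caches them.
curl -XGET 'http://localhost:9200/myindex/_search' -d '{
  "query": { "match_all": {} }
}'
```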

On Friday, August 17, 2012 5:38:57 AM UTC-4, Jalal wrote:


--

On Tuesday, August 21, 2012 4:27:55 AM UTC+5:30, Otis Gospodnetic wrote:


Thanks Otis,

I was able to store the entire index in RAM by using the settings below while indexing the data, but now the problem is that Elasticsearch's RAM usage is almost three times the size of the index.

store.type: memory
gateway.type: fs

--

Lucene will often use up to three times the size of the index due to the merging of existing segments. Segments are merged in the background, in parallel with the existing index, so more disk (if using a disk-based store) or memory will be used. How many updates are you doing?
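
If you want to see how much merging is going on, the indices segments API shows the live segments per shard (hypothetical index name myindex):

```shell
# List segments per shard; many small segments suggest merges are pending or in flight.
curl -XGET 'http://localhost:9200/myindex/_segments?pretty'
```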

--

Ivan

On Fri, Aug 31, 2012 at 3:04 AM, Jalal JalalM@algotree.com wrote:


--

Hi Jalal,

please note that with store type "memory", memory usage is not the size of the index as you would expect from a simple cache; it uses a ByteBuffer directory, where each file is segmented into many buffers.

Best regards,

Jörg
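
One way to compare the index size on disk with what the node actually allocates is the index stats API (hypothetical index name myindex; a sketch, assuming a version that exposes this endpoint):

```shell
# Index stats, including the size of the index as the store sees it.
curl -XGET 'http://localhost:9200/myindex/_stats?pretty'
```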

--

Thanks,

So it's better to use something like the warmer API, right?

On Tue, Sep 4, 2012 at 1:23 PM, Jörg Prante joergprante@gmail.com wrote:


--

Yes, the warmer API is great for the case where you have queries that only run fast with all documents or values in memory. When a node has just come up, the caches are not filled and queries tend to be slow. Instead of filling the caches with the incoming queries while users experience slow response times, you can use the warmer API to reduce the number of slow queries at the beginning of a node's lifetime.

Best regards,

Jörg
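
Registering a warmer could look like the following sketch (hypothetical index and warmer names; assumes an Elasticsearch version that ships the warmer API):

```shell
# Register a match-all warmer; warmers run against new segments to pre-fill caches.
curl -XPUT 'http://localhost:9200/myindex/_warmer/warmup_all' -d '{
  "query": { "match_all": {} }
}'
```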

On Monday, September 17, 2012 2:49:07 PM UTC+2, Jalal wrote:


--
