On Friday, August 17, 2012 5:38:57 AM UTC-4, Jalal wrote:
Hi,
I am trying to get maximum performance out of Elasticsearch by moving all of the data into main memory. My whole data set and index come to about 2 GB, and the machine has plenty of RAM, so I want to keep everything in memory.
My current settings are index.store.type: memory and gateway.type: fs.
While indexing, everything is stored in main memory, but when I restart Elasticsearch, it does not load the index back into main memory (RAM).
I have the following settings in my elasticsearch.yml:
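The yml snippet itself did not come through in the thread, but based on the settings named above, a configuration along these lines is what is being described (a sketch only, not the poster's exact file; the gateway location path is a hypothetical addition):

```yaml
# Settings discussed in this thread, for the 0.19.x-era Elasticsearch
# it concerns (the "memory" store type was removed in later versions).
index:
  store:
    type: memory   # keep index files in an in-memory ByteBuffer directory
gateway:
  type: fs         # persist index snapshots to a filesystem gateway
  fs:
    location: /path/to/gateway   # hypothetical path, adjust to your setup
```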
I was able to store the entire index in RAM by using the settings below while indexing the data, but now the problem is that the RAM usage by Elasticsearch is almost three times the size of the index.
index.store.type: memory
gateway.type: fs
Lucene will often use up to three times the size of the index because of segment merging. Segments are merged in the background, in parallel with the existing index, so additional disk (if using a disk-based store) or memory is used while the merge runs. How many updates are you doing?
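To see how much of the footprint comes from segments and merging, you can inspect the index directly over the REST API. A sketch using curl against a local node (the index name myindex is a placeholder):

```shell
# List the Lucene segments of an index, including their sizes,
# to see how many segments are live at once.
curl -s 'http://localhost:9200/myindex/_segments?pretty'

# Once bulk indexing is done, force a merge down to a single segment
# to reclaim the transient merge overhead (the call was named
# _optimize in 0.x-era Elasticsearch):
curl -s -XPOST 'http://localhost:9200/myindex/_optimize?max_num_segments=1'
```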
Please note that the "memory" store type does not use memory equal to the size of the index, as you might expect from a simple cache; it uses a ByteBuffer directory, where each file is segmented into many buffers.
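One way to check what a node is actually holding in memory, rather than estimating from the index size, is the node stats API. A sketch (localhost:9200 assumed; the path varied between versions):

```shell
# Show per-node statistics, including JVM heap usage.
# In 0.19.x the path was /_cluster/nodes/stats;
# later versions expose the same data at /_nodes/stats.
curl -s 'http://localhost:9200/_cluster/nodes/stats?pretty'
```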
Yes, the warmer API is great for the case where you have queries that only run fast with all documents or values in memory. When a node has just come up, the caches are not filled, and queries tend to be slow. Instead of filling the caches with the incoming queries, with users experiencing slow response times, you can use the warmer API to reduce the number of slow queries at the beginning of a node's lifetime.
Best regards,
Jörg
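The registration described above might look like this (a sketch; the warmer API arrived in Elasticsearch 0.20, and the index, warmer, and field names here are placeholders):

```shell
# Register a warmer: the search body below is executed against new
# segments before they become visible, so field data and filter
# caches are already warm when user queries arrive.
curl -s -XPUT 'http://localhost:9200/myindex/_warmer/warmer_1' -d '{
  "query": { "match_all": {} },
  "sort": [ "some_field" ]
}'
```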
On Monday, September 17, 2012 2:49:07 PM UTC+2, Jalal wrote:
Thanks,
So it's better to use something like the warmer API, right?
On Tue, Sep 4, 2012 at 1:23 PM, Jörg Prante <joerg...@gmail.com> wrote:
Hi Jalal,
please note that the "memory" store type does not use memory equal to the size of the index, as you might expect from a simple cache; it uses a ByteBuffer directory, where each file is segmented into many buffers.