ES version 1.5.2
Arch Linux on Amazon EC2
Of the available 16 GB, 8 GB is heap (mlocked). Memory consumption is
continuously increasing (225 MB per day).
Total number of documents is around 800k (about 500 MB).
slabtop
OBJS       ACTIVE     USE   OBJ SIZE   SLABS    OBJ/SLAB   CACHE SIZE   NAME
17750313   17750313   100%  0.19K      845253   21         3381012K     dentry
So I think the continuous increase in memory usage is because of the slab
usage; if I restart ES, the slab memory is freed. I see that ES still has
some free heap available, but from the Elastic documentation:
Lucene is designed to leverage the underlying OS for caching in-memory
data structures. Lucene segments are stored in individual files. Because
segments are immutable, these files never change. This makes them very
cache friendly, and the underlying OS will happily keep hot segments
resident in memory for faster access.
My question is: should I add more nodes or increase the RAM of each node to
let Lucene use as much memory as it wants? How significant would the
performance difference be if I upgraded the ES machines to have more RAM?
Or can I make some optimizations that decrease the slab usage, or clean the
slab memory partially?
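For reference, a minimal sketch of the generic kernel-side knobs involved; these are standard Linux settings, not ES configuration, so treat them as illustrative:
# Drop only dentries and inodes (2); echo 3 would also drop the page cache that Lucene relies on
sync; echo 2 > /proc/sys/vm/drop_caches
# Make the kernel prefer reclaiming dentry/inode caches (default is 100; higher values reclaim more aggressively)
sysctl -w vm.vfs_cache_pressure=200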
The ES version was actually 1.5.0; I have since upgraded to 1.5.2, so
restarting ES cleared up the dentry cache.
I believe the dentry cache is something handled by Linux, but it seems like
ES/Lucene has a role to play in how the dentry cache is handled. If that is
the case, ES/Lucene should be able to control how much dentry cache there is.
The dentry cache is continuously increasing. Is this unavoidable, considering
that the data is increasing every day (though not significantly)? I have an
ELK stack with many millions of documents, though fewer search requests to
that cluster, which doesn't have this problem.
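A rough way to quantify that growth over time, using standard /proc interfaces (run as root):
# dentry slab line: active objects, total objects, object size, ...
watch -n 60 'grep ^dentry /proc/slabinfo'
# the kernel's own counters: nr_dentry, nr_unused, age_limit, ...
cat /proc/sys/fs/dentry-state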
When the underlying Lucene engine interacts with a segment, the OS will
leverage free system RAM and keep that segment in memory. However,
Elasticsearch/Lucene has no control over OS-level caches.
What exactly is the problem here? This caching is what helps provide
performance for ES.
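For what it's worth, a quick way to see those OS-level caches next to the slab usage (standard tools, nothing ES-specific):
free -m
# page cache vs. reclaimable slab (dentries and inodes are counted in SReclaimable)
grep -E '^(Cached|SReclaimable)' /proc/meminfo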
I understand that caching makes ES perform better, and that it's normal. What
I don't understand is the unusual size of the dentry objects (growing at
about 200+ MB per day) for the data size I have. There isn't this behaviour
on the ELK ES, where I have many times the data compared to this.
Does that mean an unusual number of segments is being created? Is there
something that needs to be optimized?
The only thing that is different is that we take hourly snapshots directly to
S3. Is it possible that the S3 paths are also part of the dentry objects? Is
it possible that the number of snapshots has something to do with it? (I know
that having too many snapshots will make snapshotting slower.) Note that when
I restart ES the cache gets cleared (most of it; maybe the OS clears it up
once it sees that the parent process has stopped).
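One way to sanity-check the segment count is the indices segments API; the _cat form below is only a sketch of what this version may support:
# per-shard segment details (count, size, search/commit state)
curl -s 'localhost:9200/_segments?pretty'
# more compact view, if the _cat endpoint exists on this version
curl -s 'localhost:9200/_cat/segments?v'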
Yes, it is unusual to have such a large dentry cache; there is definitely
something fishy going on. Stopping ES clears it up, so I believe it is
related to ES.
Setting NSS_SDB_USE_CACHE=YES has stopped the bloating. I have set this on
one of the three nodes, and the dentry size hasn't changed a bit (in fact
there was a small decrease), whereas the other two nodes have an increase of
around 200 MB (in 18 hours).
At this point I am not sure which component of ES is making these curl
requests (maybe the cloud-aws plugin?).
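In case it helps someone else: a hypothetical way to make the variable reach the ES process itself rather than just a login shell, assuming a systemd-managed service named elasticsearch (as on Arch):
mkdir -p /etc/systemd/system/elasticsearch.service.d
printf '[Service]\nEnvironment=NSS_SDB_USE_CACHE=YES\n' > /etc/systemd/system/elasticsearch.service.d/nss-cache.conf
systemctl daemon-reload && systemctl restart elasticsearch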
Actually, the problem has appeared again. Memory consumption was stable for a
couple of days, then it started increasing; the environment variable was
apparently only set for that particular session, so I set it again by adding
it to /etc/environment, but this doesn't have any effect anymore. There may
be some other parameter affecting the dentry cache.
I have straced Elasticsearch for a couple of minutes:
strace -fp PID -o file.txt
Out of the 40k+ events recorded, 2.2k+ events resulted in errors like this.
I think this is the reason for the dentry bloating, though I am not sure
whether there is something wrong with my cluster or not.
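For anyone trying to reproduce this, a rough way to summarise which error codes dominate the trace (the exact errno names from my output are not shown here):
# count failed syscalls by errno in the recorded trace
grep -oE '= -1 E[A-Z]+' file.txt | sort | uniq -c | sort -rn
# or let strace aggregate counts per syscall directly (detach with Ctrl-C to flush the summary)
strace -c -fp PID -o summary.txt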
Did you manage to figure this out? We are hitting a similar issue on all our single-node (standalone) installations. The dentry cache is bloating memory and causing a stop-the-world effect when the kernel decides to clean it up.
This appears to be a CentOS-specific issue. I've run the same setup on Ubuntu and the dentry cache does not grow.
BTW, restarting does not release the dentry cache; removing the ES data dir does. My ES data is volatile and recreated with each ES restart, which is what misled me into believing that an ES restart itself frees the cache.
As per the Elastic team it's not an issue: https://github.com/elastic/elasticsearch/issues/11253. We are on Arch Linux and are just letting it grow; we don't see any OOM errors. After a point it stops growing, as the kernel clears it up. Restarting ES does clear it for us, but that's useless and not advised.
Our nodes have lots of RAM (for reasons not related to ES). So when the kernel decides to free a dentry cache containing hundreds of millions of entries, it causes stop-the-world pauses of more than a minute. That's how we bumped into the issue.
Nevertheless, I think it's pretty clear now, and it's good to have it documented here.
I know I'm a little late here, but I have been looking at the same issue. However, it really isn't an issue. Basically, the dentry cache is available for reuse if it's needed; in fact, anything in SReclaimable will be freed for reuse if needed. In that sense, this memory is a lot like disk cache; you shouldn't count it against used memory.
The only problem is that this memory is not reported by the free utility (at least as of Ubuntu 14.04). This means that if you are running memory checks/alerts that use free to measure memory in use, you are going to see a lot of false alarms. For instance, on our 16 GB hosts we can end up with 6 GB of memory as SReclaimable, but this 6 GB doesn't show up at all in free.
Note that you can free this memory with the following command (as root):
sync; echo 3 > /proc/sys/vm/drop_caches
That will free up the page, inode, and dentry caches. But there's no real need to do this, and it probably has a short-term negative effect on performance. Better to just let the kernel release that memory as needed, and fix any alerting that relies on free.
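If it helps, a minimal sketch of an alerting-friendly check that treats SReclaimable like cache rather than used memory (values in /proc/meminfo are in kB):
# "really used" = total - free - buffers - cached - reclaimable slab
awk '/MemTotal|MemFree|^Buffers|^Cached|SReclaimable/ {m[$1]=$2}
  END {print (m["MemTotal:"]-m["MemFree:"]-m["Buffers:"]-m["Cached:"]-m["SReclaimable:"])/1024 " MB used"}' /proc/meminfo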
@Michael_Owings My client said that he will have about 500 MB of log files generated each day. In order to test whether my ELK stack can handle this, I mimicked the requirement as follows: I ran a bash script in an infinite loop that kept printing to a file. The content of the file was "{timestamp} local.INFO:{timestamp}". The whole ELK stack is installed on the same machine. What I am seeing now is that my RAM usage is increasing continuously; running htop (it's an Ubuntu 14.04 machine) I get the following memory usage: 2759/3764 MB. What is the reason for this increase in memory? Can you elaborate a bit more clearly? Please also specify the remedy for this.
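For context, a minimal sketch of the kind of generator described above (the file path, format, and rate are illustrative, not the exact script used):
#!/bin/bash
# append roughly 100 short log lines per second, i.e. a few hundred MB per day
while true; do
  ts=$(date '+%Y-%m-%d %H:%M:%S')
  echo "$ts local.INFO: $ts" >> /tmp/fake-app.log
  sleep 0.01
done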