Graceful way to un-overwhelm the ES

Hi,

I am currently running indexing on a c1.xlarge (with 4 ephemeral drives
in RAID0 and the gateway going to S3). Everything works great, except every
several hours ES gets overwhelmed and indexing slows significantly.
From what I can see in bigdesk during “normal” ES operation, the “Heap
Mem” window shows a sawtooth pattern, but when ES gets overwhelmed it
seems like no GC happens (no sawtooth pattern in bigdesk) and memory is
maxed out (configured at 5120M).

Doing an ES service restart (through “bin/service/elasticsearch
restart”) solves the problem for a few hours, but then the problem
reappears.

I wonder whether restarting ES when it is in such a state is going to
lead to any data loss, so that I can put this into a cron job to ensure
indexing continues (or whether there are better ways to address the
problem).

Thanks,

-- Andy

P.S. Some background: I am running ES 0.19.2 with refresh_interval set
to zero, and ES currently has about 400 million documents in 2 indexes
with about 600G total index size (I expect about 600M more docs, for
around 1T of data; the mapping has _source set to compressed).
The data processing and insertion into ES is done by multiple threads
on 20 or so m1.xlarge machines (when ES goes down or returns errors,
the threads back off with an exponential timeout and restart when ES is
back on-line). There are 8-12 threads per machine doing mostly data
processing, and if I trust that “HTTP channels” in bigdesk indicates the
number of active connections, then 30-40 threads are connected to ES at
any given time. The indexing rate is about 750 docs per second, sometimes
maxing out at 10,000 docs per second. The average doc size is about
5000 bytes.
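
For illustration, the back-off logic in the indexing threads is roughly
the following (a simplified sketch with made-up names, not the actual
code):

    import time, random

    def index_with_backoff(send_bulk, batch, max_sleep=300):
        # Retry one batch until ES accepts it, backing off exponentially.
        sleep = 1
        while True:
            try:
                send_bulk(batch)           # e.g. POST the batch to ES
                return
            except Exception:              # ES down or returned an error
                time.sleep(sleep + random.random())
                sleep = min(sleep * 2, max_sleep)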

We are talking about a single ES node, right? For the amount of data that
you indexed, it seems like you are hitting memory limits; 520mb for the
amount of data you have is not enough, you should probably go to 3.5 or
4gb (out of the 7gb this instance type has) as ES_HEAP_SIZE.
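
For example (just a sketch; if you run the service wrapper, the heap is
set in its config file instead, and the exact property name may differ):

    # plain startup script: ES_HEAP_SIZE sets both -Xms and -Xmx
    export ES_HEAP_SIZE=4g
    bin/elasticsearch

    # service wrapper: in elasticsearch.conf, something like
    # set.default.ES_HEAP_SIZE=4096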

Yes, it's a single node, and ES_HEAP_SIZE is set to 5120M (not 520M).

You're right, I'll have to move to a bigger machine or split it into
2 machines, as this started happening relatively recently (at ~300M docs).

My question is whether these restarts are safe at the moment and do
not lead to data loss in ES (a case where ES would return "OK" to the
processing threads, which would mark jobs as completed, but ES would
then fail to persist them because of the restart). ES is currently running with
threadpool.index.type: cached
threadpool.bulk.type: cached

I tried to make these "blocking", but then the processing threads were
idle most of the time, just waiting for ES to return.
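
The "blocking" setup I tried looked roughly like this in elasticsearch.yml
(setting names from memory of the 0.19 threadpool docs, so double-check
them):

    threadpool.index.type: blocking
    threadpool.index.size: 30
    threadpool.index.wait_time: 30s
    threadpool.bulk.type: blocking
    threadpool.bulk.size: 30
    threadpool.bulk.wait_time: 30s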

We're on AWS and ran into some very similar problems on 2 nodes. I
ended up using m2.xlarge nodes with 12GB for ES across the 2 nodes, and
indexing ran very well. We're up to 420m docs.

  • Craig

--

CRAIG BROWN
chief architect
youwho, Inc.

www.youwho.com

T: 801.855.0921
M: 801.913.0939

Since you are using the s3 gateway, you are safe up to the last checkpoint
that happened (it happens periodically, but a checkpoint can take time).
Sorry, I missed the size of the heap allocated to ES; I typically recommend
allocating ~50% of the machine memory to ES_HEAP_SIZE.

One more thing: if you start another machine to form a cluster, you will
now have 2 machines in the cluster. If you created the index / indices with
the default number of replicas, then it is set to 1, which means you will
have 2 copies of each shard. Once you start the second node, the replicas
will be allocated on it, so you will end up with the same capacity problems.

If you don't care about replicas (less HA), then you can dynamically change
the number of replicas to 0; otherwise, you will need to provision
machines appropriately.

One last thing: I recommend using the local (the default) gateway on AWS,
not s3, because of the overhead it comes with and the time it can take to
do a checkpoint. This means each node's local drive (or EBS) is used for
recovery.
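
Changing the replica count is a live settings update, for example (the
index name is just a placeholder):

    curl -XPUT 'http://localhost:9200/your_index/_settings' -d '{
        "index" : { "number_of_replicas" : 0 }
    }'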

Andy,

Are you only indexing when the heap memory gets almost completely
occupied? Do you have any search queries running?

The reason I ask is that I have run into this issue before: we had some
searches faceting on fields with many values, and because faceting sorts
and loads the whole field into the field cache, our heap was actually
mostly filled with field caches. And since the field cache is not
considered garbage, GC never collected it.

The server crawls until we restart it.

Use bigdesk to check your field cache size against your heap size, in
case you are running into the same issue.
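
You can also pull the same numbers from the node stats API; something like
this (endpoint and field names from memory for 0.19, so verify against
your version):

    curl 'http://localhost:9200/_cluster/nodes/stats?jvm=true&pretty=true'
    # compare indices.cache.field_size_in_bytes with jvm.mem.heap_used_in_bytes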

Regards,

--Andrew

Hi Andrew,

The server is used for indexing only at the moment, without any other
queries going into it, and I think it’s hitting the memory limit. The
only other process that makes queries against it is bigdesk.

I don’t think I paid enough attention to the memory pattern in bigdesk
when indexing started, since it was oscillating between 1-2G. The
problem seems to have started at a much larger number of docs, when the
oscillation would go from 1.7G to 3.9G (that was when I had ES
originally configured at 4.1G; since then I bumped it up to 5.2G and
will move it to a bigger machine or split it as Shay and Craig
suggested), and sometimes it would just get stuck. It looked like some
indexing was still happening at 10 or so docs per second and there
were no errors returned from ES on new bulk inserts, but I haven’t seen
ES recover by doing GC even when I completely shut down the processing
threads (or maybe I just did not wait long enough).

The point you raise about GC not collecting might, I suspect, have
something to do with how we set batch sizes: we split the text into
smaller “documents” and send these “documents” to ES in batches, but
the batch size depends on the length of the original text. So when
many threads send very large texts at once and ES is close to the max
memory limit, ES might get into a state where these batches are not
garbage (from the GC’s perspective) but ES has no other memory left to
do any additional work.

-- Andy

andym wrote:

The point you raise about GC not collecting might, I suspect, have
something to do with how we set batch sizes: we split the text into
smaller “documents” and send these “documents” to ES in batches, but
the batch size depends on the length of the original text. So when
many threads send very large texts at once and ES is close to the max
memory limit, ES might get into a state where these batches are not
garbage (from the GC’s perspective) but ES has no other memory left to
do any additional work.

When the JVM runs out of memory, all bets are off. We've seen ES do
interesting things after indexing it into the ground. Sometimes
pings start timing out and it is banished from the cluster. Other
times it OOMEs, leaves, rejoins, and keeps functioning. Most of the
time, though, it does what you describe -- reaches a state where the
node stays around and all you see are fruitless attempts at GC in the
log.

You have to size your bulk requests by bytes instead of doc count so
you can better predict what kind of batch you're throwing at the
cluster. If you're really running at the margins, you should make
sure your shard allocation is spread over the cluster really well, or
index into separate indices whose shards are spread out. We run our
ES JVM heaps at 70% of RAM and monitor heap and RSS usage (the JVM
leaks). We also have a sibling plugin to our elasticsearch-jetty
project that adds a throttling filter for extra safety here. It
rejects bulk requests when a node has met certain mem, cpu, disk, &
request thresholds. We are close to open-sourcing it.
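
A minimal sketch of what sizing by bytes looks like (illustrative only,
not our plugin or your indexer):

    MAX_BATCH_BYTES = 5 * 1024 * 1024   # e.g. cap each bulk request at ~5 MB

    def batches_by_bytes(docs, max_bytes=MAX_BATCH_BYTES):
        # docs: iterable of already-serialized JSON documents (strings)
        batch, size = [], 0
        for doc in docs:
            if batch and size + len(doc) > max_bytes:
                yield batch
                batch, size = [], 0
            batch.append(doc)
            size += len(doc)
        if batch:
            yield batch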

The best thing to do is to monitor exactly what you're sending to ES,
overlaid with metrics from the data nodes, and try to gather very
specific information about your problem.

-Drew

Hi Andy,

Yes, it sounds like you are on the right track in investigating what is
going on with the heap during indexing.

The batch size for bulk indexing is critical. I also found out the hard
way that indexing 10,000 docs at a time may not be a good idea; 1,000 at
a time is much better.
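
As a reminder, the bulk API takes newline-delimited action/source pairs,
e.g. (index and type names are placeholders):

    curl -XPOST 'http://localhost:9200/_bulk' --data-binary '
    { "index" : { "_index" : "your_index", "_type" : "doc", "_id" : "1" } }
    { "title" : "first document" }
    { "index" : { "_index" : "your_index", "_type" : "doc", "_id" : "2" } }
    { "title" : "second document" }
    '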

BigDesk is great for an initial quick look at the heap; the master version
for 0.19+ is much better than 1.0.0. Even better is Sematext's ES tool,
which gives you even more info.

Good luck

--Andrew
