Error while indexing - Java heap space

Hi ,

I am creating an index using ES on a server (16 GB RAM, 8 cores). I have allocated 8 GB RAM to ES. It has only one node with 4 shards. I am using the bulk API to index my data: I send 200 docs in each batch, and I have 2,000 batches in total. But every time I try to index my data I get a Java heap space exception. I have tried reducing and increasing the RAM for ES and different memory parameters in elasticsearch.yml, but nothing has worked for me.

Please, can anyone suggest how I can solve this issue?

Thanks in advance

Anuj

--

What are your memory options?
Are you sure you gave 8 GB to ES?

--
David :wink:
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs


--

Hi David,

I am setting ES_MIN_MEM and ES_MAX_MEM in the elasticsearch.in.sh file:
ES_MIN_MEM=8g
ES_MAX_MEM=8g

and in elasticsearch.yml I am setting the following properties:

index.number_of_shards: 4
index.number_of_replicas: 0

bootstrap.mlockall: false
cache.memory.direct: false


--

Did you enter:
ulimit -l unlimited

When does the error occur? After the first inserts?

--
David :wink:
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs


--

Sorry, forget the ulimit.

--
David :wink:
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs


--

No, I have not tried ulimit.

Is my configuration correct, or do I have to make some changes?
Please suggest an approach so that I can resolve my issue.


--

Hi David,

I started indexing my data again with the same configuration as mentioned in
the above post, but this time I am getting a "too many open files" exception.


--

Are you sure that there is only one ES node running on this instance?


--
David Pilato
http://www.scrutmydocs.org/
http://dev.david.pilato.fr/
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs

--

Hi,
Regarding the "too many open files" issue: you need to raise the open files
limit on the platform/OS on which you are running ES. So basically, if you
are running ES on Linux/Unix, you need to set ulimit -l unlimited, as David
mentioned.

This setting has a different syntax on different platforms (Windows, Linux,
etc.), and you need to find out yours :slight_smile:

This issue occurs when you try to create too many indices; as a result, the
server creates that many files and opens that many channels for the indices.

I hope this helps

Thanks
Amit


--

Yes, I am sure. I am using the bigdesk and head plugins for analysis.


--

Thanks Amit for your reply.
To raise the limit I followed these steps:

I edited /etc/security/limits.conf and added the lines:

elasticsearch soft nofile 32000
elasticsearch hard nofile 32000

After making these changes I started ES again, but bigdesk still shows the
max open files as 1024.
Can you please explain why bigdesk is showing 1024 max files when I have set
the limit to 32000?


--

Anuj,
After changing the limits.conf file, you have to log out and log in again.
Then restart ES.

Also, I saw that you are setting cache.memory.direct to false. If that is
the case, the cache memory will be taken from your Java heap. If you set it
to true, then that memory will be allocated outside the JVM heap, so you
will be less likely to get an OOM. [1]

And I am not sure how exactly the flush heuristics in ES work, but as a last
attempt you may also try calling flush [2] explicitly after indexing some
batches. That would free up some cache.
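
For example, with the Java client something along these lines should work (a
rough sketch only; "myindex" is a placeholder and the exact API may differ by
ES version):

import org.elasticsearch.client.Client;

public class FlushAfterBatch {
    // Ask ES to flush the index once a bulk batch has been acknowledged,
    // so the transaction log and in-memory buffers are written out.
    static void flushIndex(Client client, String indexName) {
        client.admin().indices()
              .prepareFlush(indexName)
              .execute()
              .actionGet();
    }
}

You would call something like this after every few batches rather than after
every single one.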

Thanks,
Jaideep
[1] - Elasticsearch reference documentation: cache settings
[2] - Elasticsearch reference documentation: flush API


--
Jaideep Dhok

--

Did you log in again as the elasticsearch user before restarting ES?
Did you restart your machine?

See here if it helps:
http://www.walkernews.net/2011/05/02/how-to-apply-limits-conf-settings-immediately-without-reboot-linux-system/

David.


--
David Pilato
http://www.scrutmydocs.org/
http://dev.david.pilato.fr/
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs

--

Jaideep,

Thanks for your reply.

I made some changes in my configuration.
The configuration is now the following:

node.master: true
node.data: true
index.number_of_shards: 4
index.number_of_replicas: 0

bootstrap.mlockall: true
cache.memory.direct: true

gateway.type: local

I also set the max open files limit to 32000.

Now I allocate 6 GB RAM to ES and use the bulk API to index my data. In one
batch I send 200 docs, and my total batch size is 200.
But the problem is that indexing is really slow this time: in 1 hour ES
indexed only 10K docs.
I also get the following warning in my console:

[2012-09-27 10:45:44,902][WARN ][monitor.jvm ] [ Node1]
[gc][ParNew][4760][1394] duration [1.9s], collections [1]/[2.5s], total
[1.9s]/[52.4s], memory [4gb]->[3.9gb]/[5.9gb], all_pools {[Code Cache]
[5.5mb]->[5.5mb]/[48mb]}{[Par Eden Space]
[90.3mb]->[78.3kb]/[133.1mb]}{[Par Survivor Space]
[7.8mb]->[9.4mb]/[16.6mb]}{[CMS Old Gen] [3.9gb]->[3.9gb]/[5.8gb]}{[CMS
Perm Gen] [36.9mb]->[36.9mb]/[84mb]}

Can you please suggest what I am doing wrong?

Thanks
Anuj


--

David,
Thanks for your reply.

I made some changes in my configuration.
The configuration is now the following:

node.master: true
node.data: true
index.number_of_shards: 4
index.number_of_replicas: 0

bootstrap.mlockall: true
cache.memory.direct: true

gateway.type: local

I also set the max open files limit to 32000.

Now I allocate 6 GB RAM to ES and use the bulk API to index my data. In one
batch I send 200 docs, and my total batch size is 200.
But the problem is that indexing is really slow this time: in 1 hour ES
indexed only 10K docs, and in the next hour ES indexed only 4K docs.
Performance keeps decreasing.

I also get the following warnings in my console:

[2012-09-27 10:45:44,902][WARN ][monitor.jvm ] [Node1]
[gc][ParNew][4760][1394] duration [1.9s], collections [1]/[2.5s], total
[1.9s]/[52.4s], memory [4gb]->[3.9gb]/[5.9gb], all_pools {[Code Cache]
[5.5mb]->[5.5mb]/[48mb]}{[Par Eden Space]
[90.3mb]->[78.3kb]/[133.1mb]}{[Par Survivor Space]
[7.8mb]->[9.4mb]/[16.6mb]}{[CMS Old Gen] [3.9gb]->[3.9gb]/[5.8gb]}{[CMS
Perm Gen] [36.9mb]->[36.9mb]/[84mb]}
[2012-09-27 11:04:37,736][INFO ][monitor.jvm ] [Node1]
[gc][ParNew][5890][1678] duration [904ms], collections [1]/[1s], total
[904ms]/[1m], memory [3gb]->[2.9gb]/[5.9gb], all_pools {[Code Cache]
[5.7mb]->[5.7mb]/[48mb]}{[Par Eden Space]
[119.3mb]->[5.1mb]/[133.1mb]}{[Par Survivor Space]
[8.4mb]->[9.1mb]/[16.6mb]}{[CMS Old Gen] [2.9gb]->[2.9gb]/[5.8gb]}{[CMS
Perm Gen] [36.9mb]->[36.9mb]/[84mb]}
[2012-09-27 11:09:24,030][WARN ][monitor.jvm ] [Node1]
[gc][ParNew][6175][1748] duration [1s], collections [1]/[1.1s], total
[1s]/[1.1m], memory [4.4gb]->[4.3gb]/[5.9gb], all_pools {[Code Cache]
[5.6mb]->[5.6mb]/[48mb]}{[Par Eden Space]
[124.8mb]->[613.6kb]/[133.1mb]}{[Par Survivor Space]
[9.4mb]->[9mb]/[16.6mb]}{[CMS Old Gen] [4.3gb]->[4.3gb]/[5.8gb]}{[CMS Perm
Gen] [36.9mb]->[36.9mb]/[84mb]}
[2012-09-27 11:09:27,229][INFO ][monitor.jvm ] [Node1]
[gc][ParNew][6177][1749] duration [905ms], collections [1]/[1.6s], total
[905ms]/[1.1m], memory [4.4gb]->[4.3gb]/[5.9gb], all_pools {[Code Cache]
[5.6mb]->[5.6mb]/[48mb]}{[Par Eden Space]
[111.9mb]->[484.3kb]/[133.1mb]}{[Par Survivor Space]
[9mb]->[8.4mb]/[16.6mb]}{[CMS Old Gen] [4.3gb]->[4.3gb]/[5.8gb]}{[CMS Perm
Gen] [36.9mb]->[36.9mb]/[84mb]}
[2012-09-27 11:30:55,911][WARN ][monitor.jvm ] [ Node1]
[gc][ParNew][7463][2054] duration [1.8s], collections [1]/[2.1s], total
[1.8s]/[1.3m], memory [1.8gb]->[1.7gb]/[5.9gb], all_pools {[Code Cache]
[5.8mb]->[5.8mb]/[48mb]}{[Par Eden Space]
[120.9mb]->[1.1mb]/[133.1mb]}{[Par Survivor Space]
[9.3mb]->[9.5mb]/[16.6mb]}{[CMS Old Gen] [1.6gb]->[1.6gb]/[5.8gb]}{[CMS
Perm Gen] [36.9mb]->[36.9mb]/[84mb]}

Can you please suggest what I am doing wrong?

Thanks
Anuj


--

Very strange. What do your documents look like?
I suspect something wrong with your bulk. Do you open a new bulk after each
iteration, or do you reuse the first one?
If you reuse the first Bulk instance, that's your issue.

Can you gist your code?
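
For what it's worth, here is a minimal sketch of what I mean with the Java
client (index/type names and the batch handling are placeholders, not your
actual code):

import org.elasticsearch.action.bulk.BulkRequestBuilder;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.client.Client;

public class BulkPerBatch {
    // Build a brand-new BulkRequestBuilder for every batch of 200 docs
    // instead of reusing a single instance across all batches.
    static void indexAll(Client client, Iterable<String> jsonDocs) {
        BulkRequestBuilder bulk = client.prepareBulk();
        int count = 0;
        for (String json : jsonDocs) {
            bulk.add(client.prepareIndex("myindex", "mytype").setSource(json));
            if (++count % 200 == 0) {
                BulkResponse response = bulk.execute().actionGet();
                if (response.hasFailures()) {
                    // log/handle the failed items here
                }
                bulk = client.prepareBulk(); // fresh builder; the old one can be GC'd
            }
        }
        if (count % 200 != 0) {
            bulk.execute().actionGet(); // send the last partial batch
        }
    }
}

If the same builder is reused, every new execute re-sends all previously
added documents as well, so the requests keep growing with each batch.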

David.


--
David Pilato
http://www.scrutmydocs.org/
http://dev.david.pilato.fr/
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs

--

David,

I am creating one Bulk instance per batch (i.e. one bulk object for 200
docs), but the memory is not released.
I am also making a flush request, but I am getting the following exception:

org.elasticsearch.index.engine.FlushNotAllowedEngineException: [poidev][1] Already flushing...
        at org.elasticsearch.index.engine.robin.RobinEngine.flush(RobinEngine.java:661)
        at org.elasticsearch.index.shard.service.InternalIndexShard.flush(InternalIndexShard.java:...)
        at org.elasticsearch.action.admin.indices.flush.TransportFlushAction.shardOperation(TransportFlushAction.java:...)
        at org.elasticsearch.action.admin.indices.flush.TransportFlushAction.shardOperation(TransportFlushAction.java:...)


--