Very strange. What do your documents look like?
I suspect something is wrong with your bulk requests. Do you create a new bulk after each iteration, or do you reuse the first one?
If you reuse the first Bulk instance, that's your issue.
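For illustration, here is a minimal sketch with the Java API (the client setup, index name, and document type here are assumptions, not your actual code):

    import java.util.List;
    import java.util.Map;
    import org.elasticsearch.action.bulk.BulkRequestBuilder;
    import org.elasticsearch.action.bulk.BulkResponse;
    import org.elasticsearch.client.Client;

    public class BulkIndexer {
        // Create a fresh BulkRequestBuilder for every batch. A builder keeps
        // every request ever added to it, so reusing one instance across
        // batches grows without bound and eventually exhausts the heap.
        static void indexBatch(Client client, List<Map<String, Object>> batch) {
            BulkRequestBuilder bulk = client.prepareBulk(); // new builder per batch
            for (Map<String, Object> doc : batch) {
                bulk.add(client.prepareIndex("myindex", "mytype").setSource(doc));
            }
            BulkResponse response = bulk.execute().actionGet();
            if (response.hasFailures()) {
                System.err.println(response.buildFailureMessage());
            }
        }
    }

If your loop adds documents to one long-lived builder instead, that alone can explain both the heap space exceptions and the growing GC pauses.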
Can you gist your code?
David.
On 27 September 2012 at 08:24, Anuj <anuj....@orkash.com> wrote:
David,
Thanks for your reply.
I made some changes to my configuration.
Here is the new configuration:
node.master: true
node.data: true
index.number_of_shards: 4
index.number_of_replicas: 0
bootstrap.mlockall: true
cache.memory.direct: true
gateway.type: local
I also raised the max open files limit to 32000.
Now I allocate 6GB of RAM to ES and use the bulk API to index my data. In each batch I send 200 docs, and I have 200 batches in total.
But this time indexing is really slow: in the first hour ES indexed only 10K docs, and in the next hour only 4K docs. Performance keeps decreasing.
I also get the following warnings in my console:
[2012-09-27 10:45:44,902][WARN ][monitor.jvm ] [Node1]
[gc][ParNew][4760][1394] duration [1.9s], collections [1]/[2.5s], total
[1.9s]/[52.4s], memory [4gb]->[3.9gb]/[5.9gb], all_pools {[Code Cache]
[5.5mb]->[5.5mb]/[48mb]}{[Par Eden Space]
[90.3mb]->[78.3kb]/[133.1mb]}{[Par Survivor Space]
[7.8mb]->[9.4mb]/[16.6mb]}{[CMS Old Gen] [3.9gb]->[3.9gb]/[5.8gb]}{[CMS
Perm Gen] [36.9mb]->[36.9mb]/[84mb]}
[2012-09-27 11:04:37,736][INFO ][monitor.jvm ] [Node1]
[gc][ParNew][5890][1678] duration [904ms], collections [1]/[1s], total
[904ms]/[1m], memory [3gb]->[2.9gb]/[5.9gb], all_pools {[Code Cache]
[5.7mb]->[5.7mb]/[48mb]}{[Par Eden Space]
[119.3mb]->[5.1mb]/[133.1mb]}{[Par Survivor Space]
[8.4mb]->[9.1mb]/[16.6mb]}{[CMS Old Gen] [2.9gb]->[2.9gb]/[5.8gb]}{[CMS
Perm Gen] [36.9mb]->[36.9mb]/[84mb]}
[2012-09-27 11:09:24,030][WARN ][monitor.jvm ] [Node1]
[gc][ParNew][6175][1748] duration [1s], collections [1]/[1.1s], total
[1s]/[1.1m], memory [4.4gb]->[4.3gb]/[5.9gb], all_pools {[Code Cache]
[5.6mb]->[5.6mb]/[48mb]}{[Par Eden Space]
[124.8mb]->[613.6kb]/[133.1mb]}{[Par Survivor Space]
[9.4mb]->[9mb]/[16.6mb]}{[CMS Old Gen] [4.3gb]->[4.3gb]/[5.8gb]}{[CMS Perm
Gen] [36.9mb]->[36.9mb]/[84mb]}
[2012-09-27 11:09:27,229][INFO ][monitor.jvm ] [Node1]
[gc][ParNew][6177][1749] duration [905ms], collections [1]/[1.6s], total
[905ms]/[1.1m], memory [4.4gb]->[4.3gb]/[5.9gb], all_pools {[Code Cache]
[5.6mb]->[5.6mb]/[48mb]}{[Par Eden Space]
[111.9mb]->[484.3kb]/[133.1mb]}{[Par Survivor Space]
[9mb]->[8.4mb]/[16.6mb]}{[CMS Old Gen] [4.3gb]->[4.3gb]/[5.8gb]}{[CMS Perm
Gen] [36.9mb]->[36.9mb]/[84mb]}
[2012-09-27 11:30:55,911][WARN ][monitor.jvm ] [Node1]
[gc][ParNew][7463][2054] duration [1.8s], collections [1]/[2.1s], total
[1.8s]/[1.3m], memory [1.8gb]->[1.7gb]/[5.9gb], all_pools {[Code Cache]
[5.8mb]->[5.8mb]/[48mb]}{[Par Eden Space]
[120.9mb]->[1.1mb]/[133.1mb]}{[Par Survivor Space]
[9.3mb]->[9.5mb]/[16.6mb]}{[CMS Old Gen] [1.6gb]->[1.6gb]/[5.8gb]}{[CMS
Perm Gen] [36.9mb]->[36.9mb]/[84mb]}
Can you please suggest what I am doing wrong?
Thanks
Anuj
On Wednesday, 26 September 2012 15:02:49 UTC+5:30, David Pilato wrote:
Did you log in again as the elasticsearch user before restarting ES?
Did you restart your machine?
See here if it helps:
http://www.walkernews.net/2011/05/02/how-to-apply-limits-conf-settings-immediately-without-reboot-linux-system/
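One way to check (a sketch; it assumes ES runs as the user elasticsearch, matching the limits.conf entries quoted below):

    # Open a fresh login shell as the ES user and print its open files limit:
    su - elasticsearch -c 'ulimit -n'
    # If this still prints 1024, the limits.conf change is not applied to new
    # sessions (e.g. pam_limits is not enabled for that login path), and ES
    # keeps inheriting the old limit no matter how often you restart it.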
David.
On 26 September 2012 at 10:28, Anuj <anuj....@orkash.com> wrote:
Thanks, Amit, for your reply.
To raise the limit I followed these steps: I added the following lines to /etc/security/limits.conf:
elasticsearch soft nofile 32000
elasticsearch hard nofile 32000
After making these changes I started ES again, but bigdesk still shows the max open files as 1024.
Can you please explain why bigdesk shows 1024 max files when I have set the limit to 32000?
On Wednesday, 26 September 2012 13:31:06 UTC+5:30, Amit Singh wrote:
Hi,
For the too-many-open-files issue you need to raise the open files limit in the platform/OS on which you are running ES. So if you are running ES on Linux/Unix, you need to raise the shell's open files limit, as David mentioned (note that ulimit -n governs open files, while ulimit -l governs locked memory).
This configuration has different syntax on different platforms (Windows, Linux, etc.), so you need to find out yours; a Linux sketch follows below.
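On Linux, for example (a sketch; the 32000 value matches the limits.conf entries quoted above, and the setting only affects the shell session that then starts ES):

    ulimit -n 32000   # raise the max open files for the current shell session
    ulimit -n         # verify: prints the current soft limit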
This issue occurs when you try to create too many indexes: the server then opens that many files and channels for the indexes.
I hope this helps
Thanks
Amit
On Wed, Sep 26, 2012 at 1:06 PM, David Pilato da...@pilato.fr wrote:
Are you sure that there is only one ES node running on this instance?
On 26 September 2012 at 09:11, Anuj <anuj....@orkash.com> wrote:
Hi David,
I started indexing my data again with the same configuration as mentioned in my post above, but this time I am getting a too-many-open-files exception.
On Wednesday, 26 September 2012 12:34:21 UTC+5:30, David Pilato wrote:
Sorry, forget the ulimit.
--
David
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
On 26 September 2012 at 09:03, David Pilato <da...@pilato.fr> wrote:
Did you enter
ulimit -l unlimited
before starting ES? When does the error occur? After the first inserts?
--
David
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
On 26 September 2012 at 08:27, Anuj <anuj....@orkash.com> wrote:
Hi David,
I am setting ES_MIN_MEM and ES_MAX_MEM in the elasticsearch.in.sh file:
ES_MIN_MEM=8g
ES_MAX_MEM=8g
and in elasticsearch.yml I am setting the following properties:
index.number_of_shards: 4
index.number_of_replicas: 0
bootstrap.mlockall: false
cache.memory.direct: false
On Wednesday, 26 September 2012 11:15:20 UTC+5:30, David Pilato wrote:
What are your memory options?
Are you sure you gave 8GB to ES?
--
David
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
On 26 September 2012 at 07:36, Anuj <anuj....@orkash.com> wrote:
Hi,
I am creating an index using ES on a server (16GB RAM, 8 cores). I have allocated 8GB RAM to ES. It has only one node with 4 shards. I am using the bulk API to index my data: I send 200 docs in one batch, and I have 2000 batches in total. But I get a Java heap space exception every time I try to index my data. I have tried reducing/increasing the RAM for ES and different memory parameters in elasticsearch.yml, but nothing has worked for me.
Please, can anyone suggest how I can solve this issue?
Thanks in advance
Anuj
--
David Pilato
http://www.scrutmydocs.org/
http://dev.david.pilato.fr/
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs