Migration errors 0.20.1 to 0.90

Hi,
I have a problem importing data into the new Elasticsearch version 0.90.
When I imported data into ES 0.20.1 there was no problem, but since I
changed to 0.90 (and also upgraded the jar files in my Java importer),
I get this error after a few (130) documents have been added:

Exception in thread "calculations" org.elasticsearch.client.transport.NoNodeAvailableException: No node available
    at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:202)
    at org.elasticsearch.client.transport.support.InternalTransportClient.execute(InternalTransportClient.java:106)
    at org.elasticsearch.client.support.AbstractClient.index(AbstractClient.java:84)
    at org.elasticsearch.client.transport.TransportClient.index(TransportClient.java:310)
    at org.elasticsearch.action.index.IndexRequestBuilder.doExecute(IndexRequestBuilder.java:315)
    at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:62)
    at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:57)
    at com.pureholidayhomes.lib.es.ESConnector.send(ESConnector.java:30)
    at com.pureholidayhomes.lib.es.Calculations.run(Calculations.java:398)

Elasticsearch also prints:

[2013-05-20 16:24:50,735][INFO ][monitor.jvm] [Exterminator]
[gc][ConcurrentMarkSweep][473][30] duration [9s], collections [1]/[9.9s],
total [9s]/[26.5s], memory [1.4gb]->[1gb]/[1.9gb], all_pools
{[Code Cache] [5.2mb]->[5.2mb]/[48mb]}
{[Par Eden Space] [323.2mb]->[6.1mb]/[532.5mb]}
{[Par Survivor Space] [57.1mb]->[0b]/[66.5mb]}
{[CMS Old Gen] [1gb]->[1gb]/[1.3gb]}
{[CMS Perm Gen] [31.9mb]->[31.9mb]/[82mb]}

The first Elasticsearch instance runs with 1 GB of RAM, and the error
occurred after I imported about 60 documents.
The second runs with 2 GB of RAM, and the error occurred after about
130 documents.

What should I do?
Is it a known issue?

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.

This has nothing to do with migration errors. Your JVM performed a very
long GC of 9 seconds, which exceeds the default ping timeout of 5
seconds, so ES dropped the connection, assuming your JVM was simply too
busy. Try again and see if you can reproduce it. If you can, increase
the timeout to something like 10 seconds, or consider updating your
Java version.

Jörg
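If you do want to raise the timeout, one way to do it for a 0.90-era Java TransportClient is the `client.transport.ping_timeout` client setting (default 5s). This is a sketch, not the original poster's code; the host and port below are assumptions:

```java
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;

// Raise the client-side ping timeout from the 5s default so that a long
// GC pause on the node does not immediately look like a dead node.
Settings settings = ImmutableSettings.settingsBuilder()
        .put("client.transport.ping_timeout", "10s")
        .build();

TransportClient client = new TransportClient(settings)
        .addTransportAddress(new InetSocketTransportAddress("localhost", 9300));
```

Note this only papers over the symptom; the 9-second collection itself is worth investigating.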

On 20.05.13 16:53, Mte Per wrote:



I've updated the JVM to 64-bit Java 7u21, and nothing changed.

On Monday, May 20, 2013 at 18:01:06 UTC+2, Jörg Prante wrote:


CMS pauses that long are usually caused by one of two things. The first is
that the heap is fragmented enough (or too full) that the CMS collector
cannot work effectively, so it falls back to a full stop-the-world
collection. Often this means you aren't allocating enough memory to the
JVM for ES to run, but it can also point to a memory leak. The second
cause is running ES inside a virtual environment where the VM that ES
runs on is being starved of CPU cycles; in other words, the host machine
is overallocated.

My guess is that something isn't releasing resources that should be
released ... 130 docs doesn't seem like much unless they're really, really
large documents.
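To tell these cases apart, it can help to give the node a fixed, larger heap and turn on GC logging before re-running the import. A sketch, assuming the stock `bin/elasticsearch` launcher (which reads `ES_HEAP_SIZE`) and a writable log path; the 2g value and the path are examples, not recommendations:

```shell
# Fixed heap for the ES process; the launcher turns this into -Xms/-Xmx.
export ES_HEAP_SIZE=2g

# Standard HotSpot 6/7 GC-logging flags, so each collection is logged
# with a timestamp and per-pool sizes; appended via JAVA_OPTS.
export JAVA_OPTS="$JAVA_OPTS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/tmp/es-gc.log"
```

If the log shows old-gen occupancy climbing steadily across imports and never dropping after collections, that points at a leak rather than an undersized heap.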
