OutOfMemory indexing documents

I'm trying to index 4000 documents, however I always get an
OutOfMemoryError.
My simple code is a loop around:
public void indicizzaDocumento(String xpIndice, InputStream xpContent,
        HashMap<String, Object> xpParametri) throws IOException {
    HashMap<String, Object> mappa = (HashMap<String, Object>) xpParametri.clone();
    long inizio = new Date().getTime();
    byte[] contiene = IOUtils.toByteArray(xpContent);
    String contenuto = new String(JsonUtils.encode(contiene));
    mappa.put(CONTENUTO, contenuto);
    StringWriter sw = new StringWriter();
    mapper.writeValue(sw, mappa);
    String json = sw.getBuffer().toString();
    client.prepareIndex(INDICE, TIPO, xpIndice).setSource(json).execute().actionGet();
    long gap = new Date().getTime() - inizio;
    System.out.println("millis for " + xpIndice + " " + gap);
}

What's wrong?
I tried profiling the code but found nothing relevant.
Tks
Tullio
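
For reference, the same loop body can be written so that the content bytes are
handed straight to XContentBuilder, which base64-encodes them itself; that avoids
keeping several full String copies of the encoded content alive per document.
A minimal sketch, assuming the same client and the CONTENUTO/INDICE/TIPO constants
used above:

    byte[] contiene = IOUtils.toByteArray(xpContent);
    XContentBuilder builder = XContentFactory.jsonBuilder().startObject();
    for (Map.Entry<String, Object> entry : xpParametri.entrySet()) {
        builder.field(entry.getKey(), entry.getValue());
    }
    builder.field(CONTENUTO, contiene);   // a byte[] value is written as base64 by the builder
    builder.endObject();
    client.prepareIndex(INDICE, TIPO, xpIndice)
          .setSource(builder)
          .execute()
          .actionGet();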

How big is a single doc? Do you start elasticsearch with any specific
settings?

The size is more or less 1 MB each (mean).
Nothing relevant, I believe:
. shards: 5
. replicas: 1
. type: local
. index.mapping.attachment.indexed_chars: -1
Tks
Tullio
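
The shard/replica counts and the indexed_chars limit are index settings; as a
sketch of where they would be applied when creating the index (the index name
"documenti" is made up here). Note that indexed_chars = -1 tells the attachment
mapper to extract and index the whole document rather than only a limited prefix,
which raises the memory needed per 1 MB attachment:

    client.admin().indices().prepareCreate("documenti")
        .setSettings(ImmutableSettings.settingsBuilder()
            .put("index.number_of_shards", 5)
            .put("index.number_of_replicas", 1)
            .put("index.mapping.attachment.indexed_chars", -1)
            .build())
        .execute().actionGet();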

Tullio,

Try increasing your -Xmx parameter.
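
One quick way to confirm the new limit is actually picked up by the JVM doing the
indexing (a small check, not specific to Elasticsearch):

    long maxHeap = Runtime.getRuntime().maxMemory();
    System.out.println("max heap: " + (maxHeap / (1024 * 1024)) + " MB");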

You may want to get SPM from Sematext so you can visually see the size of your
JVM heap and JVM garbage collection, among other things.

Otis


