Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread

Hi guys,

I'm trying to replace Lucene and Tika in an existing application. I
started with the service that should index documents users upload, and
I'm using elasticsearch-mapper-attachments for that. However, I'm stuck
on an exception whose cause I can't find:

Exception in thread "elasticsearch[Prosolo Node][generic][T#3]"
Exception: java.lang.OutOfMemoryError thrown from the
UncaughtExceptionHandler in thread "elasticsearch[Prosolo
Node][generic][T#3]"

My Elasticsearch server is on localhost. I tried to increase
indices.memory.index_buffer_size, but it didn't help.
This is the code that I'm using to index the file:

public void indexFile(String absolutePath, long id) throws IOException {
    String idxName = "documents";
    String idxType = "attachment";

    String encodedFile = Base64.encodeFromFile(absolutePath);
    // Client client = ElasticSearchFactory.prepareCreateIndexResponseForAttachment(idxName);
    Client client = ElasticSearchFactory.getClient();
    XContentBuilder source = jsonBuilder().startObject()
            .field("doc", encodedFile).endObject();
    IndexResponse idxResp = client.prepareIndex().setIndex(idxName).setType(idxType).setId(String.valueOf(id))
            .setSource(source).setRefresh(true).execute().actionGet();
    client.close();
}
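As a side note (not necessarily the cause of the PermGen error, which is about classes rather than data): Base64.encodeFromFile reads the entire file into memory, and Base64 output is a third larger than the input, so large uploads put real pressure on the heap. A minimal sketch of the overhead, using the JDK's java.util.Base64 (available since Java 8) purely for illustration:

```java
import java.util.Base64;

public class Base64Overhead {
    public static void main(String[] args) {
        // Pretend this is a 3 MB uploaded file held fully in memory.
        byte[] raw = new byte[3_000_000];

        // Base64 emits 4 output bytes for every 3 input bytes (~33% larger),
        // and as a Java String the encoded form costs 2 bytes per char.
        String encoded = Base64.getEncoder().encodeToString(raw);

        System.out.println(raw.length);       // 3000000
        System.out.println(encoded.length()); // 4000000
    }
}
```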

and this is the log I got, but I can't figure anything out from it:

2013-06-17 23:02:43,209 DEBUG org.elasticsearch.common.logging.log4j.Log4jESLogger.internalDebug(Log4jESLogger.java:94) - [Prosolo Node] processing [zen-disco-receive(from master [[Growing Man][8c9IXH8_Qcy80LgSexjB-Q][inet[/192.168.1.65:9300]]])]: execute
2013-06-17 23:02:43,210 DEBUG org.elasticsearch.common.logging.log4j.Log4jESLogger.internalDebug(Log4jESLogger.java:94) - [Prosolo Node] cluster state updated, version [17], source [zen-disco-receive(from master [[Growing Man][8c9IXH8_Qcy80LgSexjB-Q][inet[/192.168.1.65:9300]]])]
2013-06-17 23:02:43,211 DEBUG org.elasticsearch.common.logging.log4j.Log4jESLogger.internalDebug(Log4jESLogger.java:94) - [Prosolo Node] [documents] creating index
2013-06-17 23:02:43,212 DEBUG org.elasticsearch.common.logging.log4j.Log4jESLogger.internalDebug(Log4jESLogger.java:94) - [Prosolo Node] creating Index [documents], shards [5]/[1]
2013-06-17 23:02:43,356 DEBUG org.elasticsearch.common.logging.log4j.Log4jESLogger.internalDebug(Log4jESLogger.java:94) - [Prosolo Node] [documents] using dynamic[true], default mapping: default_mapping_location[null], loaded_from[jar:file:/home/zoran/.m2/repository/org/elasticsearch/elasticsearch/0.90.1/elasticsearch-0.90.1.jar!/org/elasticsearch/index/mapper/default-mapping.json] and source[{
"default":{
}
}]
2013-06-17 23:02:43,357 DEBUG org.elasticsearch.common.logging.log4j.Log4jESLogger.internalDebug(Log4jESLogger.java:94) - [Prosolo Node] [documents] using [resident] query cache with max_size [100], expire [null]
2013-06-17 23:02:43,363 DEBUG org.elasticsearch.common.logging.log4j.Log4jESLogger.internalDebug(Log4jESLogger.java:94) - [Prosolo Node] [documents] using index.store.throttle.type [node], with index.store.throttle.max_bytes_per_sec [0b]
2013-06-17 23:02:43,398 DEBUG org.elasticsearch.common.logging.log4j.Log4jESLogger.internalDebug(Log4jESLogger.java:94) - [Prosolo Node] [documents] adding mapping [attachment], source [{"attachment":{"properties":{"doc":{"type":"string"}}}}]
2013-06-17 23:02:43,434 DEBUG org.elasticsearch.common.logging.log4j.Log4jESLogger.internalDebug(Log4jESLogger.java:94) - [Prosolo Node] [documents][1] creating shard
2013-06-17 23:02:43,435 DEBUG org.elasticsearch.common.logging.log4j.Log4jESLogger.internalDebug(Log4jESLogger.java:94) - [Prosolo Node] [documents] creating shard_id [1]
2013-06-17 23:02:43,501 DEBUG org.elasticsearch.common.logging.log4j.Log4jESLogger.internalDebug(Log4jESLogger.java:94) - [Prosolo Node] [documents][1] Using [keep_only_last] deletion policy
2013-06-17 23:02:43,503 DEBUG org.elasticsearch.common.logging.log4j.Log4jESLogger.internalDebug(Log4jESLogger.java:94) - [Prosolo Node] [documents][1] using [tiered] merge policy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], reclaim_deletes_weight[2.0], async_merge[true]
2013-06-17 23:02:43,503 DEBUG org.elasticsearch.common.logging.log4j.Log4jESLogger.internalDebug(Log4jESLogger.java:94) - [Prosolo Node] [documents][1] using [concurrent] merge scheduler with max_thread_count[3]
2013-06-17 23:02:43,506 DEBUG org.elasticsearch.common.logging.log4j.Log4jESLogger.internalDebug(Log4jESLogger.java:94) - [Prosolo Node] [documents][1] state: [CREATED]
2013-06-17 23:02:43,507 DEBUG org.elasticsearch.common.logging.log4j.Log4jESLogger.internalDebug(Log4jESLogger.java:94) - [Prosolo Node] [documents][1] interval [5s], flush_threshold_ops [5000], flush_threshold_size [200mb], flush_threshold_period [30m]
2013-06-17 23:02:43,511 DEBUG org.elasticsearch.common.logging.log4j.Log4jESLogger.internalDebug(Log4jESLogger.java:94) - [Prosolo Node] [documents][1] state: [CREATED]->[RECOVERING], reason [from [Growing Man][8c9IXH8_Qcy80LgSexjB-Q][inet[/192.168.1.65:9300]]]
2013-06-17 23:02:43,512 DEBUG org.elasticsearch.common.logging.log4j.Log4jESLogger.internalDebug(Log4jESLogger.java:94) - [Prosolo Node] processing [zen-disco-receive(from master [[Growing Man][8c9IXH8_Qcy80LgSexjB-Q][inet[/192.168.1.65:9300]]])]: done applying updated cluster_state
2013-06-17 23:02:43,706 DEBUG org.elasticsearch.common.logging.log4j.Log4jESLogger.internalDebug(Log4jESLogger.java:94) - [Prosolo Node] [documents][1] starting engine
Exception in thread "elasticsearch[Prosolo Node][generic][T#3]"
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "elasticsearch[Prosolo Node][generic][T#3]"
2013-06-17 23:02:59.133:WARN::Error for /prosolo/index.xhtml
java.lang.OutOfMemoryError: PermGen space
Exception in thread "elasticsearch[Prosolo Node][generic][T#2]"
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "elasticsearch[Prosolo Node][generic][T#2]"
2013-06-17 23:03:04.041:WARN::/prosolo/index.xhtml: java.lang.OutOfMemoryError: PermGen space

Could you please give me some advice on what the problem might be here?

Thanks,
Zoran

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.

What are your memory settings? Interesting that you ran out of PermGen
space.
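PermGen holds loaded classes, not your data, so raising it to 2 GB is rarely the fix; if it fills up even at that size, something is loading (and never unloading) classes over and over. As a sketch only, a combination often suggested for Java 6/7 HotSpot is to keep PermGen at a sane size and enable class unloading with the CMS collector (values here are examples, not recommendations for your app):

```shell
# Example only: cap PermGen and let the CMS collector unload
# classes that are no longer referenced (Java 6/7 HotSpot flags).
export MAVEN_OPTS="-Xms512m -Xmx2048m -XX:MaxPermSize=512m \
  -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled"
```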

--
Ivan


Hi Ivan,

I'm running my application with the following settings:
MAVEN_OPTS=-Xms512m -Xmx2048m -XX:PermSize=2048m -XX:MaxPermSize=2048m

Prior to adding Elasticsearch, I was running my application with these
settings and never had this problem:
MAVEN_OPTS=-Xms512m -Xmx2048m -XX:PermSize=256m -XX:MaxPermSize=256m

Zoran


So you are running an Elasticsearch instance embedded in your app and you run everything from Maven, right?
Is it for testing purpose (JUnit or something like that)?
Is it a Webapp? Do you use Jetty to run it from Maven?

I don't understand the full picture here.

--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs


Hi David,

It is a web application (JSF, Spring, Hibernate, Maven). Previously I was
using Lucene and Hibernate Search, but I removed them as ES had conflicts
with them.
I was running the application from Eclipse using the Jetty server and
tried to upload a document as a user. I tried embedded ES at the
beginning, but then I installed the ES server and ran an ES client that
accesses the node on the server.
Each time, the problem happens when the following line is executed:

IndexResponse idxResp = client.prepareIndex().setIndex(idxName).setType(idxType).setId(String.valueOf(id))
        .setSource(source).setRefresh(true).execute().actionGet();

Other than the code I posted previously, this is the only code I have
related to Elasticsearch:

ImmutableSettings.Builder settings =
        ImmutableSettings.settingsBuilder().loadFromClasspath("elasticsearch.yml");
Node node = NodeBuilder.nodeBuilder().settings(settings).build().start();
client = new TransportClient(settings).addTransportAddress(
        new InetSocketTransportAddress("localhost", 9300));
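Two things in the snippet above may matter: it starts both a full embedded Node and a TransportClient (only one is needed when the server runs separately), and indexFile() closes the shared client after every call, so each upload has to build a fresh one. Creating a node or client per request loads classes and spawns threads each time, which is exactly the kind of churn that exhausts PermGen under Jetty. A rough sketch of holding one client for the whole webapp (ClientHolder and createClient are hypothetical names; the Object stand-in would really be org.elasticsearch.client.Client, so the sketch compiles without the ES jar):

```java
// Sketch: create the client once, share it, and never close it per request.
public class ClientHolder {
    // In real code this would be org.elasticsearch.client.Client;
    // Object is a stand-in so this sketch is self-contained.
    private static volatile Object client;

    public static Object getClient() {
        // Double-checked locking: the client is built once, lazily.
        if (client == null) {
            synchronized (ClientHolder.class) {
                if (client == null) {
                    client = createClient();
                }
            }
        }
        return client;
    }

    private static Object createClient() {
        // Stand-in for: new TransportClient(settings)
        //     .addTransportAddress(new InetSocketTransportAddress("localhost", 9300));
        return new Object();
    }

    public static void main(String[] args) {
        // Every caller sees the same instance.
        System.out.println(getClient() == getClient()); // true
    }
}
```

The client would then be closed once, at application shutdown, rather than inside indexFile().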

I suppose the problem is somewhere in this client, as the server itself
works fine, and a modified version of this application
(https://github.com/shairontoledo/elasticsearch-attachment-tests) works
fine with the ES server and doesn't produce this problem.

Zoran

On Monday, 17 June 2013 23:12:49 UTC-7, Zoran Jeremic wrote:

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.

I see the whole picture now. Thanks!

Are you sure you have the same elasticsearch version on both sides (the Maven dependency and the standalone node)?
What is your document size?

About this:

ImmutableSettings.Builder settings = ImmutableSettings.settingsBuilder().loadFromClasspath("elasticsearch.yml");
Node node = NodeBuilder.nodeBuilder().settings(settings).build().start();
client = new TransportClient(settings).addTransportAddress(new InetSocketTransportAddress("localhost", 9300));

You no longer need to start a Node, since you use a TransportClient to connect to an external node.
Note that if you only need to set the cluster name from elasticsearch.yml, you can also build the settings like this:
Settings settings = ImmutableSettings.settingsBuilder()
.put("cluster.name", "myClusterName").build();
Client client = new TransportClient(settings).addTransportAddress(new InetSocketTransportAddress("localhost", 9300));

Also, if you are running everything from Eclipse, be aware that due to a very old guava issue, hot class reload does not work well: some elasticsearch threads are not stopped properly when your web app restarts within Eclipse. So, basically, you'll get some OOM issues…
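One way to guard against that, whatever the underlying cause, is to make sure the client is closed exactly once when the webapp stops, e.g. from a ServletContextListener or a JVM shutdown hook. A minimal sketch of the pattern, with a plain AutoCloseable standing in for the Elasticsearch Client (the ClientHolder name is made up for illustration):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Owns a single closeable resource (e.g. the Elasticsearch Client) and
// guarantees close() runs at most once, even if invoked from both a
// ServletContextListener and a JVM shutdown hook.
public class ClientHolder {
    private final AutoCloseable client;
    private final AtomicBoolean closed = new AtomicBoolean(false);

    public ClientHolder(AutoCloseable client) {
        this.client = client;
        // Also close on JVM shutdown, in case the container never calls close().
        Runtime.getRuntime().addShutdownHook(new Thread() {
            @Override
            public void run() {
                close();
            }
        });
    }

    public void close() {
        if (closed.compareAndSet(false, true)) {  // idempotent: runs only once
            try {
                client.close();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

    public boolean isClosed() {
        return closed.get();
    }
}
```

Calling close() from the webapp's shutdown callback stops the client's threads before the old classloader is discarded, which is what keeps them from piling up across restarts.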

--
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet | @elasticsearchfr | @scrutmydocs

On 19 June 2013 at 09:55, Zoran Jeremic zoran.jeremic@gmail.com wrote:

Hi David,

It is a web application (JSF, Spring, Hibernate, Maven). Previously I was using Lucene and Hibernate Search, but I removed them because ES had conflicts with them.
I was running the application from Eclipse using the Jetty server and tried to upload a document as a user. I tried embedded ES at the beginning, but then I installed the ES server and ran an ES client that accesses the node on the server.
Each time, the problem happens when the following line is executed:

IndexResponse idxResp = client.prepareIndex().setIndex(idxName).setType(idxType).setId(String.valueOf(id))
.setSource(source).setRefresh(true).execute().actionGet();


Zoran


Hi David,

Thank you for your help with this.

Are you sure you have the same elasticsearch version on both sides (Maven and
standalone nodes)?

Yes, I'm sure about it:

{
  "ok" : true,
  "status" : 200,
  "name" : "Hrimhari",
  "version" : {
    "number" : "0.90.1",
    "snapshot_build" : false,
    "lucene_version" : "4.3"
  },
  "tagline" : "You Know, for Search"
}

	<dependency>
		<groupId>org.elasticsearch</groupId>
		<artifactId>elasticsearch</artifactId>
		<version>0.90.1</version>
	</dependency>
	<dependency>
		<groupId>org.elasticsearch</groupId>
		<artifactId>elasticsearch-mapper-attachments</artifactId>
		<version>1.7.0</version>
	</dependency>

What is your document size?

I tried different document sizes and types (PDFs from 40 KB to 500 KB, a 1.5 KB txt file).

You don't need anymore to start a Node as you use a transport client to
connect to an external node.

Note that if you only need to set cluster name from elasticsearch.yml,
you can build it also like this:

I did as you suggested. I also increased PermSize to 4 GB, indexed only small documents, and tried running both from Eclipse and from the terminal;
I don't get the OutOfMemoryError anymore. However, indexing still fails, now with the following message:

error:[documents][2] [2] shardIt, [0] active : Timeout waiting for [1m], request: index {[documents][attachment][1835012], source[{"doc":"MS04ODgtNDcyLTIyMjIgIG9yICAxLTg2Ni04MjItMzIzMgoK"}]}

The IndexResponse is null after this error. This happens most of the time; sometimes I get "error: No node available" instead.

Several times, though not very often, the indexing completed successfully.
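For what it's worth, a "[0] active : Timeout waiting for [1m]" on a freshly created index usually means the request was routed before any shard became active, so waiting for the index to become ready before the first index call (e.g. via the cluster health API) can help. A generic retry-helper sketch; the readiness check itself is left as a stand-in, since the exact health call depends on the client version:

```java
// Polls a readiness check until it succeeds or the attempts run out.
// In practice the check would wrap something like a cluster-health request.
public class WaitFor {
    public interface Check {
        boolean ready();
    }

    public static boolean ready(Check check, int attempts, long sleepMillis)
            throws InterruptedException {
        for (int i = 0; i < attempts; i++) {
            if (check.ready()) {
                return true;        // cluster (or index) is usable now
            }
            Thread.sleep(sleepMillis);
        }
        return false;               // give up so the caller can fail fast
    }
}
```

Failing fast with a clear message beats waiting a minute for the implicit per-request timeout.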

Also, if you are running everything from eclipse, be aware that due to a
very old guava issue, hot class reload does not work well and some
elasticsearch threads are not stopped properly when your web app restarts
within eclipse. So, basically, you'll get some OOM issues…

I have some guava dependencies in the project. Could that be the issue?

<!-- Guava dependencies -->
    <dependency>
        <groupId>com.google.guava</groupId>
        <artifactId>guava</artifactId>
        <version>14.0.1</version>
    </dependency>

Thanks,
Zoran


Hi David,

I finally solved this issue. As usual, it was a simple mistake on my side:
I hadn't set MAVEN_OPTS properly for the Maven plugin in Eclipse. I had set
an environment variable instead of the JRE VM arguments. Once I attached a
profiler to the process, I realized the JVM was running with default memory
settings instead of the ones I had configured.
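For reference, when launching Maven from a terminal the equivalent fix is to export MAVEN_OPTS so the launched JVM actually picks up the sizing flags. The values below are illustrative, and -XX:MaxPermSize only applies up to Java 7, since PermGen was removed in Java 8:

```shell
# Give the Maven-launched JVM more heap and PermGen space (values illustrative).
export MAVEN_OPTS="-Xms512m -Xmx2g -XX:MaxPermSize=512m"
# then start the app as usual, e.g.: mvn jetty:run
echo "$MAVEN_OPTS"
```

A profiler (or `jinfo`) on the running process confirms whether the flags actually took effect.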

Thanks for your help,
Zoran


Cool. Thanks for the follow up.

David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet | @elasticsearchfr | @scrutmydocs

On 20 June 2013 at 08:01, Zoran Jeremic zoran.jeremic@gmail.com wrote:

Hi David,

I finally solved this issue. As usually it was a stupid mistake on my side. I didn't properly set the MAVEN_OPTS at maven plugin in eclipse. Instead of JRE variable I setup environmental variable. Once I started profiler on the process I realized that dedicated values are default instead of those I've setup.

Thanks for your help,
Zoran


Hi David,

I got the same error with 0.90.2, an in-memory node, and Ant instead of Maven. There are no documents in the index; I just create some indices during the JUnit tests and delete them afterwards.
ant settings: -Xmx1g -XX:MaxPermSize=1g
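One way to check whether those -Xmx / -XX:MaxPermSize values actually reached the forked test JVM (which turned out to be Zoran's problem earlier in the thread) is to print the limits from inside the tests. This is a hypothetical helper, not code from the thread:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

// Hypothetical helper: reports the memory limits the running JVM actually
// got, to confirm that ANT_OPTS/MAVEN_OPTS were applied to the forked JVM.
public class JvmMemCheck {

    // Max heap in effect, i.e. what -Xmx resolved to, in bytes.
    public static long maxHeapBytes() {
        return Runtime.getRuntime().maxMemory();
    }

    public static void main(String[] args) {
        System.out.println("max heap bytes: " + maxHeapBytes());
        // Pre-Java-8 JVMs expose a "Perm Gen" pool; Java 8+ exposes "Metaspace".
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            String name = pool.getName();
            if (name.contains("Perm") || name.contains("Metaspace")) {
                System.out.println(name + " max bytes: " + pool.getUsage().getMax());
            }
        }
    }
}
```

If the printed values are the JVM defaults rather than what you set in ANT_OPTS, the options are not reaching the test JVM.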

private static Node localMemoryNode = null;
private static Client client = null;

public static synchronized Client getSearchClient() {
    if (localMemoryNode != null && !localMemoryNode.isClosed()) {
        if (client == null) {
            client = localMemoryNode.client();
        }
        return client;
    }
    throw new IllegalStateException(
            "local in-memory search node was not started, not returning a client");
}

public static synchronized Node initSearchNode() {
    if (localMemoryNode == null) {
        Node node = nodeBuilder()
                .clusterName(CLUSTER_NAME)
                .settings(internalNodeSettings("localUnitTestDataNode"))
                .local(true)
                .data(true)
                .build().start();
        localMemoryNode = node;
        return node;
    }
    return localMemoryNode;
}

private static Builder internalNodeSettings(String nodeName) {
    return builder()
            .put("name", nodeName)
            .put("discovery.zen.ping.multicast.enabled", "false")
            .put("discovery.zen.ping.unicast.hosts", "localhost")
            .put("index.store.type", "memory")
            .put("discovery.initial_state_timeout", "5s");
}

Any ideas?

Best regards,
Daniel


By the way: why does it take 10s to close such a node?

Thx,
Daniel


The error happens only if an index is created.

