Failed to update master on updated mapping for index (unable to create new native thread)

Hi there,

We have the following problem (or misconfiguration) with Elasticsearch:

We have installed the latest version of Elasticsearch (0.90.5) from the deb
package and we run it for multiple clients, but this type of error only
occurs on this one customer's system and not for the others.
The only difference between the clients' servers is the Elasticsearch
configuration, so I suspect the configuration file may be wrong.

Here is the working configuration, although sometimes an OutOfMemoryError
exception still occurs, but that's another topic.


cluster.name: "xxxxxxxx"
thrift.port: 9500

index.cache.field.type: soft
index.cache.field.max_size: 50000

indices.store.throttle.type: merge
indices.store.throttle.max_bytes_per_sec: 5mb

indices.memory.index_buffer_size: 50%
index.refresh_interval: 30
index.translog.flush_threshold_ops: 50000
index.store.compress.stored: true

#Search pool

threadpool.search.type: fixed
threadpool.search.size: 50
threadpool.search.queue_size: 100

#Bulk pool

threadpool.bulk.type: fixed
threadpool.bulk.size: 50
threadpool.bulk.queue_size: 300

#Index pool

threadpool.index.type: fixed
threadpool.index.size: 50
threadpool.index.queue_size: 100

Here is the other configuration, which contains bigger values because the
server has much more memory (120GB) than the others, and we give 30GB to
Elasticsearch.


cluster.name: xxx
thrift.port: 9500
bootstrap.mlockall: true
index.cache.field.type: soft
index.cache.field.max_size: 200000
index.cache.field.expire: 15m
indices.store.throttle.type: merge
indices.store.throttle.max_bytes_per_sec: 25mb
indices.memory.index_buffer_size: 30%
index.refresh_interval: 30
index.translog.flush_threshold_ops: 200000
index.store.compress.stored: true
index.merge.policy.max_merged_segment: 10g
indices.fielddata.cache.size: 30%
indices.fielddata.cache.expire: 20m
index.merge.policy.max_merge_size: 5g

#Search pool
threadpool.search.type: fixed
threadpool.search.size: 10000
threadpool.search.queue_size: 50000

#Bulk pool
threadpool.bulk.type: fixed
threadpool.bulk.size: 10000
threadpool.bulk.queue_size: 30000

#Index pool
threadpool.index.type: fixed
threadpool.index.size: 10000
threadpool.index.queue_size: 50000

I don't know what is causing the OutOfMemoryError exception.

[2013-10-11 10:58:29,409][WARN ][action.bulk ] [Rama-Tut]
failed to update master on updated mapping for index
[normalized-2013-10-11], type [normalized] and source
[{"normalized":{"properties":{"__uidentifer":{"type":"long"},"_day":{"type":"date","format":"dateOptionalTime"},"_event_id":{"type":"long"},"_gparent_id":{"type":"long"},"_container_id":{"type":"long"},"_subcat_id":{"type":"long"},"_time_is":{"type":"date","format":"dateOptionalTime"},"action":{"type":"string"},"date":{"type":"date","format":"dateOptionalTime"},"thesource":{"type":"string"},"message":{"type":"string","analyzer":"standard"},"service":{"type":"string"},"username":{"type":"string"}}}}]
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:693)
at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:949)
at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1360)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor.execute(PrioritizedEsThreadPoolExecutor.java:95)
at org.elasticsearch.cluster.service.InternalClusterService.submitStateUpdateTask(InternalClusterService.java:237)
at org.elasticsearch.cluster.metadata.MetaDataMappingService.updateMapping(MetaDataMappingService.java:281)
at org.elasticsearch.cluster.action.index.MappingUpdatedAction.masterOperation(MappingUpdatedAction.java:79)
at org.elasticsearch.cluster.action.index.MappingUpdatedAction.masterOperation(MappingUpdatedAction.java:45)
at org.elasticsearch.action.support.master.TransportMasterNodeOperationAction$2.run(TransportMasterNodeOperationAction.java:144)
at org.elasticsearch.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:293)
at org.elasticsearch.action.support.master.TransportMasterNodeOperationAction.innerExecute(TransportMasterNodeOperationAction.java:140)
at org.elasticsearch.action.support.master.TransportMasterNodeOperationAction.doExecute(TransportMasterNodeOperationAction.java:94)
at org.elasticsearch.action.support.master.TransportMasterNodeOperationAction.doExecute(TransportMasterNodeOperationAction.java:42)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:61)
at org.elasticsearch.action.support.master.TransportMasterNodeOperationAction.execute(TransportMasterNodeOperationAction.java:89)
at org.elasticsearch.action.bulk.TransportShardBulkAction.updateMappingOnMaster(TransportShardBulkAction.java:612)
at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:339)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.performOnPrimary(TransportShardReplicationOperationAction.java:521)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction$1.run(TransportShardReplicationOperationAction.java:419)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)

I would be very happy if someone who has had a similar problem with
Elasticsearch could explain what I should do about it.


You have very high thread pool sizes configured for the fixed pools. This will
exhaust the number of threads the JVM can create, and it will then fail with an OOM.
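
For illustration only, sizes in the same ballpark as the defaults would look
roughly like the config below. The 16-core figure is just an assumed example;
size the pools to the actual core count of the box.

#assumed example for a 16-core machine
threadpool.search.type: fixed
threadpool.search.size: 48           # around 3x cores for search
threadpool.search.queue_size: 1000

threadpool.bulk.type: fixed
threadpool.bulk.size: 16             # roughly one thread per core
threadpool.bulk.queue_size: 50

threadpool.index.type: fixed
threadpool.index.size: 16
threadpool.index.queue_size: 200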

Jörg
On 11.10.2013 14:17, "onthefloorr" onthefloorr@gmail.com wrote:


Is this a typo, or did you really fix the bulk thread pool at 10000 threads?
That is way too high; you should probably just stick to the defaults we ship.
If you set the size that high, you will keep all of those threads around,
which can consume a lot of resources. I don't think you should go higher than
the number of cores in the box for bulk as well as indexing.
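
As a rough sanity check (assuming the usual 64-bit JVM default thread stack
size of about 1MB, i.e. -Xss1m):

  3 pools x 10000 threads     = 30000 worker threads
  30000 threads x ~1MB stack  ≈ ~30GB of thread stacks alone

That is on top of the 30GB heap, and far beyond typical per-process thread
limits on Linux, so the JVM hits "unable to create new native thread" long
before the pools are actually full.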

Can you elaborate a bit on the hardware, e.g. how many cores does the box
have?

simon

On Friday, October 11, 2013 2:17:52 PM UTC+2, onthefloorr wrote:


Thank you for your answers.

I also think the problem was caused by setting the thread pool sizes too high.
I hadn't read the Elasticsearch documentation thoroughly and thought that the
pool size (threadpool.size) meant the same thing as the queue size
(threadpool.queue_size).
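
For what it's worth, the two settings control different things, roughly like
this (the numbers below are just an example):

threadpool.bulk.type: fixed
threadpool.bulk.size: 16          # worker threads actually executing bulk requests
threadpool.bulk.queue_size: 300   # requests allowed to wait; beyond this they get rejected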

In any case, I will try it with a smaller value next week.

Thanks again for your help, and have a great day.

On Friday, October 11, 2013 2:17:52 PM UTC+2, onthefloorr wrote:
