I'm getting closer to opening an issue against ES 0.90.3. But first, I'd
like to see what others think. Here's my scenario:
I have a test driver that can spin up N update threads and N query threads.
The update threads automatically generate a sequence of unique documents and
update them, then add each document's ID to a bigqueue instance, from which
the query threads read IDs and issue queries. Each document has a 10-second
time to live. The driver tracks all stats (elapsed time, total updates,
total queries, total elapsed time for all updates and queries, errors, and
so on).
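For anyone curious, the thread layout above can be sketched with plain JDK primitives. This is a minimal sketch, not the actual driver: an ArrayBlockingQueue stands in for bigqueue, and the spots where the real driver would index into or query Elasticsearch are stubbed out as comments.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class DriverSketch {
    static final BlockingQueue<String> ids = new ArrayBlockingQueue<>(10_000);
    static final AtomicLong updates = new AtomicLong();
    static final AtomicLong queries = new AtomicLong();

    public static void main(String[] args) throws Exception {
        int n = 4;                      // N update threads and N query threads
        int docsPerUpdater = 250;       // arbitrary count for the sketch
        ExecutorService pool = Executors.newFixedThreadPool(2 * n);

        for (int t = 0; t < n; t++) {
            final int threadId = t;
            pool.submit(() -> {         // update thread: generate unique docs, hand off IDs
                for (int i = 0; i < docsPerUpdater; i++) {
                    String docId = threadId + "-" + i;  // unique per thread
                    // (real driver: index the document here, with a 10s TTL)
                    updates.incrementAndGet();
                    ids.offer(docId);
                }
                return null;
            });
            pool.submit(() -> {         // query thread: read IDs, issue get-by-ID
                String docId;
                while ((docId = ids.poll(500, TimeUnit.MILLISECONDS)) != null) {
                    // (real driver: GET the document by docId here)
                    queries.incrementAndGet();
                }
                return null;
            });
        }
        pool.shutdown();
        pool.awaitTermination(30, TimeUnit.SECONDS);
        System.out.println("updates=" + updates.get() + " queries=" + queries.get());
    }
}
```

The queue is the only coordination point between the two sets of threads, which is why a single slow consumer never blocks the updaters.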
When I run this on my MacBook with 8 update threads and 8 query threads, I
see an update rate of over 60/second. When I run it on a Linux laptop (same
quad-core i7 CPU, very similar laptop-class disk drive), I see an update
rate of 268/second. Cool. No errors in either case.
When I pointed the driver (running on the MacBook) at a remote 3-node
cluster, added all 3 node addresses to the TransportClient in the driver,
and ran it, I got update errors galore. For example, I get lots of these
on the client (test driver) side:
org.elasticsearch.transport.TransportSerializationException: Failed to
deserialize exception response from stream
And on one of the servers, I see errors such as the following in the ES
log for the cluster:
Failed to execute [index {[rtctest][connection][3136393030303031363940766572697A6F6E2E636F6D], source[{"onet":"xxxx","orig":"1690000169@xxxx.com","term":"1700000170@xxxx.com"}]}]
org.elasticsearch.index.mapper.MapperParsingException: failed to parse [_ttl]
    at org.elasticsearch.index.mapper.core.AbstractFieldMapper.parse(AbstractFieldMapper.java:396)
    at org.elasticsearch.index.mapper.internal.TTLFieldMapper.postParse(TTLFieldMapper.java:167)
    at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:525)
    at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:451)
    at org.elasticsearch.index.shard.service.InternalIndexShard.prepareIndex(InternalIndexShard.java:329)
    at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:203)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.performOnPrimary(TransportShardReplicationOperationAction.java:521)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction$1.run(TransportShardReplicationOperationAction.java:419)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:619)
I then re-ran the test against the remote cluster, but this time I added
only one node's address to the TransportClient in the driver. The test ran
fine, with no update failures at all. It's as if issuing updates with more
than one host address added to the TransportClient causes the failures.
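For clarity, here is roughly how the driver wires up the client in the failing case. This is a sketch against the 0.90 TransportClient API; the cluster name and host names are placeholders, not my real ones.

```java
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;

public class ClientSetup {
    public static TransportClient build() {
        Settings settings = ImmutableSettings.settingsBuilder()
                .put("cluster.name", "mycluster")   // placeholder cluster name
                .build();
        // Adding all three node addresses triggers the failures;
        // adding only the first one does not.
        return new TransportClient(settings)
                .addTransportAddress(new InetSocketTransportAddress("node1", 9300))
                .addTransportAddress(new InetSocketTransportAddress("node2", 9300))
                .addTransportAddress(new InetSocketTransportAddress("node3", 9300));
    }
}
```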
There is only one document type, and its fields are not indexed (for
maximum update performance; all queries are get-by-ID). The default TTL is
given as 10s in the mappings, which is honored and processed (kind of cool,
since it makes the test case self-restarting!). There is one replica
defined.
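The index setup described above looks roughly like this. This is a sketch, not my exact settings: the type and field names are taken from the failed-index log line above, and localhost:9200 is a placeholder.

```shell
curl -XPUT 'http://localhost:9200/rtctest/' -d '{
  "settings": { "number_of_replicas": 1 },
  "mappings": {
    "connection": {
      "_ttl": { "enabled": true, "default": "10s" },
      "properties": {
        "onet": { "type": "string", "index": "no" },
        "orig": { "type": "string", "index": "no" },
        "term": { "type": "string", "index": "no" }
      }
    }
  }
}'
```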
And the cluster state stays green throughout all tests, even for the
failures.
Brian
--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.