Commit failure using remote cluster

Hello,

I'm trying to use a remote Elasticsearch cluster from TitanDB, but I get the
following error on commit in Gremlin:

Could not commit transaction due to exception during persistence
Display stack trace? [yN] y
com.thinkaurelius.titan.core.TitanException: Could not commit transaction due to exception during persistence
    at com.thinkaurelius.titan.graphdb.transaction.StandardTitanTx.commit(StandardTitanTx.java:848)
    at com.thinkaurelius.titan.graphdb.blueprints.TitanBlueprintsGraph.commit(TitanBlueprintsGraph.java:41)
    at com.tinkerpop.blueprints.TransactionalGraph$commit.call(Unknown Source)
    at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:42)
    ...
Caused by: com.thinkaurelius.titan.diskstorage.PermanentStorageException: Unknown exception while executing index operation
    at com.thinkaurelius.titan.diskstorage.es.ElasticSearchIndex.convert(ElasticSearchIndex.java:169)
    at com.thinkaurelius.titan.diskstorage.es.ElasticSearchIndex.mutate(ElasticSearchIndex.java:305)
    at com.thinkaurelius.titan.diskstorage.indexing.IndexTransaction.flushInternal(IndexTransaction.java:88)
    at com.thinkaurelius.titan.diskstorage.indexing.IndexTransaction.commit(IndexTransaction.java:70)
    at com.thinkaurelius.titan.diskstorage.BackendTransaction.commit(BackendTransaction.java:71)
    at com.thinkaurelius.titan.graphdb.transaction.StandardTitanTx.commit(StandardTitanTx.java:841)
    ... 48 more
Caused by: org.elasticsearch.common.util.concurrent.UncategorizedExecutionException: Failed execution
    at org.elasticsearch.action.support.AdapterActionFuture.rethrowExecutionException(AdapterActionFuture.java:88)
    at org.elasticsearch.action.support.AdapterActionFuture.actionGet(AdapterActionFuture.java:49)
    at com.thinkaurelius.titan.diskstorage.es.ElasticSearchIndex.mutate(ElasticSearchIndex.java:298)
    ... 52 more
Caused by: java.lang.IndexOutOfBoundsException: Readable byte limit exceeded: 272
    at org.elasticsearch.common.netty.buffer.AbstractChannelBuffer.readByte(AbstractChannelBuffer.java:236)
    at org.elasticsearch.transport.netty.ChannelBufferStreamInput.readByte(ChannelBufferStreamInput.java:132)
    at org.elasticsearch.common.io.stream.AdapterStreamInput.readByte(AdapterStreamInput.java:35)
    at org.elasticsearch.common.io.stream.StreamInput.readBoolean(StreamInput.java:252)
    at org.elasticsearch.action.update.UpdateRequest.readFrom(UpdateRequest.java:566)
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleRequest(MessageChannelHandler.java:204)
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:108)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:74)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:109)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:90)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:724)
gremlin>

The number after "Readable byte limit exceeded:" varies on each attempt, but
it is never much more than the 272 shown above.

Context:
gremlin> g = TitanFactory.open("/root/titan-all-0.3.1/config/RemoteES.properties")
gremlin> g.createKeyIndex('Type', Vertex.class)
gremlin> g.createKeyIndex('Code', Vertex.class)
gremlin> g.makeType().name("Geo_Loc").dataType(Geoshape.class).unique(OUT).indexed("esremote", Vertex.class).makePropertyKey()
gremlin> g.addVertex(null, [Name:"alice", Boro:"Bronx", Type:"station", Lat:40.889, Lon:-73.898, Code:"1.01"])
gremlin> g.addVertex(null, [Name:"bob", Boro:"Bronx", Type:"station", Lat:40.884, Lon:-73.9, Code:"1.02"])
gremlin> g.addVertex(null, [Name:"chuck", Boro:"Bronx", Type:"station", Lat:40.878, Lon:-73.904, Code:"1.03"])
gremlin> g.addVertex(null, [Name:"dave", Boro:"Bronx", Type:"station", Lat:40.874, Lon:-73.909, Code:"1.04"])
gremlin> g.addVertex(null, [Name:"emily", Boro:"Manhattan", Type:"station", Lat:40.869, Lon:-73.915, Code:"1.05"])
gremlin> g.commit()
gremlin> g.V("Code","1.01").sideEffect{it.Geo_Loc = Geoshape.point(it.Lat,it.Lon)}
gremlin> g.commit()
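
In case the configuration is relevant: RemoteES.properties follows the
remote-index pattern from the Titan docs, roughly the sketch below (the
hostname is a placeholder and the storage.* lines stand in for our actual
storage backend; "esremote" must match the name passed to .indexed()):

storage.backend=cassandra
storage.hostname=127.0.0.1
# remote Elasticsearch index registered under the name "esremote"
storage.index.esremote.backend=elasticsearch
storage.index.esremote.hostname=<es-host>
# connect as a client-only node rather than starting an embedded one
storage.index.esremote.client-only=true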

The update succeeds if the vertex being updated is the only thing in the graph.

The same workload runs fine against embedded Elasticsearch, updating hundreds
of vertices at once out of a graph of thousands.

Versions:
Elasticsearch: 0.90.2 (remote server)
Titan: 0.3.1 (bundles Elasticsearch 0.90.0)

Any thoughts on what's going wrong here would be appreciated.

Hey,

do you use the same JVM version on TitanDB and Elasticsearch? If not,
matching them could be worth a try (down to the same minor number). Minor
releases should usually be compatible with each other, but beyond that you
could also test against a 0.90.0 release, just to see whether that avoids the
exceptions.
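
If it helps, something along these lines from the Gremlin console should show
what the Titan side is running (an untested sketch; it assumes the
Elasticsearch jar bundled with titan-all is on the console classpath, which
it should be):

gremlin> System.getProperty("java.version")   // JVM running Titan/Gremlin
gremlin> import org.elasticsearch.Version
gremlin> Version.CURRENT                      // ES client version bundled with Titan

and a plain "java -version" on the Elasticsearch host gives you the other side.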

--Alex

Thanks for the suggestion. The JVMs are indeed different:
Titan: 1.6
Elasticsearch: 1.7
We started with the same CentOS baseline on both; apparently the yum
install of ES pulled in 1.7. I'll get that put back to 1.6 and retest.

So now I wonder which versions of Elasticsearch work with Java 1.6, or
whether we'll need to compile from source.

Followup: installing Elasticsearch 0.90.0 on a fresh VM (which also pulled in
Java 1.7) works fine. So apparently the remote server needs to match the
Elasticsearch version embedded within Titan, and the Java version wasn't the
problem.
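
For anyone who hits this later: you can compare the two sides straight from
the Gremlin console by fetching the server's root endpoint; its version.number
should match the client bundled with Titan (a sketch; the host is a
placeholder):

gremlin> new URL("http://<es-host>:9200/").text   // look for "version":{"number":...}

For titan-all-0.3.1 that means the remote server should report 0.90.0.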

Thanks,
Gordon.

Hey,

regarding your other question: all Elasticsearch versions work with both Java
1.6 and 1.7, so no need to worry about that currently (though I prefer to run
with Java 1.7, as Java 1.6 is EOL).

--Alex
