But when I try to index an entry without providing a mapping, I get a null
pointer exception. The entry is as simple as the following:
{"name":"sezgin"}
Here is the trace:
org.elasticsearch.transport.RemoteTransportException: [Jim
Hammond][inet[/192.168.1.102:9300]][indices/index/shard/index]
Caused by: org.elasticsearch.action.support.replication.ReplicationShardOperationFailedException:
[topo_topology_09280306][4]
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.performOnPrimary(TransportShardReplicationOperationAction.java:392)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.access$400(TransportShardReplicationOperationAction.java:208)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction$1.run(TransportShardReplicationOperationAction.java:278)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:885)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
at java.lang.Thread.run(Thread.java:619)
Caused by: java.lang.NullPointerException
at org.elasticsearch.common.lucene.all.AllTokenStream.incrementToken(AllTokenStream.java:59)
at org.apache.lucene.index.DocInverterPerField.processFields(DocInverterPerField.java:137)
at org.apache.lucene.index.DocFieldProcessorPerThread.processDocument(DocFieldProcessorPerThread.java:246)
at org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:821)
at org.apache.lucene.index.DocumentsWriter.addDocument(DocumentsWriter.java:797)
at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1981)
at org.elasticsearch.index.engine.robin.RobinEngine.create(RobinEngine.java:189)
at org.elasticsearch.index.shard.service.InternalIndexShard.innerCreate(InternalIndexShard.java:213)
at org.elasticsearch.index.shard.service.InternalIndexShard.create(InternalIndexShard.java:201)
at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:135)
at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:60)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.performOnPrimary(TransportShardReplicationOperationAction.java:381)
... 5 more
This exception comes from the _all field, which also ends up using the
keyword analyzer. I have pushed a fix for it, but note that it does not make
sense to have the _all field use the keyword analyzer (think of the _all
field as one long text of all the JSON values in the document). You can
either disable the _all field, if all you want is keyword-analyzed fields,
or configure it to use a different analyzer.
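For example, something like the following mapping fragment (the type name
"mytype" here is just illustrative) should disable the _all field when
creating the index, or alternatively point it at a different analyzer such
as the standard one:

{
    "mappings": {
        "mytype": {
            "_all": { "enabled": false }
        }
    }
}

Or, instead of disabling it, keep _all but with an analyzer suited to long
text:

{
    "mappings": {
        "mytype": {
            "_all": { "analyzer": "standard" }
        }
    }
}

Per-field mappings can still use the keyword analyzer; only _all needs a
different one.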