NPE in Couch River dynamic mapping

On Tue, Aug 16, 2011 at 12:18 PM, Robert Rees robert@wazoku.com wrote:

Hi there, I am trying to move to ES 0.17.6 and I am getting a lot of the stack traces below when using the CouchDB river with a default mapping. Effectively nothing is getting indexed and I'm not sure how to proceed. Any suggestions on where to look for the problem?

java.lang.NullPointerException
    at org.elasticsearch.index.mapper.FieldMapper$Names.<init>(FieldMapper.java:55)
    at org.elasticsearch.index.mapper.core.AbstractFieldMapper$Builder.buildNames(AbstractFieldMapper.java:186)
    at org.elasticsearch.index.mapper.core.StringFieldMapper$Builder.build(StringFieldMapper.java:73)
    at org.elasticsearch.index.mapper.core.StringFieldMapper$Builder.build(StringFieldMapper.java:53)
    at org.elasticsearch.index.mapper.object.ObjectMapper.serializeValue(ObjectMapper.java:632)
    at org.elasticsearch.index.mapper.object.ObjectMapper.parse(ObjectMapper.java:440)
    at org.elasticsearch.index.mapper.object.ObjectMapper.serializeValue(ObjectMapper.java:565)
    at org.elasticsearch.index.mapper.object.ObjectMapper.serializeArray(ObjectMapper.java:556)
    at org.elasticsearch.index.mapper.object.ObjectMapper.parse(ObjectMapper.java:432)
    at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:566)
    at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:490)
    at org.elasticsearch.index.shard.service.InternalIndexShard.prepareIndex(InternalIndexShard.java:289)
    at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:130)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.performOnPrimary(TransportShardReplicationOperationAction.java:428)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction$1.run(TransportShardReplicationOperationAction.java:341)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:636)

On Tue, Aug 16, 2011 at 12:04 PM, Shay Banon kimchy@gmail.com wrote:

Can you try and recreate it with the sample data you index using curl, and gist it?


On Tue, Aug 16, 2011 at 3:46 PM, Robert Rees robert@wazoku.com wrote:

I have found the problem field (nothing more sophisticated than removing fields until the problem went away), and I think I now know the cause. The field is an array that can contain either a map or a string: historic records hold maps, newer ones contain strings, and intermediate records can contain both. If I include this field I get the error; if I leave it out, everything indexes fine. I assumed it would be okay since it is valid JSON going in, but are there rules around mixed types in collections?


Shay Banon kimchy@gmail.com wrote:

Yes, there are. Once you index a specific JSON document, a mapping is created for it. So, for example, a field can't have type "string" and then become an object. This is because we need to index fields in a specific manner and know how to work with them when searching.
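To make the rule concrete, here is a toy sketch of how a dynamic mapper might infer a field's type and reject a conflicting shape. This is purely illustrative Python, not Elasticsearch's actual mapper code (all function names here are made up, and in 0.17.6 the conflict surfaced as the NPE above rather than a clean error):

```python
# Toy illustration of dynamic mapping: the first document fixes each
# field's shape, and a later document with a different shape is rejected.

def json_shape(value):
    """Classify a JSON value roughly the way a dynamic mapper might."""
    if isinstance(value, dict):
        return "object"
    if isinstance(value, str):
        return "string"
    if isinstance(value, bool):   # check bool before int/float (bool is an int subclass)
        return "boolean"
    if isinstance(value, (int, float)):
        return "number"
    return "unknown"

def merge_into_mapping(mapping, doc):
    """Record each field's shape; every array element must share one shape."""
    for field, value in doc.items():
        values = value if isinstance(value, list) else [value]
        for item in values:
            shape = json_shape(item)
            seen = mapping.setdefault(field, shape)
            if seen != shape:
                raise TypeError(
                    f"field '{field}' already mapped as {seen}, got {shape}")
    return mapping

mapping = {}
merge_into_mapping(mapping, {"values": [{"fruit": "lemon"}]})  # fine: all objects
# The mixed array from this thread now fails, mirroring the symptom:
try:
    merge_into_mapping(mapping, {"values": [{"fruit": "lemon"}, "lime"]})
except TypeError as e:
    print(e)  # field 'values' already mapped as object, got string
```

The second call fails on the bare string "lime" because the "values" field was already recorded as holding objects, which is exactly the string-then-object conflict described above.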


On Tue, Aug 16, 2011 at 3:58 PM, Robert Rees robert@wazoku.com wrote:

To help me be clear about sorting this out: arrays to be indexed must contain only one type? And if I migrate the array to contain only objects, can I then delete the rivers and recreate them to rebuild the inferred schema?


On Tue, Aug 16, 2011 at 2:21 PM, Shay Banon kimchy@gmail.com wrote:

Yes, you can do that. Note that deleting a river deletes just the river, not the index it indexes into. So you can potentially delete one river, keep the old index, and then create a new river that indexes into a different index.
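Before recreating the river, the documents themselves need the array normalized so every element has the same shape. A minimal sketch of that migration step (the wrapper key "value" is an assumption for illustration; use whatever key matches the maps already stored in your documents):

```python
# Sketch of normalizing a mixed-type array so every element is an object.
# The wrapper key "value" is a made-up choice, not something from this
# thread; pick a key consistent with your existing map entries.

def normalize_array(items):
    """Wrap bare strings as objects so the array holds a single type."""
    return [item if isinstance(item, dict) else {"value": item}
            for item in items]

doc = {"tracer": "dummy", "values": [{"fruit": "lemon"}, "lime"]}
doc["values"] = normalize_array(doc["values"])
print(doc["values"])  # [{'fruit': 'lemon'}, {'value': 'lime'}]
```

Run over every document in CouchDB (or during a one-off reindex), this leaves the array containing only objects, after which a fresh river can build a consistent inferred mapping.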


Robert Rees robert@wazoku.com wrote:

Here's the simplest example I could think of to exercise the problem:

curl -XPUT http://localhost:9200/test-index/hetro-array/1 -d '{"tracer" : "dummy", "values" : [{"fruit" : "lemon"}]}'
curl http://localhost:9200/test-index/hetro-array/_search?pretty=true -d '{"query" : {"term" : {"tracer" : "dummy"}}}'
curl -XPUT http://localhost:9200/test-index/hetro-array/1 -d '{"tracer" : "dummy", "values" : [{"fruit" : "lemon"}, "lime"]}'

The stack trace is the same as my "live" one.
