I have a JSON mapping file that I use for unit testing, based on the one I use with curl. It looks like the Java API only accepts the "mapping": {} section that starts with the type, like this:
I want to add an edgeNGram token filter to a field, and it looks like it needs to be specified as follows:
{
  "settings" : {
    "analysis" : {
      "analyzer" : {
        "containsText" : {
          "tokenizer" : "whitespace",
          "filter" : ["lowercase", "autocomplete"]
        }
      },
      "filter" : {
        "autocomplete" : { "type" : "edgeNGram", "min_gram" : "1", "max_gram" : "50", "side" : "front" }
      }
    }
  },
  "mappings" : {
    "records" : {
      "properties" : {
        "name" : { "type" : "string", "analyzer" : "containsText" }
      }
    }
  }
}
This seems to work from the REST API, but the Java mapping API throws the following exception:
org.elasticsearch.index.mapper.MapperParsingException: mapping [records]
    at org.elasticsearch.cluster.metadata.MetaDataCreateIndexService$1.execute(MetaDataCreateIndexService.java:244)
    at org.elasticsearch.cluster.service.InternalClusterService$2.run(InternalClusterService.java:211)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
Caused by: org.elasticsearch.index.mapper.MapperParsingException: Mapping must have the type as the root object
    at org.elasticsearch.index.mapper.DocumentMapperParser.extractMapping(DocumentMapperParser.java:231)
    at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:141)
    at org.elasticsearch.index.mapper.MapperService.parse(MapperService.java:269)
    at org.elasticsearch.index.mapper.MapperService.add(MapperService.java:172)
    at org.elasticsearch.cluster.metadata.MetaDataCreateIndexService$1.execute(MetaDataCreateIndexService.java:241)
How can I set this mapping up using the Java API so that I can perform some unit testing with n-grams to make sure they are working correctly?
Interesting. You're using a different API than I am to create the index mapping. I'll need to try that. Thanks for the code resource; very neat and understandable.

Craig
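(For anyone comparing routes later: one alternative way to supply a mapping through the Java client is putMapping against an index that already exists. This is a rough sketch only, not code from this thread; "myindex", the class name, and the client wiring are placeholders:)

import org.elasticsearch.client.Client;

public class PutMappingSketch {
    // Rough sketch: "client" is an already-built Client, "myindex" is a placeholder
    // index name, and mappingJson is expected to start with the type ("records")
    // as its root object.
    static void putRecordsMapping(Client client, String mappingJson) {
        client.admin().indices().preparePutMapping("myindex")
              .setType("records")
              .setSource(mappingJson)
              .execute().actionGet();
    }
}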
Ok, figured it out. I had tried this API before, but I must have had the parameters wrong. So there is one section for settings and another for mappings; the REST call must break those apart automatically. Roughly, it comes together as in the sketch below.
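(A rough sketch of that split with the 0.19-era Java API, the way I understand it; the index name, the local test node, the class name, and the inlined JSON strings are illustrative placeholders rather than the exact test code:)

import org.elasticsearch.client.Client;
import org.elasticsearch.node.Node;

import static org.elasticsearch.node.NodeBuilder.nodeBuilder;

public class NgramMappingSketch {
    public static void main(String[] args) {
        // A local in-JVM node, as is common for unit tests with this client version.
        Node node = nodeBuilder().local(true).node();
        Client client = node.client();

        // The "settings" section of the JSON file, without the outer "settings" key.
        String settingsJson =
              "{ \"analysis\": {"
            + "    \"analyzer\": { \"containsText\": { \"tokenizer\": \"whitespace\","
            + "        \"filter\": [\"lowercase\", \"autocomplete\"] } },"
            + "    \"filter\": { \"autocomplete\": { \"type\": \"edgeNGram\","
            + "        \"min_gram\": \"1\", \"max_gram\": \"50\", \"side\": \"front\" } }"
            + "} }";

        // The mapping, with the type ("records") as the root object,
        // which is what the parser exception was asking for.
        String mappingJson =
              "{ \"records\": { \"properties\": {"
            + "    \"name\": { \"type\": \"string\", \"analyzer\": \"containsText\" }"
            + "} } }";

        // Settings and mapping are handed over separately; the REST endpoint
        // splits a combined body into these two pieces on the server side.
        client.admin().indices().prepareCreate("ngram-test")
              .setSettings(settingsJson)
              .addMapping("records", mappingJson)
              .execute().actionGet();

        node.close();
    }
}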