[Java API] Cannot create index with meta data

Hi there,

For some reason I cannot modify the meta data of my index. I can index
data, but every field ends up mapped as type "string". Also, it is not
case sensitive (which I would like it to be) and it is not tokenized
correctly. So I wanted to change my mapping, but the changes are not
accepted. In particular, I want to use a special tokenizer/analyzer for
certain fields. The data itself, however, does get indexed.

Here is the output of the status information:
{
    state: open
    settings: {
        index.number_of_shards: 5
        index.number_of_replicas: 1
    }
    mappings: {
        datastandards: {
            properties: {
                somenumber: {
                    type: string
                }
                search: {
                    type: string
                }
            }
        }
    }
    aliases: [ ]
}

I use the following code to create and fill my index:

XContentBuilder data = XContentFactory.jsonBuilder()
        .startObject()
            .startObject("properties")
                .startObject("search")
                    .field("type", "string")
                    .field("store", "no")
                    .field("analyzer", "whitespace")
                .endObject()
                .startObject("somenumber")
                    .field("type", "integer")
                    .field("store", "no")
                .endObject()
            .endObject()
        .endObject();
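
// For reference, the builder above should serialize to mapping JSON roughly
// like this (hand-written sketch, not copied from an actual response):
//
//   { "properties" : {
//       "search"     : { "type" : "string",  "store" : "no", "analyzer" : "whitespace" },
//       "somenumber" : { "type" : "integer", "store" : "no" }
//   } }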

CreateIndexResponse response = ElasticSearchPlugin.client().admin().indices()
        .create(new CreateIndexRequest(INDEX_NAME).mapping(CHILD_TYPE_NAME, data))
        .actionGet();
Logger.debug("Response: %s", response);

ElasticSearchPlugin.client().admin().indices()
        .putMapping(new PutMappingRequest(INDEX_NAME).type(CHILD_TYPE_NAME).source(data))
        .actionGet();

BulkRequestBuilder bulkRequest = ElasticSearchPlugin.client().prepareBulk();

int n = 0;
for (MyModel myModel : getModels()) {
    try {
        bulkRequest.add(ElasticSearchPlugin.client()
                .prepareIndex(INDEX_NAME, CHILD_TYPE_NAME, String.valueOf(myModel.getId()))
                .setSource(XContentFactory.jsonBuilder()
                        .startObject()
                            .field("search", myModel.getHeadline() + " " + myModel.getAbstractText())
                            .field("somenumber", myModel.getSomeNumber())
                        .endObject()));
        n++;
    } catch (IOException e) {
        Logger.fatal("Cannot index #%d", myModel.getId(), e);
    }
}

BulkResponse bulkResponse = bulkRequest.setRefresh(true).execute().actionGet();
if (bulkResponse.hasFailures()) {
    Logger.fatal("Cannot update index - %s", bulkResponse.buildFailureMessage());
} else {
    Logger.info("Index created with " + n + " entries");
}

ElasticSearchPlugin.client().admin().indices()
        .optimize(new OptimizeRequest(INDEX_NAME)).actionGet();

Please help me. I think it is only a small glitch, but I cannot find
the cause.

Also:

  • Is it ok to use the bulk request? Or should I index every entry one
    by one?
  • I have the feeling that the optimize does not do anything. Should I
    omit it?

Thank you and kind regards,

Tobias

Does the index already exist with the mapping when you try to update it? You can't change the mapping of a type once it has already been derived.
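
If the index might still exist from an earlier run, something along these lines (an untested sketch, reusing the INDEX_NAME / CHILD_TYPE_NAME / data names from your code) makes sure the mapping is in place before any document is indexed:

Client client = ElasticSearchPlugin.client();

// Drop a possibly existing index from an earlier run, so no derived
// mapping is left over (skip this if you need to keep the old data).
try {
    client.admin().indices().delete(new DeleteIndexRequest(INDEX_NAME)).actionGet();
} catch (IndexMissingException e) {
    // index did not exist yet - nothing to delete
}

// Create the index together with the mapping in one request; after this,
// "search" uses the whitespace analyzer and "somenumber" is an integer.
client.admin().indices().create(
        new CreateIndexRequest(INDEX_NAME).mapping(CHILD_TYPE_NAME, data)
).actionGet();

// Only then run the bulk indexing from your code; the separate
// putMapping call should not be needed anymore.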

Hi Shay,

using the code above: no, the index does not exist at that point.
However, I also tried moving the putMapping request to the end, just
before the optimize request, and got the same result: no effect.

From what you write I would deduce that I am not doing anything terribly
wrong - it is just not working :-(

Kind regards,

Tobias

On 26 May, 13:01, Shay Banon shay.ba...@elasticsearch.com wrote:

Does the index already exist with the mapping when you try to update it? You can't change the mapping of a type once it has already been derived.
