Elasticsearch Rest High Level Client: index mapping changes unexpectedly

I'm trying to create a class that writes automatically to Elasticsearch through the Rest High Level Client with the operations (create, createBatch, remove, removeBatch, update, updateBatch); those operations all work and their test cases all succeed. To add a bit more flexibility, I wanted to implement the following methods: find, findAll, getFirsts(n), getLasts(n). find(key) and findAll() both work perfectly fine, but getFirsts(n) and getLasts(n) don't at all.

Here is the context:

- Before each test case: ensure that the index "test" exists, and create it if it doesn't.
- After each test case: delete the index "test".

For getFirsts(n) and getLasts(n), I call create to put a few items into Elasticsearch and then search according to the uniqueKey.

Here is the mapping for my Test Object:

{
  "properties": {
    "date": { "type": "long" },
    "name": { "type": "text" },
    "age": { "type": "integer" },
    "uniqueKey": { "type": "keyword" }
  }
}
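For context, as I understand it this mapping only takes effect if the index is created explicitly before the first document is written; if a document arrives first, Elasticsearch auto-creates the index with dynamic mappings instead. This is roughly the request I use to create the index up front (the `test` endpoint name matches my setup; the body is the mapping above):

```json
PUT /test
{
  "mappings": {
    "properties": {
      "date": { "type": "long" },
      "name": { "type": "text" },
      "age": { "type": "integer" },
      "uniqueKey": { "type": "keyword" }
    }
  }
}
```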

Here is my test case:

@Test
public void testGetFirstByIds() throws BeanPersistenceException {
    List<StringTestDataBean> beans = new ArrayList<>();
    StringTestDataBean bean1 = new StringTestDataBean();
    bean1.setName("Tester");
    bean1.setAge(22);
    bean1.setTimeStamp(23213987321712L);
    beans.add(elasticSearchService.create(bean1));

    StringTestDataBean bean2 = new StringTestDataBean();
    bean2.setName("Antonio");
    bean2.setAge(27);
    bean2.setTimeStamp(2332321117321712L);
    beans.add(elasticSearchService.create(bean2));

    Assert.assertNotNull("The beans created should not be null", beans);
    Assert.assertEquals("The uniqueKeys of the fetched list should match the existing",
            beans.stream()
                .map(ElasticSearchBean::getUniqueKey)
                .sorted((b1,b2) -> Long.compare(Long.parseLong(b2),Long.parseLong(b1)))
                .collect(Collectors.toList()),

            elasticSearchService.getFirstByIds(2).stream()
                .map(ElasticSearchBean::getUniqueKey)
                .collect(Collectors.toList())
    );
}
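One thing I double-checked while writing this assertion: the expected list is sorted numerically (via Long.compare), while an Elasticsearch sort on a keyword field orders strings lexicographically, so the two orderings can disagree once keys have different lengths. A small self-contained sketch (plain Java, with hypothetical sample keys):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class SortOrderDemo {
    public static void main(String[] args) {
        // hypothetical uniqueKey values of different lengths
        List<String> keys = Arrays.asList("9", "10", "23213987321712");

        // numeric descending, as in the test's comparator
        List<String> numericDesc = keys.stream()
                .sorted((a, b) -> Long.compare(Long.parseLong(b), Long.parseLong(a)))
                .collect(Collectors.toList());

        // lexicographic ascending, which is how a keyword sort orders strings
        List<String> lexAsc = keys.stream()
                .sorted()
                .collect(Collectors.toList());

        System.out.println(numericDesc); // [23213987321712, 10, 9]
        System.out.println(lexAsc);      // [10, 23213987321712, 9]
    }
}
```

In my tests the keys happen to have the same length, so the orderings coincide, but it is worth keeping in mind.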

Here is getFirstByIds(n):

@Override
public Collection<B> getFirstByIds(int entityCount) throws BeanPersistenceException {
    assertBinding();
    FilterContext filterContext = new FilterContext();
    filterContext.setLimit(entityCount);
    filterContext.setSort(Collections.singletonList(new FieldSort("uniqueKey",true)));
    return Optional.ofNullable(find(filterContext)).orElseThrow();
}

Here is the find(filterContext):

@Override
public List<B> find(FilterContext filter) throws BeanPersistenceException {
    assertBinding();
    BoolQueryBuilder query = QueryBuilders.boolQuery();
    List<FieldFilter> fields = filter.getFields();
    StreamUtil.ofNullable(fields)
            .forEach(fieldFilter -> executeFindSwitchCase(fieldFilter,query));

    SearchSourceBuilder builder = new SearchSourceBuilder().query(query);
    builder.from((int) filter.getFrom());
    builder.size(((int) filter.getLimit() == -1) ? FILTER_LIMIT : (int) filter.getLimit());

    SearchRequest request = new SearchRequest();
    request.indices(index);
    request.source(builder);
    List<FieldSort> sorts = filter.getSort();
    StreamUtil.ofNullable(sorts)
            .forEach(fieldSort -> builder.sort(SortBuilders.fieldSort(fieldSort.getField()).order(
                    fieldSort.isAscending() ? SortOrder.ASC : SortOrder.DESC)));

    try {
        if (strict)
            client.indices().refresh(new RefreshRequest(index), RequestOptions.DEFAULT);
        SearchResponse response = client.search(request, RequestOptions.DEFAULT);
        SearchHits hits = response.getHits();
        List<B> results = new ArrayList<>();

        for (SearchHit hit : hits)
            results.add(objectMapper.readValue(hit.getSourceAsString(), clazz));

        return results;
    }
    catch(IOException e){
        logger.error(e.getMessage(),e);
    }
    return null;
}
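For reference, for getFirstByIds(2) the builder above produces roughly this search body (sketched by hand, so the exact serialization of the empty bool query may differ):

```json
GET /test/_search
{
  "from": 0,
  "size": 2,
  "query": { "bool": {} },
  "sort": [
    { "uniqueKey": { "order": "asc" } }
  ]
}
```

This sort is what fails once "uniqueKey" is mapped as text instead of keyword.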

The issue happens when I run the test case more than once. The first run passes fine, but on the second run I get an exception:

ElasticsearchStatusException[Elasticsearch exception [type=search_phase_execution_exception, reason=all shards failed]
]; nested: ElasticsearchException[Elasticsearch exception [type=illegal_argument_exception, reason=Fielddata is disabled on text fields by default. Set fielddata=true on [name] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory. Alternatively use a keyword field instead.]];

After looking around for over a day, I've realized that the mapping gets changed from the original one (the mapping specified at the beginning); the index is automatically re-created with this:

"test": {
        "aliases": {},
        "mappings": {
            "properties": {
                "age": {
                    "type": "long"
                },
                "name": {
                    "type": "text",
                    "fields": {
                        "keyword": {
                            "type": "keyword",
                            "ignore_above": 256
                        }
                    }
                },
                "timeStamp": {
                    "type": "long"
                },
                "uniqueKey": {
                    "type": "text",
                    "fields": {
                        "keyword": {
                            "type": "keyword",
                            "ignore_above": 256
                        }
                    }
                }
            }
        }
}

As you can see, the mapping changes automatically, and that is what throws the error. Thanks for any help!
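From what I've read, one possible workaround when the index has already been dynamically mapped like this (an assumption on my part, not something I've verified in my setup) is to sort on the keyword sub-field that dynamic mapping generates, e.g. by pointing the FieldSort at "uniqueKey.keyword", which translates to:

```json
"sort": [
  { "uniqueKey.keyword": { "order": "asc" } }
]
```

That avoids the fielddata error, though the real fix is to create the index with the explicit mapping before the first write.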
