Hello,
I am creating a local Elasticsearch cluster in Java in the following way:
Settings.Builder settingsBuilder = Settings.builder();
settingsBuilder.put("http.enabled", "false");
settingsBuilder.put("cluster.name", esearchConfig.getClusterName() +
        (esearchConfig.isEnableTemporaryMode() ? "-transient" : ""));
settingsBuilder.put("node.name", esearchConfig.getNodeName() +
        (esearchConfig.isEnableTemporaryMode() ? "-transient" : ""));
settingsBuilder.put("node.data", "true");
settingsBuilder.put("node.master", "true");
settingsBuilder.put("action.auto_create_index", "false");
settingsBuilder.put("transport.bind_host", "127.0.0.1");
settingsBuilder.put("transport.tcp.port", 1312);
settingsBuilder.put("http.bind_host", "127.0.0.1");
settingsBuilder.put("http.port", 1313);
Collection<Class<? extends Plugin>> plugins = Arrays.asList(Netty4Plugin.class);
Node elasticSearchInternalNode = new ElasticSearchNode(settingsBuilder.build(), plugins);
elasticSearchInternalNode.start();
where ElasticSearchNode is a simple class that extends Node and has the following constructor:
public ElasticSearchNode(Settings settings, Collection<Class<? extends Plugin>> plugins) {
    super(InternalSettingsPreparer.prepareEnvironment(settings, null), plugins);
}
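For reference, the only plugin I register is Netty4Plugin. If in 6.x the ngram token filter lives in the analysis-common module rather than in core (I have not verified this), then registering that module's plugin class explicitly would presumably look like the sketch below; CommonAnalysisPlugin being on the embedded node's classpath is an assumption here:

```java
import java.util.Arrays;
import java.util.Collection;

import org.elasticsearch.analysis.common.CommonAnalysisPlugin; // assumption: provides the ngram filter
import org.elasticsearch.node.Node;
import org.elasticsearch.plugins.Plugin;
import org.elasticsearch.transport.Netty4Plugin;

// Register the analysis-common module alongside the Netty4 transport,
// since an embedded node does not load the distribution's modules on its own.
Collection<Class<? extends Plugin>> plugins =
        Arrays.asList(Netty4Plugin.class, CommonAnalysisPlugin.class);
Node elasticSearchInternalNode = new ElasticSearchNode(settingsBuilder.build(), plugins);
elasticSearchInternalNode.start();
```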
Then I try to create an index with an nGram filter:
curl -X PUT "localhost:1313/test04" -H 'Content-Type: application/json' -d'{
  "settings": {
    "index": {
      "number_of_shards": 3,
      "number_of_replicas": 2
    },
    "analysis": {
      "filter": {
        "short_ngram_filter": {
          "type": "nGram",
          "min_gram": "3",
          "max_gram": "3"
        }
      }
    }
  }
}'
But it fails:
{
  "error": {
    "root_cause": [
      {
        "type": "illegal_argument_exception",
        "reason": "Unknown filter type [nGram] for [short_ngram_filter]"
      }
    ],
    "type": "illegal_argument_exception",
    "reason": "Unknown filter type [nGram] for [short_ngram_filter]"
  },
  "status": 400
}
The very strange thing is that an nGram tokenizer works perfectly, for example:
curl -X PUT "localhost:1313/test04" -H 'Content-Type: application/json' -d'{
  "settings": {
    "index": {
      "number_of_shards": 3,
      "number_of_replicas": 2
    },
    "analysis": {
      "tokenizer": {
        "short_ngram_tokenizer": {
          "type": "nGram",
          "min_gram": "3",
          "max_gram": "3"
        }
      }
    }
  }
}'
Does anyone have any idea of what I am doing wrong?
I am using Elasticsearch 6.2.4.
Thanks very much
Giuseppe