Hi,
Currently I am building an autocomplete feature with Elasticsearch.
My Config:
index:
  analysis:
    filter:
      my_gram_filter:
        type: edgeNGram
        side: front
        min_gram: 1
        max_gram: 10
    tokenizer:
      my_gram:
        type: edgeNGram
        side: front
        min_gram: 2
        max_gram: 20
    analyzer:
      default:
        tokenizer: standard
        filter: [asciifolding, lowercase]
      auto:
        type: custom
        tokenizer: my_gram
        filter: [asciifolding, lowercase]
      auto2:
        type: custom
        tokenizer: standard
        filter: [standard, lowercase, asciifolding, my_gram_filter]
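In case it helps to see what the two edgeNGram configurations emit, here is a small Python sketch of the front edge n-gram logic (this mimics, but is not, Elasticsearch's implementation; the word "guitar" is just an example). Note that my_gram is a tokenizer, so as far as I know it n-grams the whole input string, while my_gram_filter runs on each token the standard tokenizer produces:

```python
def edge_ngrams(token, min_gram, max_gram):
    """Front edge n-grams, mimicking edgeNGram with side: front."""
    return [token[:n] for n in range(min_gram, min(max_gram, len(token)) + 1)]

# my_gram tokenizer settings (min_gram: 2, max_gram: 20)
print(edge_ngrams("guitar", 2, 20))  # ['gu', 'gui', 'guit', 'guita', 'guitar']

# my_gram_filter settings (min_gram: 1, max_gram: 10)
print(edge_ngrams("guitar", 1, 10))  # ['g', 'gu', 'gui', 'guit', 'guita', 'guitar']
```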
Here is my mapping:
{
  "song_name": {
    "properties": {
      "id": {"type": "string"},
      "name": {"type": "string", "index_analyzer": "auto2", "search_analyzer": "default"},
      "data": {"type": "string", "index": "not_analyzed"}
    }
  }
}
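The index_analyzer/search_analyzer split above follows the usual autocomplete pattern: n-grams are generated at index time only, and the user's raw query term is matched as a whole term against the stored prefixes. A small Python sketch of that matching logic (the song name and query are made up):

```python
def edge_ngrams(token, min_gram=1, max_gram=10):
    # Same front edge n-gram settings as my_gram_filter (min 1 / max 10).
    return {token[:n] for n in range(min_gram, min(max_gram, len(token)) + 1)}

# Index time ("auto2"): standard tokenizer + lowercase + edge n-gram filter.
indexed_terms = set()
for word in "Stairway To Heaven".lower().split():
    indexed_terms |= edge_ngrams(word)

# Search time ("default"): standard tokenizer + lowercase, no n-grams,
# so the partial input "stair" is looked up as-is against the stored grams.
query_terms = "stair".split()
print(all(t in indexed_terms for t in query_terms))  # True
```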
My servers:
Server 1: 30 GB RAM, 16 cores, Intel(R) Xeon(R) CPU E5620 @ 2.40GHz
Server 2: 20 GB RAM, 16 cores, Intel(R) Xeon(R) CPU E5620 @ 2.40GHz
Server 3: 20 GB RAM, 16 cores, Intel(R) Xeon(R) CPU E5620 @ 2.40GHz
I have 5 indices with 5 million records.
When I stress test at around 6k requests/second, responses take from 2s to 6s.
Is there anything I can do to improve performance?
P.S.: I have tried to set up caching:
indices.cache.filter.size: 3072mb
index.cache.filter.max_size: 1000000
index.cache.filter.expire: 5m
index.cache.filter.type: resident
index.cache.field.max_size: 1000000
index.cache.field.expire: 5m
I checked the cache status using the bigdesk plugin and found:
Filter Size: 16mb
Field Size: 0
Is the caching working correctly?
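To double-check the cache outside bigdesk, you can read the filter_cache section of the node stats from the REST API (the exact endpoint and JSON layout vary by Elasticsearch version). A minimal sketch that parses a hypothetical stats excerpt (the node name and numbers below are made up):

```python
import json

# Hypothetical excerpt of a node-stats response; the filter_cache fields
# follow the node stats format, but this is sample data, not real output.
sample = """
{"nodes": {"node1": {"indices": {"filter_cache":
    {"memory_size_in_bytes": 16777216, "evictions": 12}}}}}
"""

stats = json.loads(sample)
for node_id, node in stats["nodes"].items():
    fc = node["indices"]["filter_cache"]
    mb = fc["memory_size_in_bytes"] / (1024 * 1024)
    # A small, stable memory_size with few evictions (like the 16mb above)
    # suggests the filter cache simply isn't being exercised much.
    print("%s: filter cache %.0f MB, %d evictions" % (node_id, mb, fc["evictions"]))
```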
Thanks in advance.
--