The completion suggester uses payloads (which will presumably use extra
space on disk) and specifies its own set of analyzers (presumably
suggesting that the terms are analyzed a second time).
Therefore I assume if I enable the completion suggester for a bunch of
fields, it will use more disk space. Does anyone have any idea how much
more?
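For concreteness, here is roughly the kind of mapping I mean, sketched with the Python client; the index, type, and field names are made up, and it assumes a version that still supports completion payloads:

    # A minimal sketch, assuming the Python elasticsearch client and an ES
    # version whose completion type still accepts "payloads".
    # "products", "product", and "name_suggest" are made-up names.
    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")

    es.indices.create(
        index="products",
        body={
            "mappings": {
                "product": {  # mapping type name, as used on older ES lines
                    "properties": {
                        "name_suggest": {
                            "type": "completion",
                            "analyzer": "simple",         # its own index-time analyzer
                            "search_analyzer": "simple",  # its own search-time analyzer
                            "payloads": True,             # extra data stored per suggestion
                        }
                    }
                }
            }
        },
    )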
Be aware that the completion suggester not only needs disk space but also
resides in memory (the secret of its speed). That's why there is a stats API
for completion stats. This is also the reason you should strive for minimal
payloads, as they can inflate the size of the completion data tremendously.
That said, trying out and checking seems to be the most viable option.
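A minimal sketch of how you could check, assuming the Python client and a made-up index name:

    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")

    # The indices stats API with the "completion" metric reports how much
    # memory the completion suggester's in-memory structures take per index.
    stats = es.indices.stats(index="products", metric="completion")
    print(stats["indices"]["products"]["total"]["completion"]["size_in_bytes"])

Index some representative data with and without payloads and compare the reported size.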