I have ingested a number of files (more than 700,000) into an Elasticsearch
index. The documents in the index have two attachment fields where I store
full-text data. This field is sometimes really huge.
When I queried my index, one particular file was missing.
In the attachment field "ftattach" this document has a text of over
800 A4 pages. The word I searched for appears only a single time in the
whole text.
Maybe Elasticsearch (or Lucene) considers it not relevant because of
a low score.
Is there any possibility to lower the required ranking score so I can
find the missing file?
Hope you can help me.
Kind regards
--
You received this message because you are subscribed to the Google Groups
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an
email to elasticsearc...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.
Is search returning the document, just not scored as highly as you would
like? If so, field length might be the problem. You can disable norms on
that field so that length normalization does not occur.
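For illustration, a hedged sketch of what such a mapping could look like, assuming an index named `docs` and a plain text field `ftattach` (the index name is an assumption; on the Elasticsearch versions current at the time of this thread, the equivalent setting was `"norms": {"enabled": false}` on a `string` field):

```json
PUT docs
{
  "mappings": {
    "properties": {
      "ftattach": {
        "type": "text",
        "norms": false
      }
    }
  }
}
```

Disabling norms removes field-length normalization from scoring, so a single occurrence of a term in an 800-page text is no longer penalized relative to occurrences in short documents. Note that norms cannot be re-enabled on an existing field; changing this requires reindexing.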
The document is an XML file which I can't split. For other purposes it is
necessary that it remain one huge document.
Is there no way to simply lower the ranking requirements so that
Elasticsearch also lists results with a lower score?
On Thursday, August 1, 2013 at 3:39:41 PM UTC+2, Jörg Prante wrote:
Maybe it helps to index each page as a separate document.
Jörg
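The suggestion above could be sketched as follows: split one huge full-text blob into page-sized child documents before indexing, so that each indexed document is short and a single term match is not drowned out by field-length normalization. The page size, index name, and field names below are illustrative assumptions, not taken from the original poster's setup.

```python
# Sketch: split a huge text into page-sized documents for bulk indexing.
# PAGE_CHARS approximates one A4 page of text; it is an assumption.
PAGE_CHARS = 3000

def split_into_pages(text, page_chars=PAGE_CHARS):
    """Split `text` into chunks of at most `page_chars` characters,
    breaking on whitespace where possible so words stay intact."""
    pages = []
    start = 0
    while start < len(text):
        end = min(start + page_chars, len(text))
        if end < len(text):
            # back up to the last whitespace so we don't cut a word in half
            cut = text.rfind(" ", start, end)
            if cut > start:
                end = cut
        pages.append(text[start:end].strip())
        start = end
    return [p for p in pages if p]

def bulk_actions(doc_id, text, index="fulltext", field="ftattach"):
    """Yield (metadata, body) pairs in the shape of the Elasticsearch
    bulk API, one document per page. The `parent_doc` field lets you
    recover the original file from any matching page."""
    for page_no, page in enumerate(split_into_pages(text), start=1):
        yield (
            {"index": {"_index": index, "_id": f"{doc_id}-p{page_no}"}},
            {"parent_doc": doc_id, "page": page_no, field: page},
        )
```

A search hit on any page document then points back to the original file via `parent_doc`, without requiring the XML file itself to be split on disk.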
On Thu, Aug 1, 2013 at 3:23 PM, <maximilia...@googlemail.com> wrote:
Apache, Apache Lucene, Apache Hadoop, Hadoop, HDFS and the yellow elephant
logo are trademarks of the
Apache Software Foundation
in the United States and/or other countries.