One problem with the subtraction is that any future date will come out negative, so
it will score below all the small fractions between 0 and 1.
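To see the problem concretely, here is a minimal Java sketch (the timestamps are made-up values, not ones from this thread):

```java
public class SubtractionScore {
    public static void main(String[] args) {
        long compareDate = 1357000000000L;   // "now" in epoch ms (hypothetical)
        long pastDoc     = 1356000000000L;   // older than compareDate
        long futureDoc   = 1358000000000L;   // newer than compareDate

        // Subtraction-style recency term: positive for past docs...
        System.out.println(compareDate - pastDoc);    // prints 1000000000

        // ...but negative for any future doc, so it sorts below every
        // small positive fraction between 0 and 1.
        System.out.println(compareDate - futureDoc);  // prints -1000000000
    }
}
```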
Why not just divide?
If the compare date is today or sometime in the future, that gives a
value of 1.0 for a doc whose date is the same as right now (or the compare
date), and a fraction just below 1.0 as the doc date gets older.
(compareDate - doc['myDateField']... + 1)
I think you might be getting a string as the result of the parse, so it
turns into number + string + 1.
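In Java (and in scripting languages with Java-like semantics), `+` with a String operand concatenates instead of adding, which is what "number + string + 1" would do; with `-` the expression would instead fail outright, which could explain a script that dies silently. A tiny illustration (the parsed value is hypothetical):

```java
public class StringConcat {
    public static void main(String[] args) {
        String parsed = "20130107";  // what a date-format parse might hand back

        // Left-to-right evaluation: once a String enters the expression,
        // everything after it is concatenated, not added.
        System.out.println(5 + parsed + 1);  // prints "5201301071"
    }
}
```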
So I'd try:
((1.0d * doc['myDateField'].time())/compareDate)
Maybe the append would be:
sb.append(String.format(" + %f/(10.0e-9 * (%fD / (1.0D *
doc['%s'].time())) + 1)",
(float) set.getValue().getWeight() / 100, compareDate * 1.0f,
I'm not sure what the rest of your script looks like, so not sure how
the fixed weight factor of 1x10^-9 and the +1 enter in.
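A sketch of the ratio idea in plain Java (the timestamps are hypothetical; the `1.0d *` matters because dividing one long by another truncates to an integer):

```java
public class RatioScore {
    public static void main(String[] args) {
        long compareDate = 1357000000000L;   // "now" in epoch ms (hypothetical)
        long docDate     = 1356000000000L;   // an older doc

        // Integer division truncates: an older doc date over a larger
        // compare date is always 0.
        System.out.println(docDate / compareDate);  // prints 0

        // Promoting to double first gives a fraction just below 1.0
        // that shrinks as the doc gets older.
        double recency = (1.0d * docDate) / compareDate;
        System.out.println(recency);  // just below 1.0
    }
}
```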
I was doing something similar and checked where a float starts rounding: it is in
the range of "milliseconds per decade".
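That limit is easy to check: a float has a 24-bit significand, so at epoch-millisecond magnitudes (~10^12, i.e. decades of milliseconds) the spacing between adjacent floats is over two minutes, and smaller differences vanish. A quick check (the timestamp is a hypothetical ~2013 value):

```java
public class FloatPrecision {
    public static void main(String[] args) {
        float now = 1_357_000_000_000f;  // epoch ms (hypothetical)

        // Spacing between adjacent floats at this magnitude:
        System.out.println(Math.ulp(now));  // prints 131072.0, i.e. > 2 minutes

        // Adding a full second changes nothing:
        System.out.println(now + 1000f == now);  // prints true
    }
}
```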
On 1/7/2013 10:53 AM, Jérôme Gagnon wrote:
This solution is not working (for some reason the custom score fails
silently and returns no results).
But would there be a way to do this?
On Monday, January 7, 2013 11:05:12 AM UTC-5, Jérôme Gagnon wrote:
Good Morning People,
I am currently experiencing OOM errors because of faceting and
custom scoring (doc['field'] access). I'm basically using a long
field (timestamp) with a custom scoring function to adjust my
score by recency. My timestamp precision is in milliseconds, but
for recency custom scoring we don't really care about that kind
of precision. What I thought is: would it reduce the RAM usage if
I do something like this:
sb.append(String.format(" + %f/(10.0e-9 * (%d -
doc['%s'].date.parse('yyyyMMdd')) + 1)",
(float) set.getValue().getWeight() / 100, compareDate,
Would the cardinality of the field that I'm trying to load be
reduced? Or are all the document fields still going to be
loaded into memory? It would be great if this could work like
that; in other words, would it be possible to make it work like