So the cache will evict entries that consume more memory. I think this
policy is not the best choice, because fat entries can be used quite
frequently. Maybe there should be several implementations of the filter
cache, as Solr has.
I don't think the current filter cache evicts the entries that consume the
most memory first. The weight is only used to trigger eviction when the
filter cache size (in bytes, not number of entries) grows beyond the
configured limit.
I agree this behaviour is quite simplistic, and there are a couple of
interesting things that could be improved:
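To make the described behaviour concrete, here is a minimal sketch of a byte-bounded LRU cache. This is not the actual Lucene/Elasticsearch filter cache implementation, just a hypothetical illustration: the entry weight (its size in bytes) only decides *when* eviction kicks in, while the eviction *order* is least-recently-used, so a fat entry that is accessed frequently survives and a cold small entry is evicted first.

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch (not the real filter cache): a cache bounded by total
// bytes rather than by entry count. Eviction order is LRU; size only decides
// when to start evicting.
public class ByteBoundedLruCache<K> {
    private final long maxBytes;     // configured byte limit (assumption)
    private long currentBytes = 0;
    // accessOrder = true makes iteration go from least- to most-recently used
    private final LinkedHashMap<K, byte[]> map =
        new LinkedHashMap<>(16, 0.75f, true);

    public ByteBoundedLruCache(long maxBytes) {
        this.maxBytes = maxBytes;
    }

    public void put(K key, byte[] value) {
        byte[] old = map.put(key, value);
        if (old != null) currentBytes -= old.length;
        currentBytes += value.length;
        // Evict LRU entries (regardless of their size) until under the limit.
        Iterator<Map.Entry<K, byte[]>> it = map.entrySet().iterator();
        while (currentBytes > maxBytes && it.hasNext()) {
            Map.Entry<K, byte[]> eldest = it.next();
            currentBytes -= eldest.getValue().length;
            it.remove();
        }
    }

    public byte[] get(K key) {
        return map.get(key); // marks the entry as recently used
    }

    public boolean contains(K key) {
        return map.containsKey(key); // does not touch access order
    }

    public static void main(String[] args) {
        ByteBoundedLruCache<String> cache = new ByteBoundedLruCache<>(100);
        cache.put("fat", new byte[60]);   // large but frequently used entry
        cache.put("small", new byte[30]);
        cache.get("fat");                 // touch the fat entry
        cache.put("other", new byte[30]); // total 120 bytes > 100-byte limit
        // LRU evicts "small" (least recently used), not the fat entry.
        System.out.println(cache.contains("fat"));   // true
        System.out.println(cache.contains("small")); // false
    }
}
```

This shows why "evicts big entries first" is not what happens under an LRU policy with a byte limit: the frequently-accessed 60-byte entry stays while the colder 30-byte one goes.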