I am looking for advice/best practices on caching Painless scripts. In my case, the script itself is lightweight and simple, no more than one simple statement, and takes no script parameters. An example is "return doc.getter()". The scripts are, however, executed very frequently, say thousands of times per second.
According to the Painless docs, Painless has built-in caching, so my first vanilla version looks like this:
scriptService.compile("return doc.getter()", CONTEXT).newInstance().execute(doc); // run thousands of times a second.
Since the performance impact of this was significant, I changed it to cache the compiled script on the caller side. It looks like this:
Script cachedScript = scriptService.compile("return doc.getter()", CONTEXT).newInstance(); // only once
...
cachedScript.execute(doc); // still thousands of times a second
The performance is significantly better than that of the vanilla version. So I am wondering if I am using Painless correctly.
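For reference, here is a minimal self-contained sketch of that caller-side pattern against the plugin API. GetterScript, its context name, and the doc type are hypothetical stand-ins for whatever the application actually defines, and the ScriptContext constructor has gained extra arguments in some versions, so treat this as an assumption-laden outline rather than the exact code.

import java.util.Map;
import org.elasticsearch.script.Script;
import org.elasticsearch.script.ScriptContext;
import org.elasticsearch.script.ScriptService;
import org.elasticsearch.script.ScriptType;

// Hypothetical script contract; a real plugin defines its own context and doc type.
public abstract class GetterScript {

    // Painless needs the parameter names of execute() listed up front.
    public static final String[] PARAMETERS = { "doc" };

    public static final ScriptContext<Factory> CONTEXT =
            new ScriptContext<>("getter", Factory.class);

    public abstract Object execute(Object doc);

    public interface Factory {
        GetterScript newInstance();
    }

    // Caller-side caching: compile and instantiate once, off the hot path.
    public static GetterScript compileOnce(ScriptService scriptService) {
        Script source = new Script(ScriptType.INLINE, "painless",
                "return doc.getter()", Map.of());
        return scriptService.compile(source, CONTEXT).newInstance();
    }
}

With this shape, compile() and newInstance() run once, so the only per-call cost on the hot path is cached.execute(doc).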
ScriptService.compile does check the script cache for an already-compiled version. It does a little work before that lookup, but that part should be fast.
One important thing about that cache is that it has a size of 100 and an LRU eviction policy. So if you have lots of compilations going on, you could have constant cache eviction happening and thus never benefit from the cache.
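If eviction churn is the problem, the cache can also be enlarged. In versions where the script cache is sized by a single node-level setting (the setting name and default have changed across releases, so double-check the docs for your version), that would be something like this in elasticsearch.yml:

script.cache.max_size: 500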
You can check this in the nodes stats, where there is a dedicated scripts section, or just query it directly.
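For example (script is a valid nodes stats metric; the exact response fields, such as compilations and cache_evictions, may vary by version):

GET _nodes/stats/script

A cache_evictions counter that keeps climbing alongside compilations is the sign of the eviction churn described above.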