Painless Script Caching

Hi friends,

I am looking for advice/best practices on caching Painless scripts. In my case, the script itself is lightweight and simple, no more than a single statement, with no script parameters. An example is "return doc.getter()". The scripts are, however, executed very frequently, say thousands of times per second.

As per the Painless docs, Painless has built-in caching. So my first, vanilla version looks like this:

scriptService.compile("return doc.getter()", CONTEXT).newInstance().execute(doc); // run thousands of times a second.

Since the performance impact was significant, I changed it a bit to cache the compiled script on the caller side. It looks like this:

Script cachedScript = scriptService.compile("return doc.getter()", CONTEXT).newInstance(); // only once


cachedScript.execute(doc); // still thousands of times a second

The performance is significantly better than that of the vanilla version, so I am wondering whether I am using Painless correctly.
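For reference, the caller-side caching pattern I mean can be sketched like this. This is a standalone illustration, not Elasticsearch code: the compile(...) method here is a hypothetical stand-in for scriptService.compile(source, CONTEXT), and the point is just that each distinct source string is compiled at most once.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class ScriptCache {
    // Counts how many times the "expensive" compile step actually runs.
    static final AtomicInteger compileCount = new AtomicInteger();

    // Hypothetical stand-in for scriptService.compile(source, CONTEXT).
    static String compile(String source) {
        compileCount.incrementAndGet();
        return "compiled:" + source;
    }

    private static final Map<String, String> CACHE = new ConcurrentHashMap<>();

    // Compile each distinct source at most once; later calls hit the map.
    static String compiledFor(String source) {
        return CACHE.computeIfAbsent(source, ScriptCache::compile);
    }

    public static void main(String[] args) {
        for (int i = 0; i < 10_000; i++) {
            compiledFor("return doc.getter()");
        }
        System.out.println(compileCount.get()); // prints 1: compiled exactly once
    }
}
```

With the same source string, the hot path only pays for one ConcurrentHashMap lookup per call instead of a trip through the script service.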


So ScriptService.compile does actually check the script cache for an already-compiled version. It does a little bit of work before that lookup, but that should be fast.

One important thing about that cache is that it has a size of 100 entries and an LRU eviction policy. So if you have lots of distinct compilations going on, you could have constant cache eviction happening and thus never benefit from that cache.
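The eviction behaviour described above can be illustrated with a small bounded LRU cache, here built on java.util.LinkedHashMap (this is just a model of the behaviour, not the actual Elasticsearch implementation; the size of 100 matches the figure above):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruCacheDemo {
    // A map that evicts the least-recently-used entry once size exceeds maxSize.
    static <K, V> Map<K, V> lruCache(int maxSize) {
        return new LinkedHashMap<K, V>(16, 0.75f, true) { // accessOrder = true -> LRU
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxSize;
            }
        };
    }

    public static void main(String[] args) {
        Map<String, String> cache = lruCache(100);
        // Insert 150 distinct "scripts": the oldest 50 get evicted.
        for (int i = 0; i < 150; i++) {
            cache.put("script-" + i, "compiled-" + i);
        }
        System.out.println(cache.size());                    // prints 100
        System.out.println(cache.containsKey("script-0"));   // prints false: evicted
        System.out.println(cache.containsKey("script-149")); // prints true
    }
}
```

If your workload keeps generating more than 100 distinct script sources, every new compilation can push an older one out, and the cache never reaches a steady state.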

You can check this in the node stats, where there is a dedicated script field, or just run

GET _nodes/stats/script


I am running very few scripts, and here are the script stats:

"script": {
    "compilations": 7,
    "cache_evictions": 0
}

I also looked at the compile method, and nothing major stands out. Could it be newInstance() that causes the heavy computation and significant GC?
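One way to reason about this: if newInstance() allocates a fresh object on every call, the hot path produces thousands of short-lived allocations per second, whereas creating one instance and reusing it allocates once. The sketch below models that with a hypothetical ScriptInstance class (not a Painless type) and an allocation counter. One caveat: reusing a single instance across threads is only safe if the script holds no per-execution state.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class InstanceReuseDemo {
    static final AtomicInteger instancesCreated = new AtomicInteger();

    // Hypothetical stand-in for whatever factory.newInstance() returns.
    static class ScriptInstance {
        ScriptInstance() { instancesCreated.incrementAndGet(); }
        int execute(int doc) { return doc + 1; }
    }

    public static void main(String[] args) {
        instancesCreated.set(0);
        // Pattern A: newInstance() per call -> one allocation per execution.
        for (int i = 0; i < 1_000; i++) new ScriptInstance().execute(i);
        int perCall = instancesCreated.getAndSet(0);

        // Pattern B: create once, execute many times -> a single allocation.
        ScriptInstance cached = new ScriptInstance();
        for (int i = 0; i < 1_000; i++) cached.execute(i);
        int reused = instancesCreated.get();

        System.out.println(perCall); // prints 1000
        System.out.println(reused);  // prints 1
    }
}
```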

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.