Scripts of type [inline], operation [aggs] and lang [groovy] are disabled

Keep getting this error:

[2015-06-11 16:09:46,824][DEBUG][action.search.type       ] [Radion the Atomic Man] [logstash-2014.12.30][0], node[SoTj_ahJSJa5WtFhRUPWow], [P], s[STARTED]: Failed to execute [org.elasticsearch.action.search.SearchRequest@794652fb]
org.elasticsearch.transport.RemoteTransportException: [Blackwing][inet[/10.0.3.33:9300]][indices:data/read/search[phase/query]]
Caused by: org.elasticsearch.search.SearchParseException: [logstash-2014.12.30][0]: query[name:fame_fan],from[-1],size[-1]: Parse Failure [Failed to parse source [{
  "query": {
    "match": {
      "name": "fame_fan"
    }
  },
  "aggs": {
    "fame_return" : {
            "sum" : {
                "field" : "extra.fame",
                "script" : "doc['extra.fame'].value*1.5"
            }
        }
  }
}
]]
	at org.elasticsearch.search.SearchService.parseSource(SearchService.java:735)
	at org.elasticsearch.search.SearchService.createContext(SearchService.java:560)
	at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:532)
	at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:294)
	at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:776)
	at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:767)
	at org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler.doRun(MessageChannelHandler.java:279)
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.script.ScriptException: scripts of type [inline], operation [aggs] and lang [groovy] are disabled
	at org.elasticsearch.script.ScriptService.compile(ScriptService.java:285)
	at org.elasticsearch.script.ScriptService.search(ScriptService.java:483)
	at org.elasticsearch.search.aggregations.support.ValuesSourceParser.createScript(ValuesSourceParser.java:188)
	at org.elasticsearch.search.aggregations.support.ValuesSourceParser.config(ValuesSourceParser.java:182)
	at org.elasticsearch.search.aggregations.metrics.NumericValuesSourceMetricsAggregatorParser.parse(NumericValuesSourceMetricsAggregatorParser.java:65)
	at org.elasticsearch.search.aggregations.AggregatorParsers.parseAggregators(AggregatorParsers.java:148)
	at org.elasticsearch.search.aggregations.AggregatorParsers.parseAggregators(AggregatorParsers.java:78)
	at org.elasticsearch.search.aggregations.AggregationParseElement.parse(AggregationParseElement.java:60)
	at org.elasticsearch.search.SearchService.parseSource(SearchService.java:719)
	... 10 more

Any ideas why this happens? (You can see the query inside the error above ...)

UPDATE:
I tried to enable aggs/script/inline using:

# curl -XPUT localhost:9200/_cluster/settings -d '{
    "persistent" : {
        "script.engine.groovy.inline.aggs": true
    }
}'

It didn't help :frustrated:

Dynamic scripting (i.e. inline and indexed scripts) is disabled by default for security reasons. To enable it for just groovy scripts in aggregations, you can add the following line to the elasticsearch.yml file on each node:

script.engine.groovy.inline.aggs: on

More information on dynamic scripting and why it's disabled by default can be found here:
https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-scripting.html#modules-scripting

hope that helps


Thanks @colings86,

That's what I thought. Is it possible to do that when the cluster is LIVE? I don't want to restart it while it's working.

No, these options cannot be changed dynamically, as that would have implications for security, i.e. someone could exploit an Elasticsearch instance by turning on dynamic scripting and then running dynamic scripts. You should, however, be able to change the setting in all your elasticsearch.yml files and restart each node individually. This will allow your cluster to remain active while you are changing the setting. You may want to look into disabling allocation during the rolling restart to stop shards from being moved around as nodes are restarted, as sketched below.
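For example, a rough sketch of toggling allocation around each node restart (this uses the standard cluster.routing.allocation.enable setting via the cluster settings API; transient, so it won't survive a full cluster restart):

# before stopping a node, pause shard allocation
curl -XPUT localhost:9200/_cluster/settings -d '{
    "transient" : {
        "cluster.routing.allocation.enable" : "none"
    }
}'

# restart the node, wait for it to rejoin, then re-enable allocation
curl -XPUT localhost:9200/_cluster/settings -d '{
    "transient" : {
        "cluster.routing.allocation.enable" : "all"
    }
}'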

As you are turning on dynamic scripting you should ensure you have appropriately protected your cluster from malicious attacks (one should always do this but it becomes even more important when dynamic scripting is enabled).


Thanks.

I have 2 nodes (master and slave). Can I do the restart and keep the master being master?

If you restart the master first, the slave will be elected master while it is down; then, when you restart the slave, the original master will be re-elected, so it should end up as master again when you finish. Bear in mind that it should not matter to you which of the nodes is the master (unless you have dedicated master nodes), since master is just a role (both nodes are capable of being master, provided their config has node.master: true) and it should not affect the way you call Elasticsearch.
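If you want to verify which node currently holds the master role during the restarts, the _cat API can show you, for example:

curl 'localhost:9200/_cat/master?v'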

I send data to ES from LS using the ES URL (in my case ES and LS are on the same machine, so LS calls it using "localhost"). I have a feeling that when I restart the ES master, things will break... right?

The setting that worked for me was the following:

script.groovy.sandbox.enabled: true

Is there a way to keep groovy disabled and get results like:

{
  "query": {
    "script": {
      "script": "doc['X.made_in'].values.size() > 10"
    }
  }
}

Otherwise I would not know how to find all products manufactured in more than 10 countries.
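(One possibility I came across is file scripts: on 1.x you can apparently place a script under config/scripts on each node, which stays allowed even when dynamic scripting is off, and reference it by name. A rough sketch of what I mean, with a made-up script name; I have not verified that the script query accepts script_file, so check the scripting docs for your version:)

# config/scripts/made_in_count.groovy on each node:
doc['X.made_in'].values.size() > 10

# then reference it by file name, without the extension:
{ "query": { "script": { "script_file": "made_in_count" } } }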

You can edit your Logstash config file and add both hosts in the output config, e.g. hosts => ["localhost:9200", "db2:9200"], and restart Logstash before proceeding with the rolling restart of your ES cluster.
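For example, the output section could look something like this (the host names are just placeholders for your own nodes):

output {
  elasticsearch {
    hosts => ["localhost:9200", "db2:9200"]
  }
}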


Alternative: Use Logstash

If you cannot stop your processing or your master, you can just use Logstash to perform the data update too. Create a new .conf file and start up another Logstash instance, or place it in your /conf.d/ folder and restart Logstash.

Often you can perform a simple pipeline:

input (Elasticsearch)
filter (use the mutate plugin to change fields)
output (Elasticsearch)

Example:

input {
  elasticsearch {
    tags => ["fix_index"]
    hosts => ["localhost:9200", "search2:9200"]
    index => "oldindex"
    docinfo => true
    size => 2000
  }
}
filter {
  if "fix_index" in [tags] {
    # if you wanted to fix a date field, use this plugin
    date {
      match => [ "timestamp", "ISO8601" ]
      target => "@timestamp"
      remove_field => [ "timestamp" ]
    }
    # if you want to fix other fields use this plugin
    mutate {
      #rename => {"timestamp" => "@timestamp"}
      remove_field => [ "object" ]
    }
  }
}
output {
  if "fix_index" in [tags] {
    elasticsearch {
      hosts => ["localhost:9200", "search2:9200"]
      manage_template => false
      index => "newindex"
      document_type => "sometype"
      workers => 2
    }
  }
}

The reason I use "tags" is that if I place this in the /conf.d/ folder of an existing Logstash and restart it, Logstash concatenates all the configs together. I only want the records related to this operation to be filtered and indexed this way, so I isolate them with the tag. Alternatively, you can simply start another instance of Logstash and pass in your config file:

/opt/logstash/bin/logstash -f yourconfigfile.conf

You will likely want to run a configtest against it first. If it is a long operation, you may want to run it via nohup ... > output.log & and then tail that file as necessary, so you don't interrupt processing if your terminal disconnects for some reason (assuming you are remotely accessing a Linux server, of course).
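Concretely, that could look something like this (--configtest is the pre-5.x flag spelling, so check the flag for your Logstash version; the nohup line is just the same invocation detached):

# test the config first
/opt/logstash/bin/logstash -f yourconfigfile.conf --configtest

# run detached so a dropped terminal doesn't kill the job
nohup /opt/logstash/bin/logstash -f yourconfigfile.conf > output.log &
tail -f output.log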

After you are done, and assuming you pushed the updated data to a temp index, use the _reindex API to move the data back to your main index after deleting it. What really helps is if you use an alias for your index, so you can "hot swap" the old index for the new one when done and not mess with your applications.
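As a sketch of that alias swap (index and alias names here are placeholders; the _aliases API applies the remove and add atomically):

curl -XPOST localhost:9200/_aliases -d '{
  "actions" : [
    { "remove" : { "index" : "oldindex", "alias" : "products" } },
    { "add" : { "index" : "newindex", "alias" : "products" } }
  ]
}'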
