If I want to add or remove a host, can I do it host by host?
And what happens to the statically configured IPs? Are they merged with the dynamic ones, or lost?
[2018-04-20T18:02:11,967][WARN ][o.e.g.DanglingIndicesState] [wilco-2] [[.monitoring-es-6-2018.04.16/IuLNNiMKTCyaZWkxYshq6w]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
[2018-04-20T18:02:11,967][INFO ][o.e.c.m.TemplateUpgradeService] [wilco-2] Finished upgrading templates to version 6.2.2
[2018-04-20T18:02:17,283][ERROR][o.e.x.w.t.s.ExecutableScriptTransform] [wilco-2] failed to execute [script] transform for [UTIZbUvoTtizEv91Q260jQ_elasticsearch_cluster_status_b8610155-8b8a-4c78-8125-1b90dac5f1fe-2018-04-20T16:02:17.258Z]
org.elasticsearch.script.ScriptException: runtime error
at org.elasticsearch.painless.PainlessScript.convertToScriptException(PainlessScript.java:101) ~[?:?]
at org.elasticsearch.painless.PainlessScript$Script.execute(ctx.vars.email_recipient = (ctx.payload.kibana_settings.hits.total > 0) ? ctx.payload.kibana_settings.hits.hits[0]._source.kibana_settings.xpack.default_admin_email : null;ctx.vars.is_new = ctx.vars.fails_check && !ctx.vars.not_resolved;ctx.vars.is_resolve ...:1070) ~[?:?]
at org.elasticsearch.painless.ScriptImpl.run(ScriptImpl.java:105) ~[?:?]
at org.elasticsearch.xpack.watcher.transform.script.ExecutableScriptTransform.doExecute(ExecutableScriptTransform.java:69) ~[x-pack-watcher-6.2.2.jar:6.2.2]
at org.elasticsearch.xpack.watcher.transform.script.ExecutableScriptTransform.execute(ExecutableScriptTransform.java:53) ~[x-pack-watcher-6.2.2.jar:6.2.2]
at org.elasticsearch.xpack.watcher.transform.script.ExecutableScriptTransform.execute(ExecutableScriptTransform.java:38) ~[x-pack-watcher-6.2.2.jar:6.2.2]
at org.elasticsearch.xpack.watcher.execution.ExecutionService.executeInner(ExecutionService.java:481) ~[x-pack-watcher-6.2.2.jar:6.2.2]
at org.elasticsearch.xpack.watcher.execution.ExecutionService.execute(ExecutionService.java:322) ~[x-pack-watcher-6.2.2.jar:6.2.2]
at org.elasticsearch.xpack.watcher.execution.ExecutionService.lambda$executeAsync$7(ExecutionService.java:426) ~[x-pack-watcher-6.2.2.jar:6.2.2]
at org.elasticsearch.xpack.watcher.execution.ExecutionService$WatchExecutionTask.run(ExecutionService.java:580) [x-pack-watcher-6.2.2.jar:6.2.2]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:573) [elasticsearch-6.2.2.jar:6.2.2]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_121]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_121]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121]
Caused by: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at java.util.ArrayList.rangeCheck(ArrayList.java:653) ~[?:1.8.0_121]
at java.util.ArrayList.get(ArrayList.java:429) ~[?:1.8.0_121]
at org.elasticsearch.painless.PainlessScript$Script.execute(ctx.vars.email_recipient = (ctx.payload.kibana_settings.hits.total > 0) ? ctx.payload.kibana_settings.hits.hits[0]._source.kibana_settings.xpack.default_admin_email : null;ctx.vars.is_new = ctx.vars.fails_check && !ctx.vars.not_resolved;ctx.vars.is_resolve ...:347) ~[?:?]
... 12 more
Still stuck. I have restarted my cluster after adding xpack.security.transport.filter.enabled: false on each node, but the result is the same: I cannot connect to the cluster. But I know the cluster is GREEN, because I did receive an email from x-pack monitoring.
The IP filtering rules work just like any other cluster setting; see the precedence of settings. You cannot mix config-file settings with the dynamic ones or append to them; you can only set or unset them, either in the file or via the API, and the API takes precedence.
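As a sketch of what "unset via the API" means: the dynamic filter rules can be removed by setting them to null through the Cluster Update Settings API. The host and port below are placeholders for one of your own nodes, and this assumes the rules were applied as transient settings (use "persistent" instead if that is how they were set).

```shell
# Clear the dynamically applied transport filter rules by nulling them out.
# localhost:9200 is a placeholder; point this at a reachable node.
curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
  "transient": {
    "xpack.security.transport.filter.allow": null,
    "xpack.security.transport.filter.deny": null
  }
}'
```

After this, only the rules from elasticsearch.yml (if any) remain in effect, since the API no longer overrides them.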
I have not fully understood the failure state you are describing.
This should not gate the HTTP layer, so curl should work. Also, setting xpack.security.transport.filter.enabled: false should disable filtering. All of this assumes that only the settings mentioned previously have been set via the API; otherwise, any other settings applied via the API take precedence.
In any case, have you tried temporarily assigning the public IP "88.xxx.xxx.83" to the local network interface of one of the nodes, running the curl command that clears all the settings, and then removing the dummy IP address?
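The workaround above could look roughly like the following on a Linux node; the interface name eth0 and the masked public IP are placeholders, and the curl body assumes the filter rules were applied as transient cluster settings.

```shell
# 1. Temporarily add the whitelisted public IP to the node's interface
#    (eth0 and the address are placeholders; requires root).
ip addr add 88.0.0.83/32 dev eth0

# 2. Clear the dynamic IP filter settings while the node now "owns"
#    an address the filter allows.
curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
  "transient": {
    "xpack.security.transport.filter.allow": null,
    "xpack.security.transport.filter.deny": null
  }
}'

# 3. Remove the dummy address again.
ip addr del 88.0.0.83/32 dev eth0
```

This is only a sketch of the idea; whether it unblocks you depends on how the filter rules were originally applied.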
I am doing better compared to the Friday-afternoon last-minute change that ruined my apéritif.
Anyway, I did exactly what you suggested and saved the access rules. I have decided to disable IP filtering and set it up with a standard Linux firewall (UFW) instead. I feel better this way, because I can always SSH into the server from behind the firewall.
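For anyone taking the same route, a minimal UFW setup along these lines could replace the x-pack filter; the allowed source address is a placeholder for your own trusted host, and the ports assume Elasticsearch defaults (9200 for HTTP, 9300 for transport).

```shell
# Always keep SSH reachable first, so you cannot lock yourself out.
ufw allow OpenSSH

# Allow Elasticsearch HTTP and transport traffic only from a trusted host
# (203.0.113.10 is a placeholder address).
ufw allow from 203.0.113.10 to any port 9200 proto tcp
ufw allow from 203.0.113.10 to any port 9300 proto tcp

# Deny everything else by default and enable the firewall.
ufw default deny incoming
ufw enable
```

Unlike the dynamic cluster settings, these rules live entirely outside Elasticsearch, so a cluster restart or settings change cannot silently drop them.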