Node Move Failing for ES

So I just set up another brand-new instance of ECE, following the same process I've used successfully in the past: install an 'admin' node with fewer resources than the other three nodes, add the new allocators, and once they all show up under Platform -> Allocators, go to Platform -> Allocators -> {select first installed node} -> Move nodes -> select all -> Move nodes,

but I get the error below on the ES migrations. The Kibana migration appears to work and can be validated on the allocator screen:

no.found.constructor.validation.ValidationException: 1. Can't apply a move_only plan with topology / setting changes. Actions: [settings]
at no.found.constructor.validation.Validation$EveryError.asFailedFuture(Validation.scala:238)
at no.found.constructor.steps.ValidatePlanPrerequisites$$anonfun$no$found$constructor$steps$ValidatePlanPrerequisites$$validateWithRetries$1$3.apply(ValidatePlanPrerequisites.scala:101)
at no.found.constructor.steps.ValidatePlanPrerequisites$$anonfun$no$found$constructor$steps$ValidatePlanPrerequisites$$validateWithRetries$1$3.apply(ValidatePlanPrerequisites.scala:84)
at scala.concurrent.Future$$anonfun$flatMap$1.apply(Future.scala:253)
at scala.concurrent.Future$$anonfun$flatMap$1.apply(Future.scala:251)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
at no.found.concurrent.WrappedRunnable.run(ControllableExecutionContextWrapper.scala:80)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:39)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:415)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

I've tried starting fresh and reinstalling a number of times, but I always get this error. Any suggestions?

Thanks

Hi @Rob_wylde

Sorry you're running into problems here. We're looking into why this has started happening, but in the meantime you should be able to go to the ES cluster, copy the existing plan from the Activity page, open the advanced editor, paste in the copied plan, and set "move_only" to false.
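If you'd rather script that than click through the advanced editor, the change amounts to flipping transient.plan_configuration.move_only in the plan you re-apply. Here's a minimal, untested sketch against the ECE v1 clusters API (GET/POST /api/v1/clusters/elasticsearch/{cluster_id}/plan); the host, port, credentials and cluster ID are placeholders you'd substitute with your own:

    # Rough sketch only: re-apply the current plan with move_only set to false.
    # Assumes the ECE v1 clusters API and a self-signed admin console certificate.
    import requests

    ECE_API = "https://COORDINATOR_HOST:12443"   # placeholder host/port
    AUTH = ("admin", "ADMIN_PASSWORD")           # placeholder credentials
    CLUSTER_ID = "SYSTEM_CLUSTER_ID"             # e.g. the admin-console-elasticsearch cluster

    plan_url = f"{ECE_API}/api/v1/clusters/elasticsearch/{CLUSTER_ID}/plan"

    # Fetch the plan currently applied to the cluster.
    plan = requests.get(plan_url, auth=AUTH, verify=False).json()

    # Ensure the transient plan configuration exists, then disable move_only.
    plan.setdefault("transient", {}).setdefault("plan_configuration", {})["move_only"] = False

    # Re-apply the plan; once it finishes, the node move should go through.
    resp = requests.post(plan_url, json=plan, auth=AUTH, verify=False)
    resp.raise_for_status()
    print("Plan submitted:", resp.status_code)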

I believe the issue will only happen with the system clusters, and only the first time that you try to move them.

Sorry again for the inconvenience

Alex


Alex,

Thanks for the reply. You are correct that this is only an issue with the default system deployments (admin-console-elasticsearch, logging-and-metrics, and security-cluster). Newly created deployments initially install onto the very first node, but they can be migrated / moved to another node without issue.

Thanks for the tip about forcing this through. I will give it a try.

Incidentally, a colleague suggested a more elegant workaround: just reapply the existing plan (e.g. edit, then save), after which the move should work!

I can confirm that this works. I'll share the steps I took, along with a rough API equivalent below, in the hope it helps others.

Deployments -> admin-console-elasticsearch (for example) -> Edit -> make no modifications and click Save

* magic happens *

You can now 'Move nodes' without any errors.
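For anyone who would rather script the no-op re-apply than click through the UI (for example, to hit all three system clusters in one go), here's a rough, untested sketch of the same idea against the ECE API. Same caveats as the snippet above: it assumes the v1 clusters API, and the host, credentials and cluster IDs are placeholders you'd need to fill in yourself:

    # Rough sketch: re-apply each system cluster's current plan with no changes,
    # which (per the workaround above) clears the move_only validation error.
    import requests

    ECE_API = "https://COORDINATOR_HOST:12443"   # placeholder host/port
    AUTH = ("admin", "ADMIN_PASSWORD")           # placeholder credentials

    # Placeholder cluster IDs for admin-console-elasticsearch,
    # logging-and-metrics and security-cluster.
    SYSTEM_CLUSTER_IDS = [
        "<admin-console-es-cluster-id>",
        "<logging-and-metrics-cluster-id>",
        "<security-cluster-id>",
    ]

    for cluster_id in SYSTEM_CLUSTER_IDS:
        plan_url = f"{ECE_API}/api/v1/clusters/elasticsearch/{cluster_id}/plan"
        # Fetch the current plan and post it straight back - the 'edit and save' trick.
        plan = requests.get(plan_url, auth=AUTH, verify=False).json()
        resp = requests.post(plan_url, json=plan, auth=AUTH, verify=False)
        print(cluster_id, resp.status_code)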

Thanks, Alex, and thanks to your colleague too.
