Elasticsearch license downgrade for clusters managed by ECK

Hi,

Long story short: we are running multiple self-managed Elasticsearch clusters in Kubernetes using the ECK operator. After upgrading one of the clusters to an Enterprise license, we noticed that another cluster was upgraded as well. Apparently, as discussed in this GitHub issue, it is not possible to run more than one cluster managed by the same ECK operator with different license levels.

Now we want to downgrade the license of one of the clusters to Basic. I am currently looking for the best way to achieve this downgrade without any downtime or migration to new clusters. Is there any documentation or guide on how this could be achieved?
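For context, my understanding is that the Enterprise license in ECK is applied operator-wide as a Kubernetes secret labelled license.k8s.elastic.co/scope=operator in the operator namespace, which is why it affects every managed cluster at once. A minimal sketch of what removing it would look like, assuming the secret is named eck-license (the name is arbitrary, so check with the label selector first):

# List license secrets applied to the operator (label as documented by ECK).
kubectl get secret -n elastic-system -l license.k8s.elastic.co/scope=operator

# Deleting the secret reverts ALL clusters managed by this operator to Basic,
# which is exactly why a per-cluster downgrade seems to need separate operators.
kubectl delete secret eck-license -n elastic-system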

Our setup looks like this:

  • The ECK operator runs in the elastic-system namespace and manages two Elasticsearch clusters in a single Kubernetes cluster
  • The Elasticsearch clusters run in dedicated namespaces, e.g. namespace-1 and namespace-2
  • Only one of the Elasticsearch clusters needs the Enterprise license

Right now I am considering the following theoretical action plan:

  1. Annotate the existing Elasticsearch resources to disable management by the operator, using the provided annotation (see the sketch after this list).
  2. Delete the existing ECK operator.
  3. Deploy a separate ECK operator for each cluster it will manage.
  4. Annotate the Elasticsearch resources so that they are picked up by the new operators.
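A minimal sketch of step 1, assuming the Elasticsearch resources are named cluster-1 and cluster-2 (hypothetical names; adjust to your manifests):

# ECK stops reconciling resources annotated with eck.k8s.elastic.co/managed=false;
# the Elasticsearch pods keep running, the operator just ignores the resources.
kubectl annotate elasticsearch cluster-1 -n namespace-1 eck.k8s.elastic.co/managed=false
kubectl annotate elasticsearch cluster-2 -n namespace-2 eck.k8s.elastic.co/managed=false

Step 4 would then be the reverse: remove the annotation (or set it back to true) once the new per-cluster operators are in place.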

I tried to follow this plan but hit a dead end at step 2. After deleting the ECK operator I observed that not all resources had been deleted:

helm uninstall elastic-operator -n elastic-system
These resources were kept due to the resource policy:
[CustomResourceDefinition] kibanas.kibana.k8s.elastic.co
[CustomResourceDefinition] logstashes.logstash.k8s.elastic.co
[CustomResourceDefinition] stackconfigpolicies.stackconfigpolicy.k8s.elastic.co
[CustomResourceDefinition] agents.agent.k8s.elastic.co
[CustomResourceDefinition] apmservers.apm.k8s.elastic.co
[CustomResourceDefinition] beats.beat.k8s.elastic.co
[CustomResourceDefinition] elasticmapsservers.maps.k8s.elastic.co
[CustomResourceDefinition] elasticsearchautoscalers.autoscaling.k8s.elastic.co
[CustomResourceDefinition] elasticsearches.elasticsearch.k8s.elastic.co
[CustomResourceDefinition] enterprisesearches.enterprisesearch.k8s.elastic.co

release "elastic-operator" uninstalled

In the next step I followed the restricted installation guide, which requires installing the global resources (the CRDs) first and then installing an ECK operator for each of the clusters separately.
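For the per-cluster operators, my plan based on that guide looks roughly like this (a sketch; the release name and flag values are my assumptions, taken from the restricted-install documentation):

# One restricted operator per cluster namespace; CRDs and other
# cluster-scoped resources are installed separately, once.
helm install elastic-operator-1 elastic/eck-operator -n namespace-1 \
  --set=installCRDs=false \
  --set=managedNamespaces='{namespace-1}' \
  --set=createClusterScopedResources=false \
  --set=webhook.enabled=false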

However, during the installation of the global resources I get an error caused by the already existing CRDs that were not deleted alongside the ECK operator:

helm upgrade --install elastic-operator-crds elastic/eck-operator-crds
Release "elastic-operator-crds" does not exist. Installing it now.
Error: rendered manifests contain a resource that already exists. Unable to continue with install: CustomResourceDefinition "agents.agent.k8s.elastic.co" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "elastic-operator-crds": current value is "elastic-operator"; annotation validation error: key "meta.helm.sh/release-namespace" must equal "elasticsearch-cluster-1": current value is "elastic-system"
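Reading the error, Helm refuses to import the pre-existing CRDs because their ownership metadata still points at the old elastic-operator release. One option I am considering is to update that metadata so the new release can adopt them, instead of deleting and recreating the CRDs (which would delete the clusters). A sketch, with the release name and namespace taken from the error output above:

# Hypothetical adoption step: repoint the Helm ownership metadata of every
# retained ECK CRD at the new elastic-operator-crds release.
for crd in $(kubectl get crd -o name | grep k8s.elastic.co); do
  kubectl annotate "$crd" meta.helm.sh/release-name=elastic-operator-crds --overwrite
  kubectl annotate "$crd" meta.helm.sh/release-namespace=elasticsearch-cluster-1 --overwrite
  kubectl label "$crd" app.kubernetes.io/managed-by=Helm --overwrite
done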

Is this the right way to do it? How should we proceed?

Any help is welcome!
