Kubectl apply failed installing ES

I keep getting the following error, while my colleagues with "almost" exactly the same configuration in minikube have no issues:

TASK [elastic : Elastic Cluster] ********************************************************************************************************************************************************************
fatal: [minikube]: FAILED! => {
"changed": false
}

MSG:

Failed to find exact match for elasticsearch.k8s.elastic.co/v1beta1.Elasticsearch by [kind, name, singularName, shortNames]
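
For reference, whether the operator's CRDs are actually registered in the cluster (and which API group they serve) can be checked with something like this (a diagnostic sketch, assuming a standard kubectl setup):

kubectl get crd | grep elastic.co
kubectl api-resources --api-group=elasticsearch.k8s.elastic.co

If the elasticsearches CRD is missing, or does not serve v1beta1, API discovery would fail with exactly this kind of "Failed to find exact match" error.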

cat elastic_cluster.yml

apiVersion: elasticsearch.k8s.elastic.co/v1beta1
kind: Elasticsearch
metadata:
  name: elastic
  namespace: "{{ config.elastic.namespace }}"
spec:
  version: "{{ config.elastic.version }}"
  nodeSets:
  - name: default
    count: 1
    config:
      node.master: true
      node.data: true
      node.ingest: true
      node.store.allow_mmap: false
      xpack.security.authc.realms.native.native1.order: 0

with

elastic:
  namespace: l12m-elastic
  fqdn: elastic.local.l12m.nl
  version: 7.4.2
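
With those values, the template should render to roughly the following (a sketch, assuming the two Jinja variables above are the only substitutions):

apiVersion: elasticsearch.k8s.elastic.co/v1beta1
kind: Elasticsearch
metadata:
  name: elastic
  namespace: l12m-elastic
spec:
  version: 7.4.2
  nodeSets:
  - name: default
    count: 1
    ...

So the manifest itself looks well-formed; the error is about the API version not being discoverable in the cluster.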

There are a couple of recommendations I'd make:

  • Does the same thing happen using kubectl directly rather than what appears to be Ansible? That may help track down whether the issue is with Ansible or with ECK.

  • It may be worthwhile to ensure you are using the v1 version of the operator (and v1 versions of the resources), as there are improvements present in v1 that are not in the beta; see the sketch below for the apiVersion change.
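
Moving the resource to v1 should only require changing the apiVersion line (a minimal sketch; everything else in the spec above stays the same for this configuration):

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch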

minikube:/playbook# curl https://download.elastic.co/downloads/eck/1.0.1/all-in-one.yaml -o all-in-one.yaml
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 93520  100 93520    0     0  75237      0  0:00:01  0:00:01 --:--:-- 75237
minikube:/playbook# kubectl apply -f all-in-one.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
clusterrole.rbac.authorization.k8s.io/elastic-operator unchanged
clusterrolebinding.rbac.authorization.k8s.io/elastic-operator unchanged
namespace/elastic-system unchanged
statefulset.apps/elastic-operator configured
serviceaccount/elastic-operator unchanged
validatingwebhookconfiguration.admissionregistration.k8s.io/elastic-webhook.k8s.elastic.co unchanged
service/elastic-webhook-server unchanged
secret/elastic-webhook-server-cert unchanged
Error from server (Invalid): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{"apiVersion":"apiextensions.k8s.io/v1beta1","kind...son":"NoConflicts" "status":"True" "type":"NamesAccepted"] map["lastTransitionTime":"2020-02-10T15:04:02Z" "message":"the initial names have been accepted" "reason":"InitialNamesAccepted" "status":"True" "type":"Established"]]]]}
for: "all-in-one.yaml": CustomResourceDefinition.apiextensions.k8s.io "kibanas.kibana.k8s.elastic.co" is invalid: [spec.validation.openAPIV3Schema.properties[spec].properties[http].properties[service].properties[spec].properties[selector].additionalProperties: Forbidden: additionalProperties cannot be set to false, spec.version: Invalid value: "v1": field is immutable]

The download suggests 1.0.1 for the operator, but the file contains v1beta1.

I suppose it fails creating the custom object, as running the apply directly gives:

[config.local ~/Logius]$ kubectl apply -f /tmp/elastic.yaml
error: unable to recognize "/tmp/elastic.yaml": no matches for kind "Elasticsearch" in version "elasticsearch.k8s.elastic.co/v1"

If you do not have any existing resources, can you delete the CRDs and then re-apply? Or, if you do, what ECK version are you upgrading from, and what version of Kubernetes/kubectl are you using? That error where it rejects the Kibana CRD is new to me, at least.
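
If deleting the CRDs is an option (note this also removes any existing Elasticsearch/Kibana/APM resources backed by them), it would look something like this (a sketch, assuming the standard ECK CRD names):

kubectl delete crd elasticsearches.elasticsearch.k8s.elastic.co
kubectl delete crd kibanas.kibana.k8s.elastic.co
kubectl delete crd apmservers.apm.k8s.elastic.co
kubectl apply -f all-in-one.yaml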

The download suggests 1.0.1 for the operator, but the file contains v1beta1.

This is expected as the 1.0.1 CRDs include v1beta1 in the versions list for backwards compatibility.
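
One way to confirm that both versions are served by the installed CRD (a sketch using jsonpath, assuming the standard CRD name):

kubectl get crd elasticsearches.elasticsearch.k8s.elastic.co \
  -o jsonpath='{range .spec.versions[*]}{.name} served={.served} storage={.storage}{"\n"}{end}'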

It seems the error was caused by an older version of minikube; after upgrading minikube to version 1.7.2, no error occurred and ES is running fine.
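
For anyone hitting the same thing, it may be worth comparing environment versions first (assuming the standard CLIs):

minikube version
kubectl version --short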