Timeout: request did not complete within requested timeout 30s

Hello World!

I just deployed the Elasticsearch cluster, yet I'm unable to delete it:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:13:54Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.7-gke.24", GitCommit:"2ce02ef1754a457ba464ab87dba9090d90cf0468", GitTreeState:"clean", BuildDate:"2019-08-12T22:05:28Z", GoVersion:"go1.11.5b4", Compiler:"gc", Platform:"linux/amd64"}
$ time kubectl delete elasticsearch quickstart 
elasticsearch.elasticsearch.k8s.elastic.co "quickstart" deleted

The Elasticsearch cluster never gets deleted. Meanwhile, the operator logs show:

{"level":"info","ts":1567129334.4552548,"logger":"license-controller","msg":"Start reconcile iteration","iteration":35,"namespace":"default","es_name":"quickstart"}
{"level":"info","ts":1567129334.4554448,"logger":"license-controller","msg":"End reconcile iteration","iteration":35,"took":0.000193777,"namespace":"default","es_name":"quickstart"}
{"level":"info","ts":1567129334.4555485,"logger":"elasticsearch-controller","msg":"Start reconcile iteration","iteration":75,"namespace":"default","es_name":"quickstart"}
{"level":"info","ts":1567129334.4557135,"logger":"finalizer","msg":"Executing finalizer","finalizer_name":"expectations.finalizers.elasticsearch.k8s.elastic.co","namespace":"default","name":"quickstart"}
{"level":"info","ts":1567129334.4557407,"logger":"finalizer","msg":"Executing finalizer","finalizer_name":"observer.finalizers.elasticsearch.k8s.elastic.co","namespace":"default","name":"quickstart"}
{"level":"info","ts":1567129334.4557512,"logger":"finalizer","msg":"Executing finalizer","finalizer_name":"secure-settings.finalizers.elasticsearch.k8s.elastic.co","namespace":"default","name":"quickstart"}
{"level":"info","ts":1567129334.4557755,"logger":"finalizer","msg":"Executing finalizer","finalizer_name":"dynamic-watches.finalizers.k8s.elastic.co/http-certificates","namespace":"default","name":"quickstart"}
{"level":"info","ts":1567129364.4594567,"logger":"elasticsearch-controller","msg":"Updating status","iteration":75,"namespace":"default","es_name":"quickstart"}
{"level":"info","ts":1567129364.4595168,"logger":"generic-reconciler","msg":"Aggregated reconciliation results complete","result":{"Requeue":false,"RequeueAfter":0}}
{"level":"info","ts":1567129364.4595733,"logger":"elasticsearch-controller","msg":"End reconcile iteration","iteration":75,"took":30.00402465,"namespace":"default","es_name":"quickstart"}
{"level":"error","ts":1567129364.4596124,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"elasticsearch-controller","request":"default/quickstart","error":"Timeout: request did not complete within requested timeout 30s","errorCauses":[{"error":"Timeout: request did not complete within requested timeout 30s"}],"stacktrace":"github.com/elastic/cloud-on-k8s/operators/vendor/github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/elastic/cloud-on-k8s/operators/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/elastic/cloud-on-k8s/operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/elastic/cloud-on-k8s/operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\ngithub.com/elastic/cloud-on-k8s/operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/elastic/cloud-on-k8s/operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\ngithub.com/elastic/cloud-on-k8s/operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/elastic/cloud-on-k8s/operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\ngithub.com/elastic/cloud-on-k8s/operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/elastic/cloud-on-k8s/operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\ngithub.com/elastic/cloud-on-k8s/operators/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/elastic/cloud-on-k8s/operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}

Please advise.

Hey @alexus, what is the metadata of the ES cluster at that point? I wonder if the operator is timing out before it can remove the finalizers for some reason. Either way, it definitely looks like we need better logging around here to be able to diagnose the situation.
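For anyone else hitting this: an object is stuck in "Terminating" exactly when its deletionTimestamp is set but its finalizers list is still non-empty. A minimal sketch of that check (assumes jq is installed; the inline JSON is a stand-in for piping in `kubectl get elasticsearch quickstart -o json`):

```shell
# Stand-in metadata mirroring the shape kubectl returns; replace the echo
# with `kubectl get elasticsearch quickstart -o json` against a live cluster.
meta='{"metadata":{"deletionTimestamp":"2019-09-05T17:35:30Z","finalizers":["observer.finalizers.elasticsearch.k8s.elastic.co"]}}'

# deletionTimestamp set + non-empty finalizers => deletion is blocked until
# the controller removes its finalizers.
echo "$meta" | jq -r 'if .metadata.deletionTimestamp and (.metadata.finalizers | length) > 0
                      then "blocked by \(.metadata.finalizers | length) finalizer(s)"
                      else "not blocked" end'
```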

https://pastebin.com/EDd1n9Cu

Sorry, I meant the metadata of the cluster itself -- i.e. kubectl get elasticsearch quickstart -o yaml

$ kubectl get elasticsearch quickstart -o yaml
apiVersion: elasticsearch.k8s.elastic.co/v1alpha1
kind: Elasticsearch
metadata:
  annotations:
    common.k8s.elastic.co/controller-version: 0.9.0
    elasticsearch.k8s.elastic.co/cluster-uuid: mqAKDcYgRaGAIPYfM9O7pA
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"elasticsearch.k8s.elastic.co/v1alpha1","kind":"Elasticsearch","metadata":{"annotations":{},"name":"quickstart","namespace":"default"},"spec":{"nodes":[{"config":{"node.data":true,"node.ingest":true,"node.master":true},"nodeCount":3,"volumeClaimTemplates":[{"metadata":{"name":"elasticsearch-data"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"10Gi"}}}}]}],"version":"7.2.0"}}
  creationTimestamp: "2019-09-03T16:38:14Z"
  deletionGracePeriodSeconds: 0
  deletionTimestamp: "2019-09-05T17:35:30Z"
  finalizers:
  - expectations.finalizers.elasticsearch.k8s.elastic.co
  - observer.finalizers.elasticsearch.k8s.elastic.co
  - secure-settings.finalizers.elasticsearch.k8s.elastic.co
  - dynamic-watches.finalizers.k8s.elastic.co/http-certificates
  generation: 5
  name: quickstart
  namespace: default
  resourceVersion: "1062577"
  selfLink: /apis/elasticsearch.k8s.elastic.co/v1alpha1/namespaces/default/elasticsearches/quickstart
  uid: 3967e39c-ce69-11e9-8ea2-4201ac100003
spec:
  http:
    service:
      metadata:
        creationTimestamp: null
      spec: {}
    tls:
      certificate: {}
  nodes:
  - config:
      node.data: true
      node.ingest: true
      node.master: true
    nodeCount: 3
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
  updateStrategy: {}
  version: 7.2.0
status:
  availableNodes: 3
  clusterUUID: mqAKDcYgRaGAIPYfM9O7pA
  controllerVersion: 0.9.0
  health: green
  masterNode: quickstart-es-cknhv8dz7p
  phase: Operational
  service: quickstart-es-http
  zenDiscovery:
    minimumMasterNodes: 2
$
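The metadata above shows deletionTimestamp set with all four finalizers still present, which matches deletion being blocked on the operator removing them. As a last resort the finalizer list can be cleared manually, with the usual caveat that this skips the operator's cleanup logic and can leave orphaned resources behind. A hedged sketch:

```shell
# Last-resort workaround (use with care): emptying metadata.finalizers lets
# the pending delete complete, but bypasses the operator's own cleanup.
patch='{"metadata":{"finalizers":[]}}'

# Against the cluster above, this would be applied with:
#   kubectl patch elasticsearch quickstart --type=merge -p "$patch"

# Sanity-check the patch body locally (assumes jq is installed):
echo "$patch" | jq '.metadata.finalizers | length'
```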