Error from server (Timeout): error when creating "STDIN": Timeout: request did not complete within requested timeout 30s

Hello World!

I'm trying to follow the Quickstart | Elastic Cloud on Kubernetes [0.9] | Elastic guide:

Deploy ECK in your Kubernetes cluster

$ kubectl apply -f https://download.elastic.co/downloads/eck/0.9.0/all-in-one.yaml
customresourcedefinition.apiextensions.k8s.io/apmservers.apm.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/elasticsearches.elasticsearch.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/trustrelationships.elasticsearch.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/kibanas.kibana.k8s.elastic.co created
clusterrole.rbac.authorization.k8s.io/elastic-operator created
clusterrolebinding.rbac.authorization.k8s.io/elastic-operator created
namespace/elastic-system created
statefulset.apps/elastic-operator created
secret/webhook-server-secret created
serviceaccount/elastic-operator created
$
$ kubectl -n elastic-system logs statefulset.apps/elastic-operator  | tail
{"level":"info","ts":1566775664.8577778,"logger":"kubebuilder.controller","msg":"Starting Controller","controller":"license-controller"}
{"level":"info","ts":1566775664.8577247,"logger":"kubebuilder.webhook","msg":"installing webhook configuration in cluster"}
{"level":"info","ts":1566775664.957982,"logger":"kubebuilder.controller","msg":"Starting workers","controller":"apmserver-controller","worker count":1}
{"level":"info","ts":1566775664.958148,"logger":"kubebuilder.controller","msg":"Starting workers","controller":"kibana-association-controller","worker count":1}
{"level":"info","ts":1566775664.958176,"logger":"kubebuilder.controller","msg":"Starting workers","controller":"kibana-controller","worker count":1}
{"level":"info","ts":1566775664.9581378,"logger":"kubebuilder.controller","msg":"Starting workers","controller":"license-controller","worker count":1}
{"level":"info","ts":1566775664.9582195,"logger":"kubebuilder.controller","msg":"Starting workers","controller":"apm-es-association-controller","worker count":1}
{"level":"info","ts":1566775664.958255,"logger":"kubebuilder.controller","msg":"Starting workers","controller":"trial-controller","worker count":1}
{"level":"info","ts":1566775664.9582841,"logger":"kubebuilder.controller","msg":"Starting workers","controller":"elasticsearch-controller","worker count":1}
{"level":"info","ts":1566775664.9925923,"logger":"kubebuilder.webhook","msg":"starting the webhook server."}
$ 

Deploy the Elasticsearch cluster

$ cat <<EOF | kubectl apply -f -
> apiVersion: elasticsearch.k8s.elastic.co/v1alpha1
> kind: Elasticsearch
> metadata:
>   name: quickstart
> spec:
>   version: 7.2.0
>   nodes:
>   - nodeCount: 1
>     config:
>       node.master: true
>       node.data: true
>       node.ingest: true
> EOF
Error from server (Timeout): error when creating "STDIN": Timeout: request did not complete within requested timeout 30s
$ 

How does one troubleshoot ECK?

Please advise.

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:13:54Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.7-gke.19", GitCommit:"bebe882824db5431820e3d59851c8fb52cb41675", GitTreeState:"clean", BuildDate:"2019-07-26T00:09:47Z", GoVersion:"go1.11.5b4", Compiler:"gc", Platform:"linux/amd64"}
$ kubectl get pods --all-namespaces=true
NAMESPACE             NAME                                                     READY   STATUS    RESTARTS   AGE
elastic-system        elastic-operator-0                                       1/1     Running   1          13m
gitlab-managed-apps   certmanager-cert-manager-6df979599b-rvhgg                1/1     Running   0          18m
gitlab-managed-apps   ingress-nginx-ingress-controller-7cf6944677-bnlkh        1/1     Running   0          19m
gitlab-managed-apps   ingress-nginx-ingress-default-backend-7f7bf55777-qxzch   1/1     Running   0          19m
gitlab-managed-apps   prometheus-kube-state-metrics-5d5958bc-5mpcz             1/1     Running   0          17m
gitlab-managed-apps   prometheus-prometheus-server-5c476cc89-xz25r             2/2     Running   0          17m
gitlab-managed-apps   runner-gitlab-runner-767dc4d987-2zvgq                    1/1     Running   0          16m
gitlab-managed-apps   tiller-deploy-5c85978967-fjz2b                           1/1     Running   0          21m
kube-system           event-exporter-v0.2.4-5f88c66fb7-r9hnm                   2/2     Running   0          25m
kube-system           fluentd-gcp-scaler-59b7b75cd7-t72bn                      1/1     Running   0          25m
kube-system           fluentd-gcp-v3.2.0-bpn88                                 2/2     Running   0          24m
kube-system           heapster-v1.6.1-7447959494-fz56j                         3/3     Running   0          24m
kube-system           kube-dns-6987857fdb-6kffz                                4/4     Running   0          25m
kube-system           kube-dns-autoscaler-bb58c6784-hvnth                      1/1     Running   0          25m
kube-system           kube-proxy-gke-test-default-pool-590b06f4-x9r8           1/1     Running   0          25m
kube-system           l7-default-backend-fd59995cd-9tz2n                       1/1     Running   0          25m
kube-system           metrics-server-v0.3.1-57c75779f-dk76z                    2/2     Running   0          24m
kube-system           prometheus-to-sd-98mlh                                   1/1     Running   0          25m
$
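For reference, a few generic diagnostics one might also run at this point (a sketch only, assuming the default `elastic-system` namespace and the resource names from the 0.9 all-in-one manifest):

```shell
# Recent cluster events, newest last -- admission webhook call failures
# often show up here
kubectl get events --sort-by=.metadata.creationTimestamp

# The validating webhook the apiserver must reach when creating
# Elasticsearch resources
kubectl get validatingwebhookconfigurations

# The operator's service and its endpoints, i.e. where webhook
# traffic should land
kubectl -n elastic-system get svc,endpoints
```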
[
 {
   "protoPayload": {
     "@type": "type.googleapis.com/google.cloud.audit.AuditLog",
     "status": {
       "code": 4,
       "message": "DEADLINE_EXCEEDED"
     },
     "authenticationInfo": {
       "principalEmail": "X@X.X"
     },
     "requestMetadata": {
       "callerIp": "X.X.X.X",
       "callerSuppliedUserAgent": "kubectl/v1.15.3 (darwin/amd64) kubernetes/2d3c76f",
       "requestAttributes": {},
       "destinationAttributes": {}
     },
     "serviceName": "k8s.io",
     "methodName": "co.elastic.k8s.elasticsearch.v1alpha1.elasticsearches.create",
     "authorizationInfo": [
       {
         "resource": "elasticsearch.k8s.elastic.co/v1alpha1/namespaces/default/elasticsearches",
         "permission": "co.elastic.k8s.elasticsearch.v1alpha1.elasticsearches.create",
         "granted": true,
         "resourceAttributes": {}
       }
     ],
     "resourceName": "elasticsearch.k8s.elastic.co/v1alpha1/namespaces/default/elasticsearches"
   },
   "insertId": "a699227c-78aa-4184-ba96-695927a26e12",
   "resource": {
     "type": "k8s_cluster",
     "labels": {
       "project_id": "X-X-X",
       "cluster_name": "test",
       "location": "us-east4-a"
     }
   },
   "timestamp": "2019-08-26T15:25:23.900097Z",
   "labels": {
     "authorization.k8s.io/reason": "",
     "authorization.k8s.io/decision": "allow"
   },
   "logName": "projects/X-X-X/logs/cloudaudit.googleapis.com%2Factivity",
   "operation": {
     "id": "a699227c-78aa-4184-ba96-695927a26e12",
     "producer": "k8s.io",
     "first": true,
     "last": true
   },
   "receiveTimestamp": "2019-08-26T15:25:34.316862823Z"
 }
]

Per the thread "Error from server (NotFound): services "quickstart-es-http" not found", the workaround is:

kubectl delete validatingwebhookconfigurations.admissionregistration.k8s.io validating-webhook-configuration
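Before deleting it, one can confirm the webhook configuration exists and see which service and port the apiserver is trying to call (a command sketch; the `validating-webhook-configuration` name comes from the ECK 0.9 manifest):

```shell
# List validating webhooks -- ECK 0.9 installs one named
# "validating-webhook-configuration"
kubectl get validatingwebhookconfigurations.admissionregistration.k8s.io

# Inspect the clientConfig (service, namespace, port) the apiserver
# calls during admission; a timeout here blocks every create request
kubectl get validatingwebhookconfigurations.admissionregistration.k8s.io \
  validating-webhook-configuration -o yaml
```

Deleting the configuration removes the admission check entirely, so `kubectl apply` no longer blocks on the unreachable webhook.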

Hey @alexus,

Is there maybe a firewall rule (or similar) in place on your GKE cluster that would prevent the Kubernetes apiserver from reaching the operator's validation HTTP webhook (port 9876)?
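One way to check on GKE (a sketch only; the cluster name and zone below are taken from your audit log and may need substituting):

```shell
# Hypothetical values -- substitute your own cluster name and zone
CLUSTER=test
ZONE=us-east4-a

# The master (apiserver) CIDR, i.e. the source of webhook calls to the nodes;
# only set on private clusters
gcloud container clusters describe "$CLUSTER" --zone "$ZONE" \
  --format='value(privateClusterConfig.masterIpv4CidrBlock)'

# Firewall rules GKE created for this cluster -- look for an ingress rule
# allowing the master CIDR to reach the webhook port on the nodes
gcloud compute firewall-rules list --filter="name~gke-$CLUSTER"
```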

I think this one has been solved: https://github.com/elastic/cloud-on-k8s/issues/1673

Per the link I posted before your reply: for a private cluster there is a need for a firewall rule (tcp:9443). After creating it, ECK doesn't time out anymore...
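For anyone hitting the same thing, a sketch of creating that rule (the rule name, `MASTER_CIDR`, and `NODE_TAG` are placeholders you must read from your own cluster and node instances, not real values from this thread):

```shell
# Allow the private-cluster master to reach the webhook port on the nodes.
# MASTER_CIDR: the cluster's masterIpv4CidrBlock
# NODE_TAG:    the network tag on the cluster's node instances
gcloud compute firewall-rules create allow-apiserver-to-webhook \
  --direction INGRESS \
  --source-ranges "$MASTER_CIDR" \
  --target-tags "$NODE_TAG" \
  --allow tcp:9443
```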

That's the same link I posted in my comment. Not sure I'd call it resolved exactly, more like a workaround... at the very least it's not documented properly anywhere...

This is mentioned in the troubleshooting docs:

Though it may be worthwhile to be more explicit with some popular providers. Definitely open to suggestions.