Elasticsearch ECK on AWS EKS hosted on Fargate

I am curious to know whether ECK is supported (or even feasible) on AWS EKS hosted on Fargate. The ECK operator won't install when I try installing it on a Fargate-hosted Kubernetes cluster, but it installs fine when the EKS cluster is hosted on AWS EC2.

When I attempt to install on AWS EKS hosted on Fargate:

helm install elastic-operator elastic/eck-operator -n elastic-system --create-namespace \
  --set=installCRDs=false \
  --set=managedNamespaces='{default}' \
  --set=createClusterScopedResources=false \
  --set=webhook.enabled=false \
  --set=config.validateStorageClass=false
kubectl get pods -n elastic-system
NAME                 READY   STATUS    RESTARTS   AGE
elastic-operator-0   0/1     Pending   0          12m
kubectl describe pod -n elastic-system elastic-operator-0
Name:             elastic-operator-0
Namespace:        elastic-system
Priority:         0
Service Account:  elastic-operator
Node:             <none>
Labels:           app.kubernetes.io/instance=elastic-operator
                  app.kubernetes.io/name=elastic-operator
                  controller-revision-hash=elastic-operator-544dd6df54
                  statefulset.kubernetes.io/pod-name=elastic-operator-0
Annotations:      checksum/config: 36f3444d6059a5088fcd88b3767b2ef0a8864f93195a446ef0c580d3a5d0c38d
                  co.elastic.logs/raw:
                    [{"type":"container","json.keys_under_root":true,"paths":["/var/log/containers/*${data.kubernetes.container.id}.log"],"processors":[{"conv...
Status:           Pending
IP:
IPs:              <none>
Controlled By:    StatefulSet/elastic-operator
Containers:
  manager:
    Image:      docker.elastic.co/eck/eck-operator:2.7.0
    Port:       <none>
    Host Port:  <none>
    Args:
      manager
      --config=/conf/eck.yaml
    Limits:
      cpu:     1
      memory:  1Gi
    Requests:
      cpu:     100m
      memory:  150Mi
    Environment:
      OPERATOR_NAMESPACE:  elastic-system (v1:metadata.namespace)
      POD_IP:               (v1:status.podIP)
    Mounts:
      /conf from conf (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gv7zs (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  conf:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      elastic-operator
    Optional:  false
  kube-api-access-gv7zs:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  2m37s (x3 over 12m)  default-scheduler  0/4 nodes are available: 4 node(s) had untolerated taint {eks.amazonaws.com/compute-type: fargate}. preemption: 0/4 nodes are available: 4 Preemption is not helpful for scheduling..

Highlighting the FailedScheduling warning: all four nodes carry the taint eks.amazonaws.com/compute-type: fargate, which the operator pod does not tolerate.
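For context on why the pod stays Pending: on EKS Fargate, pods only run on Fargate capacity when a Fargate profile's selectors match their namespace (matching pods are handed to the fargate-scheduler); since nothing here matches elastic-system, the default scheduler is left trying the tainted Fargate nodes. A hedged sketch of an eksctl ClusterConfig fragment that would add such a profile — the cluster name, region, and profile name below are assumptions, not values from this thread:

```yaml
# Sketch only: assumes eksctl and a cluster named "my-cluster" in us-east-1.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster    # assumed cluster name
  region: us-east-1   # assumed region
fargateProfiles:
  - name: fp-elastic  # assumed profile name
    selectors:
      # Pods created in this namespace would be scheduled onto Fargate
      - namespace: elastic-system
```

Something like eksctl create fargateprofile with a config file should apply it. Even then, though, Elasticsearch data nodes may hit a harder wall: Fargate pods cannot mount EBS-backed persistent volumes, which is presumably why I had to set config.validateStorageClass=false above.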

Yeah, I'm afraid we do not support AWS Fargate (to my knowledge). We certainly do not test on anything but plain EKS in AWS.

Would you mind creating an issue with all the relevant details? :slight_smile:
