Problems with ELK on Amazon EKS (Kubernetes)

How can I configure an Elasticsearch (ELK) stack on EKS so it can claim persistent volumes?
I am having disk problems.

kubectl get pods -A

NAMESPACE      NAME           READY   STATUS    RESTARTS   AGE
default        counter        1/1     Running   0          112m
kube-logging   es-cluster-0   0/1     Pending   0          133m

Hello @Milton, welcome to the community!
Can you please share the deployment configuration, specifically the part where you define and mount the PVC and PV? If it is running as a StatefulSet, you will likely need a StorageClass, PV, and PVC defined explicitly, and then reference them in your StatefulSet YAML.
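For reference, a minimal sketch of that setup on EKS. The StorageClass name `ebs-gp2`, the volume size, and the trimmed-down StatefulSet below are illustrative assumptions, not values from your cluster; this assumes the AWS EBS CSI driver is installed for dynamic provisioning:

```yaml
# Hypothetical example: a StorageClass backed by the AWS EBS CSI driver.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-gp2
provisioner: ebs.csi.aws.com          # requires the EBS CSI driver add-on
parameters:
  type: gp2
volumeBindingMode: WaitForFirstConsumer
---
# In the StatefulSet, reference the class from volumeClaimTemplates
# instead of pre-creating PVs/PVCs by hand; Kubernetes generates one
# claim per replica (data-es-cluster-0, data-es-cluster-1, ...).
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-cluster
  namespace: kube-logging
spec:
  serviceName: elasticsearch
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:7.2.0
          volumeMounts:
            - name: data              # must match the claim template below
              mountPath: /usr/share/elasticsearch/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: ebs-gp2     # must match an existing StorageClass
        resources:
          requests:
            storage: 10Gi             # size is an assumption; adjust as needed
```

With `WaitForFirstConsumer`, the EBS volume is only created once the pod is scheduled, which keeps the volume in the same availability zone as the node.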
It would also help to see the events of your kube-logging pod to understand more about the error being thrown.
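If the pod turns out to be Pending because of an unbound PVC, these commands usually reveal the cause. The claim name here follows the StatefulSet convention `<template>-<statefulset>-<ordinal>` and may differ on your cluster:

```
# Inspect the claim generated for the first replica; its events will say
# whether the StorageClass is missing or no provisioner responded
kubectl -n kube-logging describe pvc data-es-cluster-0

# List the StorageClasses available in the cluster; EKS needs one backed
# by a provisioner such as the EBS CSI driver (ebs.csi.aws.com)
kubectl get storageclass

# Check whether the EBS CSI driver is running; on recent EKS versions it
# is an add-on that is not installed by default, and its absence leaves
# dynamically provisioned claims stuck in Pending
kubectl -n kube-system get pods -l app.kubernetes.io/name=aws-ebs-csi-driver
```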

Hi Ayush_Mathur, good afternoon to you.
Could you please share the configuration or YAML for setting up the PVs and PVCs needed to run Elasticsearch + Kibana + Fluentd on Kubernetes (Amazon EKS)? I have been trying for a long time to configure this, both on local nodes and on an Amazon EKS cluster. I am including my email; please share the configuration link.

Email: vargascalisin@gmail.com
Cell, WhatsApp: 995379762

Here are the error messages it shows me:

# kubectl get services --namespace=kube-logging
NAME            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)             AGE
elasticsearch   ClusterIP   None         <none>        9200/TCP,9300/TCP   15s

When I issue this command, it gives me this output:

kubectl rollout status sts/es-cluster --namespace=kube-logging

Waiting for 3 pods to be ready...

# kubectl get pods --namespace=kube-logging
NAME                      READY   STATUS    RESTARTS   AGE
es-cluster-0              0/1     Pending   0          4m39s
fluentd-fpxx8             1/1     Running   0          96s
fluentd-jf7mz             1/1     Running   0          96s
fluentd-mgcn7             0/1     Pending   0          96s
kibana-84fc546945-pqj4g   1/1     Running   0          2m41s
[root@mongodb-server-1b ~]# kubectl -n kube-logging describe po es-cluster-0
Name:             es-cluster-0
Namespace:        kube-logging
Priority:         0
Service Account:  default
Node:             <none>
Labels:           app=elasticsearch
                  controller-revision-hash=es-cluster-577cc5b6d8
                  statefulset.kubernetes.io/pod-name=es-cluster-0
Annotations:      kubernetes.io/psp: eks.privileged
Status:           Pending
IP:
IPs:              <none>
Controlled By:    StatefulSet/es-cluster
Init Containers:
  fix-permissions:
    Image:      busybox
    Port:       <none>
    Host Port:  <none>
    Command:
      sh
      -c
      chown -R 1000:1000 /usr/share/elasticsearch/data
    Environment:  <none>
    Mounts:
      /usr/share/elasticsearch/data from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-75p96 (ro)
  increase-vm-max-map:
    Image:      busybox
    Port:       <none>
    Host Port:  <none>
    Command:
      sysctl
      -w
      vm.max_map_count=262144
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-75p96 (ro)
  increase-fd-ulimit:
    Image:      busybox
    Port:       <none>
    Host Port:  <none>
    Command:
      sh
      -c
      ulimit -n 65536
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-75p96 (ro)
Containers:
  elasticsearch:
    Image:       docker.elastic.co/elasticsearch/elasticsearch:7.2.0
    Ports:       9200/TCP, 9300/TCP
    Host Ports:  0/TCP, 0/TCP
    Limits:
      cpu:  1
    Requests:
      cpu:  100m
    Environment:
      cluster.name:                  k8s-logs
      node.name:                     es-cluster-0 (v1:metadata.name)
      discovery.seed_hosts:          es-cluster-0.elasticsearch,es-cluster-1.elasticsearch,es-cluster-2.elasticsearch
      cluster.initial_master_nodes:  es-cluster-0,es-cluster-1,es-cluster-2
      ES_JAVA_OPTS:                  -Xms512m -Xmx512m
    Mounts:
      /usr/share/elasticsearch/data from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-75p96 (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data-es-cluster-0
    ReadOnly:   false
  kube-api-access-75p96:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age    From               Message
  ----     ------            ----   ----               -------
  Warning  FailedScheduling  4m49s  default-scheduler  0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims. preemption:   0/3 nodes are available: 3 Preemption is not helpful for scheduling.
[root@mongodb-server-1b ~]#

Those are the error messages I get. Please help me; could you send me your YAML so that I can configure the PVs and PVCs on Amazon EKS?
Thank you so much


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.