Hi, I have installed the ELK stack on Kubernetes using the Helm charts. All pods are running fine except Logstash, which keeps logging this error:
[2022-05-13T12:01:46,928][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch: Name or service not known"}
[2022-05-13T12:02:16,412][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"No Available connections"}
However, at the beginning of the logs the Elasticsearch output connects successfully:
[2022-05-13T12:01:21,601][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://elasticsearch-master:9200"]}
[2022-05-13T12:01:21,703][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch-master:9200/]}}
[2022-05-13T12:01:22,004][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://elasticsearch-master:9200/"}
[2022-05-13T12:01:22,020][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch version determined (7.17.1) {:es_version=>7}
[2022-05-13T12:01:22,023][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2022-05-13T12:01:22,312][INFO ][logstash.outputs.elasticsearch][main] Config is not compliant with data streams. `data_stream => auto` resolved to `false`
[2022-05-13T12:01:22,414][INFO ][logstash.outputs.elasticsearch][main] Using a default mapping template {:es_version=>7, :ecs_compatibility=>:disabled}
[2022-05-13T12:01:22,621][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>125, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x649715b7 run>"}
[2022-05-13T12:01:25,307][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds"=>2.68}
[2022-05-13T12:01:25,400][INFO ][logstash.inputs.beats ][main] Starting input listener {:address=>"0.0.0.0:5044"}
[2022-05-13T12:01:25,419][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
[2022-05-13T12:01:25,700][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2022-05-13T12:01:25,901][INFO ][org.logstash.beats.Server][main][a052736baced7cdf5e3bb4186a486fe93f23da5f0680b2986c54f1bb65250496] Starting server on port: 5044
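So the pipeline's own Elasticsearch output reaches http://elasticsearch-master:9200 just fine; it is only the license checker that keeps trying the bare hostname elasticsearch, which doesn't resolve in my cluster. My guess is that the license/monitoring client doesn't reuse the output's hosts and takes its target from logstash.yml instead; I've sketched the override I think it needs as a comment in the logstashConfig section below (untested).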
Here is my Logstash configuration (the values.yaml I pass to the chart):
replicas: 1
# Allows you to add any config files in /usr/share/logstash/config/
# such as logstash.yml and log4j2.properties
#
# Note that when overriding logstash.yml, `http.host: 0.0.0.0` should always be included
# to make default probes work.
logstashConfig: {}
#  logstash.yml: |
#    key:
#      nestedkey: value
#  log4j2.properties: |
#    key = value
# Allows you to add any pipeline files in /usr/share/logstash/pipeline/
### ***warn*** there is a hardcoded logstash.conf in the image, override it first
logstashPipeline:
  logstash.conf: |
    input {
      beats {
        port => 5044
      }
    }
    output {
      elasticsearch {
        hosts => ["http://elasticsearch-master:9200"]
      }
    }
# Allows you to add any pattern files in your custom pattern dir
logstashPatternDir: "/usr/share/logstash/patterns/"
logstashPattern: {}
#    pattern.conf: |
#      DPKG_VERSION [-+~<>\.0-9a-zA-Z]+
# Extra environment variables to append to this nodeGroup
# This will be appended to the current 'env:' key. You can use any of the kubernetes env
# syntax here
extraEnvs: []
#  - name: MY_ENVIRONMENT_VAR
#    value: the_value_goes_here
# Allows you to load environment variables from kubernetes secret or config map
envFrom: []
# - secretRef:
#     name: env-secret
# - configMapRef:
#     name: config-map
# Add sensitive data to k8s secrets
secrets: []
# - name: "env"
# value:
# ELASTICSEARCH_PASSWORD: "LS1CRUdJTiBgUFJJVkFURSB"
# api_key: ui2CsdUadTiBasRJRkl9tvNnw
# - name: "tls"
# value:
# ca.crt: |
# LS0tLS1CRUdJT0K
# LS0tLS1CRUdJT0K
# LS0tLS1CRUdJT0K
# LS0tLS1CRUdJT0K
# cert.crt: "LS0tLS1CRUdJTiBlRJRklDQVRFLS0tLS0K"
# cert.key.filepath: "secrets.crt" # The path to file should be relative to the `values.yaml` file.
# A list of secrets and their paths to mount inside the pod
secretMounts: []
hostAliases: []
#- ip: "127.0.0.1"
#  hostnames:
#  - "foo.local"
#  - "bar.local"
image: "docker.elastic.co/logstash/logstash"
imageTag: "7.17.1"
imagePullPolicy: "IfNotPresent"
imagePullSecrets: []
podAnnotations: {}
# additionals labels
labels: {}
logstashJavaOpts: "-Xmx1g -Xms1g"
resources:
  requests:
    cpu: "100m"
    memory: "1536Mi"
  limits:
    cpu: "1000m"
    memory: "1536Mi"
volumeClaimTemplate:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
rbac:
  create: false
  serviceAccountAnnotations: {}
  serviceAccountName: ""
  annotations:
    {}
    #annotation1: "value1"
    #annotation2: "value2"
    #annotation3: "value3"
podSecurityPolicy:
  create: false
  name: ""
  spec:
    privileged: false
    fsGroup:
      rule: RunAsAny
    runAsUser:
      rule: RunAsAny
    seLinux:
      rule: RunAsAny
    supplementalGroups:
      rule: RunAsAny
    volumes:
      - secret
      - configMap
      - persistentVolumeClaim
persistence:
  enabled: false
  annotations: {}
extraVolumes:
  []
  # - name: extras
  #   emptyDir: {}
extraVolumeMounts:
  []
  # - name: extras
  #   mountPath: /usr/share/extras
  #   readOnly: true
extraContainers:
  []
  # - name: do-something
  #   image: busybox
  #   command: ['do', 'something']
extraInitContainers:
  []
  # - name: do-something
  #   image: busybox
  #   command: ['do', 'something']
# This is the PriorityClass settings as defined in
# https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
priorityClassName: ""
# By default this will make sure two pods don't end up on the same node
# Changing this to a region would allow you to spread pods across regions
antiAffinityTopologyKey: "kubernetes.io/hostname"
# Hard means that by default pods will only be scheduled if there are enough nodes for them
# and that they will never end up on the same node. Setting this to soft will do this "best effort"
antiAffinity: "hard"
# This is the node affinity settings as defined in
# https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity
nodeAffinity: {}
# This is inter-pod affinity settings as defined in
# https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
podAffinity: {}
# The default is to deploy all pods serially. By setting this to parallel all pods are started at
# the same time when bootstrapping the cluster
podManagementPolicy: "Parallel"
httpPort: 9600
# Custom ports to add to logstash
extraPorts:
  []
  # - name: beats
  #   containerPort: 5001
updateStrategy: RollingUpdate
# This is the max unavailable setting for the pod disruption budget
# The default value of 1 will make sure that kubernetes won't allow more than 1
# of your pods to be unavailable during maintenance
maxUnavailable: 1
podSecurityContext:
  fsGroup: 1000
  runAsUser: 1000
securityContext:
  capabilities:
    drop:
      - ALL
  # readOnlyRootFilesystem: true
  runAsNonRoot: true
  runAsUser: 1000
# How long to wait for logstash to stop gracefully
terminationGracePeriod: 120
# Probes
# Default probes are using `httpGet` which requires that `http.host: 0.0.0.0` is part of
# `logstash.yml`. If needed probes can be disabled or overridden using the following syntaxes:
#
# disable livenessProbe
# livenessProbe: null
#
# replace httpGet default readinessProbe by some exec probe
# readinessProbe:
# httpGet: null
# exec:
# command:
# - curl
# - localhost:9600
livenessProbe:
  httpGet:
    path: /
    port: http
  initialDelaySeconds: 300
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 3
  successThreshold: 1
readinessProbe:
  httpGet:
    path: /
    port: http
  initialDelaySeconds: 60
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 3
  successThreshold: 3
## Use an alternate scheduler.
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
schedulerName: ""
nodeSelector: {}
tolerations: []
nameOverride: ""
fullnameOverride: ""
lifecycle:
  {}
  # preStop:
  #   exec:
  #     command: ["/bin/sh", "-c", "echo Hello from the preStop handler > /usr/share/message"]
  # postStart:
  #   exec:
  #     command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
service:
  {}
  # annotations: {}
  # type: ClusterIP
  # loadBalancerIP: ""
  # ports:
  #   - name: beats
  #     port: 5044
  #     protocol: TCP
  #     targetPort: 5044
  #   - name: http
  #     port: 8080
  #     protocol: TCP
  #     targetPort: 8080
ingress:
  enabled: false
  annotations:
    {}
    # kubernetes.io/tls-acme: "true"
  className: "nginx"
  pathtype: ImplementationSpecific
  hosts:
    - host: logstash-example.local
      paths:
        - path: /beats
          servicePort: 5044
        - path: /http
          servicePort: 8080
  tls: []
  # - secretName: logstash-example-tls
  #   hosts:
  #     - logstash-example.local
xpack.monitoring.enabled: true
I only changed the Beats input and the output host to elasticsearch-master.
The ELK chart version is 7.17.1, and I only modified the Logstash configuration; I didn't touch anything on the Elasticsearch side.
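In case it matters, I also considered just making the failing hostname resolvable instead. A minimal sketch of an ExternalName Service aliasing elasticsearch to elasticsearch-master (hypothetical, I have not applied it, and it assumes everything runs in the default namespace):
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
spec:
  type: ExternalName
  externalName: elasticsearch-master.default.svc.cluster.local
That feels like a workaround, though; what I'd really like to understand is why the license checker isn't using the elasticsearch-master host that my output is configured with.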