Error while connecting Logstash to Elasticsearch

Hi guys, I have installed the ELK stack on Kubernetes using Helm charts. All pods are running fine except Logstash, which shows this error message:

[2022-05-13T12:01:46,928][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch: Name or service not known"}
[2022-05-13T12:02:16,412][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"No Available connections"}

Although at the beginning of the logs it says:

[2022-05-13T12:01:21,601][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://elasticsearch-master:9200"]}
[2022-05-13T12:01:21,703][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch-master:9200/]}}
[2022-05-13T12:01:22,004][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://elasticsearch-master:9200/"}
[2022-05-13T12:01:22,020][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch version determined (7.17.1) {:es_version=>7}
[2022-05-13T12:01:22,023][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2022-05-13T12:01:22,312][INFO ][logstash.outputs.elasticsearch][main] Config is not compliant with data streams. `data_stream => auto` resolved to `false`
[2022-05-13T12:01:22,414][INFO ][logstash.outputs.elasticsearch][main] Using a default mapping template {:es_version=>7, :ecs_compatibility=>:disabled}
[2022-05-13T12:01:22,621][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>125, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x649715b7 run>"}
[2022-05-13T12:01:25,307][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>2.68}
[2022-05-13T12:01:25,400][INFO ][logstash.inputs.beats    ][main] Starting input listener {:address=>"0.0.0.0:5044"}
[2022-05-13T12:01:25,419][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[2022-05-13T12:01:25,700][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2022-05-13T12:01:25,901][INFO ][org.logstash.beats.Server][main][a052736baced7cdf5e3bb4186a486fe93f23da5f0680b2986c54f1bb65250496] Starting server on port: 5044

Here's my Logstash configuration:

replicas: 1

# Allows you to add any config files in /usr/share/logstash/config/
# such as logstash.yml and log4j2.properties
#
# Note that when overriding logstash.yml, `http.host: 0.0.0.0` should always be included
# to make default probes work.
logstashConfig: {}
#  logstash.yml: |
#    key:
#      nestedkey: value
#  log4j2.properties: |
#    key = value

# Allows you to add any pipeline files in /usr/share/logstash/pipeline/
### ***warn*** there is a hardcoded logstash.conf in the image, override it first
logstashPipeline:
  logstash.conf: |
    input {
      beats {
        port => 5044
      }
    }
    output { elasticsearch { hosts => "http://elasticsearch-master:9200" } }

# Allows you to add any pattern files in your custom pattern dir
logstashPatternDir: "/usr/share/logstash/patterns/"
logstashPattern: {}
#    pattern.conf: |
#      DPKG_VERSION [-+~<>\.0-9a-zA-Z]+

# Extra environment variables to append to this nodeGroup
# This will be appended to the current 'env:' key. You can use any of the kubernetes env
# syntax here
extraEnvs: []
#  - name: MY_ENVIRONMENT_VAR
#    value: the_value_goes_here

# Allows you to load environment variables from kubernetes secret or config map
envFrom: []
# - secretRef:
#     name: env-secret
# - configMapRef:
#     name: config-map

# Add sensitive data to k8s secrets
secrets: []
#  - name: "env"
#    value:
#      ELASTICSEARCH_PASSWORD: "LS1CRUdJTiBgUFJJVkFURSB"
#      api_key: ui2CsdUadTiBasRJRkl9tvNnw
#  - name: "tls"
#    value:
#      ca.crt: |
#        LS0tLS1CRUdJT0K
#        LS0tLS1CRUdJT0K
#        LS0tLS1CRUdJT0K
#        LS0tLS1CRUdJT0K
#      cert.crt: "LS0tLS1CRUdJTiBlRJRklDQVRFLS0tLS0K"
#      cert.key.filepath: "secrets.crt" # The path to file should be relative to the `values.yaml` file.

# A list of secrets and their paths to mount inside the pod
secretMounts: []

hostAliases: []
#- ip: "127.0.0.1"
#  hostnames:
#  - "foo.local"
#  - "bar.local"

image: "docker.elastic.co/logstash/logstash"
imageTag: "7.17.1"
imagePullPolicy: "IfNotPresent"
imagePullSecrets: []

podAnnotations: {}

# additional labels
labels: {}

logstashJavaOpts: "-Xmx1g -Xms1g"

resources:
  requests:
    cpu: "100m"
    memory: "1536Mi"
  limits:
    cpu: "1000m"
    memory: "1536Mi"

volumeClaimTemplate:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi

rbac:
  create: false
  serviceAccountAnnotations: {}
  serviceAccountName: ""
  annotations:
    {}
    #annotation1: "value1"
    #annotation2: "value2"
    #annotation3: "value3"

podSecurityPolicy:
  create: false
  name: ""
  spec:
    privileged: false
    fsGroup:
      rule: RunAsAny
    runAsUser:
      rule: RunAsAny
    seLinux:
      rule: RunAsAny
    supplementalGroups:
      rule: RunAsAny
    volumes:
      - secret
      - configMap
      - persistentVolumeClaim

persistence:
  enabled: false
  annotations: {}

extraVolumes:
  []
  # - name: extras
  #   emptyDir: {}

extraVolumeMounts:
  []
  # - name: extras
  #   mountPath: /usr/share/extras
  #   readOnly: true

extraContainers:
  []
  # - name: do-something
  #   image: busybox
  #   command: ['do', 'something']

extraInitContainers:
  []
  # - name: do-something
  #   image: busybox
  #   command: ['do', 'something']

# This is the PriorityClass settings as defined in
# https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
priorityClassName: ""

# By default this will make sure two pods don't end up on the same node
# Changing this to a region would allow you to spread pods across regions
antiAffinityTopologyKey: "kubernetes.io/hostname"

# Hard means that by default pods will only be scheduled if there are enough nodes for them
# and that they will never end up on the same node. Setting this to soft will do this "best effort"
antiAffinity: "hard"

# This is the node affinity settings as defined in
# https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity
nodeAffinity: {}

# This is inter-pod affinity settings as defined in
# https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
podAffinity: {}

# The default is to deploy all pods serially. By setting this to parallel all pods are started at
# the same time when bootstrapping the cluster
podManagementPolicy: "Parallel"

httpPort: 9600

# Custom ports to add to logstash
extraPorts:
  []
  # - name: beats
  #   containerPort: 5001

updateStrategy: RollingUpdate

# This is the max unavailable setting for the pod disruption budget
# The default value of 1 will make sure that kubernetes won't allow more than 1
# of your pods to be unavailable during maintenance
maxUnavailable: 1

podSecurityContext:
  fsGroup: 1000
  runAsUser: 1000

securityContext:
  capabilities:
    drop:
      - ALL
  # readOnlyRootFilesystem: true
  runAsNonRoot: true
  runAsUser: 1000

# How long to wait for logstash to stop gracefully
terminationGracePeriod: 120

# Probes
# Default probes are using `httpGet` which requires that `http.host: 0.0.0.0` is part of
# `logstash.yml`. If needed probes can be disabled or overridden using the following syntaxes:
#
# disable livenessProbe
# livenessProbe: null
#
# replace httpGet default readinessProbe by some exec probe
# readinessProbe:
#   httpGet: null
#   exec:
#     command:
#       - curl
#       - localhost:9600

livenessProbe:
  httpGet:
    path: /
    port: http
  initialDelaySeconds: 300
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 3
  successThreshold: 1

readinessProbe:
  httpGet:
    path: /
    port: http
  initialDelaySeconds: 60
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 3
  successThreshold: 3

## Use an alternate scheduler.
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
schedulerName: ""

nodeSelector: {}
tolerations: []

nameOverride: ""
fullnameOverride: ""

lifecycle:
  {}
  # preStop:
  #   exec:
  #     command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
  # postStart:
  #   exec:
  #     command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]

service:
  {}
  # annotations: {}
  # type: ClusterIP
  # loadBalancerIP: ""
  # ports:
  #   - name: beats
  #     port: 5044
  #     protocol: TCP
  #     targetPort: 5044
  #   - name: http
  #     port: 8080
  #     protocol: TCP
  #     targetPort: 8080

ingress:
  enabled: false
  annotations:
    {}
    # kubernetes.io/tls-acme: "true"
  className: "nginx"
  pathtype: ImplementationSpecific
  hosts:
    - host: logstash-example.local
      paths:
        - path: /beats
          servicePort: 5044
        - path: /http
          servicePort: 8080
  tls: []
xpack.monitoring.enabled: true   
  #  - secretName: logstash-example-tls
  #    hosts:
  #      - logstash-example.local

I only changed the Beats part and pointed the output to elasticsearch-master.

The ELK charts version is 7.17.1, and I only modified the Logstash configuration; I didn't change anything on Elasticsearch.

[2022-05-13T12:01:25,901][INFO ][org.logstash.beats.Server][main][a052736baced7cdf5e3bb4186a486fe93f23da5f0680b2986c54f1bb65250496] Starting server on port: 5044

Beats is listening and LS has started, so it looks OK. Check whether FB is actually sending data. Usually it keeps track of files in its registry database to avoid reading a file again. Try to send data from FB again and watch the FB log.
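
For a one-off test, one option (a sketch, not from this thread; the path name is purely illustrative) is to point Filebeat at a fresh registry so it forgets which files it has already shipped and re-reads them from the beginning:

# filebeat.yml excerpt (sketch) – a new, empty registry makes Filebeat
# re-read and re-send its input files, useful for testing the pipeline
filebeat.registry.path: registry-test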

Thanks for the reply. Actually, the log that shows it's functioning well appears only at the start of the pod; then, after it starts listening on port 5044, the errors come. Either way, I'll check with FB.

Hello again. Actually Filebeat is working well, as you can see:

41,"periods":5}}},"cpuacct":{"total":{"ns":847543925}},"memory":{"mem":{"usage":{"bytes":90112}}}},"cpu":{"system":{"ticks":2700,"time":{"ms":14}},"total":{"ticks":9030,"time":{"ms":59},"value":9030},"user":{"ticks":6330,"time":{"ms":45}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":16},"info":{"ephemeral_id":"6ac3c473-9140-444b-b9ef-6688332306f8","uptime":{"ms":3570122},"version":"7.17.1"},"memstats":{"gc_next":23471264,"memory_alloc":19680520,"memory_total":610515200,"rss":143355904},"runtime":{"goroutines":84}},"filebeat":{"events":{"added":35,"done":35},"harvester":{"open_files":4,"running":4}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":35,"active":0,"batches":6,"total":35},"read":{"bytes":4402},"write":{"bytes":61894}},"pipeline":{"clients":1,"events":{"active":0,"published":35,"total":35},"queue":{"acked":35}}},"registrar":{"states":{"current":37,"update":35},"writes":{"success":6,"total":6}},"system":{"load":{"1":0.61,"15":0.97,"5":0.87,"norm":{"1":0.1525,"15":0.2425,"5":0.2175}}}}}}
2022-05-14T19:04:00.343Z        INFO    [monitoring]    log/log.go:184  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cgroup":{"cpu":{"stats":{"periods":50,"throttled":{"ns":143913763,"periods":6}}},"cpuacct":{"total":{"ns":888790295}},"memory":{"mem":{"usage":{"bytes":69632}}}},"cpu":{"system":{"ticks":2720,"time":{"ms":22}},"total":{"ticks":9110,"time":{"ms":75},"value":9110},"user":{"ticks":6390,"time":{"ms":53}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":16},"info":{"ephemeral_id":"6ac3c473-9140-444b-b9ef-6688332306f8","uptime":{"ms":3600119},"version":"7.17.1"},"memstats":{"gc_next":23160384,"memory_alloc":12778096,"memory_total":614323200,"rss":143355904},"runtime":{"goroutines":84}},"filebeat":{"events":{"added":43,"done":43},"harvester":{"open_files":4,"running":4}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":43,"active":0,"batches":4,"total":43},"read":{"bytes":3066},"write":{"bytes":74446}},"pipeline":{"clients":1,"events":{"active":0,"published":43,"total":43},"queue":{"acked":43}}},"registrar":{"states":{"current":37,"update":43},"writes":{"success":4,"total":4}},"system":{"load":{"1":0.62,"15":0.96,"5":0.85,"norm":{"1":0.155,"15":0.24,"5":0.2125}}}}}}
2022-05-14T19:04:30.344Z        INFO    [monitoring]    log/log.go:184  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cgroup":{"cpu":{"stats":{"periods":57,"throttled":{"ns":109949769,"periods":4}}},"cpuacct":{"total":{"ns":878402136}},"memory":{"mem":{"usage":{"bytes":110592}}}},"cpu":{"system":{"ticks":2740,"time":{"ms":18}},"total":{"ticks":9170,"time":{"ms":67},"value":9170},"user":{"ticks":6430,"time":{"ms":49}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":16},"info":{"ephemeral_id":"6ac3c473-9140-444b-b9ef-6688332306f8","uptime":{"ms":3630120},"version":"7.17.1"},"memstats":{"gc_next":23160384,"memory_alloc":17562784,"memory_total":619107888,"rss":143355904},"runtime":{"goroutines":84}},"filebeat":{"events":{"added":32,"done":32},"harvester":{"open_files":4,"running":4}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":32,"active":0,"batches":7,"total":32},"read":{"bytes":5085},"write":{"bytes":57383}},"pipeline":{"clients":1,"events":{"active":0,"published":32,"total":32},"queue":{"acked":32}}},"registrar":{"states":{"current":37,"update":32},"writes":{"success":7,"total":7}},"system":{"load":{"1":0.56,"15":0.95,"5":0.82,"norm":{"1":0.14,"15":0.2375,"5":0.205}}}}}}
2022-05-14T19:05:00.344Z        INFO    [monitoring]    log/log.go:184  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cgroup":{"cpu":{"stats":{"periods":51,"throttled":{"ns":149559835,"periods":7}}},"cpuacct":{"total":{"ns":917449555}},"memory":{"mem":{"usage":{"bytes":65536}}}},"cpu":{"system":{"ticks":2760,"time":{"ms":20}},"total":{"ticks":9250,"time":{"ms":79},"value":9250},"user":{"ticks":6490,"time":{"ms":59}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":16},"info":{"ephemeral_id":"6ac3c473-9140-444b-b9ef-6688332306f8","uptime":{"ms":3660120},"version":"7.17.1"},"memstats":{"gc_next":22883936,"memory_alloc":11608984,"memory_total":623286056,"rss":143355904},"runtime":{"goroutines":84}},"filebeat":{"events":{"active":7,"added":58,"done":51},"harvester":{"open_files":4,"running":4}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":51,"active":0,"batches":4,"total":51},"read":{"bytes":3115},"write":{"bytes":87662}},"pipeline":{"clients":1,"events":{"active":7,"published":58,"total":58},"queue":{"acked":51}}},"registrar":{"states":{"current":37,"update":51},"writes":{"success":4,"total":4}},"system":{"load":{"1":0.76,"15":0.96,"5":0.85,"norm":{"1":0.19,"15":0.24,"5":0.2125}}}}}}
2022-05-14T19:05:30.342Z        INFO    [monitoring]    log/log.go:184  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cgroup":{"cpu":{"stats":{"periods":54,"throttled":{"ns":95722118,"periods":6}}},"cpuacct":{"total":{"ns":903439206}},"memory":{"mem":{"usage":{"bytes":81920}}}},"cpu":{"system":{"ticks":2780,"time":{"ms":20}},"total":{"ticks":9310,"time":{"ms":55},"value":9310},"user":{"ticks":6530,"time":{"ms":35}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":16},"info":{"ephemeral_id":"6ac3c473-9140-444b-b9ef-6688332306f8","uptime":{"ms":3690119},"version":"7.17.1"},"memstats":{"gc_next":22883936,"memory_alloc":15556144,"memory_total":627233216,"rss":143355904},"runtime":{"goroutines":84}},"filebeat":{"events":{"active":-7,"added":26,"done":33},"harvester":{"open_files":4,"running":4}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":33,"active":0,"batches":5,"total":33},"read":{"bytes":3682},"write":{"bytes":58247}},"pipeline":{"clients":1,"events":{"active":0,"published":26,"total":26},"queue":{"acked":33}}},"registrar":{"states":{"current":37,"update":33},"writes":{"success":5,"total":5}},"system":{"load":{"1":0.59,"15":0.94,"5":0.8,"norm":{"1":0.1475,"15":0.235,"5":0.2}}}}}}

And I added this line in the Logstash configuration:

logstashConfig: 
  logstash.yml: |
     xpack.monitoring.enabled: false 
#    key:
#      nestedkey: value
#  log4j2.properties: |
#    key = value

# Allows you to add any pipeline files in /usr/share/logstash/pipeline/
### ***warn*** there is a hardcoded logstash.conf in the image, override it first
logstashPipeline:
  logstash.conf: |
    input {
      beats {
        port => 5044
      }
    }
    output { elasticsearch { hosts => "http://elasticsearch-master:9200" } }

But it's still giving me the same errors. Any solutions?

I had this issue when LS couldn't see the ES server. Do you have multiple elasticsearch { hosts => "elasticsearch-master:9200" } connections? Does LS see ES over port 9200?
Is your master server only in the master role, with no data role? Can you point to a server with the data role?
Also, just for testing purposes, can you replace the elasticsearch output with:
stdout { codec => rubydebug }

Thanks @Rios. Actually that error is not reflected in the logs that Filebeat provides, so everything was working fine in spite of that error.

Sorry, my mistake, I hadn't seen the license issue. FB should be OK.
LS cannot read the license from the ES server.

  1. How many ES servers do you have? Especially with the data role.
  2. Is your master server only in the master role, without the data role?
  3. Can you point to a server with the data role?
    Also, just for testing purposes, can you replace the elasticsearch output with stdout { codec => rubydebug }? (See the sketch below.)
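
For reference, that test pipeline in the chart's values format might look roughly like this (a sketch only; it keeps the beats input and prints every event to stdout, so events show up in the Logstash pod logs):

logstashPipeline:
  logstash.conf: |
    input {
      beats {
        port => 5044
      }
    }
    output {
      # temporary test output: dump events to the pod logs instead of Elasticsearch
      stdout { codec => rubydebug }
    }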

Hello @Rios, thanks again for replying. Actually I'm still unable to solve this. My Logstash isn't working; is it possible for FB to keep sending data even when Logstash is down, given that FB's output is Logstash?

FB can buffer data in memory for a short period and reconnect when LS is up, or it can use the disk queue.
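
For context, a rough sketch of the two buffering options in filebeat.yml (values are illustrative; only one queue type is configured at a time, and the in-memory queue is the default):

# Default: in-memory queue – buffered events are lost if Filebeat itself restarts
queue.mem:
  events: 4096

# Alternative: disk queue – events are persisted and survive longer Logstash outages
# queue.disk:
#   max_size: 2GB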

Does that mean that, with that error, after a certain time Elasticsearch should not be getting any logs from FB? In other words, that the error is not affecting LS, right?

If there is no data, FB waits and listens, LS listens and waits, and ES just accepts whatever you submit.
None of the apps will go down just because it receives no data. For instance, if there is an error on FB, such as a wrong config or a crash, the next one in the chain will not get data, but LS is up and ES is up. If LS goes down, FB will read data and keep as much as possible in its buffer, but FB will stay up. When the connection to LS is re-established, FB will release the buffer and continue with standard processing.

Hello @Rios. I understand the process. I have really tried everything to solve that error, but I think it's a DNS problem, right? I mean, it says it doesn't know elasticsearch:9200, but it's actually elasticsearch-master:9200. Do you think the problem could be there?

[2022-05-13T12:01:22,004][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://elasticsearch-master:9200/"}
LS can connect to ES.

The main issue is:

[2022-05-13T12:02:16,412][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"No Available connections"}

Can you temporarily disable monitoring with xpack.monitoring.enabled: false on both lines and restart LS?
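
As a side note (an assumption, not something confirmed in this thread): the license reader error above tries http://elasticsearch:9200 while the pipeline output uses elasticsearch-master:9200, and the host used by the license/monitoring checker is configured separately from the pipeline output. So, instead of disabling monitoring, another option would be to point it at the same service in logstash.yml, roughly:

logstashConfig:
  logstash.yml: |
    http.host: 0.0.0.0
    # point the monitoring/license checker at the actual ES service name
    xpack.monitoring.elasticsearch.hosts: ["http://elasticsearch-master:9200"]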

Hello @Rios, I added this line in logstash.yml and reinstalled it:

logstashConfig: 
  logstash.yml: |
    xpack.monitoring.enabled: false
#    key:
#      nestedkey: value
#  log4j2.properties: |
#    key = value

But the problem is that Logstash keeps restarting.

elasticsearch-master-0           1/1     Running   0              4d22h
elasticsearch-master-1           1/1     Running   0              4d22h
elasticsearch-master-2           1/1     Running   1              4d22h
kibana-kibana-554c7f87d5-rs5vc   1/1     Running   0              9d
logstash-logstash-0              0/1     Running   157 (3s ago)   14h

Here are the logs:

Using bundled JDK: /usr/share/logstash/jdk
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2022-05-28T15:02:48,712][INFO ][logstash.runner          ] Log4j configuration path used is: /usr/share/logstash/config/log4j2.properties
[2022-05-28T15:02:48,777][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.17.1", "jruby.version"=>"jruby 9.2.20.1 (2.5.8) 2021-11-30 2a2962fbd1 OpenJDK 64-Bit Server VM 11.0.13+8 on 11.0.13+8 +indy +jit [linux-x86_64]"}
[2022-05-28T15:02:48,780][INFO ][logstash.runner          ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -Djruby.jit.threshold=0, -Djruby.regexp.interruptible=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, -Dls.cgroup.cpuacct.path.override=/, -Dls.cgroup.cpu.path.override=/, -Xmx1g, -Xms1g]
[2022-05-28T15:02:48,874][INFO ][logstash.settings        ] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[2022-05-28T15:02:48,890][INFO ][logstash.settings        ] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[2022-05-28T15:02:49,986][INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"6fdb3eea-ceda-40ca-a3b9-857193a8ffbc", :path=>"/usr/share/logstash/data/uuid"}
[2022-05-28T15:02:53,500][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[2022-05-28T15:02:55,880][INFO ][org.reflections.Reflections] Reflections took 269 ms to scan 1 urls, producing 119 keys and 417 values 
[2022-05-28T15:02:57,318][WARN ][deprecation.logstash.codecs.plain] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2022-05-28T15:02:57,479][WARN ][deprecation.logstash.inputs.beats] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2022-05-28T15:02:57,622][WARN ][deprecation.logstash.codecs.plain] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2022-05-28T15:02:57,782][WARN ][deprecation.logstash.outputs.elasticsearch] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2022-05-28T15:02:57,998][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://elasticsearch-master:9200"]}
[2022-05-28T15:02:58,874][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch-master:9200/]}}
[2022-05-28T15:02:59,406][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://elasticsearch-master:9200/"}
[2022-05-28T15:02:59,475][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch version determined (7.17.1) {:es_version=>7}
[2022-05-28T15:02:59,478][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2022-05-28T15:02:59,682][INFO ][logstash.outputs.elasticsearch][main] Config is not compliant with data streams. `data_stream => auto` resolved to `false`
[2022-05-28T15:02:59,688][INFO ][logstash.outputs.elasticsearch][main] Config is not compliant with data streams. `data_stream => auto` resolved to `false`
[2022-05-28T15:02:59,806][INFO ][logstash.outputs.elasticsearch][main] Using a default mapping template {:es_version=>7, :ecs_compatibility=>:disabled}
[2022-05-28T15:03:00,071][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>125, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x2945f8e4 run>"}
[2022-05-28T15:03:02,086][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>2.01}
[2022-05-28T15:03:02,114][INFO ][logstash.inputs.beats    ][main] Starting input listener {:address=>"0.0.0.0:5044"}
[2022-05-28T15:03:02,187][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[2022-05-28T15:03:02,371][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2022-05-28T15:03:02,680][INFO ][org.logstash.beats.Server][main][fd429d763aec99d88cd5a5ac4389e27eca7366ef317856495354a8fa788da4db] Starting server on port: 5044

I don't see anything suspicious in these logs.
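
One more note (an assumption based on the chart's own comments, not confirmed in this thread): the restarts began after logstash.yml was overridden, and the values file above says that any logstash.yml override should always include http.host: 0.0.0.0 so the default httpGet probes keep working; without it, the readiness/liveness probes fail and Kubernetes keeps restarting the pod. A sketch of the override with that line restored:

logstashConfig:
  logstash.yml: |
    # required by the chart's default probes when logstash.yml is overridden
    http.host: 0.0.0.0
    xpack.monitoring.enabled: false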

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.