Filebeat not sending output to Logstash

I suspect Filebeat is not sending output to Logstash, since Logstash has not created an index in Elasticsearch. There are no errors in the logs. The Filebeat logs also do not mention a Logstash connection (I'm not sure if they should). Logs do appear to be captured by Filebeat; a sample from inside its container is included below.

Other info:
Helm Chart Version: 7.17.3

Changes made to default helm chart values:

  • Filebeat:

filebeatConfig:
  filebeat.yml: |
    filebeat.inputs:
    - type: container
      paths:
        - /var/log/containers/*.log
      processors:
      - add_kubernetes_metadata:
          host: ${NODE_NAME}
          matchers:
          - logs_path:
              logs_path: "/var/log/containers/"

    output.logstash:
      hosts: ["logstash-logstash:5044"]
  • Logstash:
logstashConfig:
  logstash.yml: |
    http.host: 0.0.0.0

logstashPipeline:
  logstash.conf: |
    input {
      beats {
        port => 5044
      }
    }
    output {
      elasticsearch {
        hosts => "http://elasticsearch-master:9200"
        manage_template => false
        ssl_certificate_verification => false
        index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
        document_type => "%{[@metadata][type]}"
        user => "elastic"
        password => "changeme"
      }
    }

persistence:
  enabled: true

service:
  type: ClusterIP
  ports:
    - name: beats
      port: 5044
      protocol: TCP
      targetPort: 5044
    - name: http
      port: 8080
      protocol: TCP
      targetPort: 8080
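One way to narrow this down is to check, from inside the Filebeat pod, whether the configuration parses and whether the Logstash output is actually reachable, using Filebeat's built-in `test` subcommands. This is a sketch: the daemonset name and config path are assumptions based on the chart defaults, so adjust them to your release.

```shell
# Verify the rendered config parses cleanly
# (daemonset/pod name is an assumption; adjust to your release)
kubectl exec -it daemonset/filebeat-filebeat -- \
  filebeat test config -c /usr/share/filebeat/filebeat.yml

# Verify Filebeat can reach the configured Logstash output;
# this resolves the host, connects to port 5044, and reports OK/error
kubectl exec -it daemonset/filebeat-filebeat -- \
  filebeat test output -c /usr/share/filebeat/filebeat.yml
```

If `test output` fails here, the problem is connectivity or DNS between the Filebeat pods and the `logstash-logstash` service rather than the input configuration.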

Log outputs:

  • Logstash:
Using bundled JDK: /usr/share/logstash/jdk
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2024-01-16T09:59:05,650][INFO ][logstash.runner          ] Log4j configuration path used is: /usr/share/logstash/config/log4j2.properties
[2024-01-16T09:59:05,662][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.17.3", "jruby.version"=>"jruby 9.2.20.1 (2.5.8) 2021-11-30 2a2962fbd1 OpenJDK 64-Bit Server VM 11.0.14.1+1 on 11.0.14.1+1 +indy +jit [linux-x86_64]"}
[2024-01-16T09:59:05,664][INFO ][logstash.runner          ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -Djruby.jit.threshold=0, -Djruby.regexp.interruptible=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, -Dls.cgroup.cpuacct.path.override=/, -Dls.cgroup.cpu.path.override=/, -Xmx1g, -Xms1g]
[2024-01-16T09:59:05,838][INFO ][logstash.settings        ] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[2024-01-16T09:59:05,856][INFO ][logstash.settings        ] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[2024-01-16T09:59:07,357][INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"da83986b-0074-4fd3-8436-66edee1edfb1", :path=>"/usr/share/logstash/data/uuid"}
[2024-01-16T09:59:10,860][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[2024-01-16T09:59:13,253][INFO ][org.reflections.Reflections] Reflections took 203 ms to scan 1 urls, producing 119 keys and 419 values 
[2024-01-16T09:59:15,060][WARN ][deprecation.logstash.codecs.plain] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2024-01-16T09:59:15,256][WARN ][deprecation.logstash.inputs.beats] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2024-01-16T09:59:15,369][WARN ][deprecation.logstash.codecs.plain] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2024-01-16T09:59:15,466][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch ssl_certificate_verification=>false, password=><password>, hosts=>[http://elasticsearch-master:9200], index=>"%{[@metadata][beat]}-%{+YYYY.MM.dd}", manage_template=>false, id=>"d8f958685bb06e01c056337804ee087fb480475bec323f0e464ac9cb2cf024a3", user=>"elastic", document_type=>"%{[@metadata][type]}", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_0e28094b-c790-4834-9f90-9eedce4a282f", enable_metric=>true, charset=>"UTF-8">, workers=>1, sniffing=>false, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false, retry_initial_interval=>2, retry_max_interval=>64, data_stream_type=>"logs", data_stream_dataset=>"generic", data_stream_namespace=>"default", data_stream_sync_fields=>true, data_stream_auto_routing=>true, template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_on_conflict=>1, ilm_enabled=>"auto", ilm_pattern=>"{now/d}-000001", ilm_policy=>"logstash-policy">}
[2024-01-16T09:59:15,544][WARN ][deprecation.logstash.outputs.elasticsearch] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2024-01-16T09:59:15,752][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://elasticsearch-master:9200"]}
[2024-01-16T09:59:16,591][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elastic:xxxxxx@elasticsearch-master:9200/]}}
[2024-01-16T09:59:17,177][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://elastic:xxxxxx@elasticsearch-master:9200/"}
[2024-01-16T09:59:17,256][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch version determined (7.17.3) {:es_version=>7}
[2024-01-16T09:59:17,258][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2024-01-16T09:59:17,442][INFO ][logstash.outputs.elasticsearch][main] Config is not compliant with data streams. `data_stream => auto` resolved to `false`
[2024-01-16T09:59:17,445][INFO ][logstash.outputs.elasticsearch][main] Config is not compliant with data streams. `data_stream => auto` resolved to `false`
[2024-01-16T09:59:17,664][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>125, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x3464df94 run>"}
[2024-01-16T09:59:19,550][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>1.88}
[2024-01-16T09:59:19,568][INFO ][logstash.inputs.beats    ][main] Starting input listener {:address=>"0.0.0.0:5044"}
[2024-01-16T09:59:19,660][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[2024-01-16T09:59:19,986][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2024-01-16T09:59:20,155][INFO ][org.logstash.beats.Server][main][eae4104931c7babd922d00e5753644df6a529b403856023ce59bc6cf65e92cd5] Starting server on port: 5044
  • Filebeat (sample):
INFO	[monitoring]	log/log.go:184	Non-zero metrics in the last 30s	{"monitoring": {"metrics": {"beat":{"cgroup":{"cpu":{"stats":{"periods":39,"throttled":{"ns":2660328076,"periods":9}}},"cpuacct":{"total":{"ns":1200065822}},"memory":{"mem":{"usage":{"bytes":24576}}}},"cpu":{"system":{"ticks":1390,"time":{"ms":16}},"total":{"ticks":3910,"time":{"ms":34},"value":3910},"user":{"ticks":2520,"time":{"ms":18}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":11},"info":{"ephemeral_id":"06fd6f3d-bde3-4ad0-a4b4-c34da080583b","uptime":{"ms":2370234},"version":"7.17.3"},"memstats":{"gc_next":26735920,"memory_alloc":15805152,"memory_total":187210712,"rss":133189632},"runtime":{"goroutines":82}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"active":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":0}},"system":{"load":{"1":3.79,"15":2.96,"5":3.14,"norm":{"1":0.1579,"15":0.1233,"5":0.1308}}}}}}
2024-01-16T10:38:55.127Z	INFO	[monitoring]	log/log.go:184	Non-zero metrics in the last 30s	{"monitoring": {"metrics": {"beat":{"cgroup":{"cpu":{"stats":{"periods":32,"throttled":{"ns":2013075928,"periods":6}}},"cpuacct":{"total":{"ns":1083076407}},"memory":{"mem":{"usage":{"bytes":4096}}}},"cpu":{"system":{"ticks":1400,"time":{"ms":10}},"total":{"ticks":3940,"time":{"ms":32},"value":3940},"user":{"ticks":2540,"time":{"ms":22}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":11},"info":{"ephemeral_id":"06fd6f3d-bde3-4ad0-a4b4-c34da080583b","uptime":{"ms":2400234},"version":"7.17.3"},"memstats":{"gc_next":26735920,"memory_alloc":17230928,"memory_total":188636488,"rss":133189632},"runtime":{"goroutines":82}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"active":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":0}},"system":{"load":{"1":3.78,"15":2.99,"5":3.21,"norm":{"1":0.1575,"15":0.1246,"5":0.1338}}}}}}
2024-01-16T10:39:25.127Z	INFO	[monitoring]	log/log.go:184	Non-zero metrics in the last 30s	{"monitoring": {"metrics": {"beat":{"cgroup":{"cpu":{"stats":{"periods":39,"throttled":{"ns":2298392164,"periods":7}}},"cpuacct":{"total":{"ns":1170071494}},"memory":{"mem":{"usage":{"bytes":270336}}}},"cpu":{"system":{"ticks":1410,"time":{"ms":13}},"total":{"ticks":3970,"time":{"ms":34},"value":3970},"user":{"ticks":2560,"time":{"ms":21}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":11},"info":{"ephemeral_id":"06fd6f3d-bde3-4ad0-a4b4-c34da080583b","uptime":{"ms":2430238},"version":"7.17.3"},"memstats":{"gc_next":26735920,"memory_alloc":18368008,"memory_total":189773568,"rss":133189632},"runtime":{"goroutines":82}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"active":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":0}},"system":{"load":{"1":3.44,"15":2.99,"5":3.18,"norm":{"1":0.1433,"15":0.1246,"5":0.1325}}}}}}
2024-01-16T10:39:55.126Z	INFO	[monitoring]	log/log.go:184	Non-zero metrics in the last 30s	{"monitoring": {"metrics": {"beat":{"cgroup":{"cpu":{"stats":{"periods":39,"throttled":{"ns":2390639250,"periods":7}}},"cpuacct":{"total":{"ns":1244300135}},"memory":{"mem":{"usage":{"bytes":69632}}}},"cpu":{"system":{"ticks":1410},"total":{"ticks":4050,"time":{"ms":78},"value":4050},"user":{"ticks":2640,"time":{"ms":78}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":11},"info":{"ephemeral_id":"06fd6f3d-bde3-4ad0-a4b4-c34da080583b","uptime":{"ms":2460230},"version":"7.17.3"},"memstats":{"gc_next":26927136,"memory_alloc":14110048,"memory_total":191402048,"rss":133189632},"runtime":{"goroutines":82}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"active":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":0}},"system":{"load":{"1":3.03,"15":2.98,"5":3.12,"norm":{"1":0.1263,"15":0.1242,"5":0.13}}}}}}

Any help is greatly appreciated, as I've been trying to figure this out for a few days now.

The Filebeat logs show zero harvesters ("harvester":{"open_files":0,"running":0}), which makes me think the input is not configured correctly and no log files are being picked up.
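Since no events are being harvested, it may also be worth confirming directly whether anything has reached Elasticsearch at all. A quick check, assuming the service name and credentials from the values above (pod name is an assumption; any pod with curl and network access to the cluster will do):

```shell
# List any filebeat-* indices in Elasticsearch; an empty result means
# no events from Filebeat/Logstash have been indexed yet
kubectl exec -it logstash-logstash-0 -- \
  curl -s -u elastic:changeme \
  "http://elasticsearch-master:9200/_cat/indices/filebeat-*?v"
```

An empty listing combined with zero harvesters points at the Filebeat input side rather than the Logstash-to-Elasticsearch leg, which the Logstash logs already show connecting successfully.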

Can you share the full Filebeat log and Helm chart, with any sensitive info removed?

It might also be worth inspecting the filebeat.yml inside the running container to verify that it contains your provided configuration.
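For example, something along these lines (the daemonset name and mount paths are assumptions based on the chart defaults):

```shell
# Print the rendered filebeat.yml inside the running container to
# confirm the Helm values actually made it into the pod
kubectl exec -it daemonset/filebeat-filebeat -- \
  cat /usr/share/filebeat/filebeat.yml

# Confirm the host log path is mounted and contains files to harvest;
# if this is empty, zero harvesters is expected
kubectl exec -it daemonset/filebeat-filebeat -- \
  sh -c 'ls /var/log/containers/ | head'
```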

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.