Filebeat events not reaching Logstash

Hello,

I've been working on an issue all morning and can't quite figure it out. Hoping someone might be able to point me in the right direction.

Issue
Filebeat data is not making it to Logstash, or at least it doesn't appear that way. I installed Filebeat on a SLES 12 SP3 server with no errors during install, and configured the Filebeat output to point at Logstash (see config below). When I check Kibana, the data is not there.

Also, when I run netstat -an | grep 5044 on the Filebeat server, no results show up. I'm not sure whether Filebeat keeps an open connection or not, but I feel like it should. I am able to telnet from the Filebeat server to the Logstash server over port 5044, so there shouldn't be any issue with firewalls; I've also confirmed the firewall service is off on both servers (Filebeat and Logstash).

Logstash is working on port 5544, because my Cisco ASA logs are getting to Elasticsearch and I can see them in Kibana. There is an error during Logstash startup, but I don't know what to do about it or whether it's even the issue I'm dealing with: "Error in Netty pipeline: org.jruby.exceptions.RaiseException: (NoMethodError) undefined method". See the error log below for more details.
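For reference, these are the connectivity checks I've been using (standard Linux tooling; the IP and port match my setup below). One caveat: as far as I understand, Filebeat may only hold the Logstash connection open while it actually has events to publish, so an empty netstat result on its own may not prove anything.

# On the Filebeat server: look for an established connection to Logstash
netstat -an | grep 5044     # or: ss -tan | grep 5044

# Confirm the Logstash port is reachable from the Filebeat server
telnet 172.16.1.197 5044    # or: nc -vz 172.16.1.197 5044

# On the Logstash server: confirm something is listening on 5044
ss -tln | grep 5044         # or: netstat -tln | grep 5044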

Thanks in advance!

Current setup
Filebeat client -> Logstash -> Elasticsearch

Versions

  • Filebeat 6.6.1-1
  • Logstash 6.6.1-1
  • Elasticsearch 6.6.1-1

Filebeat config

###################### Filebeat Configuration Example #########################


#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*
  #ignore_older: 72h

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1


#============================= Filebeat modules ===============================

#filebeat.config.modules:
  # Glob pattern for configuration loading
#  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
#  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

#setup.template.settings:
#  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging


#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here, or by using the `-setup` CLI flag or the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
#setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#============================= Elastic Cloud ==================================

# These settings simplify using filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
#  hosts: ["localhost:9200"]

  # Enable ILM (beta) to use index lifecycle management instead of daily indices.
  #ilm.enabled: false

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["172.16.1.197:5044"]
  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.

#processors:
#  - add_host_metadata: ~
#  - add_cloud_metadata: ~

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

#============================== Xpack Monitoring ===============================
# filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#xpack.monitoring.enabled: false

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well. Any setting that is not set is
# automatically inherited from the Elasticsearch output configuration, so if you
# have the Elasticsearch output configured, you can simply uncomment the
# following line.
#xpack.monitoring.elasticsearch:
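(For what it's worth, Filebeat ships test subcommands that validate the config file and try to connect to the configured output; the config path below is the default package location and may differ on your install.)

# Validate the configuration file
filebeat test config -c /etc/filebeat/filebeat.yml

# Try to connect to the configured output (Logstash, in this case)
filebeat test output -c /etc/filebeat/filebeat.yml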

Logstash pipeline config

input {
  # Receive Cisco ASA logs over TCP on port 5544
  tcp {
    port => 5544
  }

  # Receive Beats events
  beats {
    port => 5044
    ssl => false
  }

}

output {
    elasticsearch {
        hosts => "172.16.1.198"
    }
}
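(As a sanity check, Logstash can also parse-check a pipeline config without starting it. The flag is standard Logstash CLI; /etc/logstash/conf.d/ is just the usual package location for pipeline files, so adjust the path if yours differs.)

# Parse the config and exit without starting the pipeline
/usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/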

Logstash log

Mar  7 13:57:36 172.16.1.197 systemd: Started logstash.
Mar  7 13:58:00 172.16.1.197 logstash: Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
Mar  7 13:58:01 172.16.1.197 logstash: [2019-03-07T13:58:01,082][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
Mar  7 13:58:01 172.16.1.197 logstash: [2019-03-07T13:58:01,100][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.6.1"}
Mar  7 13:58:01 172.16.1.197 systemd: Started Session 60315 of user root.
Mar  7 13:58:09 172.16.1.197 logstash: [2019-03-07T13:58:09,042][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
Mar  7 13:58:09 172.16.1.197 logstash: [2019-03-07T13:58:09,512][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://172.16.1.198:9200/]}}
Mar  7 13:58:09 172.16.1.197 logstash: [2019-03-07T13:58:09,724][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://172.16.1.198:9200/"}
Mar  7 13:58:09 172.16.1.197 logstash: [2019-03-07T13:58:09,784][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
Mar  7 13:58:09 172.16.1.197 logstash: [2019-03-07T13:58:09,788][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
Mar  7 13:58:09 172.16.1.197 logstash: [2019-03-07T13:58:09,816][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//172.16.1.198"]}
Mar  7 13:58:09 172.16.1.197 logstash: [2019-03-07T13:58:09,852][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
Mar  7 13:58:09 172.16.1.197 logstash: [2019-03-07T13:58:09,871][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
Mar  7 13:58:09 172.16.1.197 logstash: [2019-03-07T13:58:09,907][INFO ][logstash.inputs.tcp      ] Starting tcp input listener {:address=>"0.0.0.0:5544", :ssl_enable=>"false"}
Mar  7 13:58:10 172.16.1.197 logstash: [2019-03-07T13:58:10,594][INFO ][logstash.inputs.beats    ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
Mar  7 13:58:10 172.16.1.197 logstash: [2019-03-07T13:58:10,674][ERROR][logstash.inputs.tcp      ] Error in Netty pipeline: org.jruby.exceptions.RaiseException: (NoMethodError) undefined method `<<' for nil:NilClass
Mar  7 13:58:10 172.16.1.197 logstash: [2019-03-07T13:58:10,722][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x666ec80f sleep>"}
Mar  7 13:58:10 172.16.1.197 logstash: [2019-03-07T13:58:10,735][INFO ][org.logstash.beats.Server] Starting server on port: 5044
Mar  7 13:58:10 172.16.1.197 logstash: [2019-03-07T13:58:10,700][WARN ][io.netty.util.concurrent.DefaultPromise] An exception was thrown by org.logstash.tcp.InputLoop$InputHandler$FlushOnCloseListener.operationComplete()
Mar  7 13:58:10 172.16.1.197 logstash: org.jruby.exceptions.RaiseException: (NoMethodError) undefined method `<<' for nil:NilClass
Mar  7 13:58:10 172.16.1.197 logstash: at usr.share.logstash.vendor.bundle.jruby.$2_dot_3_dot_0.gems.logstash_minus_input_minus_tcp_minus_5_dot_2_dot_0_minus_java.lib.logstash.inputs.tcp.enqueue_decorated(/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-tcp-5.2.0-java/lib/logstash/inputs/tcp.rb:270) ~[?:?]
Mar  7 13:58:10 172.16.1.197 logstash: at usr.share.logstash.vendor.bundle.jruby.$2_dot_3_dot_0.gems.logstash_minus_input_minus_tcp_minus_5_dot_2_dot_0_minus_java.lib.logstash.inputs.tcp.block in flush_codec(/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-tcp-5.2.0-java/lib/logstash/inputs/tcp.rb:198) ~[?:?]
Mar  7 13:58:10 172.16.1.197 logstash: at usr.share.logstash.vendor.bundle.jruby.$2_dot_3_dot_0.gems.logstash_minus_codec_minus_line_minus_3_dot_0_dot_8.lib.logstash.codecs.line.flush(/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-codec-line-3.0.8/lib/logstash/codecs/line.rb:44) ~[?:?]
Mar  7 13:58:10 172.16.1.197 logstash: at usr.share.logstash.vendor.bundle.jruby.$2_dot_3_dot_0.gems.logstash_minus_input_minus_tcp_minus_5_dot_2_dot_0_minus_java.lib.logstash.inputs.tcp.flush_codec(/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-tcp-5.2.0-java/lib/logstash/inputs/tcp.rb:197) ~[?:?]
Mar  7 13:58:10 172.16.1.197 logstash: at usr.share.logstash.vendor.bundle.jruby.$2_dot_3_dot_0.gems.logstash_minus_input_minus_tcp_minus_5_dot_2_dot_0_minus_java.lib.logstash.inputs.tcp.decoder_impl.flush(/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-tcp-5.2.0-java/lib/logstash/inputs/tcp/decoder_impl.rb:33) ~[?:?]
Mar  7 13:58:10 172.16.1.197 logstash: [2019-03-07T13:58:10,814][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
Mar  7 13:58:11 172.16.1.197 logstash: [2019-03-07T13:58:11,047][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

Any errors in the Filebeat logs?

This seems related to https://github.com/logstash-plugins/logstash-input-tcp/issues/132

I haven't been able to reproduce this on my side. When you start Logstash, are there already a lot of sources trying to write to it, or does this happen every time, even when nothing is writing to port 5544?

Nothing obvious in the Filebeat log, but I could be missing something.

2019-03-08T11:48:30.122-0600    INFO    instance/beat.go:616    Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2019-03-08T11:48:30.123-0600    INFO    instance/beat.go:623    Beat UUID: 64a79db5-2c6c-4a0b-b482-98692b83f67e
2019-03-08T11:48:30.123-0600    INFO    [seccomp]       seccomp/seccomp.go:116  Syscall filter successfully installed
2019-03-08T11:48:30.123-0600    INFO    [beat]  instance/beat.go:936    Beat info       {"system_info": {"beat": {"path": {"config": "/etc/filebeat", "data": "/var/lib/filebeat", "home": "/usr/share/filebeat", "logs": "/var/log/filebeat"}, "type": "filebeat", "uuid": "64a79db5-2c6c-4a0b-b482-98692b83f67e"}}}
2019-03-08T11:48:30.123-0600    INFO    [beat]  instance/beat.go:945    Build info      {"system_info": {"build": {"commit": "928f5e3f35fe28c1bd73513ff1cc89406eb212a6", "libbeat": "6.6.1", "time": "2019-02-13T16:12:26.000Z", "version": "6.6.1"}}}
2019-03-08T11:48:30.123-0600    INFO    [beat]  instance/beat.go:948    Go runtime info {"system_info": {"go": {"os":"linux","arch":"amd64","max_procs":2,"version":"go1.10.8"}}}
2019-03-08T11:48:30.123-0600    INFO    [beat]  instance/beat.go:952    Host info       {"system_info": {"host": {"architecture":"x86_64","boot_time":"2019-01-25T09:10:31-06:00","containerized":false,"name":"172.16.1.44","ip":["127.0.0.1/8","::1/128","172.16.1.44/16","fe80::250:56ff:fe96:6103/64"],"kernel_version":"4.4.162-94.72-default","mac":["00:50:56:96:61:03"],"os":{"family":"suse","platform":"sles","name":"SLES","version":"12-SP3","major":12,"minor":0,"patch":0},"timezone":"CST","timezone_offset_sec":-21600,"id":"e11e2e0552aff2760eb22f385925e79a"}}}
2019-03-08T11:48:30.124-0600    INFO    [beat]  instance/beat.go:981    Process info    {"system_info": {"process": {"capabilities": {"inheritable":null,"permitted":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read"],"effective":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read"],"bounding":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read"],"ambient":null}, "cwd": "/", "exe": "/usr/share/filebeat/bin/filebeat", "name": "filebeat", "pid": 17263, "ppid": 1, "seccomp": {"mode":"filter"}, "start_time": "2019-03-08T11:48:30.040-0600"}}}
2019-03-08T11:48:30.124-0600    INFO    instance/beat.go:281    Setup Beat: filebeat; Version: 6.6.1
2019-03-08T11:48:30.124-0600    INFO    [publisher]     pipeline/module.go:110  Beat name: 172.16.1.44
2019-03-08T11:48:30.125-0600    INFO    instance/beat.go:403    filebeat start running.
2019-03-08T11:48:30.125-0600    INFO    registrar/registrar.go:134      Loading registrar data from /var/lib/filebeat/registry
2019-03-08T11:48:30.125-0600    INFO    registrar/registrar.go:141      States Loaded from registrar: 51
2019-03-08T11:48:30.125-0600    WARN    beater/filebeat.go:367  Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2019-03-08T11:48:30.125-0600    INFO    crawler/crawler.go:72   Loading Inputs: 1
2019-03-08T11:48:30.126-0600    INFO    [monitoring]    log/log.go:117  Starting metrics logging every 30s
2019-03-08T11:48:30.147-0600    INFO    log/input.go:138        Configured paths: [/var/log/*.log]
2019-03-08T11:48:30.147-0600    INFO    input/input.go:114      Starting input of type: log; ID: 11204088409762598069
2019-03-08T11:48:30.147-0600    INFO    crawler/crawler.go:106  Loading and starting Inputs completed. Enabled inputs: 1
2019-03-08T11:49:00.128-0600    INFO    [monitoring]    log/log.go:144  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":0,"time":{"ms":9}},"total":{"ticks":10,"time":{"ms":27},"value":0},"user":{"ticks":10,"time":{"ms":18}}},"handles":{"limit":{"hard":4096,"soft":1024},"open":6},"info":{"ephemeral_id":"72caafd7-6f95-47cd-86ea-dffa633cc38c","uptime":{"ms":30014}},"memstats":{"gc_next":4194304,"memory_alloc":3388680,"memory_total":4971704,"rss":23748608}},"filebeat":{"events":{"added":13,"done":13},"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"output":{"type":"logstash"},"pipeline":{"clients":1,"events":{"active":0,"filtered":13,"total":13}}},"registrar":{"states":{"current":51,"update":13},"writes":{"success":13,"total":13}},"system":{"cpu":{"cores":2},"load":{"1":0,"15":0,"5":0,"norm":{"1":0,"15":0,"5":0}}}}}}
2019-03-08T11:49:30.127-0600    INFO    [monitoring]    log/log.go:144  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":10,"time":{"ms":4}},"total":{"ticks":20,"time":{"ms":5},"value":20},"user":{"ticks":10,"time":{"ms":1}}},"handles":{"limit":{"hard":4096,"soft":1024},"open":6},"info":{"ephemeral_id":"72caafd7-6f95-47cd-86ea-dffa633cc38c","uptime":{"ms":60013}},"memstats":{"gc_next":4194304,"memory_alloc":3655416,"memory_total":5238440}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":51}},"system":{"load":{"1":0,"15":0,"5":0,"norm":{"1":0,"15":0,"5":0}}}}}}
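(In case more detail is needed, the publisher internals can be turned up with Filebeat's debug selectors, which are standard logging options, to see each connection attempt to Logstash.)

# Run Filebeat in the foreground with debug output limited to publishing
filebeat -e -d "publish,logstash"

# Or set the equivalent in /etc/filebeat/filebeat.yml and restart:
#   logging.level: debug
#   logging.selectors: ["publish", "logstash"]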

There are a lot of events that write to port 5544. As a test, I disabled that port as an input in Logstash and restarted the Logstash service twice. Both times, the error listed below did not show up. As soon as I re-enabled port 5544, the error returned. Even with port 5544 disabled, though, the Filebeat logs still did not show up in Kibana.

[ERROR][logstash.inputs.tcp      ] Error in Netty pipeline: org.jruby.exceptions.RaiseException: (NoMethodError) undefined method `<<' for nil:NilClass

OK, found a bug. It's a race condition where we have already opened the socket but haven't yet set up the queue for writing.

Opened https://github.com/logstash-plugins/logstash-input-tcp/pull/142 as the potential fix.

Thanks @jsvd. I updated the Logstash TCP input plugin, as mentioned here: https://github.com/logstash-plugins/logstash-input-tcp/issues/132#issuecomment-471228612.
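(For anyone retracing this, updating a single plugin in place uses the standard logstash-plugin tool; the path below is from the package install, and Logstash needs a restart afterwards.)

/usr/share/logstash/bin/logstash-plugin update logstash-input-tcp
systemctl restart logstash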

After updating the plugin and starting over from scratch with my filebeat.yml file, the events started showing up in Logstash.
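(One note in case anyone retraces this: Filebeat tracks read offsets in its registry, the /var/lib/filebeat/registry path from the log above, so to re-ship files it has already read you may need to stop Filebeat and clear the registry first. This makes Filebeat re-read everything from scratch, so use with care.)

systemctl stop filebeat
# Deletes read offsets; Filebeat will re-read all configured files
rm /var/lib/filebeat/registry
systemctl start filebeat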

Thanks again everyone for the help!
