First time user - Unable to get Filebeat > logstash

I've been trying to get a home monitoring system up and running, and I've fallen flat.

My goal was to get MQTT and other messages into Filebeat, through Logstash, and into Kibana to build a dashboard.

I've followed a few guides on how to set up an Ubuntu host. Previously I was able to get Filebeat logs into Kibana, but they seem to be bypassing Logstash and I'm not sure why. The reason I want to use Logstash is so that I can parse the MQTT messages.

I'm not sure what information is required to help me here, so I'm posting all that I can.

user@ELK:~$ sudo filebeat -e -c filebeat.yml -d "publish"
[sudo] password for user:
{"log.level":"info","@timestamp":"2024-02-03T12:11:13.094+0700","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/cmd/instance.(*Beat).configure","file.name":"instance/beat.go","file.line":811},"message":"Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2024-02-03T12:11:13.094+0700","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/cmd/instance.(*Beat).configure","file.name":"instance/beat.go","file.line":819},"message":"Beat ID: a574dab3-2a5b-4b87-a747-3b1075bc661d","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2024-02-03T12:11:16.097+0700","log.logger":"add_cloud_metadata","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/processors/add_cloud_metadata.(*addCloudMetadata).init.func1","file.name":"add_cloud_metadata/add_cloud_metadata.go","file.line":100},"message":"add_cloud_metadata: hosting provider type not detected.","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2024-02-03T12:11:17.703+0700","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/cmd/instance.(*Beat).launch","file.name":"instance/beat.go","file.line":430},"message":"filebeat stopped.","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"error","@timestamp":"2024-02-03T12:11:17.703+0700","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/cmd/instance.handleError","file.name":"instance/beat.go","file.line":1312},"message":"Exiting: /var/lib/filebeat/filebeat.lock: data path already locked by another beat. Please make sure that multiple beats are not sharing the same data path (path.data)","service.name":"filebeat","ecs.version":"1.6.0"}
Exiting: /var/lib/filebeat/filebeat.lock: data path already locked by another beat. Please make sure that multiple beats are not sharing the same data path (path.data)
user@ELK:~$
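For reference, the "data path already locked" error above usually just means the packaged filebeat service is still running in the background, so the foreground copy cannot take /var/lib/filebeat/filebeat.lock. A minimal sketch of the usual workaround, assuming the standard deb/systemd install:

```shell
# Stop the background service copy first so the foreground run can
# take the data-path lock, then start filebeat in the foreground.
sudo systemctl stop filebeat
sudo filebeat -e -c /etc/filebeat/filebeat.yml -d "publish"
```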

From the Logstash log file: "Invalid version of beats protocol: 69". I commented out some of the filebeat.yml file for troubleshooting.

[2024-02-03T11:20:07,778][INFO ][org.logstash.beats.BeatsHandler][main][0584ea2ca64206b366d49f9cec829e66bb9e36e24135690c55dc57f3ad28d327] [local: 127.0.0.1:5044, remote: 12>
[2024-02-03T11:20:07,778][WARN ][io.netty.channel.DefaultChannelPipeline][main][0584ea2ca64206b366d49f9cec829e66bb9e36e24135690c55dc57f3ad28d327] An exceptionCaught() event>
io.netty.handler.codec.DecoderException: org.logstash.beats.InvalidFrameProtocolException: Invalid version of beats protocol: 69
        at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:499) ~[netty-codec-4.1.100.Final.jar:4.1.100.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed(ByteToMessageDecoder.java:426) ~[netty-codec-4.1.100.Final.jar:4.1.100.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed(ByteToMessageDecoder.java:393) ~[netty-codec-4.1.100.Final.jar:4.1.100.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.channelInactive(ByteToMessageDecoder.java:376) ~[netty-codec-4.1.100.Final.jar:4.1.100.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:305) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
        at io.netty.channel.AbstractChannelHandlerContext.access$300(AbstractChannelHandlerContext.java:61) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
        at io.netty.channel.AbstractChannelHandlerContext$4.run(AbstractChannelHandlerContext.java:286) ~[netty-transport-4.1.100.Final.jar:4.1.100.Final]
        at io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java:173) ~[netty-common-4.1.100.Final.jar:4.1.100.Final]
        at io.netty.util.concurrent.DefaultEventExecutor.run(DefaultEventExecutor.java:66) ~[netty-common-4.1.100.Final.jar:4.1.100.Final]
        at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) [netty-common-4.1.100.Final.jar:4.1.100.Final]
        at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.100.Final.jar:4.1.100.Final]
        at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-common-4.1.100.Final.jar:4.1.100.Final]
        at java.lang.Thread.run(Thread.java:840) [?:?]
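For what it's worth, the "version" in that exception is simply the first byte Logstash read from the socket: 69 is ASCII 'E', which suggests that whatever connected to port 5044 was not speaking the Beats (Lumberjack) protocol at all, for example an HTTP client, a TLS handshake against a plain-text input, or a stray telnet session. A quick sketch of the decoding:

```shell
# The beats input reads the first byte on the wire as the protocol version.
# Character 'E' has code 69, matching the number in the exception above.
printf '%d\n' "'E"
```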

Here is the filebeat.yml:

filebeat.inputs:
#- type: filestream
#  id: logify-id
#  tags: ["logify"]
#  enabled: true
#  paths:
#    - /var/log/logify/app.log

#- type: filestream
#  enabled: true
#  id: auth-id
#  tags: ["system"]
#  paths:
#    - /var/log/auth.log

#- type: filestream
#  enabled: true
#  id: speedtest-id
#  tags: ["speedtest"]
#  paths:
#    - /var/log/speedtests.log

- type: mqtt
  enabled: true
  id: mqtt-sensor-id
  tags: ["mqtt"]
  hosts:
    - tcp://127.0.0.1:1883
  username: sensor
  password: sensorMQTT
  topics:
    - '#'
    - /GV/Outdoor/Sonoff-OutdoorLights/stat/RESULT

setup.kibana:
  host: "localhost:5601"

output.logstash:
  # The Logstash hosts
  hosts: ["192.168.21.102:5044"]

Here is my Beats input, /etc/logstash/conf.d/02-beats-input.conf:


input {
  beats {
    port => 5044
  }
}

Here is 30-elasticsearch-output.conf

output {
  if [@metadata][pipeline] {
    elasticsearch {
      hosts => ["localhost:9200"]
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
      pipeline => "%{[@metadata][pipeline]}"
    }
  } else {
    elasticsearch {
      hosts => ["localhost:9200"]
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    }
  }
}

Here is what happens when I run the Logstash config test:

user@ELK:/var/log/logstash$ sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash -t
Using bundled JDK: /usr/share/logstash/jdk
/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/concurrent-ruby-1.1.9/lib/concurrent-ruby/concurrent/executor/java_thread_pool_executor.rb:13: warning: method redefined; discarding old to_int
/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/concurrent-ruby-1.1.9/lib/concurrent-ruby/concurrent/executor/java_thread_pool_executor.rb:13: warning: method redefined; discarding old to_f
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
[2024-02-03T12:22:05,222][INFO ][logstash.runner          ] Log4j configuration path used is: /etc/logstash/log4j2.properties
[2024-02-03T12:22:05,243][WARN ][logstash.runner          ] The use of JAVA_HOME has been deprecated. Logstash 8.0 and later ignores JAVA_HOME and uses the bundled JDK. Running Logstash with the bundled JDK is recommended. The bundled JDK has been verified to work with each specific version of Logstash, and generally provides best performance and reliability. If you have compelling reasons for using your own JDK (organizational-specific compliance requirements, for example), you can configure LS_JAVA_HOME to use that version instead.
[2024-02-03T12:22:05,245][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"8.12.0", "jruby.version"=>"jruby 9.4.5.0 (3.1.4) 2023-11-02 1abae2700f OpenJDK 64-Bit Server VM 17.0.9+9 on 17.0.9+9 +indy +jit [x86_64-linux]"}
[2024-02-03T12:22:05,248][INFO ][logstash.runner          ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, -Dlogstash.jackson.stream-read-constraints.max-string-length=200000000, -Dlogstash.jackson.stream-read-constraints.max-number-length=10000, -Djruby.regexp.interruptible=true, -Djdk.io.File.enableADS=true, --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED, --add-opens=java.base/java.security=ALL-UNNAMED, --add-opens=java.base/java.io=ALL-UNNAMED, --add-opens=java.base/java.nio.channels=ALL-UNNAMED, --add-opens=java.base/sun.nio.ch=ALL-UNNAMED, --add-opens=java.management/sun.management=ALL-UNNAMED]
[2024-02-03T12:22:05,252][INFO ][logstash.runner          ] Jackson default value override `logstash.jackson.stream-read-constraints.max-string-length` configured to `200000000`
[2024-02-03T12:22:05,252][INFO ][logstash.runner          ] Jackson default value override `logstash.jackson.stream-read-constraints.max-number-length` configured to `10000`
[2024-02-03T12:22:06,730][INFO ][org.reflections.Reflections] Reflections took 118 ms to scan 1 urls, producing 132 keys and 468 values
[2024-02-03T12:22:07,261][INFO ][logstash.javapipeline    ] Pipeline `main` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
Configuration OK
[2024-02-03T12:22:07,262][INFO ][logstash.runner          ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash

Lastly, I did notice this message in the Kibana dashboard logs:

{"log.level":"error","@timestamp":"2024-02-03T12:53:39.237+0700","log.logger":"publisher_pipeline_output","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/publisher/pipeline.(*netClientWorker).publishBatch","file.name":"pipeline/client_worker.go","file.line":174},"message":"failed to publish events: write tcp 192.168.21.102:53734->192.168.21.102:5044: write: connection reset by peer","service.name":"filebeat","ecs.version":"1.6.0"}

I'm a bit overwhelmed by all of this as a first-time user. In over my head, as they say.

Adding my full configs here. I'm not sure why, but it seems my Filebeat cannot connect to Logstash, given the "write tcp" error, yet the logs are there in Kibana?

tim@ELK:~$ sudo cat /etc/filebeat/filebeat.yml
###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

# ============================== Filebeat inputs ===============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input-specific configurations.

# filestream is an input for collecting log messages from files.
#- type: filestream
  # Unique ID among all inputs, an ID is required.
#  id: my-filestream-id
  # Change to true to enable this input configuration.
#  tags: ["filestream"]
#  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
#  paths:
#    - /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*


#- type: filestream
#  id: logify-id
#  tags: ["logify"]
#  enabled: true
#  paths:
#    - /var/log/logify/app.log

#- type: filestream
#  enabled: true
#  id: auth-id
#  tags: ["system"]
#  paths:
#    - /var/log/auth.log

#- type: filestream
#  enabled: true
#  id: speedtest-id
#  tags: ["speedtest"]
#  paths:
#    - /var/log/speedtests.log

- type: mqtt
  enabled: true
  id: mqtt-sensor-id
  tags: ["mqtt"]
  hosts:
    - tcp://127.0.0.1:1883
  username: sensor
  password: sensorMQTT
  topics:
    - '#'
    - /GV/Outdoor/Sonoff-OutdoorLights/stat/RESULT


#output.console:
#  pretty: true


  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  # Line filtering happens after the parsers pipeline. If you would like to filter lines
  # before parsers, use include_message parser.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  # Line filtering happens after the parsers pipeline. If you would like to filter lines
  # before parsers, use include_message parser.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #prospector.scanner.exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

# ============================== Filebeat modules ==============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

# ======================= Elasticsearch template setting =======================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false


# ================================== General ===================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboard archive. By default, this URL
# has a value that is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

# =============================== Elastic Cloud ================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #  hosts: ["localhost:9200"]

  # Performance preset - one of "balanced", "throughput", "scale",
  # "latency", or "custom".
  #preset: balanced

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

# ------------------------------ Logstash Output -------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["192.168.21.102:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

# ================================== Logging ===================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
logging.level: error
logging.to.files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat
  keepfiles: 3
  permissions: 0644

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors, use ["*"]. Examples of other selectors are "beat",
# "publisher", "service".
#logging.selectors: ["*"]

# ============================= X-Pack Monitoring ==============================
# Filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch outputs are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

# ============================== Instrumentation ===============================

# Instrumentation support for the filebeat.
#instrumentation:
    # Set to true to enable instrumentation of filebeat.
    #enabled: false

    # Environment in which filebeat is running on (eg: staging, production, etc.)
    #environment: ""

    # APM Server hosts to report instrumentation results to.
    #hosts:
    #  - http://localhost:8200

    # API Key for the APM Server(s).
    # If api_key is set then secret_token will be ignored.
    #api_key:

    # Secret token for the APM Server(s).
    #secret_token:


# ================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true

tim@ELK:~$
tim@ELK:~$ cd /etc/logstash/
tim@ELK:/etc/logstash$ cd conf.d/
tim@ELK:/etc/logstash/conf.d$ ls -al
total 16
drwxr-xr-x 2 root root 4096 Feb  3 13:59 .
drwxr-xr-x 3 root root 4096 Feb  3 13:56 ..
-rw-r--r-- 1 root root   60 Feb  3 13:40 02-beats-input.conf
-rw-r--r-- 1 root root  403 Jan 21 13:10 30-elasticsearch-output.conf
tim@ELK:/etc/logstash/conf.d$ tail 02-beats-input.conf
input {
  beats {
    port => 5044
    ssl  => false
  }
}


tim@ELK:/etc/logstash/conf.d$ cat 30-elasticsearch-output.conf

output {
  if [@metadata][pipeline] {
    elasticsearch {
      hosts => ["localhost:9200"]
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
      pipeline => "%{[@metadata][pipeline]}"
    }
  } else {
    elasticsearch {
      hosts => ["localhost:9200"]
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    }
  }
}

On your Logstash host, please verify it is in fact listening on port 5044:
sudo netstat -tulnp | grep 5044

Then on your Filebeat host, verify connectivity:
telnet 192.168.21.102 5044
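If netstat is not installed, the ss tool from iproute2 (shipped by default on recent Ubuntu releases) gives an equivalent check; a sketch:

```shell
# -t TCP sockets, -l listening only, -n numeric ports, -p owning process
sudo ss -tlnp | grep 5044
```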

@eezeetee

Exactly what versions of all components are you running?

Thank you.

I don't have netstat, but I assume this'll work:

$ sudo lsof -V -nP -i4TCP +c0
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
systemd-resolve 451 systemd-resolve 14u IPv4 16010 0t0 TCP 127.0.0.53:53 (LISTEN)
cupsd 693 root 7u IPv4 21059 0t0 TCP 127.0.0.1:631 (LISTEN)
mosquitto 712 mosquitto 5u IPv4 21016 0t0 TCP *:1883 (LISTEN)
mosquitto 712 mosquitto 8u IPv4 20121 0t0 TCP 192.168.21.102:1883->192.168.231.18:40971 (ESTABLISHED)
sshd 722 root 3u IPv4 16306 0t0 TCP *:22 (LISTEN)
nginx 756 root 6u IPv4 20054 0t0 TCP *:80 (LISTEN)
nginx 757 www-data 6u IPv4 20054 0t0 TCP *:80 (LISTEN)
nginx 758 www-data 6u IPv4 20054 0t0 TCP *:80 (LISTEN)
nginx 759 www-data 6u IPv4 20054 0t0 TCP *:80 (LISTEN)
nginx 760 www-data 6u IPv4 20054 0t0 TCP *:80 (LISTEN)
node 774 kibana 18u IPv4 33173 0t0 TCP 192.168.21.102:50930->192.168.21.102:9200 (ESTABLISHED)
node 774 kibana 28u IPv4 30803 0t0 TCP 192.168.21.102:5601 (LISTEN)
java 1441 elasticsearch 542u IPv6 31564 0t0 TCP 192.168.21.102:9200->192.168.21.102:50930 (ESTABLISHED)
java 1441 elasticsearch 574u IPv6 28892 0t0 TCP 192.168.21.102:9200->192.168.21.102:48688 (ESTABLISHED)
sshd 3014 root 4u IPv4 36371 0t0 TCP 192.168.21.102:22->192.168.230.119:65045 (ESTABLISHED)
sshd 3092 tim 4u IPv4 36371 0t0 TCP 192.168.21.102:22->192.168.230.119:65045 (ESTABLISHED)
filebeat 3152 root 7u IPv4 33222 0t0 TCP 127.0.0.1:38798->127.0.0.1:5044 (ESTABLISHED)
filebeat 3152 root 18u IPv4 36497 0t0 TCP 127.0.0.1:41274->127.0.0.1:1883 (ESTABLISHED)
java 3211 logstash 74u IPv6 33076 0t0 TCP 127.0.0.1:9600 (LISTEN)
java 3211 logstash 89u IPv6 36531 0t0 TCP 127.0.0.1:57180->127.0.0.1:9200 (ESTABLISHED)
java 3211 logstash 90u IPv6 29497 0t0 TCP 127.0.0.1:57182->127.0.0.1:9200 (ESTABLISHED)
java 3211 logstash 108u IPv6 31740 0t0 TCP 127.0.0.1:5044->127.0.0.1:38798 (ESTABLISHED)
java 3211 logstash 109u IPv6 29540 0t0 TCP 127.0.0.1:39096->127.0.0.1:9200 (ESTABLISHED)
java 3211 logstash 110u IPv6 29541 0t0 TCP 127.0.0.1:39106->127.0.0.1:9200 (ESTABLISHED)
java 3211 logstash 111u IPv6 29542 0t0 TCP 127.0.0.1:39112->127.0.0.1:9200 (ESTABLISHED)
java 3211 logstash 112u IPv6 31488 0t0 TCP 127.0.0.1:39122->127.0.0.1:9200 (ESTABLISHED)
java 3211 logstash 113u IPv6 29547 0t0 TCP 127.0.0.1:41046->127.0.0.1:9200 (ESTABLISHED)
user@ELK:~

telnet 192.168.21.102 5044
Trying 192.168.21.102...
Connected to 192.168.21.102.
Escape character is '^]'.

Connection closed by foreign host.

Hi stephenb,

Here are the versions of Filebeat, Logstash, and Elasticsearch:

filebeat version 8.12.0 (amd64), libbeat 8.12.0 [27c592782c25906c968a41f0a6d8b1955790c8c5 built 2024-01-10 21:05:10 +0000 UTC]

/usr/share/logstash/bin/logstash --version
Using bundled JDK: /usr/share/logstash/jdk
logstash 8.12.0

/usr/share/kibana/bin/kibana --version
{"log.level":"info","@timestamp":"2024-02-05T10:22:31.536Z","log.logger":"elastic-apm-node","ecs.version":"8.10.0","agentVersion":"4.2.0","env":{"pid":27054,"proctitle":"/usr/share/kibana/bin/../node/bin/node","os":"linux 6.5.0-15-generic","arch":"x64","host":"ELK","timezone":"UTC+0700","runtime":"Node.js v18.18.2"

/usr/share/elasticsearch/bin/elasticsearch --version
warning: ignoring JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64; using bundled JDK
Version: 8.12.0, Build: deb/1665f706fd9354802c02146c1e6b5c0fbcddfbc9/2024-01-11T10:05:27.953830042Z, JVM: 21.0.1

What I'm struggling to understand is why my Filebeat data is getting to Elasticsearch even though my config has Logstash configured. The attempts to connect to Logstash seem not to be working, given the "write tcp" error.

Can anyone help?

Can you check whether there is another filebeat process active?
If not, can you rename filebeat.lock to something else, for instance fileb.txt?

That, or you're not actually using the filebeat.yml you think you are.

How did you install filebeat?

How are you starting filebeat?

If you go to

/etc/filebeat

Then run

filebeat -c ./filebeat.yml -e

Do you get the same results?

There is only 1 filebeat process.

It works when I add a third and fourth filestream and the MQTT input, so I know the Filebeat side is fine.
What wasn't working was getting the information to Logstash.

I've given up and will try to use Filebeat inputs instead.

I've given up on Logstash, so now I'm trying to get my speedtest data parsed by Filebeat.

Is this correct? The log file is there with JSON data.

- type: filestream
  enabled: true
  id: speedtest-id
  tags: ["speedtest"]
  paths:
    - /var/log/speedtests.log
  json.keys_under_root: true
  json.message_key: log
  include_lines: ['{"type"'] # this is the message I want to see
  fields:
    app_id: speedtest

Here is the log entry as shown in the "Real time stream" in Elasticsearch:

17:00:36.368

{"type":"result","timestamp":"2024-02-09T10:00:36Z","ping":{"jitter":0.682,"latency":5.649,"low":5.135,"high":6.033},"download":{"bandwidth":6594403,"bytes":121144716,"elapsed":15011,"latency":{"iqm":25.857,"low":5.150,"high":496.777,"jitter":17.589}},"upload":{"bandwidth":19054037,"bytes":287473074,"elapsed":14994,"latency":{"iqm":17.597,"low":4.408,"high":273.334,"jitter":7.169}},"packetLoss":1.3333333333333333,"isp":"AIT Fibre","interface":{"internalIp":"192.168.21.102","name":"enp0s3","macAddr":"08:00:27:C4:C9:4E","isVpn":false,"externalIp":"100.122.2.130"},"server":{"id":16061,"host":"speedtest-nbi1.aitasda.net","port":8080,"name":"AIS","location":"Somewhere","country":"Somewhere","ip":"51.4.58.12"},"result":{"id":"0a584f2f-98b5-47e3-88a3-5409da338e40","url":"Speedtest by Ookla - The Global Broadband Speed Test":true}}
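For what it's worth, the json.* settings in the snippet above belong to Filebeat's older log input; in 8.x the filestream input does its JSON decoding through a parsers section instead. A sketch of the equivalent configuration, reusing the id, tags, and path from above (option placement per the filestream docs; untested here):

```yaml
- type: filestream
  enabled: true
  id: speedtest-id
  tags: ["speedtest"]
  paths:
    - /var/log/speedtests.log
  parsers:
    - ndjson:
        target: ""            # decode the JSON keys into the event root
        add_error_key: true   # mark lines that fail to parse
  include_lines: ['\{"type"']  # keep only the speedtest result messages
  fields:
    app_id: speedtest
```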

Can anyone guide me to a document that explains how Filebeat reads from a log file, parses the content to look for certain fields, and publishes it?

I've got a syslog file and an MQTT feed. Both are in JSON, but I'm not sure how to parse them to get the data into ELK.

Still struggling here. I have no idea where to go. The use case is:

Filebeat data coming in from a log file AND MQTT.
The format is JSON.

I don't know how to parse it to get the data stored in Elasticsearch so that I can graph it.

If you have given up on using Logstash, then asking in the Logstash forum is not going to reach the people most able to help you.

If you want to connect Filebeat and Elasticsearch, then check out the Beats forum.

You have said that you do not completely understand how events are moving through the filebeat / logstash / elasticsearch / kibana complex. I suggest you try working with two components rather than four.

If you want to connect Filebeat and Logstash, then take Kibana and Elasticsearch out of the system, and just try to get Logstash running on the command line (with a stdout/rubydebug output) to consume events from Filebeat run as a service.
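A minimal pipeline for that two-component experiment might look like this (a sketch; the port matches the beats input used earlier in the thread, and test.conf is a hypothetical file name):

```
input {
  beats {
    port => 5044
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
```

Running it with something like /usr/share/logstash/bin/logstash -f test.conf should print every event Filebeat sends, with no Elasticsearch or Kibana involved.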


Or just download the tar.gz of filebeat and run from the command line...

If you're really confused, that's what I always suggest to get things working.

I would uninstall your filebeat that you have today

Then I would download the tar.gz and just untar it

Then just run it from the command line.

I agree with @Badger

Either connect filebeat to logstash and monitor that output

Or filebeat directly to elasticsearch

But don't try filebeat to logstash to elasticsearch as your first attempt.

Also using the tar.gz versus the packages and services is much easier to debug


Thanks Stephenb and Badger.

I'm getting filebeat logs into Elasticsearch. I can see them in the logs view in real time. So Filebeat > Elasticsearch is working.

What I don't know how to do is parse the JSON and store the data for graphing purposes in a dashboard.
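For reference, one common approach is Filebeat's decode_json_fields processor, which expands a field containing a JSON string into structured fields; a minimal sketch, assuming the raw JSON lands in the default message field:

```yaml
processors:
  - decode_json_fields:
      fields: ["message"]   # field(s) holding the raw JSON string
      target: ""            # "" merges the decoded keys into the event root
      overwrite_keys: true  # let decoded keys replace existing ones
```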

Okay, so good progress.
Pretty simple:

Open a topic

Help me parse this with filebeat
Provide your current filebeat.yml and several samples of the raw logs.

Pretty sure we'll be able to help you pretty quickly.

If the logs are NDJSON, just follow this..


Thank you, StephenB. Will do that. Appreciate it.