Logstash communication | configuration

I am trying to send logs from Beats, via Logstash, to ES cloud.

Five Beats are hosted on IP addresses 192.168.1.10-15. Logstash is installed on the 192.168.1.100 node.

Here is an example of part of the Packetbeat config file from one of my nodes:

output.logstash:
      hosts: ["192.168.1.100:5044"] 
      pipeline: geoip-info
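
(Side note: to verify a Beat can reach this output, the Beats 7.x CLI has a built-in connectivity test; a quick check on Windows, run from the Packetbeat install directory:)

    .\packetbeat.exe test output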

Here is how the conf file is configured in Logstash (thanks @stephenb):

################################################
# beats->logstash->es default config.
################################################
input {
  beats {
    port => 5044
  }
}

output {
  if [@metadata][pipeline] {
    elasticsearch {
      cloud_auth => "elastic:password"
      cloud_id => "mycloud:dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyRj......"

      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}"
      pipeline => "%{[@metadata][pipeline]}" 
    }
  } else {
    elasticsearch {
      cloud_auth => "elastic:password"
      cloud_id => "mycloud:dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyRj......"
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}"
    }
  }
}
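
(For what it's worth, the conditional above routes an event through the ingest pipeline named in [@metadata][pipeline] when a Beat supplies one, and indexes it plainly otherwise. To rule out config syntax problems before starting Logstash, the file can be validated in a dry run; a sketch, assuming the Windows paths that appear later in this thread:)

    .\bin\logstash.bat -f .\config\logstash.conf --config.test_and_exit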

And the logstash.yml is as follows:

# Settings file in YAML
#
# Settings can be specified either in hierarchical form, e.g.:
#
#   pipeline:
#     batch:
#       size: 125
#       delay: 5
#
# Or as flat keys:
#
#   pipeline.batch.size: 125
#   pipeline.batch.delay: 5
#
# ------------  Node identity ------------
#
# Use a descriptive name for the node:
#
node.name: logstash-node1
#
# If omitted the node name will default to the machine's host name
#
# ------------ Data path ------------------
#
# Which directory should be used by logstash and its plugins
# for any persistent needs. Defaults to LOGSTASH_HOME/data
#
# path.data:
#
# ------------ Pipeline Settings --------------
#
# The ID of the pipeline.
pipeline.id: main
#
# Set the number of workers that will, in parallel, execute the filters+outputs
# stage of the pipeline.
#
# This defaults to the number of the host's CPU cores.
#
pipeline.workers: 2
#
# How many events to retrieve from inputs before sending to filters+workers
#
# pipeline.batch.size: 125
#
# How long to wait in milliseconds while polling for the next event
# before dispatching an undersized batch to filters+outputs
#
# pipeline.batch.delay: 50
#
# Force Logstash to exit during shutdown even if there are still inflight
# events in memory. By default, logstash will refuse to quit until all
# received events have been pushed to the outputs.
#
# WARNING: enabling this can lead to data loss during shutdown
#
# pipeline.unsafe_shutdown: false
#
# Set the pipeline event ordering. Options are "auto" (the default), "true" or "false".
# "auto" will  automatically enable ordering if the 'pipeline.workers' setting
# is also set to '1'.
# "true" will enforce ordering on the pipeline and prevent logstash from starting
# if there are multiple workers.
# "false" will disable any extra processing necessary for preserving ordering.
#
#pipeline.ordered: auto
#
# ------------ Pipeline Configuration Settings --------------
#
# Where to fetch the pipeline configuration for the main pipeline
#
#path.config: C:/logstash-7.12.1/config/logstash-sample.conf
#
# Pipeline configuration string for the main pipeline
#
# config.string:
#
# At startup, test if the configuration is valid and exit (dry run)
#
# config.test_and_exit: false
#
# Periodically check if the configuration has changed and reload the pipeline
# This can also be triggered manually through the SIGHUP signal
#
# config.reload.automatic: false
#
# How often to check if the pipeline configuration has changed (in seconds)
# Note that the unit value (s) is required. Values without a qualifier (e.g. 60) 
# are treated as nanoseconds.
# Setting the interval this way is not recommended and might change in later versions.
#
# config.reload.interval: 3s
#
# Show fully compiled configuration as debug log message
# NOTE: --log.level must be 'debug'
#
#config.debug: true
#
# When enabled, process escaped characters such as \n and \" in strings in the
# pipeline configuration files.
#
# config.support_escapes: false
#

# ------------ HTTP API Settings -------------
# Define settings related to the HTTP API here.
#
# The HTTP API is enabled by default. It can be disabled, but features that rely
# on it will not work as intended.
# http.enabled: true
#
# By default, the HTTP API is bound to only the host's local loopback interface,
# ensuring that it is not accessible to the rest of the network. Because the API
# includes neither authentication nor authorization and has not been hardened or
# tested for use as a publicly-reachable API, binding to publicly accessible IPs
# should be avoided where possible.
#
# http.host: 127.0.0.1
#
# The HTTP API web server will listen on an available port from the given range.
# Values can be specified as a single port (e.g., `9600`), or an inclusive range
# of ports (e.g., `9600-9700`).
#
# http.port: 9600-9700

# ------------ Cloud Settings ---------------
# Define Elastic Cloud settings here.
# Format of cloud.id is a base64 value e.g. dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyRub3RhcmVhbCRpZGVudGlmaWVy
# and it may have an label prefix e.g. staging:dXMtZ...
# This will overwrite 'var.elasticsearch.hosts' and 'var.kibana.host'
#cloud.id: "security-deployment:ZWFzdHVzMi5henVyZS5lbGFzdGljLWNsb3VkLmNvbTo5MjQzJDI0ZThkODgwYjVhZDQ4Y2FhMzkxNjA4NjU3YjQ2ODNmJDIyMTgxYWIzMDNhMTQ4NDBhNDhmODllYWU5MjU4MDcz"

#cloud.auth: "elastic:yXrLSSfWDPcmHYhCNK0Kx8RQ"
# Format of cloud.auth is: <user>:<pass>
# This is optional
# If supplied this will overwrite 'var.elasticsearch.username' and 'var.elasticsearch.password'
# If supplied this will overwrite 'var.kibana.username' and 'var.kibana.password'
#cloud.auth: elastic:yXrLSSfWDPcmHYhCNK0Kx8RQ
#
# ------------ Queuing Settings --------------
#
# Internal queuing model, "memory" for legacy in-memory based queuing and
# "persisted" for disk-based acked queueing. Defaults is memory
#
# queue.type: memory
#
# If using queue.type: persisted, the directory path where the data files will be stored.
# Default is path.data/queue
#
# path.queue:
#
# If using queue.type: persisted, the page data files size. The queue data consists of
# append-only data files separated into pages. Default is 64mb
#
# queue.page_capacity: 64mb
#
# If using queue.type: persisted, the maximum number of unread events in the queue.
# Default is 0 (unlimited)
#
# queue.max_events: 0
#
# If using queue.type: persisted, the total capacity of the queue in number of bytes.
# If you would like more unacked events to be buffered in Logstash, you can increase the
# capacity using this setting. Please make sure your disk drive has capacity greater than
# the size specified here. If both max_bytes and max_events are specified, Logstash will pick
# whichever criteria is reached first
# Default is 1024mb or 1gb
#
# queue.max_bytes: 1024mb
#
# If using queue.type: persisted, the maximum number of acked events before forcing a checkpoint
# Default is 1024, 0 for unlimited
#
# queue.checkpoint.acks: 1024
#
# If using queue.type: persisted, the maximum number of written events before forcing a checkpoint
# Default is 1024, 0 for unlimited
#
# queue.checkpoint.writes: 1024
#
# If using queue.type: persisted, the interval in milliseconds when a checkpoint is forced on the head page
# Default is 1000, 0 for no periodic checkpoint.
#
# queue.checkpoint.interval: 1000
#
# ------------ Dead-Letter Queue Settings --------------
# Flag to turn on dead-letter queue.
#
# dead_letter_queue.enable: false

# If using dead_letter_queue.enable: true, the maximum size of each dead letter queue. Entries
# will be dropped if they would increase the size of the dead letter queue beyond this setting.
# Default is 1024mb
# dead_letter_queue.max_bytes: 1024mb

# If using dead_letter_queue.enable: true, the interval in milliseconds where if no further events eligible for the DLQ
# have been created, a dead letter queue file will be written. A low value here will mean that more, smaller, queue files
# may be written, while a larger value will introduce more latency between items being "written" to the dead letter queue, and
# being available to be read by the dead_letter_queue input when items are written infrequently.
# Default is 5000.
#
# dead_letter_queue.flush_interval: 5000

# If using dead_letter_queue.enable: true, the directory path where the data files will be stored.
# Default is path.data/dead_letter_queue
#
# path.dead_letter_queue:
#
# ------------ Metrics Settings --------------
#
# Bind address for the metrics REST endpoint
#
# http.host: "127.0.0.1"
#
# Bind port for the metrics REST endpoint, this option also accept a range
# (9600-9700) and logstash will pick up the first available ports.
#
# http.port: 9600-9700
#
# ------------ Debugging Settings --------------
#
# Options for log.level:
#   * fatal
#   * error
#   * warn
#   * info (default)
#   * debug
#   * trace
#
# log.level: info
# path.logs:
#
# ------------ Other Settings --------------
#
# Where to find custom plugins
# path.plugins: []
#
# Flag to output log lines of each pipeline in its separate log file. Each log filename contains the pipeline.name
# Default is false
# pipeline.separate_logs: false
#
# ------------ X-Pack Settings (not applicable for OSS build)--------------
#
# X-Pack Monitoring
# https://www.elastic.co/guide/en/logstash/current/monitoring-logstash.html
#xpack.monitoring.enabled: false
#xpack.monitoring.elasticsearch.username: logstash_admin
#xpack.monitoring.elasticsearch.password: Imported88!
#xpack.monitoring.elasticsearch.proxy: ["http://proxy:port"]
#xpack.monitoring.elasticsearch.hosts: ["https://es1:9200", "https://es2:9200"]
# an alternative to hosts + username/password settings is to use cloud_id/cloud_auth
#xpack.monitoring.elasticsearch.cloud_id: ZWFzdHVzMi5henVyZS5lbGFzdGljLWNsb3VkLmNvbTo5MjQzJDI0ZThkODgwYjVhZDQ4Y2FhMzkxNjA4NjU3YjQ2ODNmJDIyMTgxYWIzMDNhMTQ4NDBhNDhmODllYWU5MjU4MDcz
#xpack.monitoring.elasticsearch.cloud_auth: elastic:yXrLSSfWDPcmHYhCNK0Kx8RQ
# another authentication alternative is to use an Elasticsearch API key
#xpack.monitoring.elasticsearch.api_key: "id:api_key"
#xpack.monitoring.elasticsearch.ssl.certificate_authority: [ "/path/to/ca.crt" ]
#xpack.monitoring.elasticsearch.ssl.truststore.path: path/to/file
#xpack.monitoring.elasticsearch.ssl.truststore.password: password
#xpack.monitoring.elasticsearch.ssl.keystore.path: /path/to/file
#xpack.monitoring.elasticsearch.ssl.keystore.password: password
#xpack.monitoring.elasticsearch.ssl.verification_mode: certificate
#xpack.monitoring.elasticsearch.sniffing: false
#xpack.monitoring.collection.interval: 10s
#xpack.monitoring.collection.pipeline.details.enabled: true
#
# X-Pack Management
# https://www.elastic.co/guide/en/logstash/current/logstash-centralized-pipeline-management.html
xpack.management.enabled: false
#xpack.management.pipeline.id: ["main", "apache_logs"]
#xpack.management.elasticsearch.username: logstash_xxxx
#xpack.management.elasticsearch.password: xxxxxx
#xpack.management.elasticsearch.proxy: ["http://proxy:port"]
#xpack.management.elasticsearch.hosts: ["https://es1:9200", "https://es2:9200"]
# an alternative to hosts + username/password settings is to use cloud_id/cloud_auth
xpack.management.elasticsearch.cloud_id: xxxx-deployment:ZxxxxxxxFhMzkxNjA4NjU3YjQ2ODNmJDIyMTgxYWIzMDNhMTQ4NxxxxU5MjU4MDcz
xpack.management.elasticsearch.cloud_auth: elastic:ks9LIaWkVHpzxUJ3rYvfjIer
# another authentication alternative is to use an Elasticsearch API key
#xpack.management.elasticsearch.api_key: "id:api_key"
#xpack.management.elasticsearch.ssl.certificate_authority: [ "/path/to/ca.crt" ]
#xpack.management.elasticsearch.ssl.truststore.path: /path/to/file
#xpack.management.elasticsearch.ssl.truststore.password: password
#xpack.management.elasticsearch.ssl.keystore.path: /path/to/file
#xpack.management.elasticsearch.ssl.keystore.password: password
#xpack.management.elasticsearch.ssl.verification_mode: certificate
#xpack.management.elasticsearch.sniffing: false
#xpack.management.logstash.poll_interval: 5s

I have created firewall rules on the Logstash host and I know the Beats can communicate with Logstash; however, Logstash is not forwarding data correctly to Elastic Cloud, because I cannot see new data coming into Kibana. Any suggestions where the problem can be? Is Logstash listening on the default port on all IP addresses?

PS: Everything works fine if I set up a Beat to send logs directly to ES cloud. The problem starts when I direct logs to Logstash and from Logstash to ES cloud. Also, if I send logs from a Beat installed locally on the Logstash host, everything works fine.

# By default, the HTTP API is bound to only the host's local loopback interface,
# ensuring that it is not accessible to the rest of the network 

The solution is probably here, but how do I define which addresses Logstash should listen on?

Please advise.

@farciarz121 Please be patient as only volunteers staff this forum.

The above code defines which port Logstash will listen on for Beats; it will be listening on port 5044.

Which is where you are pointing Beats to with this:

output.logstash:
      hosts: ["192.168.1.100:5044"] 
      pipeline: geoip-info
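
On the bind-address part of your question: the beats input binds to all interfaces (0.0.0.0) by default, so no extra setting is needed for remote Beats. If you ever want to restrict it, the input's host option takes a bind address; a minimal sketch:

    input {
      beats {
        host => "0.0.0.0"   # default; set to e.g. "192.168.1.100" to listen on one interface only
        port => 5044
      }
    }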

Steps to debug (Windows equivalents are in the PowerShell sketch after this list):

  1. Try to telnet from a Beat host to the Logstash host on port 5044.
    From the Beat host it should connect.
    telnet <logstash_ip> 5044

  2. From the Logstash host, try to curl Elastic Cloud. You can get the Elasticsearch URL from the cloud console; make sure you can connect.

curl -u username:password https://elasticsearchurl

  3. Note the Logstash monitoring API runs on localhost:9600, but unless that port is already consumed you don't need to worry about it. You can check it with

curl localhost:9600

  4. Important: always provide the startup logs from Logstash when asking a question. The logs will most likely have the issue in them.
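
Since both machines here run Windows, the same checks can be done from PowerShell; a sketch (the deployment URL and credentials are placeholders):

    # 1) From a Beat host: can we reach the Logstash beats port? (no telnet client needed)
    Test-NetConnection 192.168.1.100 -Port 5044

    # 2) From the Logstash host: can we reach Elastic Cloud?
    #    (use curl.exe; bare `curl` is a PowerShell alias for Invoke-WebRequest)
    curl.exe -u elastic:<password> https://<your-deployment-url>:9243

    # 3) On the Logstash host: is the monitoring API answering?
    curl.exe http://localhost:9600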

Hi @stephenb, my apologies. I do appreciate the help and every single input from all forum members. I am really impressed with the Elastic community. It sometimes happens that a new topic gets buried under other topics and forum members do not have a chance to see it.

Regarding your suggestions, I will try these steps and will get back to you. Also, I will remember to post startup logs next time. Thank you.

PS C:\logstash-7.12.1-windows-x86_64\logstash-7.12.1> .\bin\logstash.bat -f .\config\logstash.conf
"Using bundled JDK: ""
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
Sending Logstash logs to C:/logstash-7.12.1-windows-x86_64/logstash-7.12.1/logs which is now configured via log4j2.properties
[2021-06-10T15:45:38,199][INFO ][logstash.runner          ] Log4j configuration path used is: C:\logstash-7.12.1-windows-x86_64\logstash-7.12.1\config\log4j2.properties
[2021-06-10T15:45:38,209][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.12.1", "jruby.version"=>"jruby 9.2.13.0 (2.5.7) 2020-08-03 9a89c94bcc OpenJDK 64-Bit Server VM 11.0.10+9 on 11.0.10+9 +indy +jit [mswin32-x86_64]"}
[2021-06-10T15:45:38,313][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2021-06-10T15:45:39,238][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2021-06-10T15:45:40,808][INFO ][org.reflections.Reflections] Reflections took 41 ms to scan 1 urls, producing 23 keys and 47 values
[2021-06-10T15:45:41,815][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://elastic:xxxxxx@24e8d880b5ad48caa391608657b4683f.eastus2.azure.elastic-cloud.com:9243/]}}
[2021-06-10T15:45:42,284][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"https://elastic:xxxxxx@24e8d880b5ad48caa391608657b4683f.eastus2.azure.elastic-cloud.com:9243/"}
[2021-06-10T15:45:42,443][INFO ][logstash.outputs.elasticsearch][main] ES Output version determined {:es_version=>7}
[2021-06-10T15:45:42,446][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2021-06-10T15:45:42,507][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["https://2xxxx.eastus2.azure.elastic-cloud.com:9243"]}
[2021-06-10T15:45:42,538][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://elastic:xxxxxx@24xxxxxb4683f.eastus2.azure.elastic-cloud.com:9243/]}}
[2021-06-10T15:45:42,659][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"https://elastic:xxxxxx@24e8d880b5adxxxxxxxb4683f.eastus2.azure.elastic-cloud.com:9243/"}
[2021-06-10T15:45:42,783][INFO ][logstash.outputs.elasticsearch][main] ES Output version determined {:es_version=>7}
[2021-06-10T15:45:42,786][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2021-06-10T15:45:42,834][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["https://24e8xxxxxx57b4683f.eastus2.azure.elastic-cloud.com:9243"]}
[2021-06-10T15:45:42,925][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>250, "pipeline.sources"=>["C:/logstash-7.12.1-windows-x86_64/logstash-7.12.1/config/logstash.conf"], :thread=>"#<Thread:0x2bafa0de run>"}
[2021-06-10T15:45:43,766][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>0.84}
[2021-06-10T15:45:43,816][INFO ][logstash.inputs.beats    ][main] Starting input listener {:address=>"0.0.0.0:5044"}
[2021-06-10T15:45:43,834][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[2021-06-10T15:45:43,914][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2021-06-10T15:45:43,960][INFO ][org.logstash.beats.Server][main][e5980c2f89ab7ab6e256ddce0d1310850166358d3ea7245cf6df7b5e802f2d40] Starting server on port: 5044

Still not successful. I have completely turned off the Windows firewall on both machines.

I am trying to send logs from 192.168.1.205 to 192.158.1.150 (logstash).
pfSense rules as follows:

And here is what I get when I try to connect with telnet

Just a black screen...

I'll also add that Logstash is installed on Windows Server 2016.

What is the IP of your machines?

You said that your Logstash is using the IP 192.158.1.150, but I will assume this is a typo and the correct IP is 192.168.1.150, which is the IP in your telnet screen and your screenshot.

But your Packetbeat configuration is pointing to another IP address, which is 192.168.1.100.

output.logstash:
      hosts: ["192.168.1.100:5044"] 
      pipeline: geoip-info

What is the correct IP address? Check your configuration to see if you are using the correct IP address.

The Telnet blank screen means that the connection was established.


Oh gosh... That was a silly mistake... Sorry, in the process of learning I have reconfigured it so many times that I completely overlooked this. You are right, the configuration file was pointing to the wrong address. I have updated my configuration file and life is good again...

Well, maybe not 100% good...
Still getting some errors:

[2021-06-10T15:45:41,815][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://elastic:xxxxxx@24e8d880b5ad48caa391608657b4683f.eastus2.azure.elastic-cloud.com:9243/]}}
[2021-06-10T15:45:42,284][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"https://elastic:xxxxxx@24e8d880b5ad48caa391608657b4683f.eastus2.azure.elastic-cloud.com:9243/"}
[2021-06-10T15:45:42,443][INFO ][logstash.outputs.elasticsearch][main] ES Output version determined {:es_version=>7}
[2021-06-10T15:45:42,446][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2021-06-10T15:45:42,507][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["https://24e8d880b5ad48caa391608657b4683f.eastus2.azure.elastic-cloud.com:9243"]}
[2021-06-10T15:45:42,538][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://elastic:xxxxxx@24e8d880b5ad48caa391608657b4683f.eastus2.azure.elastic-cloud.com:9243/]}}
[2021-06-10T15:45:42,659][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"https://elastic:xxxxxx@24e8d880b5ad48caa391608657b4683f.eastus2.azure.elastic-cloud.com:9243/"}
[2021-06-10T15:45:42,783][INFO ][logstash.outputs.elasticsearch][main] ES Output version determined {:es_version=>7}
[2021-06-10T15:45:42,786][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2021-06-10T15:45:42,834][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["https://24e8d8xxxx7b4683f.eastus2.azure.elastic-cloud.com:9243"]}
[2021-06-10T15:45:42,925][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>250, "pipeline.sources"=>["C:/logstash-7.12.1-windows-x86_64/logstash-7.12.1/config/logstash.conf"], :thread=>"#<Thread:0x2bafa0de run>"}
[2021-06-10T15:45:43,766][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>0.84}
[2021-06-10T15:45:43,816][INFO ][logstash.inputs.beats    ][main] Starting input listener {:address=>"0.0.0.0:5044"}
[2021-06-10T15:45:43,834][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[2021-06-10T15:45:43,914][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2021-06-10T15:45:43,960][INFO ][org.logstash.beats.Server][main][e5980c2f89ab7ab6e256ddce0d1310850166358d3ea7245cf6df7b5e802f2d40] Starting server on port: 5044
[2021-06-10T15:53:54,597][INFO ][org.logstash.beats.BeatsHandler][main][e5980c2f89ab7ab6e256ddce0d1310850166358d3ea7245cf6df7b5e802f2d40] [local: 192.168.1.150:5044, remote: 192.168.1.205:63768] Handling exception: io.netty.handler.codec.DecoderException: org.logstash.beats.InvalidFrameProtocolException: Invalid version of beats protocol: 92 (caused by: org.logstash.beats.InvalidFrameProtocolException: Invalid version of beats protocol: 92)
[2021-06-10T15:53:54,601][WARN ][io.netty.channel.DefaultChannelPipeline][main][e5980c2f89ab7ab6e256ddce0d1310850166358d3ea7245cf6df7b5e802f2d40] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.handler.codec.DecoderException: org.logstash.beats.InvalidFrameProtocolException: Invalid version of beats protocol: 92
        at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:471) ~[netty-all-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276) ~[netty-all-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[netty-all-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:61) ~[netty-all-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:370) ~[netty-all-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.util.concurrent.DefaultEventExecutor.run(DefaultEventExecutor.java:66) ~[netty-all-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-all-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-all-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-all-4.1.49.Final.jar:4.1.49.Final]
        at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: org.logstash.beats.InvalidFrameProtocolException: Invalid version of beats protocol: 92
        at org.logstash.beats.Protocol.version(Protocol.java:22) ~[logstash-input-beats-6.1.2.jar:?]
        at org.logstash.beats.BeatsParser.decode(BeatsParser.java:62) ~[logstash-input-beats-6.1.2.jar:?]
        at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:501) ~[netty-all-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:440) ~[netty-all-4.1.49.Final.jar:4.1.49.Final]
PS C:\Windows\system32> cd "C:\Program Files (x86)\winlogbeat"
PS C:\Program Files (x86)\winlogbeat> .\winlogbeat.exe -e
2021-06-10T15:06:47.196-0800    INFO    instance/beat.go:665    Home path: [C:\Program Files (x86)\winlogbeat] Config path: [C:\Program Files (x86)\winlogbeat] Data path: [C:\Program Files (x86)\winlogbeat\data] Logs path: [C:\Program Files (x86)\winlogbeat\logs]
2021-06-10T15:06:47.197-0800    INFO    instance/beat.go:673    Beat ID: 3d395a55-2444-417d-ad73-c819727ad837
2021-06-10T15:06:47.213-0800    INFO    [beat]  instance/beat.go:1014   Beat info       {"system_info": {"beat": {"path": {"config": "C:\\Program Files (x86)\\winlogbeat", "data": "C:\\Program Files (x86)\\winlogbeat\\data", "home": "C:\\Program Files (x86)\\winlogbeat", "logs": "C:\\Program Files (x86)\\winlogbeat\\logs"}, "type": "winlogbeat", "uuid": "3d395a55-2444-417d-ad73-c819727ad837"}}}
2021-06-10T15:06:47.213-0800    INFO    [beat]  instance/beat.go:1023   Build info      {"system_info": {"build": {"commit": "054e224d226b42a1dd7c72dcf48c3f18de452e22", "libbeat": "7.13.0", "time": "2021-05-19T22:47:56.000Z", "version": "7.13.0"}}}
2021-06-10T15:06:47.213-0800    INFO    [beat]  instance/beat.go:1026   Go runtime info {"system_info": {"go": {"os":"windows","arch":"amd64","max_procs":1,"version":"go1.15.12"}}}
2021-06-10T15:06:47.227-0800    INFO    [beat]  instance/beat.go:1030   Host info       {"system_info": {"host": {"architecture":"x86_64","boot_time":"2021-06-10T13:57:54.95-08:00","name":"DESKTOP_1","ip":["fe80::4d0f:4596:ccf9:e17e/64","192.168.1.205/24","::1/128","127.0.0.1/8"],"kernel_version":"10.0.19041.985 (WinBuild.160101.0800)","mac":["00:0c:29:0e:4b:84"],"os":{"type":"windows","family":"windows","platform":"windows","name":"Windows 10 Pro","version":"10.0","major":10,"minor":0,"patch":0,"build":"19042.985"},"timezone":"AKDT","timezone_offset_sec":-28800,"id":"dc186c98-07cb-471f-a5c7-fefad9d6cecd"}}}
2021-06-10T15:06:47.230-0800    INFO    [beat]  instance/beat.go:1059   Process info    {"system_info": {"process": {"cwd": "C:\\Program Files (x86)\\winlogbeat", "exe": "C:\\Program Files (x86)\\winlogbeat\\winlogbeat.exe", "name": "winlogbeat.exe", "pid": 1436, "ppid": 4340, "start_time": "2021-06-10T15:06:47.083-0800"}}}
2021-06-10T15:06:47.230-0800    INFO    instance/beat.go:309    Setup Beat: winlogbeat; Version: 7.13.0
2021-06-10T15:06:47.230-0800    INFO    [publisher]     pipeline/module.go:113  Beat name: DESKTOP_1
2021-06-10T15:06:47.230-0800    INFO    beater/winlogbeat.go:69 State will be read from and persisted to C:\Program Files (x86)\winlogbeat\data\.winlogbeat.yml
2021-06-10T15:06:47.282-0800    WARN    [cfgwarn]       registered_domain/registered_domain.go:61       BETA: The registered_domain processor is beta.
2021-06-10T15:06:47.351-0800    WARN    [cfgwarn]       registered_domain/registered_domain.go:61       BETA: The registered_domain processor is beta.
2021-06-10T15:06:47.377-0800    INFO    instance/beat.go:473    winlogbeat start running.
2021-06-10T15:06:47.397-0800    INFO    [monitoring]    log/log.go:117  Starting metrics logging every 30s
2021-06-10T15:06:47.442-0800    WARN    beater/eventlogger.go:124       EventLog[Microsoft-Windows-Sysmon/Operational] Open() error. No events will be read from this source. The specified channel could not be found.
2021-06-10T15:06:50.237-0800    INFO    [add_cloud_metadata]    add_cloud_metadata/add_cloud_metadata.go:101    add_cloud_metadata: hosting provider type not detected.
2021-06-10T15:06:51.244-0800    INFO    [publisher_pipeline_output]     pipeline/output.go:143  Connecting to backoff(async(tcp://192.168.1.150:5044))
2021-06-10T15:06:51.245-0800    INFO    [publisher]     pipeline/retry.go:219   retryer: send unwait signal to consumer
2021-06-10T15:06:51.262-0800    INFO    [publisher]     pipeline/retry.go:223     done
2021-06-10T15:06:51.263-0800    INFO    [publisher_pipeline_output]     pipeline/output.go:151  Connection to backoff(async(tcp://192.168.1.150:5044)) established
2021-06-10T15:06:53.497-0800    INFO    beater/eventlogger.go:88        EventLog[Security] successfully published 570 events
2021-06-10T15:06:53.500-0800    INFO    beater/eventlogger.go:88        EventLog[Application] successfully published 344 events
2021-06-10T15:06:54.351-0800    INFO    beater/eventlogger.go:88        EventLog[Security] successfully published 630 events
2021-06-10T15:06:55.153-0800    INFO    beater/eventlogger.go:88        EventLog[Security] successfully published 700 events
2021-06-10T15:06:56.037-0800    INFO    beater/eventlogger.go:88        EventLog[Security] successfully published 700 events
2021-06-10T15:06:57.019-0800    INFO    beater/eventlogger.go:88        EventLog[Security] successfully published 700 events
2021-06-10T15:06:57.787-0800    INFO    beater/eventlogger.go:88        EventLog[Security] successfully published 700 events
2021-06-10T15:06:58.632-0800    INFO    beater/eventlogger.go:88        EventLog[Security] successfully published 700 events
2021-06-10T15:06:59.980-0800    INFO    beater/eventlogger.go:88        EventLog[Security] successfully published 700 events
2021-06-10T15:07:01.415-0800    INFO    beater/eventlogger.go:88        EventLog[Security] successfully published 700 events
2021-06-10T15:07:01.955-0800    INFO    beater/eventlogger.go:88        EventLog[Security] successfully published 385 events
2021-06-10T15:07:17.407-0800    INFO    [monitoring]    log/log.go:144  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":781,"time":{"ms":781}},"total":{"ticks":7796,"time":{"ms":7796},"value":7796},"user":{"ticks":7015,"time":{"ms":7015}}},"handles":{"open":223},"info":{"ephemeral_id":"412bab9e-6e4e-4044-8681-1de8af2feba9","uptime":{"ms":30261}},"memstats":{"gc_next":42497360,"memory_alloc":38623896,"memory_sys":76498040,"memory_total":475005968,"rss":85790720},"runtime":{"goroutines":35}},"libbeat":{"config":{"module":

(Please note this is from Winlogbeat, not Packetbeat; I decided it would be easier to set up Winlogbeat first.)

For the InvalidFrameProtocolException, see here.
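
For context, this exception generally means that bytes arriving on port 5044 were not beats-protocol frames. A common cause is a TLS mismatch (one side has SSL enabled and the other does not), or some other client, such as a telnet session, sending text to the port. If TLS is intended, both sides must match; a minimal sketch with hypothetical certificate paths:

    # winlogbeat.yml (Beat side; CA path is hypothetical)
    output.logstash:
      hosts: ["192.168.1.150:5044"]
      ssl.certificate_authorities: ["C:/certs/ca.crt"]

    # logstash.conf (input side; cert/key paths are hypothetical)
    input {
      beats {
        port => 5044
        ssl => true
        ssl_certificate => "C:/certs/server.crt"
        ssl_key => "C:/certs/server.key"
      }
    }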
