Invalid version of beats protocol: 69 and 70

Hello everybody,

Please help me understand this problem.

I have an Oracle Linux 8 server with Postfix and Filebeat 8.7.1 installed.

My Filebeat configuration:

# ============================== Filebeat inputs ===============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

# filestream is an input for collecting log messages from files.
- type: log

  # Unique ID among all inputs, an ID is required.
  # id: my-filestream-id

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/maillog*
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  # Line filtering happens after the parsers pipeline. If you would like to filter lines
  # before parsers, use include_message parser.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  # Line filtering happens after the parsers pipeline. If you would like to filter lines
  # before parsers, use include_message parser.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #prospector.scanner.exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  fields:
  #  level: debug
  #  review: 1
     server: postfix

# ============================== Filebeat modules ==============================

# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "172.31.2.43:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

# =============================== Elastic Cloud ================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["172.31.2.43:9200"]
  # pipeline: "postfix-pipeline"

  # Protocol - either `http` (default) or `https`.
  protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  username: "elastic"
  password: "pass"

# ------------------------------ Logstash Output -------------------------------
#output.logstash:
  # The Logstash hosts
  hosts: ["172.31.2.43:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  ssl.certificate_authorities: ["/etc/pki/root/http_ca.crt"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

# ================================= Processors =================================

Server number 2 has Elasticsearch + Logstash + Kibana installed.
Its configuration file /etc/logstash/conf.d/logstash.conf:


input {
  beats {
    port => 5044
  }
}

filter {
  if [fields][server] == "postfix" {
    grok {
      match => { "message" => "%{SYSLOGBASE} %{GREEDYDATA:log_message}" }
    }
  }
}

output {
  elasticsearch {
    hosts => ["https://172.31.2.43:9200"]
    ssl_enabled => true
    ssl_verification_mode => "full"
    ssl_certificate_authorities => "/etc/logstash/http_ca.crt"
    user => "elastic"
    password => "pass"
    index => "postfix-logs-%{+YYYY.MM.dd}"
  }
}


But when I execute the command /bin/logstash -f file.conf, I get:

[ERROR] 2023-05-25 20:32:48.623 [[main]<beats] javapipeline - A plugin had an unrecoverable error. Will restart this plugin.
  Pipeline_id:main
  Plugin: <LogStash::Inputs::Beats port=>5044, id=>"c79f20341b062a8ccd359f8ec8ddbbb09843cffc9c535616879cfdd858e7c5e1", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_ce3a9bf3-8aac-450e-a679-481563cd981a", enable_metric=>true, charset=>"UTF-8">, host=>"0.0.0.0", ssl=>false, ssl_enabled=>false, ssl_client_authentication=>"none", ssl_verify_mode=>"none", ssl_peer_metadata=>false, include_codec_tag=>true, ssl_handshake_timeout=>10000, ssl_cipher_suites=>["TLS_AES_256_GCM_SHA384", "TLS_AES_128_GCM_SHA256", "TLS_CHACHA20_POLY1305_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256", "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256"], ssl_supported_protocols=>["TLSv1.2", "TLSv1.3"], client_inactivity_timeout=>60, executor_threads=>4, add_hostname=>false, tls_min_version=>1, tls_max_version=>1.3>
  Error: Address already in use
  Exception: Java::JavaNet::BindException

And if I run tail -f /var/log/logstash/logstash-plain.log:

[2023-05-25T20:35:26,910][INFO ][org.logstash.beats.BeatsHandler][main][c79f20341b062a8ccd359f8ec8ddbbb09843cffc9c535616879cfdd858e7c5e1] [local: 172.31.2.43:5044, remote: 172.31.2.40:55366] Handling exception: io.netty.handler.codec.DecoderException: org.logstash.beats.InvalidFrameProtocolException: Invalid version of beats protocol: 69 (caused by: org.logstash.beats.InvalidFrameProtocolException: Invalid version of beats protocol: 69)
[2023-05-25T20:35:26,910][WARN ][io.netty.channel.DefaultChannelPipeline][main][c79f20341b062a8ccd359f8ec8ddbbb09843cffc9c535616879cfdd858e7c5e1] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.

Can you tell me where the error is in my configuration?

Thank you!

This is wrong: you have the elasticsearch output enabled, and at the same time parts of the logstash output uncommented (the hosts and ssl.certificate_authorities lines are active even though the output.logstash line itself is commented out).

Beats only supports one output. If you want to send logs to Logstash, you need to comment out everything related to the elasticsearch output and uncomment the output.logstash line.
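For reference, a minimal corrected output section would look like this (a sketch only; the host and port values are taken from the config posted above):

```yaml
# ---------------------------- Elasticsearch Output ----------------------------
# Entire block commented out: Beats allows exactly one active output.
#output.elasticsearch:
#  hosts: ["172.31.2.43:9200"]
#  protocol: "https"
#  username: "elastic"
#  password: "pass"

# ------------------------------ Logstash Output -------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["172.31.2.43:5044"]
```

Note that every line under an enabled output must be indented beneath it; a setting left uncommented while its parent key is commented out ends up at the top level of the YAML and misconfigures the beat.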

This means that another process on the same machine is already using port 5044. Do you have another Logstash instance running? You need to check this.
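A quick way to check (a sketch using standard Linux tools; 5044 is the Beats port from the config above):

```shell
# Show which process, if any, is already listening on the Beats port 5044.
# ss ships with iproute2; fall back to netstat on older systems.
(ss -ltnp 2>/dev/null || netstat -ltnp 2>/dev/null) | grep ':5044' \
  || echo "nothing is listening on port 5044"
```

If this shows a second java/logstash process, that instance is the one holding the port and causing the BindException.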

This happens when something else is sending data to the Beats port without speaking the Beats protocol.

You first need to fix your Filebeat configuration; the way it is now, it may not be working correctly.

When I comment out the Elasticsearch output options in the Filebeat configuration:

# ---------------------------- Elasticsearch Output ----------------------------
# output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["172.31.2.43:9200"]
  # pipeline: "postfix-pipeline"

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "pass"

# ------------------------------ Logstash Output -------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["172.31.2.43:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/http_ca.crt"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

# ================================= Processors =================================

then I get this message:


{"log.level":"error","@timestamp":"2023-06-01T09:44:39.064+0300","log.origin":{"file.name":"instance/beat.go","file.line":1274},"message":"Exiting: index management requested but the Elasticsearch output is not configured/enabled","service.name":"filebeat","ecs.version":"1.6.0"}
Exiting: index management requested but the Elasticsearch output is not configured/enabled

I'll add some information.

I received the error that Elasticsearch was not configured when I used the command
filebeat setup -e
If I use the command
sudo ./filebeat -e -c filebeat.yml -d "publish"
there are no errors saying that something is wrong with the configuration.
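This makes sense: "filebeat setup" loads index templates and dashboards into Elasticsearch/Kibana, so it refuses to run when only the Logstash output is enabled. A common workaround (a sketch; verify the flags against your Filebeat version's documentation, and substitute your own credentials) is to run setup once with -E overrides that temporarily disable the Logstash output:

```
filebeat setup -e \
  -E output.logstash.enabled=false \
  -E 'output.elasticsearch.hosts=["172.31.2.43:9200"]' \
  -E output.elasticsearch.username=elastic \
  -E output.elasticsearch.password=pass \
  -E 'setup.kibana.host="172.31.2.43:5601"'
```

The overrides apply only to that one setup run; afterwards the output.logstash section in filebeat.yml is used for normal shipping.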
I added a section to the Logstash output to print events to the console, and I can see the result!
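For reference, that console output can be added as an extra output block in the Logstash pipeline (a sketch; rubydebug pretty-prints each event, and Logstash happily runs multiple outputs side by side):

```
output {
  # Pretty-print every event to stdout for debugging,
  # in addition to the existing elasticsearch output.
  stdout { codec => rubydebug }
}
```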

Closed )))

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.