Logstash upgrade issue

Hello Team,

We have an IoT device. The logs generated by this device are received by a Logstash instance as input, and the same logs are sent to a CloudAMQP queue as output.

Working condition --> Logstash version 6.8.22

Not working --> Today we upgraded Logstash to 7.17.5, and the issue started with the error below.

We are using a Helm chart to deploy the application in our EKS cluster.

" {:exception=>Java::JavaxNetSsl::SSLException, :message=>"Unsupported or unrecognized SSL message"}"

When we searched for this issue, some docs suggested we are using the wrong protocol in the Logstash config, but we are not sure what change we need to make to fix it.

Any idea what could be the issue?

Welcome to the community.

Are you using Filebeat to forward logs to Logstash?
Where are you using SSL, on the input or the output?
It would be useful to see the input and output sections from your .conf.

Hello Rios,

Rsyslog runs on the test devices; it collects the logs from the device and sends them to the Logstash service in EKS.

logstash.conf: |
        input {
            tcp {
                port                  => 5140
                type                  => "device-logs"
                ssl_enable            => "true"
                tcp_keep_alive        => "true"
                ssl_cert              => "/etc/pki/logstash/syslog-listener.crt"
                ssl_key               => "/etc/pki/logstash/syslog-listener.key"
                ssl_extra_chain_certs => ["/etc/pki/logstash/syslog-listener-ca.crt"]
                ssl_certificate_authorities => ["/etc/pki/logstash/syslog-listener-ca.crt"]
            }
        }
        filter {
        }
        output {
            rabbitmq {
                vhost         => "device"
                host          => "xxxxx"
                ssl           => true
                exchange      => "logs"
                exchange_type => "x-consistent-hash"
                passive       => true
                user          => "${RABBITMQ_USERNAME}"
                password      => "${RABBITMQ_PASSWORD}"
                key           => "%{+SSS}" # Hashing Key based on the current fraction of a second
            }
        }

logstash.yml file

logstashConfig:
  logstash.yml: |
    http.host: "0.0.0.0"
    xpack.monitoring.enabled: true
    xpack.monitoring.elasticsearch.hosts: "${ELASTICSEARCH_URL}"
    xpack.monitoring.elasticsearch.username: ${ELASTICSEARCH_USERNAME}
    xpack.monitoring.elasticsearch.password: ${ELASTICSEARCH_PASSWORD}
    xpack.monitoring.collection.pipeline.details.enabled: true
    path.config: /usr/share/logstash/pipeline
    pipeline.id: devicelogs-rsyslog-amqp-upload
    log.level: warn

The above configuration works fine with Logstash 6.8.22 and fails with Logstash 7.17.5.

Most likely TLSv1.3 is causing the issue. Try to restrict the input to an older TLS version, or upgrade to at least TLS 1.2 on the rsyslog side.
Check this

There is also the possibility of setting ssl_version on the rabbitmq output plugin; however, I would check the input settings first.
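
For example, something along these lines (untested sketch; ssl_supported_protocols is an option of the tcp input and ssl_version an option of the rabbitmq output, and the cert paths and host are just the placeholders from your config above):

        input {
            tcp {
                port                    => 5140
                ssl_enable              => true
                # Restrict the listener to TLS versions the rsyslog client can negotiate
                ssl_supported_protocols => ["TLSv1.2"]
                ssl_cert                => "/etc/pki/logstash/syslog-listener.crt"
                ssl_key                 => "/etc/pki/logstash/syslog-listener.key"
            }
        }
        output {
            rabbitmq {
                host        => "xxxxx"
                ssl         => true
                # Pin the TLS version used towards CloudAMQP
                ssl_version => "TLSv1.2"
            }
        }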

Hello Rios,

Thank you for the update. We will take a look and report back here.

Hello Rios,

The issue was with the CloudAMQP port. We were using the default AMQP port 5672 instead of the TLS port 5671, so the TLS handshake was being sent to a non-TLS port, which explains the "Unsupported or unrecognized SSL message" error. I have explicitly added the TLS port to the output section, and now it's working fine.

Here is the change to the output section:
        output {
            rabbitmq {
                port          => 5671  # TLS port added here to fix the issue
                vhost         => "device"
                host          => "xxxxx"
                ssl           => true
                exchange      => "logs"
                exchange_type => "x-consistent-hash"
                passive       => true
                user          => "${RABBITMQ_USERNAME}"
                password      => "${RABBITMQ_PASSWORD}"
                key           => "%{+SSS}" # Hashing key based on the current fraction of a second
            }
        }


Excellent. Thank you for the feedback.
