Logstash log showing "error:1e00007d:Cipher functions:OPENSSL_internal:INVALID_NONCE"

Hi, we are testing ELK 6.6.1 on the s390x platform (Linux on Z).

After starting the ELK 6.6.1 stack, everything seems to be working fine and we can see logs in the Kibana UI, but the Logstash log has the following entries from time to time:

[2019-07-07T12:11:35,834][INFO ][org.logstash.beats.BeatsHandler] [local: 10.1.79.146:5044, remote: 10.1.79.173:48148] Handling exception: error:1e00007d:Cipher functions:OPENSSL_internal:INVALID_NONCE
[2019-07-07T12:11:35,834][WARN ][io.netty.channel.DefaultChannelPipeline] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.

javax.net.ssl.SSLException: error:1e00007d:Cipher functions:OPENSSL_internal:INVALID_NONCE
	at io.netty.handler.ssl.ReferenceCountedOpenSslEngine.shutdownWithError(ReferenceCountedOpenSslEngine.java:895) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.handler.ssl.ReferenceCountedOpenSslEngine.shutdownWithError(ReferenceCountedOpenSslEngine.java:882) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.handler.ssl.ReferenceCountedOpenSslEngine.wrap(ReferenceCountedOpenSslEngine.java:824) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
	at javax.net.ssl.SSLEngine.wrap(SSLEngine.java:509) ~[?:1.8.0_212]
	at io.netty.handler.ssl.SslHandler.wrap(SslHandler.java:1019) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.handler.ssl.SslHandler.wrap(SslHandler.java:825) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.handler.ssl.SslHandler.wrapAndFlush(SslHandler.java:793) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.handler.ssl.SslHandler.flush(SslHandler.java:774) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:768) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.channel.AbstractChannelHandlerContext.access$1500(AbstractChannelHandlerContext.java:38) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.channel.AbstractChannelHandlerContext$16.run(AbstractChannelHandlerContext.java:756) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163) [netty-all-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404) [netty-all-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:462) [netty-all-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897) [netty-all-4.1.30.Final.jar:4.1.30.Final]
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-all-4.1.30.Final.jar:4.1.30.Final]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_212]

After turning on debug-level tracing for org.logstash.beats.BeatsHandler (see the note at the end of this post for how this can be enabled), we can see the following pattern:

(normal pipeline processing...)

[2019-07-07T21:09:38,423][DEBUG][org.logstash.beats.BeatsHandler] [local: 10.1.79.146:5044, remote: 10.1.10.165:34646] Sending a new message for the listener, sequence: 73

[2019-07-07T21:09:38,427][DEBUG][org.logstash.beats.BeatsHandler] 5cc2452e: batches pending: false

[2019-07-07T21:09:38,429][DEBUG][org.logstash.beats.BeatsHandler] [local: 10.1.79.146:5044, remote: 10.1.10.165:34646] Received a new payload

[2019-07-07T21:09:38,430][DEBUG][org.logstash.beats.BeatsHandler] [local: 10.1.79.146:5044, remote: 10.1.10.165:34646] Sending a new message for the listener, sequence: 1

[2019-07-07T21:09:38,434][DEBUG][org.logstash.beats.BeatsHandler] 5cc2452e: batches pending: false

[2019-07-07T21:09:38,436][DEBUG][org.logstash.beats.BeatsHandler] [local: 10.1.79.146:5044, remote: 10.1.79.173:54172] Handling exception: error:1e00007d:Cipher functions:OPENSSL_internal:INVALID_NONCE

javax.net.ssl.SSLException: error:1e00007d:Cipher functions:OPENSSL_internal:INVALID_NONCE
	at io.netty.handler.ssl.ReferenceCountedOpenSslEngine.shutdownWithError(ReferenceCountedOpenSslEngine.java:895) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
	(similar call stack as above)
	...
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_212]

[2019-07-07T21:09:38,439][INFO ][org.logstash.beats.BeatsHandler] [local: 10.1.79.146:5044, remote: 10.1.79.173:54172] Handling exception: error:1e00007d:Cipher functions:OPENSSL_internal:INVALID_NONCE

[2019-07-07T21:09:38,440][DEBUG][org.logstash.beats.BeatsHandler] [local: 10.1.79.146:5044, remote: 10.1.10.165:34646] Received a new payload

[2019-07-07T21:09:38,440][DEBUG][org.logstash.beats.BeatsHandler] [local: 10.1.79.146:5044, remote: 10.1.10.165:34646] Sending a new message for the listener, sequence: 1

[2019-07-07T21:09:38,439][WARN ][io.netty.channel.DefaultChannelPipeline] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.

javax.net.ssl.SSLException: error:1e00007d:Cipher functions:OPENSSL_internal:INVALID_NONCE
	at io.netty.handler.ssl.ReferenceCountedOpenSslEngine.shutdownWithError(ReferenceCountedOpenSslEngine.java:895) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
	(similar call stack as above)
	...
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_212]

[2019-07-07T21:09:38,444][DEBUG][org.logstash.beats.BeatsHandler] [local: 10.1.79.146:5044, remote: 10.1.10.165:34646] Sending a new message for the listener, sequence: 2

[2019-07-07T21:09:38,448][DEBUG][org.logstash.beats.BeatsHandler] 5cc2452e: batches pending: false

[2019-07-07T21:09:38,727][DEBUG][org.logstash.beats.BeatsHandler] [local: 10.1.79.146:5044, remote: 10.1.79.173:53042] Received a new payload

(started new sequence...)

[2019-07-07T21:09:38,727][DEBUG][org.logstash.beats.BeatsHandler] [local: 10.1.79.146:5044, remote: 10.1.79.173:53042] Sending a new message for the listener, sequence: 1

[2019-07-07T21:09:38,731][DEBUG][org.logstash.beats.BeatsHandler] [local: 10.1.79.146:5044, remote: 10.1.79.173:53042] Sending a new message for the listener, sequence: 2

[2019-07-07T21:09:38,734][DEBUG][org.logstash.beats.BeatsHandler] [local: 10.1.79.146:5044, remote: 10.1.79.173:53042] Sending a new message for the listener, sequence: 3

...

It looks like Logstash recovers from the exception and resumes processing logs from Filebeat (version 6.6.1).

Any idea what's going on here? How can we find the root cause of this issue?
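
(For reference, this kind of per-logger debug tracing can be switched on at runtime through the Logstash node logging API, roughly like the sketch below, or persistently in config/log4j2.properties; host and port are the defaults and may differ in your deployment:)

# raise the log level of the beats handler without restarting Logstash
curl -XPUT 'http://localhost:9600/_node/logging?pretty' -H 'Content-Type: application/json' \
  -d '{ "logger.org.logstash.beats.BeatsHandler" : "DEBUG" }'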

Best regards.

Hi,

Could you please post the output of these two commands?

openssl s_client -status -connect 10.1.79.146:5044
openssl s_client -status -connect 10.1.79.146:5044 -CAfile "<path-to-ca-file-used-in-logstash>"

That will help to see what is going wrong. Possibly it is something like the CA certificates on the two sides not matching.
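
For example, something along these lines (the paths are placeholders for whatever CA and server certificate you actually use) would show whether the Logstash server certificate chains to the CA file that Filebeat trusts:

# verify the server certificate against the CA that Filebeat is configured with
openssl verify -CAfile <path-to-ca.crt> <path-to-logstash.crt>

# compare the issuer of the server certificate with the subject of the CA
openssl x509 -in <path-to-logstash.crt> -noout -subject -issuer
openssl x509 -in <path-to-ca.crt> -noout -subject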

Hi, Andre, thanks.

We use a self-signed certificate:

# openssl s_client -CAfile $CERT/ca.crt -cert $CERT/filebeat.crt -key $CERT/filebeat.key -connect "10.1.79.146:5044"
Enter pass phrase for /usr/share/elasticsearch/config/tls/filebeat.key:
CONNECTED(00000005)
...
verify return:1
---
Certificate chain
...
---
Server certificate
-----BEGIN CERTIFICATE-----
MIIEyTCCArGgAwIBAgIBAzANBgkqhkiG9w0BAQsFADBjMQswCQYDVQQGEwJVUzER
MA8GA1UECAwITmV3IFlvcmsxDzANBgNVBAcMBkFybW9uazEaMBgGA1UECgwRSUJN
IENsb3VkIFByaXZhdGUxFDASBgNVBAMMC3d3dy5pYm0uY29tMCAXDTE5MDcwNjE5
...
FzeJpp6jEaiDhRPpmP+ZTnpLzB1LcbuZ3GbF/Yk=
-----END CERTIFICATE-----
...

---
Acceptable client certificate CA names
...
Client Certificate Types: RSA sign, ECDSA sign
Requested Signature Algorithms: ECDSA+SHA256:RSA-PSS+SHA256:RSA+SHA256:ECDSA+SHA384:RSA-PSS+SHA384:RSA+SHA384:RSA-PSS+SHA512:RSA+SHA512:RSA+SHA1
Shared Requested Signature Algorithms: ECDSA+SHA256:RSA-PSS+SHA256:RSA+SHA256:ECDSA+SHA384:RSA-PSS+SHA384:RSA+SHA384:RSA-PSS+SHA512:RSA+SHA512:RSA+SHA1
Peer signing digest: SHA256
Peer signature type: RSA-PSS
Server Temp Key: X25519, 253 bits
---
SSL handshake has read 1846 bytes and written 3299 bytes
Verification: OK
---
New, TLSv1.2, Cipher is ECDHE-RSA-AES256-GCM-SHA384
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
    Protocol  : TLSv1.2
    Cipher    : ECDHE-RSA-AES256-GCM-SHA384
    Session-ID: 180F40D649A3E976FF5EDA0D351D4DD4FE34CC42BDF828D1C147A8E9E425208D
    Session-ID-ctx:
    Master-Key: 2E98E118B11B169894AEC018D2143F39F74F3CEA0E52B57BB13D92C2831D66A622113CD784DF41091BAAC72F1C0B3DEE
    PSK identity: None
    PSK identity hint: None
    SRP username: None
    Start Time: 1562684039
    Timeout   : 7200 (sec)
    Verify return code: 0 (ok)
    Extended master secret: yes
---

Best regards.

Sorry for the delay, I was out. You have cut out some of the important parts, but assuming there is no error anywhere in the full OpenSSL output, it looks like the certificate chain can be verified. So I rather think the issue lies in the Beats configuration. Could you please post your beats.yml file, including the Logstash output part? What host are you using for Logstash? Is it a host that is contained in the certificate as CN or SAN (Subject Alternative Name)? That should be the first thing to check.
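
For example (again, the path is a placeholder), the CN and SAN entries of the Logstash server certificate can be listed with:

openssl x509 -in <path-to-logstash.crt> -noout -subject
openssl x509 -in <path-to-logstash.crt> -noout -text | grep -A1 'Subject Alternative Name'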

Ideally, post the Logstash input part as well.
By the way, I saw that you had to type in a password.
https://www.elastic.co/guide/en/beats/filebeat/current/configuration-ssl.html#_literal_key_passphrase_literal
Did you perhaps set up a key_passphrase setting?
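
A quick way to check whether a PEM key is encrypted (and therefore needs a key_passphrase) is to look at its header, e.g.:

# an encrypted PKCS#8 key starts with "BEGIN ENCRYPTED PRIVATE KEY",
# a traditional encrypted PEM key contains "Proc-Type: 4,ENCRYPTED"
head -n 2 <path-to-filebeat.key>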

Hi, Andre, thanks.

[1] Just want to mention that our ELK stack runs on Kubernetes platform.

[2] beat's filebeat.yml:

# cat filebeat.yml
filebeat.config:
  inputs:
  ...
output.logstash:
  hosts: logstash:5044
  timeout: 15
  ssl.certificate_authorities: ["/usr/share/elasticsearch/config/tls/ca.crt"]
  ssl.certificate: "/usr/share/elasticsearch/config/tls/filebeat.crt"
  ssl.key: "/usr/share/elasticsearch/config/tls/filebeat.key"
  ssl.key_passphrase: ${APP_KEYSTORE_PASSWORD}
logging.level: info

Note that a) APP_KEYSTORE_PASSWORD is set as an environment variable during pod startup, and b) logstash is a Kubernetes service name, backed by real Kubernetes pod(s); in our test case there is only one Logstash pod.
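
(For completeness, whether the variable is really present inside the running Filebeat pod can be checked with something like the following; the pod name and namespace are just examples from our deployment:)

kubectl exec -n <namespace> <filebeat-pod-name> -- printenv APP_KEYSTORE_PASSWORD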

[3] logstash input:

input {
  beats {
    port => 5044
    tls_min_version => 1.2
    ssl => true
    ssl_certificate_authorities => ["/usr/share/elasticsearch/config/tls/ca.crt"]
    ssl_certificate => "/usr/share/elasticsearch/config/tls/logstash.crt"
    ssl_key => "/usr/share/elasticsearch/config/tls/logstash.key"
    ssl_key_passphrase => "${APP_KEYSTORE_PASSWORD}"
    ssl_verify_mode => "force_peer"
    cipher_suites => ["TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"]
  }
}

Thanks and best regards.

Now I assume that your Logstash key /usr/share/elasticsearch/config/tls/logstash.key is not an unencrypted PKCS#8 private key.
As a test, can you please convert it:
openssl pkcs8 -topk8 -inform PEM -outform PEM -nocrypt -in /usr/share/elasticsearch/config/tls/logstash.key -out /usr/share/elasticsearch/config/tls/logstash.pkcs8.key
and then change the key setting in the Logstash input:

    ssl_key => "/usr/share/elasticsearch/config/tls/logstash.pkcs8.key"

and comment out ssl_key_passphrase.

Then check whether the Beat works. Potentially you can do the same on the Beats side:
convert the key to a new PKCS#8 file, disable ssl.key_passphrase, and point ssl.key to the new key file (a sketch follows below).
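
For the Filebeat side that would be roughly the following (paths taken from your filebeat.yml; the output filename is just a suggestion, and the command will prompt for the current pass phrase of the key):

openssl pkcs8 -topk8 -inform PEM -outform PEM -nocrypt -in /usr/share/elasticsearch/config/tls/filebeat.key -out /usr/share/elasticsearch/config/tls/filebeat.pkcs8.key

Then point ssl.key to the new file and remove ssl.key_passphrase.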

Let me know how it goes.

Hi, Andre, thanks.

Our z Kubernetes cluster has not been stable recently... We have a flag that controls whether the key should be encrypted during the key generation process, and I did get a chance to test it once:

if [ $key_encrypt == "false" ] <-- I set the flag to false so the key is unencrypted
then
    echo "exporting unencrypted keys"
    openssl pkcs8 -in $cert_name.key.p12 -topk8 -passin pass:$app_keystore_password -out $cert_name.key.p8 -nocrypt
else
    echo "exporting encrypted keys"
    openssl pkcs8 -in $cert_name.key.p12 -topk8 -passin pass:$app_keystore_password -out $cert_name.key.p8 -passout pass:$app_keystore_password
fi

But when starting Logstash with the unencrypted key, we got a different exception:

[2019-07-22T22:44:37,246][ERROR][logstash.inputs.beats ] Looks like you either have a bad certificate, an invalid key or your private key was not in PKCS8 format.

We'll test the suggestion from your previous reply after our cluster is stable again.
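
In the meantime, one more check we plan to run is whether the Logstash certificate and key actually belong together, e.g. via a standard modulus comparison (paths as in our Logstash input config):

# both commands must print the same digest if the certificate and key match
openssl x509 -noout -modulus -in /usr/share/elasticsearch/config/tls/logstash.crt | openssl md5
openssl rsa -noout -modulus -in /usr/share/elasticsearch/config/tls/logstash.key | openssl md5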

Best regards.

Hi, Andre,

I tested with an unencrypted key but still got the same exceptions.

sh-4.2# openssl pkcs8 -topk8 -inform PEM -outform PEM -nocrypt -in /tmp/o.key -out /tmp/logstash.key
Enter pass phrase for /tmp/o.key: <-- key in the pass phrase for the original encrypted key
sh-4.2# ls -ltr
total 8
-rw-r--r-- 1 root root 1785 Jul 24 00:27 o.key
-rw-r--r-- 1 root root 1704 Jul 24 00:28 logstash.key

sh-4.2# head -n 2 o.key
-----BEGIN ENCRYPTED PRIVATE KEY-----
MIIE6TAbBgkqhkiG9w0BBQMwDgQIYWLfccCjtFYCAggABIIEyNP7aDrRFaULtAZk

sh-4.2# head -n 2 logstash.key
-----BEGIN PRIVATE KEY-----
MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQCKEOm3js5PCwdi

Grepping for the exception in the Logstash container log:

cat /tmp/ls.log | grep "error:1e00007d:Cipher functions:OPENSSL_internal:INVALID_NONCE" | wc -l
116
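
To narrow down which client the failures come from, the same log can also be broken down by remote address, e.g.:

# count INVALID_NONCE exceptions per remote peer
grep "INVALID_NONCE" /tmp/ls.log | grep -o 'remote: [0-9.]*' | sort | uniq -c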

Any more ideas that we can try?

Thanks and best regards.

Hi Andre,
Do you have any other suggestions for us to try?

Hi Anita,

Not at the moment. Without the full configuration files it will be difficult to diagnose. Ideally the configuration would be uploaded somewhere and shared via private message, so it can be analyzed in a bit more detail.
