Logstash can't establish pipeline with SSL

So I have been trying to implement SSL security across my single-node stack. I got Kibana-Elasticsearch SSL working (didn't enable HTTPS), but I'm having a hard time getting Logstash to communicate with Elasticsearch.

The Elasticsearch config is as follows (truncated):

xpack:
  security:
    enabled: true
    authc:
      realms:
        file1:
          type: file
          order: 0
    transport:
      ssl:
        enabled: true
        verification_mode: certificate
        keystore:
          path: elastic-certificates.p12
        truststore:
          path: elastic-certificates.p12

discovery.type: single-node

And the Logstash config:

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"

    ssl => true
    cacert => '/usr/share/elasticsearch/elastic-cert.pem'
  }
}

It's worth noting that everything was working before I tried to implement SSL. This is the error Logstash returned (at debug-level logging):

[2019-07-19T12:45:35,851][ERROR][logstash.pipeline        ] Error registering plugin {:pipeline_id=>"main", :plugin=>"#<LogStash::OutputDelegator:0x64c13046>", :error=>"Unrecognized SSL message, plaintext connection?", :thread=>"#<Thread:0x6def8e67 run>"}
[2019-07-19T12:45:35,855][ERROR][logstash.pipeline        ] Pipeline aborted due to error {:pipeline_id=>"main", :exception=>#<Manticore::UnknownException: Unrecognized SSL message, plaintext connection?>, :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/manticore-0.6.4-java/lib/manticore/response.rb:37:in `block in initialize'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/manticore-0.6.4-java/lib/manticore/response.rb:79:in `call'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-9.4.0-java/lib/logstash/outputs/elasticsearch/http_client/manticore_adapter.rb:74:in `perform_request'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-9.4.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:291:in `perform_request_to_url'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-9.4.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:245:in `block in healthcheck!'", "org/jruby/RubyHash.java:1419:in `each'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-9.4.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:241:in `healthcheck!'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-9.4.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:341:in `update_urls'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-9.4.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:71:in `start'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-9.4.0-java/lib/logstash/outputs/elasticsearch/http_client.rb:302:in `build_pool'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-9.4.0-java/lib/logstash/outputs/elasticsearch/http_client.rb:64:in `initialize'", 
"/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-9.4.0-java/lib/logstash/outputs/elasticsearch/http_client_builder.rb:103:in `create_http_client'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-9.4.0-java/lib/logstash/outputs/elasticsearch/http_client_builder.rb:99:in `build'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-9.4.0-java/lib/logstash/outputs/elasticsearch.rb:238:in `build_client'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-9.4.0-java/lib/logstash/outputs/elasticsearch/common.rb:25:in `register'", "org/logstash/config/ir/compiler/OutputStrategyExt.java:106:in `register'", "org/logstash/config/ir/compiler/AbstractOutputDelegatorExt.java:48:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:259:in `register_plugin'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:270:in `block in register_plugins'", "org/jruby/RubyArray.java:1792:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:270:in `register_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:611:in `maybe_setup_out_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:280:in `start_workers'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:217:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:176:in `block in start'"], :thread=>"#<Thread:0x6def8e67 run>"}
[2019-07-19T12:45:35,869][ERROR][logstash.agent           ] Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create<main>, action_result: false", :backtrace=>nil}

I followed this tutorial to create the certificates, then used openssl to convert from PKCS#12 to PEM for the pieces that needed it (all of them except Elasticsearch). For this whole process I have been following this tutorial to the best of my ability. Thanks in advance for any help you can offer, and if you need any more logs/info just let me know.

EDIT
After some more work I got that error resolved (don't ask me how), but now a new one has cropped up:

[2019-07-19T16:12:16,147][ERROR][logstash.pipeline        ] Error registering plugin {:pipeline_id=>"main", :plugin=>"#<LogStash::OutputDelegator:0x31751e5e>", :error=>"Unable to initialize, java.io.IOException: Short read of DER length", :thread=>"#<Thread:0x154eaa05 run>"}
[2019-07-19T16:12:16,151][ERROR][logstash.pipeline        ] ... (mangled JRuby backtrace through manticore-0.6.4-java/lib/manticore/client.rb ... TRUNCATED)
[2019-07-19T16:12:16,166][ERROR][logstash.agent           ] Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create<main>, action_result: false", :backtrace=>nil}

The Elasticsearch configuration you have shared so far has TLS enabled only for the transport layer. However, Logstash's Elasticsearch output plugin communicates with Elasticsearch over the HTTP layer. This is what was causing the

[2019-07-19T12:45:35,851][ERROR][logstash.pipeline        ] Error registering plugin {:pipeline_id=>"main", :plugin=>"#<LogStash::OutputDelegator:0x64c13046>", :error=>"Unrecognized SSL message, plaintext connection?", :thread=>"#<Thread:0x6def8e67 run>"}

previously. If you have since fixed/changed the Elasticsearch configuration, please update your post so that we can give you relevant suggestions.

Your latest error

[2019-07-19T16:12:16,147][ERROR][logstash.pipeline        ] Error registering plugin {:pipeline_id=>"main", :plugin=>"#<LogStash::OutputDelegator:0x31751e5e>", :error=>"Unable to initialize, java.io.IOException: Short read of DER length", :thread=>"#<Thread:0x154eaa05 run>"}

seems to indicate that Logstash can't parse the file you have pointed cacert to:

cacert => '/usr/share/elasticsearch/elastic-cert.pem'

Please note that you can use the PKCS#12 store you already have, without exporting the CA certificate from it to a PEM file, via the truststore setting of the Elasticsearch output plugin in Logstash.
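A minimal sketch of what that could look like; the host, store path, and password below are placeholders for your own values:

output {
  elasticsearch {
    hosts => ["https://localhost:9200"]
    ssl => true
    # Point directly at the existing PKCS#12 store instead of a PEM CA file
    truststore => "/path/to/elastic-certificates.p12"
    truststore_password => "changeme"
  }
}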

Okay, so I changed my Elasticsearch config to also enable SSL on the HTTP layer. My new Elasticsearch config (shortened):

xpack:
  security:
    enabled: true
    authc:
      realms:
        file1:
          type: file
          order: 0
    transport:
      ssl:
        enabled: true
        verification_mode: certificate
        keystore:
          path: certs/node-2.p12
          password: "********"
        truststore:
          path: certs/node-2.p12
          password: "********"
    http:
      ssl:
        enabled: true
        verification_mode: certificate
        keystore:
          path: certs/node-2.p12
          password: "********"
        truststore:
          path: certs/node-2.p12
          password: "********"

My new Logstash config (also shortened):

output {
  elasticsearch {
    hosts => ["https://127.0.0.1:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    ssl => true
    ssl_certificate_verification => true
    cacert => "/etc/elasticsearch/certs/ca/ca.p12"
  }
}

The file cacert now points to is the CA certificate created by certutil when I generated the certificates currently in use. However, now I get this error; apart from the top line it is the same as the previous one, so I'll only include that:

[2019-07-19T16:12:16,147][ERROR][logstash.pipeline        ] Error registering plugin {:pipeline_id=>"main", :plugin=>"#<LogStash::OutputDelegator:0x5633baa2>", :error=>"signed fields invalid", :thread=>"#<Thread:0x60033b6a run>"}

I've never seen signed fields invalid before, so I'm stumped. Any ideas?

You have pointed cacert to a PKCS#12 truststore, which is not going to work. As I mentioned above, you need either

  • cacert pointing to a PEM encoded CA certificate or
  • truststore pointing to the PKCS#12 truststore

Ah yes, I was discovering/writing this as you wrote that reply. I found this topic (which I just realized you authored) which stated that this error comes from using a .p12 CA cert instead of a .pem version. As per your instructions, I used openssl pkcs12 -in ca.p12 -clcerts -nokeys -chain -out ca.pem to convert it, but when I started everything back up the Logstash logs said

[2019-07-22T13:40:40,822][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"https://127.0.0.1:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '401' contacting Elasticsearch at URL 'https://127.0.0.1:9200/'"}

I added the basic authentication credentials I had previously configured on Elasticsearch, and then Elasticsearch started giving me this:

[2019-07-22T13:43:15,219][WARN ][o.e.h.n.Netty4HttpServerTransport] [AJMHkPE] caught exception while handling client http traffic, closing connection [id: 0x752c2c77, L:0.0.0.0/0.0.0.0:9200 ! R:/127.0.0.1:39798]
io.netty.handler.codec.DecoderException: io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record: 48454144202f20485454502f312e310d0a417574686f72697a6174696f6e3a2042617369632061326c69595735684f6c4268637a56334d484a6b49513d3d0d0a486f73743a206c6f63616c686f73743a393230300d0a436f6e74656e742d4c656e6774683a20300d0a436f6e6e656374696f6e3a206b6565702d616c6976650d0a0d0a
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:472) ~[netty-codec-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:278) ~[netty-codec-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:656) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:556) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:510) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:470) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:909) [netty-common-4.1.32.Final.jar:4.1.32.Final]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_212]

(Logstash, of course, was complaining about Elasticsearch failing to respond.)

The 401s you get from Elasticsearch are because Logstash's Elasticsearch output plugin isn't sending a username and password.

That should have solved the problem you had above.
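For reference, the output plugin accepts user and password settings alongside the SSL options; a minimal sketch, with placeholder credentials and paths:

output {
  elasticsearch {
    hosts => ["https://127.0.0.1:9200"]
    ssl => true
    cacert => "/etc/elasticsearch/certs/ca/ca.pem"
    user => "logstash_writer"
    password => "changeme"
  }
}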

Please share the exact error message, as it is hard to offer suggestions when we don't know what exactly failed and how!

This is Kibana attempting to connect to Elasticsearch over HTTP (not HTTPS), and Elasticsearch is complaining that it got a plaintext connection instead of one over TLS:

io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record:

The hex string you see after that message decodes to an HTTP HEAD request:

HEAD / HTTP/1.1
Authorization: Basic <Removed the base64 string but it's your kibana user credentials here>
Host: localhost:9200
Content-Length: 0
Connection: keep-alive
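The decoding can be reproduced with a couple of lines of Python; the hex below is just the first bytes of the string in the exception (the rest carries the credentials, so it is omitted here):

```python
# Decode the leading bytes of the hex payload Netty logged.
payload = bytes.fromhex("48454144202f20485454502f312e310d0a")
print(payload.decode("ascii"))  # -> "HEAD / HTTP/1.1" plus CRLF
```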

I now also noticed:

"I got kibana-elasticsearch ssl working (didn't enable https)"

This sentence from your first post is self-contradictory (one can't enable TLS between Kibana and Elasticsearch without enabling TLS for the HTTP layer of Elasticsearch), and it is the root cause of the last exception you shared. You need to configure Kibana to connect to Elasticsearch over HTTPS (see point 2 in Encrypting communications in Kibana | Kibana Guide [7.2] | Elastic).
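For reference, the relevant kibana.yml settings would look something like this (the CA path is a placeholder for wherever your PEM CA lives):

elasticsearch.hosts: ["https://localhost:9200"]
elasticsearch.ssl.certificateAuthorities: ["/etc/kibana/certs/ca.pem"]
elasticsearch.ssl.verificationMode: certificate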

I would like to focus on the Logstash-Elasticsearch connection right now, development-wise. I had restarted Kibana (I can't remember why), but I guess I had forgotten to stop it again while working on this. Thank you for clearing things up. I think what I was trying to say earlier is that I had enabled SSL in Kibana but set elasticsearch.ssl.verificationMode to none. Hopefully Elasticsearch will be able to give meaningful information about its connection with Logstash, and hopefully they were able to connect and I just couldn't see it under the flood caused by Kibana. I'll get back in a minute.

So I stopped Kibana, but the same Java exceptions are still coming through on Elasticsearch. The error on Logstash is as follows:

[2019-07-22T14:34:37,287][ERROR][logstash.agent           ] Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create<main>, action_result: false", :backtrace=>nil}

I guess when Elasticsearch threw its error it shut down the connection, so Logstash couldn't get a response. I don't know why Elasticsearch is still throwing that exception now that I've shut down Kibana, though.

EDIT:

I ran some tests, and the Elasticsearch errors don't appear when Logstash isn't running. It must be Logstash that is causing them.

I'm sorry, but given that these exceptions came from Kibana attempting to communicate with Elasticsearch, I can't see how you'd still get them when Kibana is not running. Maybe some similar exception? It's always preferable to share the actual logs rather than a description of what is in them.

Is

[2019-07-22T14:34:37,287][ERROR][logstash.agent           ] Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create<main>, action_result: false", :backtrace=>nil}

the only thing you see in the Logstash logs? Either way, the Elasticsearch logs will be much more helpful.
