Logstash Elasticsearch Fails to Respond

Hello Everyone,

I'm trying to connect Logstash to Elasticsearch. However, it fails with the message:

Failed to perform request {:message=>"10.10.145.124:9200 failed to respond", :exception=>Manticore::ClientProtocolException, :cause=>org.apache.http.NoHttpResponseException: 10.10.145.124:9200 failed to respond}

Here's the configuration of Logstash in pipeline.conf:

input {
    kafka {
            bootstrap_servers => "10.10.145.88:9092"
            topics => ["apache_logs"]
    }
}
 
output {
    elasticsearch {
        hosts => ["10.10.145.124:9200"]
        index => "apache_logs"
        workers => 1
        keystore => "/etc/logstash/certs/http.p12"
        cacert => "/etc/logstash/certs/http_ca.crt"
        ssl => true
        ssl_certificate_verification => false
        user => "logstash_system"
        password => "logstash"
    }
}
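
For reference, the request the output plugin makes can be approximated with curl from the Logstash host (a sketch using the credentials and CA path above; -k mirrors ssl_certificate_verification => false):

curl -k --cacert /etc/logstash/certs/http_ca.crt -u logstash_system:logstash https://10.10.145.124:9200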

Here's the configuration of Elasticsearch in elasticsearch.yml:

cluster.name: kafka-tutorial
node.name: node-1

path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: "10.10.145.124"
cluster.initial_master_nodes: "node-1"
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12
  verification_mode: none
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
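
A quick way to confirm this HTTPS layer from the Elasticsearch host itself would be a sketch like the following (the cert path and the built-in elastic user are the ES 8 auto-configuration defaults, so they are assumptions here):

curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic https://localhost:9200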

On the other hand, Kibana manages to connect with Elasticsearch. Here are the settings of Kibana:

# This section was automatically generated during setup.
server.port: 5601
server.host: 10.10.145.124
elasticsearch.hosts: ['https://10.10.145.124:9200']
logging.appenders.file.type: file
logging.appenders.file.fileName: /var/log/kibana/kibana.log
logging.appenders.file.layout.type: json
logging.root.appenders: [default, file]
pid.file: /run/kibana/kibana.pid
elasticsearch.serviceAccountToken: AAEAAWVsYXN0aWMva2liYW5hL2Vucm9sbC1wcm9jZXNzLXRva2VuLTE2NTUwODEzMTkyNzc6NnViY3p2enlSVnlZMUc3SVRyV3hlQQ
elasticsearch.ssl.certificateAuthorities: [/var/lib/kibana/ca_1655081320166.crt]
xpack.fleet.outputs: [{id: fleet-default-output, name: default, is_default: true, is_default_monitoring: true, type: elasticsearch, hosts: ['https://10.10.145.124:9200'], ca_trusted_fingerprint: 31e853f7f32382a58205fc0d4fb8e5d3bedf642f89a9db331838371406d0f8e5}]

I have searched this forum desperately. Although there were some similar issues, none of them helped. Do you have any suggestions? Could somebody please help me? I would be thankful for any support.

Greetings,
Milos Tepavcevic

Hello,

Let's try to confirm that Logstash can talk to Elasticsearch on that specific port.

If you're on a UNIX-based system with Bash, can you run the following test:

echo > /dev/tcp/10.10.145.124/9200 && echo "ok"
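
If Bash's built-in /dev/tcp redirection isn't available, roughly the same check can be done with netcat (assuming nc is installed; -z just probes the port, -w 3 sets a 3-second timeout):

nc -z -w 3 10.10.145.124 9200 && echo "ok"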

Then we can move on to the next layer.

Hi grumo35,

Thank you for helping me.
Logstash is running on RHEL 8.
The command you gave me returned ok.
What shall we try next?

Milos

Hey,

So the Logstash server can actually reach the host; the problem should be related to your SSL config. A NoHttpResponseException like yours often shows up when the client and server disagree on the protocol, e.g. a client speaking plain HTTP to a TLS-enabled port.

Is everything in your SSL config self-signed?

You might want to try adding "https":

hosts => ["https://10.10.145.124:9200"]

These certificates are automatically generated by Elasticsearch.

Thanks for pointing out that I should add https://. Unfortunately, the result remains the same: failed to respond.

Hey, what's the output of:

openssl s_client -CAfile /etc/logstash/certs/http_ca.crt -connect 10.10.145.124:9200

It would be better if you could manage to use the hostname; I know it doesn't matter here since you turned off verification.

Also, turning on Logstash's DEBUG log level would help a little here :sweat_smile:
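
One way to do that (a sketch, assuming a package install that reads /etc/logstash/logstash.yml; substitute your own pipeline path):

# in /etc/logstash/logstash.yml
log.level: debug

# or as a one-off on the command line
bin/logstash --log.level=debug -f /path/to/pipeline.conf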

Thanks for the command. Here's the output:

[root@sl-001257 logstash]# openssl s_client -CAfile /etc/logstash/certs/http_ca.crt -connect 10.10.145.124:9200
CONNECTED(00000003)
Can't use SSL_get_servername
depth=1 CN = Elasticsearch security auto-configuration HTTP CA
verify return:1
depth=0 CN = sl-001258
verify return:1
---
Certificate chain
 0 s:CN = sl-001258
   i:CN = Elasticsearch security auto-configuration HTTP CA
 1 s:CN = Elasticsearch security auto-configuration HTTP CA
   i:CN = Elasticsearch security auto-configuration HTTP CA
---
Server certificate
-----BEGIN CERTIFICATE-----
MIIFfjCCA2agAwIBAgIUcHgI2cgFpWhs/ySkNVdE3qgilsMwDQYJKoZIhvcNAQEL
BQAwPDE6MDgGA1UEAxMxRWxhc3RpY3NlYXJjaCBzZWN1cml0eSBhdXRvLWNvbmZp

------------------------ [Certificate...] -------------------------

xkJsEC8VUyaO02hLXiCKfKr8E+WVGBluAYV5spQyv5DCHl++FJ5sKrEAXChCfSQZ
6XfJsA+tpEXmdW5m5yMS2bzX5sL8xALYoefQ7+rjRT53+lwjOVi5W5VY1KZnTO7x
JhJvETvmwvIoxmC1wLv2hU/6
-----END CERTIFICATE-----
subject=CN = sl-001258

issuer=CN = Elasticsearch security auto-configuration HTTP CA

---
No client certificate CA names sent
Peer signing digest: SHA256
Peer signature type: RSA-PSS
Server Temp Key: X25519, 253 bits
---
SSL handshake has read 3577 bytes and written 369 bytes
Verification: OK
---
New, TLSv1.3, Cipher is TLS_AES_256_GCM_SHA384
Server public key is 4096 bit
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 0 (ok)
---
---
Post-Handshake New Session Ticket arrived:
SSL-Session:
    Protocol  : TLSv1.3
    Cipher    : TLS_AES_256_GCM_SHA384
    Session-ID: 09AF53BCABA4A24599B27FC3C5B2017F6DB2FB6E62F74EC1D0CB2A101AEC18E7
    Session-ID-ctx:
    Resumption PSK: 4C758E20BBCF31851F4E4E4E323F39EEA6B0873BCE01DF5DB4C9E98D3B8AC0A2EE2A3C4D7E850ED701FF8F722AA5421A
    PSK identity: None
    PSK identity hint: None
    SRP username: None
    TLS session ticket lifetime hint: 86400 (seconds)
    TLS session ticket:
    0000 - d8 66 08 29 71 70 9a eb-8a 62 03 7b 73 ba 18 90   .f.)qp...b.{s...
    0010 - 9c 25 81 02 a6 eb 33 1c-cd 73 de ec ee 3c 8c d4   .%....3..s...<..

----------------------------[a lot of numbers and letters...]-----------------------------

    0bc0 - 3a c5 d9 5a aa 1f 55 68-12 81 b5 4a af b0 72 a7   :..Z..Uh...J..r.
    0bd0 - 82 99 5f 73 43 18 0d c7-8d ae 77 1e 9b a1         .._sC.....w...

    Start Time: 1655139281
    Timeout   : 7200 (sec)
    Verify return code: 0 (ok)
    Extended master secret: no
    Max Early Data: 0
---
read R BLOCK

The return code is 0 (ok), isn't it? I guess the certificate should be fine, shouldn't it?

Yes, it looks like the certificate is good. Did you configure Elasticsearch with a hostname?

It might be a good idea to try running the output with hostname:9200 (in case you don't have DNS servers set up, you can use the /etc/hosts file).

Tell me if you can try using hostnames; I believe the CN (hostname) of your Elasticsearch node is sl-001258?
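
For example, a one-line /etc/hosts entry on the Logstash host would look like this (assuming that CN):

10.10.145.124   sl-001258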

You have to use the same name as the one you generated the certificate with.
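
One way to double-check that the name matches is with openssl again (a sketch; -servername and -verify_hostname assume OpenSSL 1.1.0 or newer):

openssl s_client -connect 10.10.145.124:9200 -servername sl-001258 -verify_hostname sl-001258 -CAfile /etc/logstash/certs/http_ca.crt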

What about the Logstash DEBUG output?

I tried with the setting hosts => ["https://sl-001258:9200"]. However, the message in the Logstash logs remains:

[2022-06-13T19:36:04,160][INFO ][logstash.outputs.elasticsearch][main] Failed to perform request {:message=>"10.10.145.124:9200 failed to respond", :exception=>Manticore::ClientProtocolException, :cause=>org.apache.http.NoHttpResponseException: 10.10.145.124:9200 failed to respond}

and the data is still absent from Elasticsearch.

I activated debug mode. Here's an excerpt from the output on the terminal, which is continuously being printed:

[DEBUG] 2022-06-13 19:32:52.513 [kafka-input-worker-logstash-0] Fetcher - [Consumer clientId=logstash-0, groupId=logstash] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(apache_logs-0)) to broker 10.10.145.88:9092 (id: 0 rack: null)
[DEBUG] 2022-06-13 19:32:53.014 [kafka-input-worker-logstash-0] FetchSessionHandler - [Consumer clientId=logstash-0, groupId=logstash] Node 0 sent an incremental fetch response with throttleTimeMs = 0 for session 2111035640 with 0 response partition(s), 1 implied partition(s)
[DEBUG] 2022-06-13 19:32:53.014 [kafka-input-worker-logstash-0] Fetcher - [Consumer clientId=logstash-0, groupId=logstash] Added READ_UNCOMMITTED fetch request for partition apache_logs-0 at position FetchPosition{offset=10000, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[10.10.145.88:9092 (id: 0 rack: null)], epoch=0}} to node 10.10.145.88:9092 (id: 0 rack: null)
[DEBUG] 2022-06-13 19:32:53.014 [kafka-input-worker-logstash-0] FetchSessionHandler - [Consumer clientId=logstash-0, groupId=logstash] Built incremental fetch (sessionId=2111035640, epoch=155) for node 0. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
[DEBUG] 2022-06-13 19:32:53.014 [kafka-input-worker-logstash-0] Fetcher - [Consumer clientId=logstash-0, groupId=logstash] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(apache_logs-0)) to broker 10.10.145.88:9092 (id: 0 rack: null)
[DEBUG] 2022-06-13 19:32:53.218 [kafka-coordinator-heartbeat-thread | logstash] AbstractCoordinator - [Consumer clientId=logstash-0, groupId=logstash] Sending Heartbeat request with generation 11 and member id logstash-0-243acc40-9c2f-4d0d-a2dc-5e2358091ea5 to coordinator 10.10.145.88:9092 (id: 2147483647 rack: null)
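
To cut through the Kafka consumer noise, the log can also be filtered for just the output plugin's lines (a sketch, assuming the default package log location):

grep -i 'outputs.elasticsearch' /var/log/logstash/logstash-plain.log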

Do you have any more ideas, please?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.