Logstash connection to Elasticsearch issue

Hi all,
I have a Logstash instance on an external site and an Elasticsearch node inside the local network. Elasticsearch can be reached from the site where Logstash runs via an HAProxy (https://myapp.mycompany.hu --> http://myapp.mycompany.local:9200).

When I run a Logstash pipeline like this:
input {
  jdbc {
    jdbc_validate_connection => true
    jdbc_driver_library => "/usr/share/logstash/logstash-core/lib/jars/mariadb-java-client-2.5.4.jar"
    jdbc_driver_class => "Java::org.mariadb.jdbc.Driver"
    jdbc_connection_string => "jdbc:mariadb://127.0.0.1:3306/my_dbl"
    jdbc_user => "my_db_user"
    jdbc_password => "password"
    statement => "SELECT * FROM table"
  }
}
filter {
}
output {
  elasticsearch {
    hosts => ["https://myapp.mycompany.hu:443"]
    index => "test_index"
    user => "elastic"
    password => "password"
  }
}

I receive the following in the log:
[WARN ] 2020-02-12 08:05:54.977 [[main]-pipeline-manager] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"https://elastic:xxxxxx@myapp.mycompany.hu:443/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '503' contacting Elasticsearch at URL 'https://myapp.mycompany.hu:443/'"}
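For context (my assumption, not confirmed by the log itself): a bare 503 like this is also what HAProxy answers on its own when it considers every server in the backend down, so the response may come from the proxy rather than from Elasticsearch, and since Logstash retries continuously it can catch windows where a flapping health check has marked the backend down. A hypothetical backend section for the setup above might look like:

```
backend es_backend
    # If this health check fails (wrong path, auth required, method
    # rejected), HAProxy marks the server down and answers 503 itself.
    option httpchk GET /
    server es1 myapp.mycompany.local:9200 check
```

With Elasticsearch security enabled, an unauthenticated check like this gets a 401 and the server is marked down; adding `http-check expect status 401` (or probing a path that allows anonymous access) is one common workaround.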

However, if I query the same URL with curl:

curl -XGET "https://elastic:password@myapp.mycompany.hu:443"

I get the following response:
{
  "name" : "node-1",
  "cluster_name" : "my_cluster",
  "cluster_uuid" : "ehdSQUgcTWam-m2cx_KzUg",
  "version" : {
    "number" : "7.5.2",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "8bec50e1e0ad29dad5653712cf3bb580cd1afcdf",
    "build_date" : "2020-01-15T12:11:52.313576Z",
    "build_snapshot" : false,
    "lucene_version" : "8.3.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
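One difference worth checking (an assumption on my part, not something from the post): the Logstash elasticsearch output health-checks the root URL with a HEAD request rather than a GET, so a proxy can pass the curl -XGET above while still failing the plugin's probe. The two can be compared like this, using the same URL as above:

```shell
# Compare how the proxy answers GET vs. HEAD on the root URL.
# URL is the proxy endpoint from the post; override it to test elsewhere.
URL="${URL:-https://elastic:password@myapp.mycompany.hu:443/}"

# The request that already works (equivalent to the curl -XGET above):
curl -s -o /dev/null -w "GET  -> %{http_code}\n" "$URL"

# Roughly what the elasticsearch output's health check does (HEAD):
curl -s -o /dev/null -I -w "HEAD -> %{http_code}\n" "$URL"
```

If the HEAD probe returns 503 while the GET returns 200, the HAProxy frontend (or its backend health check) is the place to look.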

Can anyone tell me what I am doing wrong? Thanks!
