Critical http://10.54.120.149:9200 seems to be unreachable

Hi Stephen,
I have completed the MS SQL self-managed connector setup using the Kibana UI.

The code blocks below executed successfully:
-------------------------------
mkdir -p "$HOME/elastic-connectors-sql-self" && echo "connectors:
-
  connector_id: \"cSzSsJYByc505-I2SFR3\"
  service_type: \"mssql\"
  api_key: \"XXUwNS1JMlNGVGw6dFhvOTFTWFV3ZUVZZnRrTUNRdlc4UQ==\"
elasticsearch:
  host: \"https://host.docker.internal:9200\"
  ssl: true
  verify_certs: false
  ca_certs: \"/usr/share/elasticsearch/config/certs/http_ca.crt\"
  api_key: \"XXzUwNS1JMlNGVGw6dFhvOTFTWFV3ZUVZZnRrTUNRdlc4UQ==\"" > "$HOME/elastic-connectors-sql-self/config.yml"
  
  
docker run `
  -v "$($env:HOME)/elastic-connectors-sql-self:/config" `
  --tty `
  --rm `
  docker.elastic.co/integrations/elastic-connectors:9.0.0 `
  /app/bin/elastic-ingest `
  -c /config/config.yml
---------------------------
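The nested `\"` escaping in the `echo` one-liner is fragile; a quoted heredoc (in a POSIX shell such as Git Bash or WSL) avoids the escaping entirely. A sketch with the same IDs and paths as above:

```shell
# Write the connectors config without any quote escaping: the quoted
# 'EOF' delimiter makes the heredoc body fully literal.
mkdir -p "$HOME/elastic-connectors-sql-self"
cat > "$HOME/elastic-connectors-sql-self/config.yml" <<'EOF'
connectors:
-
  connector_id: "cSzSsJYByc505-I2SFR3"
  service_type: "mssql"
  api_key: "XXUwNS1JMlNGVGw6dFhvOTFTWFV3ZUVZZnRrTUNRdlc4UQ=="
elasticsearch:
  host: "https://host.docker.internal:9200"
  ssl: true
  verify_certs: false
  ca_certs: "/usr/share/elasticsearch/config/certs/http_ca.crt"
  api_key: "XXzUwNS1JMlNGVGw6dFhvOTFTWFV3ZUVZZnRrTUNRdlc4UQ=="
EOF
```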
But after configuring the connector as below and starting a sync, it fails with the error shown.
(SQL Server is hosted on my local machine.)
Host
USHYDMCHALLA353
Port
1433
Username
test_madhu
Password
********
Database
elasticsearch1
Comma-separated list of tables
table1
Schema
dbo
Enable SSL verification
false
Validate host
false
---error---------
File "/app/.venv/lib/python3.11/site-packages/sqlalchemy/pool/base.py", line 898, in __connect
    self.dbapi_connection = connection = pool._invoke_creator(self)
                                         ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.11/site-packages/sqlalchemy/engine/create.py", line 640, in connect       
    return dialect.connect(*cargs, **cparams)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.11/site-packages/sqlalchemy/engine/default.py", line 580, in connect      
    return self.loaded_dbapi.connect(*cargs, **cparams)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.11/site-packages/pytds/__init__.py", line 1352, in connect
    conn._open(sock=sock)
  File "/app/.venv/lib/python3.11/site-packages/pytds/__init__.py", line 405, in _open
    raise last_error
  File "/app/.venv/lib/python3.11/site-packages/pytds/__init__.py", line 379, in _open
    self._try_open(timeout=retry_time, sock=sock)
  File "/app/.venv/lib/python3.11/site-packages/pytds/__init__.py", line 361, in _try_open
    self._connect(host=host, port=port, instance=instance, timeout=timeout, sock=sock)
  File "/app/.venv/lib/python3.11/site-packages/pytds/__init__.py", line 292, in _connect
    raise LoginError("Cannot connect to server '{0}': {1}".format(host, e), e)
sqlalchemy.exc.OperationalError: (pytds.tds_base.LoginError) ("Cannot connect to server 'USHYDMCHALLA353': [Errno -5] No address associated with hostname", gaierror(-5, 'No address associated with hostname'))  
(Background on this error at: https://sqlalche.me/e/20/e3q8)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/app/connectors/services/job_scheduling.py", line 115, in _schedule
    await data_source.ping()
  File "/app/connectors/sources/mssql.py", line 500, in ping
    raise Exception(msg) from e
Exception: Can't connect to Microsoft SQL on USHYDMCHALLA353

From inside the container ... your server name is not resolving ... very similar to the previous issue.

Perhaps set this for the database host:

Host
host.docker.internal
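
A quick way to confirm the diagnosis (a sketch; run it inside the connectors container via `docker exec`, hostnames taken from the posts above):

```shell
# Report whether a hostname resolves from the current environment.
# Inside the connectors container, the Windows machine name typically
# will not resolve, while host.docker.internal (Docker Desktop) will.
check_resolves() {
  if getent hosts "$1" > /dev/null 2>&1; then
    echo "$1 resolves"
  else
    echo "$1 does not resolve"
  fi
}
check_resolves localhost
check_resolves USHYDMCHALLA353
```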

Seems like perhaps you should learn a bit more about docker...


Hi Stephen,
I have also installed Logstash, to pull from my local SQL Server into Elasticsearch on a schedule. The MS SQL connector setup and configuration is complete.

I created the config below in Logstash, but it fails when I run it:

input {
  jdbc {
    jdbc_driver_library => "/usr/share/logstash/x-pack/myjar/x-pack-sql-jdbc-8.17.4.jar"
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    jdbc_connection_string => "jdbc:sqlserver://USHYDMCHALLA353:1433;databaseName=elasticsearch1"
    jdbc_user => "test_madhu"
    jdbc_password => "**"
    schedule => "* * * * *"  # Runs every minute
    statement => "SELECT * FROM table1"
  }
}

output {
  elasticsearch {
    hosts => ["https://localhost:9200"]
    index => "content-mssql-self-managed"
  }
}

error --

^C[2025-05-09T12:56:17,478][WARN ][logstash.runner          ] SIGINT received. Shutting down.
[2025-05-09T12:56:17,624][INFO ][logstash.outputs.elasticsearch][main] Failed to perform request {:message=>"Connect to localhost:9200 [localhost/127.0.0.1, localhost/0:0:0:0:0:0:0:1] failed: Connection refused", :exception=>Manticore::SocketException, :cause=>#<Java::OrgApacheHttpConn::HttpHostConnectException: Connect to localhost:9200 [localhost/127.0.0.1, localhost/0:0:0:0:0:0:0:1] failed: Connection refused>}
[2025-05-09T12:56:17,625][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://localhost:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [https://localhost:9200/][Manticore::SocketException] Connect to localhost:9200 [localhost/127.0.0.1, localhost/0:0:0:0:0:0:0:1] failed: Connection refused"}

Requesting your assistance.
I hope I can use my SQL instance name directly, rather than something like "https://host.docker.internal:9200".

Did you install Logstash directly or through Docker?

You are missing authentication and SSL settings in the elasticsearch output...

I installed Logstash through Docker.

My Logstash config file:

```
input {
  jdbc {
    jdbc_driver_library => "/usr/share/logstash/x-pack/enu/jars/mssql-jdbc-12.10.0.jre11.jar"
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    jdbc_connection_string => "jdbc:sqlserver://USHYDMCHALLA353:1433;databaseName=elasticsearch1"
    jdbc_user => "test_madhu"
    jdbc_password => "**"
    schedule => "* * * * *"  # Runs every minute
    statement => "SELECT * FROM table1"
  }
}

output {
  elasticsearch {
    hosts => ["https://localhost:9200"]
    index => "content-mssql-self-managed"
    
    # Authentication settings
    user => "elastic"
    password => "-***"
    
    # SSL settings
    ssl => true
    ssl_certificate_authorities => ["/usr/share/elasticsearch/config/certs/http_ca.crt"]  # Path to the CA certificate
    ssl_verification_mode => "none"  # Disable SSL certificate verification
  }
}
```

Error (the CA cert file exists):

sh-5.1$ /usr/share/logstash/bin/logstash -f /usr/share/logstash/config/logstash-sql_instance1.conf --path.data /usr/share/logstash/data_sql_instance1
Using bundled JDK: /usr/share/logstash/jdk
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2025-05-09T16:26:19,098][INFO ][logstash.runner          ] Log4j configuration path used is: /usr/share/logstash/config/log4j2.properties
[2025-05-09T16:26:19,103][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"9.0.0", "jruby.version"=>"jruby 9.4.9.0 (3.1.4) 2024-11-04 547c6b150e OpenJDK 64-Bit Server VM 21.0.6+7-LTS on 21.0.6+7-LTS +indy +jit [x86_64-linux]"}
[2025-05-09T16:26:19,105][INFO ][logstash.runner          ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, -Djruby.regexp.interruptible=true, -Djdk.io.File.enableADS=true, --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED, --add-opens=java.base/java.security=ALL-UNNAMED, --add-opens=java.base/java.io=ALL-UNNAMED, --add-opens=java.base/java.nio.channels=ALL-UNNAMED, --add-opens=java.base/sun.nio.ch=ALL-UNNAMED, --add-opens=java.management/sun.management=ALL-UNNAMED, -Dio.netty.allocator.maxOrder=11]
[2025-05-09T16:26:19,136][INFO ][org.logstash.jackson.StreamReadConstraintsUtil] Jackson default value override `logstash.jackson.stream-read-constraints.max-string-length` configured to `200000000` (logstash default)
[2025-05-09T16:26:19,137][INFO ][org.logstash.jackson.StreamReadConstraintsUtil] Jackson default value override `logstash.jackson.stream-read-constraints.max-number-length` configured to `10000` (logstash default)
[2025-05-09T16:26:19,137][INFO ][org.logstash.jackson.StreamReadConstraintsUtil] Jackson default value override `logstash.jackson.stream-read-constraints.max-nesting-depth` configured to `1000` (logstash default)
[2025-05-09T16:26:19,264][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because command line options are specified
[2025-05-09T16:26:19,783][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9601, :ssl_enabled=>false}
[2025-05-09T16:26:20,106][INFO ][org.reflections.Reflections] Reflections took 99 ms to scan 1 urls, producing 149 keys and 521 values
[2025-05-09T16:26:21,405][ERROR][logstash.outputs.elasticsearch] Invalid setting for elasticsearch output plugin:

  output {
    elasticsearch {
      # This setting must be a path
      # ["File does not exist or cannot be opened /usr/share/elasticsearch/config/certs/http_ca.crt"]
      ssl_certificate_authorities => ["/usr/share/elasticsearch/config/certs/http_ca.crt"]
      ...
    }
  }
[2025-05-09T16:26:21,431][ERROR][logstash.agent           ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"Java::JavaLang::IllegalStateException", :message=>"Unable to configure plugins: (ConfigurationError) Something is wrong with your configuration.", :backtrace=>["org.logstash.config.ir.CompiledPipeline.<init>(CompiledPipeline.java:137)", "org.logstash.execution.AbstractPipelineExt.initialize(AbstractPipelineExt.java:240)", "org.logstash.execution.AbstractPipelineExt$INVOKER$i$initialize.call(AbstractPipelineExt$INVOKER$i$initialize.gen)", "org.jruby.internal.runtime.methods.JavaMethod$JavaMethodN.call(JavaMethod.java:847)", "org.jruby.ir.runtime.IRRuntimeHelpers.instanceSuper(IRRuntimeHelpers.java:1379)", "org.jruby.ir.instructions.InstanceSuperInstr.interpret(InstanceSuperInstr.java:139)", "org.jruby.ir.interpreter.InterpreterEngine.processCall(InterpreterEngine.java:363)", "org.jruby.ir.interpreter.StartupInterpreterEngine.interpret(StartupInterpreterEngine.java:66)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.INTERPRET_METHOD(MixedModeIRMethod.java:128)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:115)", "org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:446)", "org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:92)", "org.jruby.RubyClass.newInstance(RubyClass.java:949)", "org.jruby.RubyClass$INVOKER$i$newInstance.call(RubyClass$INVOKER$i$newInstance.gen)", "org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:446)", "org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:92)", "org.jruby.ir.instructions.CallBase.interpret(CallBase.java:548)", "org.jruby.ir.interpreter.InterpreterEngine.processCall(InterpreterEngine.java:363)", "org.jruby.ir.interpreter.StartupInterpreterEngine.interpret(StartupInterpreterEngine.java:66)", 
"org.jruby.ir.interpreter.InterpreterEngine.interpret(InterpreterEngine.java:88)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.INTERPRET_METHOD(MixedModeIRMethod.java:238)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:225)", "org.jruby.internal.runtime.methods.DynamicMethod.call(DynamicMethod.java:228)", "org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:476)", "org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:293)", "org.jruby.ir.interpreter.InterpreterEngine.processCall(InterpreterEngine.java:324)", "org.jruby.ir.interpreter.StartupInterpreterEngine.interpret(StartupInterpreterEngine.java:66)", "org.jruby.ir.interpreter.Interpreter.INTERPRET_BLOCK(Interpreter.java:118)", "org.jruby.runtime.MixedModeIRBlockBody.commonYieldPath(MixedModeIRBlockBody.java:136)", "org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:66)", "org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:58)", "org.jruby.runtime.Block.call(Block.java:144)", "org.jruby.RubyProc.call(RubyProc.java:354)", "org.jruby.internal.runtime.RubyRunnable.run(RubyRunnable.java:111)", "java.base/java.lang.Thread.run(Thread.java:1583)"], :cause=>{:exception=>Java::OrgJrubyExceptions::Exception, :message=>"(ConfigurationError) Something is wrong with your configuration.", :backtrace=>["RUBY.config_init(/usr/share/logstash/logstash-core/lib/logstash/config/mixin.rb:111)", "RUBY.config_init(/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/logstash-output-elasticsearch-12.0.2-java/lib/logstash/outputs/elasticsearch.rb:365)", "RUBY.initialize(/usr/share/logstash/logstash-core/lib/logstash/outputs/base.rb:75)", "RUBY.initialize(/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/logstash-mixin-ecs_compatibility_support-1.3.0-java/lib/logstash/plugin_mixins/ecs_compatibility_support/selector.rb:61)", 
"RUBY.initialize(/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/logstash-output-elasticsearch-12.0.2-java/lib/logstash/outputs/elasticsearch.rb:252)", "org.logstash.plugins.factory.ContextualizerExt.initialize(org/logstash/plugins/factory/ContextualizerExt.java:97)", "org.jruby.RubyClass.new(org/jruby/RubyClass.java:949)", "org.logstash.plugins.factory.ContextualizerExt.initialize_plugin(org/logstash/plugins/factory/ContextualizerExt.java:80)", "org.logstash.plugins.factory.ContextualizerExt.initialize_plugin(org/logstash/plugins/factory/ContextualizerExt.java:53)", "org.jruby.RubyClass.new(org/jruby/RubyClass.java:949)", "org.logstash.config.ir.compiler.OutputDelegatorExt.initialize(org/logstash/config/ir/compiler/OutputDelegatorExt.java:79)", "org.logstash.config.ir.compiler.OutputDelegatorExt.initialize(org/logstash/config/ir/compiler/OutputDelegatorExt.java:56)", "org.logstash.plugins.factory.PluginFactoryExt.plugin(org/logstash/plugins/factory/PluginFactoryExt.java:241)", "org.logstash.execution.AbstractPipelineExt.initialize(org/logstash/execution/AbstractPipelineExt.java:240)", "RUBY.initialize(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:47)", "org.jruby.RubyClass.new(org/jruby/RubyClass.java:949)", "RUBY.execute(/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:50)", "RUBY.converge_state(/usr/share/logstash/logstash-core/lib/logstash/agent.rb:420)"]}}
[2025-05-09T16:26:21,446][INFO ][logstash.runner          ] Logstash shut down.
[2025-05-09T16:26:21,452][FATAL][org.logstash.Logstash    ] Logstash stopped processing because of an error: (SystemExit) exit
org.jruby.exceptions.SystemExit: (SystemExit) exit
        at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:924) ~[jruby.jar:?]
        at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:883) ~[jruby.jar:?]
        at usr.share.logstash.lib.bootstrap.environment.<main>(/usr/share/logstash/lib/bootstrap/environment.rb:90) ~[?:?]

^^^
Please read the error messages and then work to figure it out...
Check the permissions... you can figure this out... I cannot really provide step-by-step for everything... read the error... read the docs... look at the permissions, etc.

Also, note the file path in your list:

[ "/usr/share/elasticsearch/config/certs/http_ca.crt" ]
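
One concrete thing to check (a sketch): the path in `ssl_certificate_authorities` must exist inside the Logstash container, not just on the host, and must be readable by the `logstash` user. Via `docker exec` into the container:

```shell
# Classify a CA certificate path: present and readable, present but not
# readable (a permissions problem), or missing (wrong path / not mounted).
check_ca() {
  if [ ! -e "$1" ]; then
    echo "missing"
  elif [ ! -r "$1" ]; then
    echo "unreadable"
  else
    echo "ok"
  fi
}
# The path referenced by the failing pipeline above:
check_ca "/usr/share/elasticsearch/config/certs/http_ca.crt"
```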

Sorry Stephen, here is the error for the config below:
input {
  jdbc {
    jdbc_driver_library => "/usr/share/logstash/x-pack/enu/jars/mssql-jdbc-12.10.0.jre11.jar"
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    jdbc_connection_string => "jdbc:sqlserver://USHYDMCHALLA353:1433;databaseName=elasticsearch1"
    jdbc_user => "test_madhu"
    jdbc_password => "**"
    schedule => "* * * * *"  # Runs every minute
    statement => "SELECT * FROM table1"
  }
}

output {
  elasticsearch {
    hosts => ["https://localhost:9200"]
    index => "content-mssql-self-managed"
    
    # Authentication settings
    user => "elastic"
    password => "-**s"
    
    # SSL settings
    ssl_enabled => true
    ssl_certificate_authorities => ["/usr/share/logstash/data_instance_1_sql/certifs/http_ca.crt"]  # Path to the CA certificate
    ssl_verification_mode => "none"  # Disable SSL certificate verification
  }
}
----------------------------error-----------------------------------------
[2025-05-09T17:10:02,820][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elastic:xxxxxx@localhost:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [https://localhost:9200/][Manticore::SocketException] Connect to localhost:9200 [localhost/127.0.0.1, localhost/0:0:0:0:0:0:0:1] failed: Connection refused"}
[2025-05-09T17:10:07,830][INFO ][logstash.outputs.elasticsearch][main] Failed to perform request {:message=>"Connect to localhost:9200 [localhost/127.0.0.1, localhost/0:0:0:0:0:0:0:1] failed: Connection refused", :exception=>Manticore::SocketException, :cause=>#<Java::OrgApacheHttpConn::HttpHostConnectException: Connect to localhost:9200 [localhost/127.0.0.1, localhost/0:0:0:0:0:0:0:1] failed: Connection refused>}
[2025-05-09T17:10:07,831][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elastic:xxxxxx@localhost:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [https://localhost:9200/][Manticore::SocketException] Connect to localhost:9200 [localhost/127.0.0.1, localhost/0:0:0:0:0:0:0:1] failed: Connection refused"}
[... the same "Failed to perform request" / "Attempted to resurrect connection" pair repeats every 5 seconds ...]
^C[2025-05-09T17:10:48,566][WARN ][logstash.runner          ] SIGINT received. Shutting down.

Again you need to figure this out yourself

The error says connection refused / unreachable...

Is elasticsearch running? Can you reach it from the command line?

What happens if you run this from the command line?

```
curl -k -v -u elastic:<password> https://localhost:9200
```

If that does not work then logstash will not work...

I need to log off for a while, good luck... read the error... figure it out... use Google, these are all common questions...

This is the key line in this whole thread.

Most of your issues are errors of basic understanding, not appreciating "localhost" in different contexts, HTTP/HTTPS, file permissions, ... Nothing wrong with being new to things like Docker, ELK, etc, we all started somewhere, but there is a level of basic knowledge that is assumed. You will be lucky to find someone who will want to fill in all the gaps for you, as you are not really demonstrating that you are trying yourself.
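
To make the "localhost in different contexts" point concrete (a sketch, assuming Logstash runs in Docker Desktop alongside the setup above):

```shell
# Inside a container, the same name points at different machines:
#   localhost             -> the container itself
#   host.docker.internal  -> the machine running Docker Desktop
# So a Logstash pipeline running in Docker should target the host, not
# localhost, when Elasticsearch is published on the host's port 9200:
es_url="https://host.docker.internal:9200"
echo "hosts => [\"$es_url\"]"
```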

Hi Team,
While trying to set up an MS SQL connector for Azure SQL, I am getting the error below.
Any suggestions, please?

/config/config-myazure-sql.yml (integrations/elastic-connectors:9.0.0)
----------------------------------------------------------------------
connectors:
-
  connector_id: "bb"
  service_type: "mssql"
elasticsearch:
  host: "https://host.docker.internal:9200"
  ssl: true
  verify_certs: false
  ca_certs: "/usr/share/elasticsearch/config/certs/http_ca.crt"
  api_key: "bb=="

-------------MS SQL (Azure SQL) connector configurations--------------------------
mkdir -p "$HOME/elastic-connectors" && echo "connectors:
-
  connector_id: \"00\"
  service_type: \"mssql\"
elasticsearch:
  host: \"https://host.docker.internal:9200\"
  ssl: true
  verify_certs: true
  ca_certs: \"/config/certificates/http_ca.crt\"
  api_key: \"00==\"" > "$HOME/elastic-connectors/config-myazure-sql.yml"
  
    

docker run `
  -v "$($env:HOME)/elastic-connectors:/config" `
  --tty `
  --rm `
  docker.elastic.co/integrations/elastic-connectors:9.0.0 `
  /app/bin/elastic-ingest `
  -c /config/config-myazure-sql.yml
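
A quick sanity check for the volume mapping (a sketch): with the `-v "$HOME/elastic-connectors:/config"` mount, the `/config/certificates/http_ca.crt` path the config references inside the container must correspond to a real file under the mounted host directory.

```shell
# Map the in-container path back to the host path implied by the mount
# and report whether the CA file is actually there.
check_mount() {
  if [ -f "$1" ]; then
    echo "present"
  else
    echo "missing"
  fi
}
check_mount "$HOME/elastic-connectors/certificates/http_ca.crt"
```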

--------------------------error---------------------------
[FMWK][11:53:50][INFO] Running connector service version 9.0.0
[FMWK][11:53:50][INFO] Loading config from /config/config-myazure-sql.yml
[FMWK][11:53:50][INFO] Running preflight checks
[FMWK][11:53:50][INFO] Waiting for Elasticsearch at https://host.docker.internal:9200 (so far: 0 secs)
[FMWK][11:53:50][ERROR] Could not connect to the Elasticsearch server
[FMWK][11:53:50][ERROR] Cannot connect to host host.docker.internal:9200 ssl:True [SSLCertVerificationError: (1, "[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: Hostname mismatch, certificate is not valid for 'host.docker.internal'. (_ssl.c:1006)")]
[FMWK][11:53:51][INFO] Waiting for Elasticsearch at https://host.docker.internal:9200 (so far: 1 secs)
[FMWK][11:53:51][ERROR] Could not connect to the Elasticsearch server
[FMWK][11:53:51][ERROR] Cannot connect to host host.docker.internal:9200 ssl:True [SSLCertVerificationError: (1, "[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: Hostname mismatch, certificate is not valid for 'host.docker.internal'. (_ssl.c:1006)")]
[FMWK][11:53:53][INFO] Waiting for Elasticsearch at https://host.docker.internal:9200 (so far: 3 secs)
[FMWK][11:53:53][ERROR] Could not connect to the Elasticsearch server
[FMWK][11:53:53][ERROR] Cannot connect to host host.docker.internal:9200 ssl:True [SSLCertVerificationError: (1, "[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: Hostname mismatch, certificate is not valid for 'host.docker.internal'. (_ssl.c:1006)")]
[FMWK][11:53:57][INFO] Waiting for Elasticsearch at https://host.docker.internal:9200 (so far: 7 secs)
[FMWK][11:53:57][ERROR] Could not connect to the Elasticsearch server
[FMWK][11:53:57][ERROR] Cannot connect to host host.docker.internal:9200 ssl:True [SSLCertVerificationError: (1, "[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: Hostname mismatch, certificate is not valid for 'host.docker.internal'. (_ssl.c:1006)")]

Set this (`verify_certs`) to false and see if it works.

No, Stephen.
If I set it to false, it fails with "Azure SQL is expecting encryption from the client":
raise tds_base.Error('Client does not have encryption enabled but it is required by server, '
sqlalchemy.exc.DBAPIError: (pytds.tds_base.Error) Client does not have encryption enabled but it is required by server, enable encryption and try connecting again
(Background on this error at: Error Messages — SQLAlchemy 2.0 Documentation)

Setting it to false does not turn off encryption, as far as I know.

elasticsearch:
  host: "https://host.docker.internal:9200"
  ssl: true
  verify_certs: false
  ca_certs: "/usr/share/elasticsearch/config/certs/http_ca.crt"
  api_key: "bb=="

Try this...

OK Stephen,
I have a doubt.

Regarding the post below:
[quote="Madhu_Challapalli, post:21, topic:377728"]
`'USHYDMCHALLA353': [Errno -5] No address associated with hostname",`
[/quote]

from inside the container ... your server is not resolved... very similar to the previous issue

perhaps set for the database host...

Host
host.docker.internal


What if my SQL Server host is hosted on a different server, meaning it is not localhost but some server name? Then how can I refer to it in

Host
host.docker.internal ??

Hi @Madhu_Challapalli, we are not really here to answer generic Docker and networking questions. Please use Google for these types of questions.

Google : how does dns in docker container work

In Docker, DNS resolution for containers relies on a multi-step process, starting with the container's built-in DNS resolver forwarding requests to Docker's internal DNS server, which in turn handles internal domain name resolution and forwards external requests to the host's DNS server. This allows containers to easily communicate with each other by name and resolve external domains.
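
For the case where the SQL Server lives on a third machine, the container only needs to be able to resolve that machine's name; when your corporate DNS is not visible inside the container, Docker's `--add-host` flag pins the mapping explicitly. A sketch with a hypothetical IP:

```shell
# Sketch (hypothetical IP): if the name is not resolvable from inside
# the container, pin the mapping at run time:
#
#   docker run --add-host "USHYDMCHALLA353:10.54.120.50" ... \
#       docker.elastic.co/integrations/elastic-connectors:9.0.0 ...
#
# --add-host simply appends a hosts-file entry inside the container.
# The entry format is the familiar "<ip> <name>":
entry="10.54.120.50 USHYDMCHALLA353"
ip="${entry%% *}"
name="${entry#* }"
echo "container would resolve $name -> $ip"
```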

ok Stephen