Cannot connect to Kibana through port 5601

I am trying to set up Kibana and Elasticsearch on an Azure VM, and while I can get both of them running, I am unable to connect to Kibana. After starting both programs, Kibana gives me the message

" log [05:14:15.212] [info][server][Kibana][http] http server running at http://0.0.0.0:5601"

However, when I paste that link into my browser, it says "0.0.0.0 refused to connect".
I have also tried http://localhost:5601, which also won't connect, and http://40.118.150.147:5601, which times out (40.118.150.147 took too long to respond).
The same thing happens with my curl requests:
0.0.0.0:5601 - returns nothing
localhost:5601 - returns nothing
40.118.150.147:5601 - returns "curl: (28) Failed to connect to 40.118.150.147 port 5601: Connection timed out"
I have also tried disabling my firewall before connecting, which did not change anything.
I have also tried different combinations of 0.0.0.0, 40.118.150.147, and localhost for network.host and server.host.
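
Since the requests to the public IP time out rather than being refused, the traffic may be getting dropped before it ever reaches the VM (for example by the Azure Network Security Group), while 0.0.0.0 is only a bind address, not something a browser can connect to. As a minimal sketch of how to narrow this down, assuming a standard Linux image with ss and curl available (the port and IP are the ones from this post):

# on the VM: is anything listening on 5601?
ss -tlnp | grep 5601

# on the VM: does Kibana answer locally? (plain `curl localhost:5601` can print
# nothing even when Kibana is up, because the root path may answer with a
# bodiless redirect, so print the status code instead)
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:5601

# from your own machine: is the port reachable over the public IP?
curl -v --connect-timeout 10 http://40.118.150.147:5601

If the first two checks succeed but the last one still times out, the VM itself is fine and the block is somewhere in the network path, such as a missing inbound rule for TCP 5601 in the Network Security Group; that is an assumption about the Azure setup rather than something visible in this thread.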
Here is my current kibana.yml file:

"../kibana-7.13.2-linux-x86_64/config/kibana.yml" 111L, 5057C                                                                                                                             1,1           Top
# Kibana is served by a back end server. This setting specifies the port to use.
#server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "0.0.0.0"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""

# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# This setting was effectively always `false` before Kibana 6.3 and will
# default to `true` starting in Kibana 7.0.
#server.rewriteBasePath: false

# Specifies the public URL at which Kibana is available for end users. If
# `server.basePath` is configured this URL should end with the same basePath.
#server.publicBaseUrl: ""

# The maximum payload size in bytes for incoming server requests.
#server.maxPayload: 1048576

# The Kibana server's name.  This is used for display purposes.
#server.name: "your-hostname"

# The URLs of the Elasticsearch instances to use for all your queries.
#elasticsearch.hosts: ["http://localhost:9200"]

# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
#kibana.index: ".kibana"

# The default application to load.
#kibana.defaultAppId: "home"

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "kibana_system"
#elasticsearch.password: "pass"

# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key

# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files are used to verify the identity of Kibana to Elasticsearch and are required when
# xpack.security.http.ssl.client_authentication in Elasticsearch is set to required.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000

# Logs queries sent to Elasticsearch. Requires logging.verbose set to true.
#elasticsearch.logQueries: false

# Specifies the path where Kibana creates the process ID file.
#pid.file: /run/kibana/kibana.pid

# Enables you to specify a file where Kibana stores log output.
#logging.dest: stdout

# Set the value of this setting to true to suppress all logging output.
#logging.silent: false

# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false

# Set the value of this setting to true to log all events, including system usage information
# and all requests.
#logging.verbose: false

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000

# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English - en , by default , Chinese - zh-CN .
#i18n.locale: "en"

and my elasticsearch.yml file

# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
network.host: 'localhost'
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
cluster.initial_master_nodes: ["40.118.150.147"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true

Finally, I found a post very similar to mine here, which was never resolved, and I have already tried the things suggested there.

I am very lost, so thank you very much for any suggestions.

Welcome to our community! :smiley:
Can you post your Kibana log?

Thanks!

Here is my kibana log:

  log   [14:44:48.821] [info][plugins-service] Plugin "timelines" is disabled.
  log   [14:44:49.021] [warning][config][deprecation] plugins.scanDirs is deprecated and is no longer used
  log   [14:44:49.022] [warning][config][deprecation] Config key [monitoring.cluster_alerts.email_notifications.email_address] will be required for email notifications to work in 8.0."
  log   [14:44:49.596] [info][plugins-system] Setting up [106] plugins: [taskManager,licensing,globalSearch,globalSearchProviders,banners,code,usageCollection,xpackLegacy,telemetryCollectionManager,telemetry,telemetryCollectionXpack,kibanaUsageCollection,securityOss,share,newsfeed,mapsEms,mapsLegacy,kibanaLegacy,translations,licenseApiGuard,legacyExport,embeddable,uiActionsEnhanced,expressions,charts,esUiShared,bfetch,data,home,console,consoleExtensions,apmOss,searchprofiler,painlessLab,grokdebugger,management,advancedSettings,savedObjects,visualizations,visTypeTable,visTypeMetric,visTypeVislib,visTypeVega,visTypeTimelion,features,licenseManagement,watcher,visTypeMarkdown,visTypeTagcloud,visTypeXy,tileMap,regionMap,presentationUtil,canvas,graph,timelion,dashboard,dashboardEnhanced,visualize,visTypeTimeseries,inputControlVis,indexPatternManagement,discover,discoverEnhanced,savedObjectsManagement,spaces,security,savedObjectsTagging,lens,reporting,lists,encryptedSavedObjects,dataEnhanced,dashboardMode,cloud,upgradeAssistant,snapshotRestore,fleet,indexManagement,rollup,remoteClusters,crossClusterReplication,indexLifecycleManagement,enterpriseSearch,beatsManagement,transform,ingestPipelines,fileUpload,maps,fileDataVisualizer,eventLog,actions,alerting,triggersActionsUi,stackAlerts,ruleRegistry,observability,osquery,ml,securitySolution,cases,infra,monitoring,logstash,apm,uptime]
  log   [14:44:49.602] [info][plugins][taskManager] TaskManager is identified by the Kibana UUID: 1c3bc494-1db4-467b-9223-9fb422b5ecc9
  log   [14:44:50.300] [warning][config][plugins][security] Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.
  log   [14:44:50.311] [warning][config][plugins][security] Session cookies will be transmitted over insecure connections. This is not recommended.
  log   [14:44:50.459] [warning][config][plugins][reporting] Generating a random key for xpack.reporting.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.reporting.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.
  log   [14:44:50.474] [info][config][plugins][reporting] Chromium sandbox provides an additional layer of protection, and is supported for Linux Ubuntu 20.04 OS. Automatically enabling Chromium sandbox.
  log   [14:44:50.476] [warning][encryptedSavedObjects][plugins] Saved objects encryption key is not set. This will severely limit Kibana functionality. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.
  log   [14:44:50.706] [warning][actions][actions][plugins] APIs are disabled because the Encrypted Saved Objects plugin is missing encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.
  log   [14:44:50.746] [warning][alerting][alerting][plugins][plugins] APIs are disabled because the Encrypted Saved Objects plugin is missing encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.
  log   [14:44:50.882] [info][monitoring][monitoring][plugins] config sourced from: production cluster
  log   [14:44:51.228] [info][savedobjects-service] Waiting until all Elasticsearch nodes are compatible with Kibana before starting saved objects migrations...
  log   [14:44:51.512] [info][savedobjects-service] Starting saved objects migrations
  log   [14:44:51.574] [info][savedobjects-service] [.kibana_task_manager] INIT -> OUTDATED_DOCUMENTS_SEARCH. took: 29ms.
  log   [14:44:51.624] [info][savedobjects-service] [.kibana] INIT -> OUTDATED_DOCUMENTS_SEARCH. took: 81ms.
  log   [14:44:51.759] [info][savedobjects-service] [.kibana_task_manager] OUTDATED_DOCUMENTS_SEARCH -> UPDATE_TARGET_MAPPINGS. took: 185ms.
  log   [14:44:51.767] [info][savedobjects-service] [.kibana] OUTDATED_DOCUMENTS_SEARCH -> UPDATE_TARGET_MAPPINGS. took: 143ms.
  log   [14:44:51.965] [info][savedobjects-service] [.kibana] UPDATE_TARGET_MAPPINGS -> UPDATE_TARGET_MAPPINGS_WAIT_FOR_TASK. took: 198ms.
  log   [14:44:51.970] [info][savedobjects-service] [.kibana_task_manager] UPDATE_TARGET_MAPPINGS -> UPDATE_TARGET_MAPPINGS_WAIT_FOR_TASK. took: 211ms.
  log   [14:44:52.513] [info][savedobjects-service] [.kibana] UPDATE_TARGET_MAPPINGS_WAIT_FOR_TASK -> DONE. took: 548ms.
  log   [14:44:52.514] [info][savedobjects-service] [.kibana] Migration completed after 971ms
  log   [14:44:52.518] [info][savedobjects-service] [.kibana_task_manager] UPDATE_TARGET_MAPPINGS_WAIT_FOR_TASK -> DONE. took: 548ms.
  log   [14:44:52.519] [info][savedobjects-service] [.kibana_task_manager] Migration completed after 974ms
  log   [14:44:52.553] [info][plugins-system] Starting [106] plugins: [taskManager,licensing,globalSearch,globalSearchProviders,banners,code,usageCollection,xpackLegacy,telemetryCollectionManager,telemetry,telemetryCollectionXpack,kibanaUsageCollection,securityOss,share,newsfeed,mapsEms,mapsLegacy,kibanaLegacy,translations,licenseApiGuard,legacyExport,embeddable,uiActionsEnhanced,expressions,charts,esUiShared,bfetch,data,home,console,consoleExtensions,apmOss,searchprofiler,painlessLab,grokdebugger,management,advancedSettings,savedObjects,visualizations,visTypeTable,visTypeMetric,visTypeVislib,visTypeVega,visTypeTimelion,features,licenseManagement,watcher,visTypeMarkdown,visTypeTagcloud,visTypeXy,tileMap,regionMap,presentationUtil,canvas,graph,timelion,dashboard,dashboardEnhanced,visualize,visTypeTimeseries,inputControlVis,indexPatternManagement,discover,discoverEnhanced,savedObjectsManagement,spaces,security,savedObjectsTagging,lens,reporting,lists,encryptedSavedObjects,dataEnhanced,dashboardMode,cloud,upgradeAssistant,snapshotRestore,fleet,indexManagement,rollup,remoteClusters,crossClusterReplication,indexLifecycleManagement,enterpriseSearch,beatsManagement,transform,ingestPipelines,fileUpload,maps,fileDataVisualizer,eventLog,actions,alerting,triggersActionsUi,stackAlerts,ruleRegistry,observability,osquery,ml,securitySolution,cases,infra,monitoring,logstash,apm,uptime]
  log   [14:44:55.943] [info][server][Kibana][http] http server running at http://0.0.0.0:5601
  log   [14:44:56.054] [info][kibana-monitoring][monitoring][monitoring][plugins] Starting monitoring stats collection
  log   [14:44:56.349] [info][plugins][securitySolution] Dependent plugin setup complete - Starting ManifestTask
  log   [14:44:56.809] [info][plugins][reporting] Browser executable: /home/azureuser/kibana-7.13.2-linux-x86_64/x-pack/plugins/reporting/chromium/headless_shell-linux_x64/headless_shell

@cfiske1 ,
Can you please comment out all the modifications you made to kibana.yml and elasticsearch.yml?
No need to uncomment anything; run the default configuration locally and start Elasticsearch and Kibana. That worked for me.

Kibana: http://localhost:5601/app/home
Elastic: http://localhost:9200/
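
As a sanity check after reverting: with the default settings both services only listen on localhost, so from a shell on the VM itself something like this (assuming curl is installed) should answer:

curl http://localhost:9200/                                                # Elasticsearch cluster info as JSON
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:5601/app/home   # Kibana HTTP status code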

I actually found the solution, and the problem is not with Elasticsearch. I had to forward the port by running:
ssh azureuser@xx.xx.xx.xx -L 5601:localhost:5601
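
For anyone who finds this later: -L sets up local port forwarding in the form local_port:destination_host:destination_port, so while that SSH session is open, port 5601 on your own machine is tunnelled to localhost:5601 on the VM, and Kibana never has to be exposed publicly. A quick check from the local machine with the tunnel open (assuming curl is available there):

curl -s -o /dev/null -w '%{http_code}\n' http://localhost:5601   # 200 or 302 means the tunnel is working

and http://localhost:5601/app/home in a browser goes through the same tunnel.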

Thanks for your help!
