Kibana Status Code : 302

Hi,

I have installed Elasticsearch and Kibana on my server (calling it myserver), but I am unable to access Kibana through the browser. It keeps saying:

This site can’t be reached

myserver refused to connect.
While browsing through the logs, after a while it says:
{"type":"log","@timestamp":"2021-03-02T22:45:02-06:00","tags":["info","plugins","monitoring","monitoring","kibana-monitoring"],"pid":17568,"message":"Starting monitoring stats collection"}
{"type":"log","@timestamp":"2021-03-02T22:45:03-06:00","tags":["listening","info"],"pid":17568,"message":"Server running at http://localhost:5601"}
{"type":"log","@timestamp":"2021-03-02T22:45:04-06:00","tags":["info","http","server","Kibana"],"pid":17568,"message":"http server running at http://localhost:5601"}
{"type":"log","@timestamp":"2021-03-02T22:45:05-06:00","tags":["warning","plugins","reporting"],"pid":17568,"message":"Enabling the Chromium sandbox provides an additional layer of protection."}
{"type":"response","@timestamp":"2021-03-02T22:52:39-06:00","tags":,"pid":17568,"method":"get","statusCode":302,"req":{"url":"/","method":"get","headers":{"accept-encoding":"gzip;q=1.0,deflate;q=0.6,identity;q=0.3","accept":"/","user-agent":"Ruby","connection":"close","host":"localhost:5601"},"remoteAddress":"127.0.0.1","userAgent":"Ruby"},"res":{"statusCode":302,"responseTime":76,"contentLength":9},"message":"GET / 302 76ms - 9.0B"}

Can anybody please guide me on how to fix this issue?

Are you doing http://myserver:5601 to connect? What happens if you try to connect through an incognito window?

Thanks
Bhavya

Perhaps take a look at this; it discusses the exact error message you have.

Yes, that is what I am using.
I get the same result while using incognito mode too.

This site can’t be reached

myserver took too long to respond.

ERR_CONNECTION_TIMED_OUT

@Marius_Dragomir can I please get some input on this?
Thanks!

There's no actual error in those logs, so it's not that. Maybe a misconfiguration for Kibana. Can you change the host in kibana.yml to myserver:5601 instead of localhost:5601?
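
For reference, the relevant kibana.yml lines would look something like this (just a sketch, assuming the default /etc/kibana/kibana.yml location and a systemd-managed install; use whatever hostname actually resolves to your server):

# /etc/kibana/kibana.yml
server.port: 5601
server.host: "myserver"    # hostname or IP that other machines can reach; "0.0.0.0" binds to all interfaces

# restart Kibana afterwards so the change takes effect
sudo systemctl restart kibana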

Hi @vansh

Can you please post your kibana.yml?

Question: are you trying to access Kibana from the server it's installed on, or from a different server or your desktop?

Also is Kibana installed on the same server as Elasticsearch?

Hi, thank you all for your inputs.
I have installed both Kibana and Elasticsearch on "myserver" and I am trying to access it through the browser using http://myserver:5601/ to connect.
Also, I did try to change the host to myserver from localhost; it didn't make any difference.
Below is my kibana.yml. Do let me know what changes I should make to it.

# Kibana is served by a back end server. This setting specifies the port to use.
#server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.


# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# # This setting cannot end in a slash.
#server.basePath: "http://localhost:5601"

# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# This setting was effectively always `false` before Kibana 6.3 and will
# default to `true` starting in Kibana 7.0.
#server.rewriteBasePath: false

# Specifies the public URL at which Kibana is available for end users. If
# `server.basePath` is configured this URL should end with the same basePath.
#server.publicBaseUrl: "http://localhost:5601"

# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576

# The Kibana server's name.  This is used for display purposes.
 server.name: "CRE-KIBANA"
 server.host: "0.0.0.0"

# The URLs of the Elasticsearch instances to use for all your queries.
 #elasticsearch.hosts: ["http://localhost:9200"]

# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
#kibana.index: ".kibana"

# The default application to load.
#kibana.defaultAppId: "home"

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
 elasticsearch.username: "kibana"
 elasticsearch.password: "kibana"

# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key

# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files are used to verify the identity of Kibana to Elasticsearch and are required when
# xpack.security.http.ssl.client_authentication in Elasticsearch is set to required.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000

# Logs queries sent to Elasticsearch. Requires logging.verbose set to true.
#elasticsearch.logQueries: false

# Specifies the path where Kibana creates the process ID file.
#pid.file: /run/kibana/kibana.pid

# Enables you to specify a file where Kibana stores log output.
#logging.dest: stdout

# Set the value of this setting to true to suppress all logging output.
#logging.silent: false

# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false

# Set the value of this setting to true to log all events, including system usage information
# and all requests.
#logging.verbose: false

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000

# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English - en , by default , Chinese - zh-CN .
#i18n.locale: "en"

And below is the log from kibana.log:

{"type":"log","@timestamp":"2021-03-09T04:47:34-06:00","tags":["warning","environment"],"pid":"17955","path":"/run/kibana/kibana.pid","message":"pid file already exists at /run/kibana/kibana.pid"}
{"type":"log","@timestamp":"2021-03-09T04:47:46-06:00","tags":["info","plugins-service"],"pid":17955,"message":"Plugin "visTypeXy" is disabled."}
{"type":"log","@timestamp":"2021-03-09T04:47:46-06:00","tags":["warning","config","deprecation"],"pid":17955,"message":"Setting [elasticsearch.username] to "kibana" is deprecated. You should use the "kibana_system" user instead."}
{"type":"log","@timestamp":"2021-03-09T04:47:46-06:00","tags":["warning","config","deprecation"],"pid":17955,"message":"Config key [monitoring.cluster_alerts.email_notifications.email_address] will be required for email notifications to work in 8.0.""}
{"type":"log","@timestamp":"2021-03-09T04:47:46-06:00","tags":["warning","config","deprecation"],"pid":17955,"message":"Setting [monitoring.username] to "kibana" is deprecated. You should use the "kibana_system" user instead."}
{"type":"log","@timestamp":"2021-03-09T04:47:47-06:00","tags":["info","plugins-system"],"pid":17955,"message":"Setting up [101] plugins: [taskManager,licensing,globalSearch,globalSearchProviders,code,usageCollection,xpackLegacy,telemetryCollectionManager,telemetry,telemetryCollectionXpack,kibanaUsageCollection,securityOss,newsfeed,mapsLegacy,kibanaLegacy,translations,share,legacyExport,embeddable,uiActionsEnhanced,expressions,charts,esUiShared,bfetch,data,home,observability,console,consoleExtensions,apmOss,searchprofiler,painlessLab,grokdebugger,management,indexPatternManagement,advancedSettings,fileUpload,savedObjects,visualizations,visTypeVislib,visTypeTimeseries,visTypeTimeseriesEnhanced,visTypeVega,visTypeTimelion,features,licenseManagement,dataEnhanced,watcher,canvas,visTypeTagcloud,visTypeTable,visTypeMarkdown,visTypeMetric,tileMap,regionMap,mapsOss,lensOss,inputControlVis,graph,timelion,dashboard,dashboardEnhanced,visualize,discover,discoverEnhanced,savedObjectsManagement,spaces,security,savedObjectsTagging,maps,lens,reporting,lists,dashboardMode,encryptedSavedObjects,cloud,upgradeAssistant,snapshotRestore,fleet,indexManagement,rollup,remoteClusters,crossClusterReplication,indexLifecycleManagement,enterpriseSearch,ml,beatsManagement,transform,ingestPipelines,eventLog,actions,alerts,triggersActionsUi,stackAlerts,securitySolution,case,infra,monitoring,logstash,apm,uptime]"}
{"type":"log","@timestamp":"2021-03-09T04:47:47-06:00","tags":["info","plugins","taskManager"],"pid":17955,"message":"TaskManager is identified by the Kibana UUID: 06152e09-f6ce-49e1-a3c9-0fcd5c3cc566"}
{"type":"log","@timestamp":"2021-03-09T04:47:47-06:00","tags":["warning","plugins","security","config"],"pid":17955,"message":"Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
{"type":"log","@timestamp":"2021-03-09T04:47:47-06:00","tags":["warning","plugins","security","config"],"pid":17955,"message":"Session cookies will be transmitted over insecure connections. This is not recommended."}
{"type":"log","@timestamp":"2021-03-09T04:47:47-06:00","tags":["warning","plugins","reporting","config"],"pid":17955,"message":"Generating a random key for xpack.reporting.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.reporting.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
{"type":"log","@timestamp":"2021-03-09T04:47:47-06:00","tags":["warning","plugins","reporting","config"],"pid":17955,"message":"Chromium sandbox provides an additional layer of protection, but is not supported for Linux Red Hat Linux 7.9 OS. Automatically setting 'xpack.reporting.capture.browser.chromium.disableSandbox: true'."}
{"type":"log","@timestamp":"2021-03-09T04:47:47-06:00","tags":["warning","plugins","encryptedSavedObjects","config"],"pid":17955,"message":"Generating a random key for xpack.encryptedSavedObjects.encryptionKey. To decrypt encrypted saved objects attributes after restart, please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
{"type":"log","@timestamp":"2021-03-09T04:47:47-06:00","tags":["warning","plugins","fleet"],"pid":17955,"message":"Fleet APIs are disabled because the Encrypted Saved Objects plugin uses an ephemeral encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
{"type":"log","@timestamp":"2021-03-09T04:47:47-06:00","tags":["warning","plugins","actions","actions"],"pid":17955,"message":"APIs are disabled because the Encrypted Saved Objects plugin uses an ephemeral encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
{"type":"log","@timestamp":"2021-03-09T04:47:47-06:00","tags":["warning","plugins","alerts","plugins","alerting"],"pid":17955,"message":"APIs are disabled because the Encrypted Saved Objects plugin uses an ephemeral encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
{"type":"log","@timestamp":"2021-03-09T04:47:48-06:00","tags":["info","plugins","monitoring","monitoring"],"pid":17955,"message":"config sourced from: production cluster"}
{"type":"log","@timestamp":"2021-03-09T04:47:48-06:00","tags":["info","savedobjects-service"],"pid":17955,"message":"Waiting until all Elasticsearch nodes are compatible with Kibana before starting saved objects migrations..."}
{"type":"log","@timestamp":"2021-03-09T04:47:48-06:00","tags":["warning","plugins","monitoring","monitoring"],"pid":17955,"message":"X-Pack Monitoring Cluster Alerts will not be available: X-Pack plugin is not installed on the Elasticsearch cluster."}
{"type":"log","@timestamp":"2021-03-09T04:47:48-06:00","tags":["info","savedobjects-service"],"pid":17955,"message":"Starting saved objects migrations"}
{"type":"log","@timestamp":"2021-03-09T04:48:18-06:00","tags":["warning","savedobjects-service"],"pid":17955,"message":"Unable to connect to Elasticsearch. Error: Request timed out"}
{"type":"log","@timestamp":"2021-03-09T04:48:18-06:00","tags":["warning","savedobjects-service"],"pid":17955,"message":"Unable to connect to Elasticsearch. Error: master_not_discovered_exception"}
{"type":"log","@timestamp":"2021-03-09T04:49:23-06:00","tags":["warning","savedobjects-service"],"pid":17955,"message":"Unable to connect to Elasticsearch. Error: Request timed out"}
{"type":"log","@timestamp":"2021-03-09T04:54:20-06:00","tags":["warning","savedobjects-service"],"pid":17955,"message":"Unable to connect to Elasticsearch. Error: master_not_discovered_exception"}

Hi @vansh

First, I formatted your post using the </> button above; please use that in the future.

Second, I am confused: you said you are trying to access Kibana via http://myserver:5601/
and yet in your config you clearly set the server name to CRE-KIBANA. Why is it not set to myserver, or should you be trying to access http://CRE-KIBANA:5601/? Please be consistent.

 server.name: "CRE-KIBANA"
 server.host: "0.0.0.0"

Third, your logs clearly show that Kibana cannot connect to Elasticsearch:

{"type":"log","@timestamp":"2021-03-09T04:48:18-06:00","tags":["warning","savedobjects-service"],"pid":17955,"message":"Unable to connect to Elasticsearch. Error: Request timed out"}

I also notice the following in your kibana.yml. Did you actually set up an Elasticsearch user kibana with password kibana?

 elasticsearch.username: "kibana"
 elasticsearch.password: "kibana"

Finally, can you now post your elasticsearch.yml? Please format it by selecting the text and pressing the format button.

hi,

The Kibana server's name. This is used for display purposes.

server.name: "CRE-KIBANA"
Since this was just for naming purposes, I have given it that name.
The server which I keep mentioning as myserver is actually a RHEL server named apvrd12345 (the numbers are different) on which I have installed both Kibana and Elasticsearch.

Did you actually set up the Elasticsearch user kibana with password kibana? -- Yes, I have. Should this be set up differently?

Finally, below is the elasticsearch.yml:

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
 cluster.name: cre-cluster
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
 node.name: cre-node1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
 path.data: /var/lib/elasticsearch
#
# Path to log files:
#
 path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: 192.168.0.1
 network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
 discovery.seed_hosts: ["192.168.1.4","192.168.0.10"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
# network.bind_host: 0.0.0.0
 http.cors.allow-origin: "*"
 http.cors.enabled: true
 http.cors.allow-headers : X-Requested-With,X-Auth-Token,Content-Type, Content-Length, Authorization
 http.cors.allow-credentials: true
 #thread_pool.bulk.queue_size: 1000
# xpack.security.enabled: false
#

Hi @vansh, so we should be getting close.

Yes, setting up the username and password that way is OK.

In your elasticsearch.yml you set:

 network.host: 0.0.0.0

This means Elasticsearch is bound to the network interface, not localhost. This has a specific meaning.

Then, in kibana.yml, you did not set elasticsearch.hosts, which means it is set to the default, which is localhost, and localhost is not bound to the network. That is why Kibana cannot find Elasticsearch: Elasticsearch is bound to a network interface and you are telling Kibana to find it on localhost; they are separate interfaces.

 #elasticsearch.hosts: ["http://localhost:9200"]

So in kibana.yml you should set elasticsearch.hosts to the network IP of the host, since you bound Elasticsearch to the network (just as you did for Kibana):

elasticsearch.hosts: ["http://networkipofhost:9200"]

I guess I have one last question: do you know if your cluster is even up and green?

Do you really have 3 nodes? I suspect there's a possibility that your cluster has not fully formed; I thought you had a single node at first.

Can you run the following and show the output?

curl -u user:pw http://ipofhost:9200

curl -u user:pw http://ipofhost:9200/_cat/health

I changed elasticsearch.hosts in kibana.yml as you asked:
elasticsearch.hosts: ["http://192.168.1.4:9200"]
It is still not working.

Below are the outputs of the commands:
$ curl -u user:pw http://192.168.1.4:9200
curl: (7) Failed connect to 192.168.1.4:9200; No route to host

$ curl -u user:pw http://192.168.1.4:9200/_cat/health
curl: (7) Failed connect to 192.168.1.4:9200; No route to host

Below is the log:
{"type":"log","@timestamp":"2021-03-10T01:53:26-06:00","tags":["warning","elasticsearch","monitoring"],"pid":30655,"message":"Unable to revive connection: http://192.168.1.4:9200/"}
{"type":"log","@timestamp":"2021-03-10T01:53:26-06:00","tags":["warning","elasticsearch","monitoring"],"pid":30655,"message":"No living connections"}
{"type":"log","@timestamp":"2021-03-10T01:53:26-06:00","tags":["warning","plugins","licensing"],"pid":30655,"message":"License information could not be obtained from Elasticsearch due to Error: No Living connections error"}
{"type":"log","@timestamp":"2021-03-10T01:53:56-06:00","tags":["warning","elasticsearch","monitoring"],"pid":30655,"message":"Unable to revive connection: http://192.168.1.4:9200/"}
{"type":"log","@timestamp":"2021-03-10T01:53:56-06:00","tags":["warning","elasticsearch","monitoring"],"pid":30655,"message":"No living connections"}
{"type":"log","@timestamp":"2021-03-10T01:53:56-06:00","tags":["warning","plugins","licensing"],"pid":30655,"message":"License information could not be obtained from Elasticsearch due to Error: No Living connections error"}

Yes, so it appears your Elasticsearch cluster is not up and running at all.

Those curl commands are the basic commands to check if a cluster is running.

Did you follow these steps?

So next you need to stop and start Elasticsearch and provide the startup logs from Elasticsearch. It looks like the cluster is not forming.

Among other things, I think you will need to set the following properly to your node names on each node:

cluster.initial_master_nodes setting
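
For example, something along these lines in elasticsearch.yml (a sketch using your node name; if this is really meant to be a one-node cluster you could use discovery.type: single-node instead and drop discovery.seed_hosts and cluster.initial_master_nodes):

# elasticsearch.yml -- bootstrap a brand new cluster
cluster.initial_master_nodes: ["cre-node1"]    # list every master-eligible node name here

# or, for a deliberate single-node setup:
# discovery.type: single-node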

Hi Stephen,
I made the change you asked for and restarted Elasticsearch; below are the logs:
Also the curl commands are still getting the previous result.
(I have removed some of the module-loaded log lines from below due to the word limit.)

[2021-03-11T10:29:46,073][INFO ][o.e.n.Node               ] [cre-node1] stopping ...
[2021-03-11T10:29:46,078][INFO ][o.e.x.w.WatcherService   ] [cre-node1] stopping watch service, reason [shutdown initiated]
[2021-03-11T10:29:46,080][INFO ][o.e.x.w.WatcherLifeCycleService] [cre-node1] watcher has stopped and shutdown
[2021-03-11T10:29:46,110][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [cre-node1] [controller/17666] [Main.cc@169] ML controller exiting
[2021-03-11T10:29:46,122][INFO ][o.e.x.m.p.NativeController] [cre-node1] Native controller process has stopped - no new native processes can be started
[2021-03-11T10:29:46,254][INFO ][o.e.n.Node               ] [cre-node1] stopped
[2021-03-11T10:29:46,255][INFO ][o.e.n.Node               ] [cre-node1] closing ...
[2021-03-11T10:29:46,296][INFO ][o.e.n.Node               ] [cre-node1] closed
[2021-03-11T10:29:53,883][INFO ][o.e.n.Node               ] [cre-node1] version[7.11.1], pid[9600], build[default/rpm/ff17057114c2199c9c1bbecc727003a907c0db7a/2021-02-15T13:44:09.394032Z], OS[Linux/3.10.0-1160.11.1.el7.x86_64/amd64], JVM[AdoptOpenJDK/OpenJDK 64-Bit Server VM/15.0.1/15.0.1+9]
[2021-03-11T10:29:53,885][INFO ][o.e.n.Node               ] [cre-node1] JVM home [/usr/share/elasticsearch/jdk], using bundled JDK [true]
[2021-03-11T10:29:53,886][INFO ][o.e.n.Node               ] [cre-node1] JVM arguments [-Xshare:auto, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -XX:+ShowCodeDetailsInExceptionMessages, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.locale.providers=SPI,COMPAT, -XX:+UseG1GC, -Djava.io.tmpdir=/tmp/elasticsearch-14592673059366803798, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=/var/lib/elasticsearch, -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Xms919m, -Xmx919m, -XX:MaxDirectMemorySize=482344960, -XX:G1HeapRegionSize=4m, -XX:InitiatingHeapOccupancyPercent=30, -XX:G1ReservePercent=15, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/etc/elasticsearch, -Des.distribution.flavor=default, -Des.distribution.type=rpm, -Des.bundled_jdk=true]
[2021-03-11T10:30:00,805][INFO ][o.e.p.PluginsService     ] [cre-node1] loaded module [aggs-matrix-stats]
[2021-03-11T10:30:00,806][INFO ][o.e.p.PluginsService     ] [cre-node1] loaded module [analysis-common]
[2021-03-11T10:30:00,820][INFO ][o.e.p.PluginsService     ] [cre-node1] loaded module [x-pack-watcher]
[2021-03-11T10:30:00,820][INFO ][o.e.p.PluginsService     ] [cre-node1] no plugins loaded
[2021-03-11T10:30:00,924][INFO ][o.e.e.NodeEnvironment    ] [cre-node1] using [1] data paths, mounts [[/var (/dev/mapper/rootvg-lv_var)]], net usable_space [8.8gb], net total_space [9.9gb], types [xfs]
[2021-03-11T10:30:00,925][INFO ][o.e.e.NodeEnvironment    ] [cre-node1] heap size [920mb], compressed ordinary object pointers [true]
[2021-03-11T10:30:01,146][INFO ][o.e.n.Node               ] [cre-node1] node name [cre-node1], node ID [MJl2F-JYQBGDcC7Z-lXr3Q], cluster name [cre-cluster], roles [transform, master, remote_cluster_client, data, ml, data_content, data_hot, data_warm, data_cold, ingest]
[2021-03-11T10:30:12,782][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [cre-node1] [controller/9772] [Main.cc@117] controller (64 bit): Version 7.11.1 (Build b7aec245e3d54f) Copyright (c) 2021 Elasticsearch BV
[2021-03-11T10:30:13,476][INFO ][o.e.x.s.a.s.FileRolesStore] [cre-node1] parsed [0] roles from file [/etc/elasticsearch/roles.yml]
[2021-03-11T10:30:16,391][INFO ][o.e.t.NettyAllocator     ] [cre-node1] creating NettyAllocator with the following configs: [name=unpooled, suggested_max_allocation_size=1mb, factors={es.unsafe.use_unpooled_allocator=null, g1gc_enabled=true, g1gc_region_size=4mb, heap_size=920mb}]
[2021-03-11T10:30:16,484][INFO ][o.e.d.DiscoveryModule    ] [cre-node1] using discovery type [zen] and seed hosts providers [settings]
[2021-03-11T10:30:17,342][INFO ][o.e.g.DanglingIndicesState] [cre-node1] gateway.auto_import_dangling_indices is disabled, dangling indices will not be automatically detected or imported and must be managed manually
[2021-03-11T10:30:18,278][INFO ][o.e.n.Node               ] [cre-node1] initialized
[2021-03-11T10:30:18,278][INFO ][o.e.n.Node               ] [cre-node1] starting ...
[2021-03-11T10:30:18,299][INFO ][o.e.x.s.c.PersistentCache] [cre-node1] persistent cache index loaded
[2021-03-11T10:30:18,465][INFO ][o.e.t.TransportService   ] [cre-node1] publish_address {10.204.104.90:9300}, bound_addresses {0.0.0.0:9300}
[2021-03-11T10:30:18,685][INFO ][o.e.b.BootstrapChecks    ] [cre-node1] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2021-03-11T10:30:18,696][INFO ][o.e.c.c.Coordinator      ] [cre-node1] setting initial configuration to VotingConfiguration{MJl2F-JYQBGDcC7Z-lXr3Q}
[2021-03-11T10:30:19,005][INFO ][o.e.c.s.MasterService    ] [cre-node1] elected-as-master ([1] nodes joined)[{cre-node1}{MJl2F-JYQBGDcC7Z-lXr3Q}{5k2QpJ-nTly1AI5rszSubA}{10.204.104.90}{10.204.104.90:9300}{cdhilmrstw}{ml.machine_memory=1927581696, xpack.installed=true, transform.node=true, ml.max_open_jobs=20, ml.max_jvm_size=964689920} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 1, version: 1, delta: master node changed {previous [], current [{cre-node1}{MJl2F-JYQBGDcC7Z-lXr3Q}{5k2QpJ-nTly1AI5rszSubA}{10.204.104.90}{10.204.104.90:9300}{cdhilmrstw}{ml.machine_memory=1927581696, xpack.installed=true, transform.node=true, ml.max_open_jobs=20, ml.max_jvm_size=964689920}]}
[2021-03-11T10:30:19,171][INFO ][o.e.c.c.CoordinationState] [cre-node1] cluster UUID set to [Q1jbi0QkRGidUcMo4uD_9A]
[2021-03-11T10:30:19,286][INFO ][o.e.c.s.ClusterApplierService] [cre-node1] master node changed {previous [], current [{cre-node1}{MJl2F-JYQBGDcC7Z-lXr3Q}{5k2QpJ-nTly1AI5rszSubA}{10.204.104.90}{10.204.104.90:9300}{cdhilmrstw}{ml.machine_memory=1927581696, xpack.installed=true, transform.node=true, ml.max_open_jobs=20, ml.max_jvm_size=964689920}]}, term: 1, version: 1, reason: Publication{term=1, version=1}
[2021-03-11T10:30:19,421][INFO ][o.e.x.c.t.IndexTemplateRegistry] [cre-node1] adding legacy template [.ml-anomalies-] for [ml], because it doesn't exist
[2021-03-11T10:30:19,422][INFO ][o.e.x.c.t.IndexTemplateRegistry] [cre-node1] adding legacy template [.ml-state] for [ml], because it doesn't exist
[2021-03-11T10:30:19,422][INFO ][o.e.x.c.t.IndexTemplateRegistry] [cre-node1] adding legacy template [.ml-config] for [ml], because it doesn't exist
[2021-03-11T10:30:19,422][INFO ][o.e.x.c.t.IndexTemplateRegistry] [cre-node1] adding legacy template [.ml-inference-000003] for [ml], because it doesn't exist
[2021-03-11T10:30:19,423][INFO ][o.e.x.c.t.IndexTemplateRegistry] [cre-node1] adding legacy template [.ml-meta] for [ml], because it doesn't exist
[2021-03-11T10:30:19,430][INFO ][o.e.x.c.t.IndexTemplateRegistry] [cre-node1] adding legacy template [.ml-notifications-000001] for [ml], because it doesn't exist
[2021-03-11T10:30:19,441][INFO ][o.e.x.c.t.IndexTemplateRegistry] [cre-node1] adding legacy template [.ml-stats] for [ml], because it doesn't exist
[2021-03-11T10:30:19,536][INFO ][o.e.h.AbstractHttpServerTransport] [cre-node1] publish_address {10.204.104.90:9200}, bound_addresses {0.0.0.0:9200}
[2021-03-11T10:30:19,536][INFO ][o.e.n.Node               ] [cre-node1] started
[2021-03-11T10:30:19,716][INFO ][o.e.g.GatewayService     ] [cre-node1] recovered [0] indices into cluster_state
[2021-03-11T10:30:20,592][INFO ][o.e.c.m.MetadataIndexTemplateService] [cre-node1] adding template [.ml-inference-000003] for index patterns [.ml-inference-000003]
[2021-03-11T10:30:20,795][INFO ][o.e.c.m.MetadataIndexTemplateService] [cre-node1] adding template [.ml-meta] for index patterns [.ml-meta]
[2021-03-11T10:30:20,878][INFO ][o.e.c.m.MetadataIndexTemplateService] [cre-node1] adding template [.ml-notifications-000001] for index patterns [.ml-notifications-000001]
[2021-03-11T10:30:21,025][INFO ][o.e.c.m.MetadataIndexTemplateService] [cre-node1] adding template [.ml-stats] for index patterns [.ml-stats-*]

[2021-03-11T10:30:21,756][INFO ][o.e.c.m.MetadataIndexTemplateService] [cre-node1] adding component template [metrics-settings]
[2021-03-11T10:30:21,840][INFO ][o.e.c.m.MetadataIndexTemplateService] [cre-node1] adding component template [synthetics-mappings]
[2021-03-11T10:30:21,911][INFO ][o.e.c.m.MetadataIndexTemplateService] [cre-node1] adding component template [synthetics-settings]
[2021-03-11T10:30:21,958][INFO ][o.e.c.m.MetadataIndexTemplateService] [cre-node1] adding component template [logs-settings]
[2021-03-11T10:30:22,066][INFO ][o.e.c.m.MetadataIndexTemplateService] [cre-node1] adding index template [.watch-history-13] for index patterns [.watcher-history-13*]
[2021-03-11T10:30:22,202][INFO ][o.e.c.m.MetadataIndexTemplateService] [cre-node1] adding index template [.triggered_watches] for index patterns [.triggered_watches*]
[2021-03-11T10:30:22,355][INFO ][o.e.c.m.MetadataIndexTemplateService] [cre-node1] adding index template [.watches] for index patterns [.watches*]
[2021-03-11T10:30:22,419][INFO ][o.e.c.m.MetadataIndexTemplateService] [cre-node1] adding index template [ilm-history] for index patterns [ilm-history-5*]
[2021-03-11T10:30:22,561][INFO ][o.e.c.m.MetadataIndexTemplateService] [cre-node1] adding index template [.slm-history] for index patterns [.slm-history-5*]
[2021-03-11T10:30:22,615][INFO ][o.e.c.m.MetadataIndexTemplateService] [cre-node1] adding template [.monitoring-alerts-7] for index patterns [.monitoring-alerts-7]
[2021-03-11T10:30:22,735][INFO ][o.e.c.m.MetadataIndexTemplateService] [cre-node1] adding template [.monitoring-es] for index patterns [.monitoring-es-7-*]
[2021-03-11T10:30:22,795][INFO ][o.e.c.m.MetadataIndexTemplateService] [cre-node1] adding template [.monitoring-kibana] for index patterns [.monitoring-kibana-7-*]
[2021-03-11T10:30:22,964][INFO ][o.e.c.m.MetadataIndexTemplateService] [cre-node1] adding template [.monitoring-logstash] for index patterns [.monitoring-logstash-7-*]
[2021-03-11T10:30:23,066][INFO ][o.e.c.m.MetadataIndexTemplateService] [cre-node1] adding template [.monitoring-beats] for index patterns [.monitoring-beats-7-*]
[2021-03-11T10:30:23,224][INFO ][o.e.c.m.MetadataIndexTemplateService] [cre-node1] adding index template [metrics] for index patterns [metrics-*-*]
[2021-03-11T10:30:23,389][INFO ][o.e.c.m.MetadataIndexTemplateService] [cre-node1] adding index template [synthetics] for index patterns [synthetics-*-*]
[2021-03-11T10:30:23,541][INFO ][o.e.c.m.MetadataIndexTemplateService] [cre-node1] adding index template [logs] for index patterns [logs-*-*]
[2021-03-11T10:30:23,648][INFO ][o.e.x.i.a.TransportPutLifecycleAction] [cre-node1] adding index lifecycle policy [ml-size-based-ilm-policy]
[2021-03-11T10:30:23,860][INFO ][o.e.x.i.a.TransportPutLifecycleAction] [cre-node1] adding index lifecycle policy [logs]
[2021-03-11T10:30:24,004][INFO ][o.e.x.i.a.TransportPutLifecycleAction] [cre-node1] adding index lifecycle policy [metrics]
[2021-03-11T10:30:24,095][INFO ][o.e.x.i.a.TransportPutLifecycleAction] [cre-node1] adding index lifecycle policy [synthetics]
[2021-03-11T10:30:24,184][INFO ][o.e.x.i.a.TransportPutLifecycleAction] [cre-node1] adding index lifecycle policy [watch-history-ilm-policy]
[2021-03-11T10:30:24,257][INFO ][o.e.x.i.a.TransportPutLifecycleAction] [cre-node1] adding index lifecycle policy [ilm-history-ilm-policy]
[2021-03-11T10:30:24,412][INFO ][o.e.l.LicenseService     ] [cre-node1] license [87bf7671-4ee4-46d9-969e-741d36417b09] mode [basic] - valid
[2021-03-11T10:30:24,415][INFO ][o.e.x.s.s.SecurityStatusChangeListener] [cre-node1] Active license is now [BASIC]; Security is disabled
[2021-03-11T10:30:24,416][INFO ][o.e.x.i.a.TransportPutLifecycleAction] [cre-node1] adding index lifecycle policy [slm-history-ilm-policy]

According to this log line, your node's IP address is 10.204.104.90.

And the logs look pretty good.

So did you try

curl -u user:pw http://10.204.104.90:9200

A lot of good information is in the logs if you look closely at them... also, perhaps review the docs on bootstrapping a cluster.

BTW, that could also mean that you may not be putting in the right IP addresses for the other nodes; you should look in their logs for their published IPs:

 discovery.seed_hosts: ["192.168.1.4","192.168.0.10"]
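
One quick way to find each node's published address is to grep its startup log, e.g. (assuming the default log file name of <cluster.name>.log under your path.logs):

grep publish_address /var/log/elasticsearch/cre-cluster.log
# look for lines like: publish_address {x.x.x.x:9300}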

Hi Stephen,
You were right, the command curl -u user:pw http://10.204.104.90:9200 gave the following result:
$ curl -u user:pw http://10.204.104.90:9200
{
"name" : "cre-node1",
"cluster_name" : "cre-cluster",
"cluster_uuid" : "Q1jbi0QkRGidUcMo4uD_9A",
"version" : {
"number" : "7.11.1",
"build_flavor" : "default",
"build_type" : "rpm",
"build_hash" : "ff17057114c2199c9c1bbecc727003a907c0db7a",
"build_date" : "2021-02-15T13:44:09.394032Z",
"build_snapshot" : false,
"lucene_version" : "8.7.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}

I restarted both after changing discovery.seed_hosts: ["10.204.104.90"]
and elasticsearch.hosts: ["http://10.204.104.90:9200"], but Kibana still gives the below log:
{"type":"log","@timestamp":"2021-03-12T00:41:44-06:00","tags":["warning","elasticsearch","monitoring"],"pid":21536,"message":"Unable to revive connection: http://192.168.1.4:9200/"}
{"type":"log","@timestamp":"2021-03-12T00:41:44-06:00","tags":["warning","elasticsearch","monitoring"],"pid":21536,"message":"No living connections"}
{"type":"log","@timestamp":"2021-03-12T00:41:44-06:00","tags":["warning","plugins","licensing"],"pid":21536,"message":"License information could not be obtained from Elasticsearch due to Error: No Living connections error"}
{"type":"log","@timestamp":"2021-03-12T00:42:14-06:00","tags":["warning","elasticsearch","monitoring"],"pid":21536,"message":"Unable to revive connection: http://192.168.1.4:9200/"}
{"type":"log","@timestamp":"2021-03-12T00:42:14-06:00","tags":["warning","elasticsearch","monitoring"],"pid":21536,"message":"No living connections"}
{"type":"log","@timestamp":"2021-03-12T00:42:14-06:00","tags":["warning","plugins","licensing"],"pid":21536,"message":"License information could not be obtained from Elasticsearch due to Error: No Living connections error"}
{"type":"log","@timestamp":"2021-03-12T00:42:44-06:00","tags":["warning","elasticsearch","monitoring"],"pid":21536,"message":"Unable to revive connection: http://192.168.1.4:9200/"}
{"type":"log","@timestamp":"2021-03-12T00:42:44-06:00","tags":["warning","elasticsearch","monitoring"],"pid":21536,"message":"No living connections"}
{"type":"log","@timestamp":"2021-03-12T00:42:44-06:00","tags":["warning","plugins","licensing"],"pid":21536,"message":"License information could not be obtained from Elasticsearch due to Error: No Living connections error"}
{"type":"log","@timestamp":"2021-03-12T00:43:04-06:00","tags":["info","plugins-system"],"pid":21536,"message":"Stopping all plugins."}
{"type":"log","@timestamp":"2021-03-12T00:43:04-06:00","tags":["info","plugins","monitoring","monitoring","kibana-monitoring"],"pid":21536,"message":"Monitoring stats collection is stopped"}
{"type":"log","@timestamp":"2021-03-12T00:43:14-06:00","tags":["warning","plugins","licensing"],"pid":21536,"message":"License information could not be obtained from Elasticsearch due to Error: Cluster client cannot be used after it has been closed. error"}
{"type":"log","@timestamp":"2021-03-12T00:43:44-06:00","tags":["warning","plugins","licensing"],"pid":21536,"message":"License information could not be obtained from Elasticsearch due to Error: Cluster client cannot be used after it has been closed. error"}
{"type":"log","@timestamp":"2021-03-12T00:44:14-06:00","tags":["warning","plugins","licensing"],"pid":21536,"message":"License information could not be obtained from Elasticsearch due to Error: Cluster client cannot be used after it has been closed. error"}
{"type":"log","@timestamp":"2021-03-12T00:44:38-06:00","tags":["warning","environment"],"pid":"23682","path":"/run/kibana/kibana.pid","message":"pid file already exists at /run/kibana/kibana.pid"}

Hey,
I restarted Kibana again and now it shows that it is running on http://0.0.0.0:5601, but I can't access it. It says:
Network Error

Your request could not be processed because an error occurred contacting the web site 0.0.0.0

Please check the spelling of the web address and try again.

And for http://10.204.104.90:5601/ It says:

This site can’t be reached

10.204.104.90 took too long to respond.

  {"type":"log","@timestamp":"2021-03-12T01:01:36-06:00","tags":["info","plugins","watcher"],"pid":26931,"message":"Your basic license does not support watcher. Please upgrade your license."}
{"type":"log","@timestamp":"2021-03-12T01:01:36-06:00","tags":["info","plugins","crossClusterReplication"],"pid":26931,"message":"Your basic license does not support crossClusterReplication. Please upgrade your license."}
{"type":"log","@timestamp":"2021-03-12T01:01:36-06:00","tags":["info","plugins","monitoring","monitoring","kibana-monitoring"],"pid":26931,"message":"Starting monitoring stats collection"}
{"type":"log","@timestamp":"2021-03-12T01:01:39-06:00","tags":["info","http","server","Kibana"],"pid":26931,"message":"http server running at http://0.0.0.0:5601"}
{"type":"log","@timestamp":"2021-03-12T01:01:39-06:00","tags":["listening","info"],"pid":26931,"message":"Server running at http://0.0.0.0:5601"}
{"type":"log","@timestamp":"2021-03-12T01:01:43-06:00","tags":["warning","plugins","reporting"],"pid":26931,"message":"Enabling the Chromium sandbox provides an additional layer of protection."}

First, you would never try to access Kibana through 0.0.0.0; that is a directive to bind it to the network, not an actual network address.

So http://10.204.104.90:5601/ is probably correct.
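
A quick sanity check, run on the server itself, is to hit Kibana's status endpoint with curl (a sketch; /api/status is Kibana's standard status API):

curl -s -o /dev/null -w "%{http_code}\n" http://10.204.104.90:5601/api/status
# any HTTP status code back (200, 302, 401, ...) means Kibana is up and listening on that address;
# a timeout only from your desktop would point at something in the network between you and the server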

Second, when you only give me a few logs it's very hard to help; I am not sure if those are Kibana logs or Elasticsearch logs.

However, those logs are very telling if you read them: it looks like you have tried to set up cross-cluster replication and watches, which require a greater-than-basic license, so it appears you have an invalid license or an invalid configuration. I'm not sure where and how all this is set up, or whether you had a trial license before, but you're going to need to sort that out.
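
You can also ask Elasticsearch directly what license the cluster currently has (a sketch; _license is the standard license API):

curl -u user:pw http://10.204.104.90:9200/_license
# shows the license type (basic, trial, gold, ...) and whether it is active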

Hi Stephen,
Below are the Kibana logs:

{"type":"log","@timestamp":"2021-03-12T00:43:04-06:00","tags":["info","plugins-system"],"pid":21536,"message":"Stopping all plugins."}
{"type":"log","@timestamp":"2021-03-12T00:43:04-06:00","tags":["info","plugins","monitoring","monitoring","kibana-monitoring"],"pid":21536,"message":"Monitoring stats collection is stopped"}
{"type":"log","@timestamp":"2021-03-12T00:43:14-06:00","tags":["warning","plugins","licensing"],"pid":21536,"message":"License information could not be obtained from Elasticsearch due to Error: Cluster client cannot be used after it has been closed. error"}
{"type":"log","@timestamp":"2021-03-12T00:43:44-06:00","tags":["warning","plugins","licensing"],"pid":21536,"message":"License information could not be obtained from Elasticsearch due to Error: Cluster client cannot be used after it has been closed. error"}
{"type":"log","@timestamp":"2021-03-12T00:44:14-06:00","tags":["warning","plugins","licensing"],"pid":21536,"message":"License information could not be obtained from Elasticsearch due to Error: Cluster client cannot be used after it has been closed. error"}
{"type":"log","@timestamp":"2021-03-12T00:44:38-06:00","tags":["warning","environment"],"pid":"23682","path":"/run/kibana/kibana.pid","message":"pid file already exists at /run/kibana/kibana.pid"}
{"type":"log","@timestamp":"2021-03-12T01:01:31-06:00","tags":["info","plugins-service"],"pid":26931,"message":"Plugin \"visTypeXy\" is disabled."}
{"type":"log","@timestamp":"2021-03-12T01:01:31-06:00","tags":["warning","config","deprecation"],"pid":26931,"message":"Setting [elasticsearch.username] to \"kibana\" is deprecated. You should use the \"kibana_system\" user instead."}
{"type":"log","@timestamp":"2021-03-12T01:01:31-06:00","tags":["warning","config","deprecation"],"pid":26931,"message":"Config key [monitoring.cluster_alerts.email_notifications.email_address] will be required for email notifications to work in 8.0.\""}
{"type":"log","@timestamp":"2021-03-12T01:01:31-06:00","tags":["warning","config","deprecation"],"pid":26931,"message":"Setting [monitoring.username] to \"kibana\" is deprecated. You should use the \"kibana_system\" user instead."}
{"type":"log","@timestamp":"2021-03-12T01:01:31-06:00","tags":["info","plugins-system"],"pid":26931,"message":"Setting up [101] plugins: [taskManager,licensing,globalSearch,globalSearchProviders,code,usageCollection,xpackLegacy,telemetryCollectionManager,telemetry,telemetryCollectionXpack,kibanaUsageCollection,securityOss,newsfeed,mapsLegacy,kibanaLegacy,translations,share,legacyExport,embeddable,uiActionsEnhanced,esUiShared,expressions,charts,bfetch,data,home,observability,console,consoleExtensions,apmOss,searchprofiler,painlessLab,grokdebugger,management,indexPatternManagement,advancedSettings,fileUpload,savedObjects,visualizations,visTypeVislib,visTypeTimeseries,visTypeTimeseriesEnhanced,visTypeTimelion,features,licenseManagement,dataEnhanced,watcher,canvas,visTypeVega,visTypeTable,visTypeMetric,visTypeTagcloud,visTypeMarkdown,tileMap,regionMap,mapsOss,lensOss,inputControlVis,graph,timelion,dashboard,dashboardEnhanced,visualize,discover,discoverEnhanced,savedObjectsManagement,spaces,security,savedObjectsTagging,maps,lens,reporting,lists,encryptedSavedObjects,dashboardMode,beatsManagement,cloud,upgradeAssistant,snapshotRestore,fleet,indexManagement,rollup,remoteClusters,crossClusterReplication,indexLifecycleManagement,enterpriseSearch,ml,transform,ingestPipelines,eventLog,actions,alerts,triggersActionsUi,stackAlerts,securitySolution,case,infra,monitoring,logstash,apm,uptime]"}
{"type":"log","@timestamp":"2021-03-12T01:01:31-06:00","tags":["info","plugins","taskManager"],"pid":26931,"message":"TaskManager is identified by the Kibana UUID: 06152e09-f6ce-49e1-a3c9-0fcd5c3cc566"}
{"type":"log","@timestamp":"2021-03-12T01:01:32-06:00","tags":["warning","plugins","security","config"],"pid":26931,"message":"Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
{"type":"log","@timestamp":"2021-03-12T01:01:32-06:00","tags":["warning","plugins","security","config"],"pid":26931,"message":"Session cookies will be transmitted over insecure connections. This is not recommended."}
{"type":"log","@timestamp":"2021-03-12T01:01:32-06:00","tags":["warning","plugins","reporting","config"],"pid":26931,"message":"Generating a random key for xpack.reporting.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.reporting.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
{"type":"log","@timestamp":"2021-03-12T01:01:32-06:00","tags":["warning","plugins","reporting","config"],"pid":26931,"message":"Chromium sandbox provides an additional layer of protection, but is not supported for Linux Red Hat Linux 7.9 OS. Automatically setting 'xpack.reporting.capture.browser.chromium.disableSandbox: true'."}
{"type":"log","@timestamp":"2021-03-12T01:01:32-06:00","tags":["warning","plugins","encryptedSavedObjects","config"],"pid":26931,"message":"Generating a random key for xpack.encryptedSavedObjects.encryptionKey. To decrypt encrypted saved objects attributes after restart, please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
{"type":"log","@timestamp":"2021-03-12T01:01:32-06:00","tags":["warning","plugins","fleet"],"pid":26931,"message":"Fleet APIs are disabled because the Encrypted Saved Objects plugin uses an ephemeral encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
{"type":"log","@timestamp":"2021-03-12T01:01:32-06:00","tags":["warning","plugins","actions","actions"],"pid":26931,"message":"APIs are disabled because the Encrypted Saved Objects plugin uses an ephemeral encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
{"type":"log","@timestamp":"2021-03-12T01:01:32-06:00","tags":["warning","plugins","alerts","plugins","alerting"],"pid":26931,"message":"APIs are disabled because the Encrypted Saved Objects plugin uses an ephemeral encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
{"type":"log","@timestamp":"2021-03-12T01:01:32-06:00","tags":["info","plugins","monitoring","monitoring"],"pid":26931,"message":"config sourced from: production cluster"}
{"type":"log","@timestamp":"2021-03-12T01:01:33-06:00","tags":["info","savedobjects-service"],"pid":26931,"message":"Waiting until all Elasticsearch nodes are compatible with Kibana before starting saved objects migrations..."}
{"type":"log","@timestamp":"2021-03-12T01:01:33-06:00","tags":["info","savedobjects-service"],"pid":26931,"message":"Starting saved objects migrations"}
{"type":"log","@timestamp":"2021-03-12T01:01:33-06:00","tags":["info","savedobjects-service"],"pid":26931,"message":"Creating index .kibana_task_manager_1."}
{"type":"log","@timestamp":"2021-03-12T01:01:33-06:00","tags":["info","savedobjects-service"],"pid":26931,"message":"Creating index .kibana_1."}
{"type":"log","@timestamp":"2021-03-12T01:01:35-06:00","tags":["info","savedobjects-service"],"pid":26931,"message":"Pointing alias .kibana_task_manager to .kibana_task_manager_1."}
{"type":"log","@timestamp":"2021-03-12T01:01:35-06:00","tags":["info","savedobjects-service"],"pid":26931,"message":"Pointing alias .kibana to .kibana_1."}
{"type":"log","@timestamp":"2021-03-12T01:01:35-06:00","tags":["info","savedobjects-service"],"pid":26931,"message":"Finished in 1944ms."}
{"type":"log","@timestamp":"2021-03-12T01:01:35-06:00","tags":["info","savedobjects-service"],"pid":26931,"message":"Finished in 1947ms."}
{"type":"log","@timestamp":"2021-03-12T01:01:35-06:00","tags":["info","plugins-system"],"pid":26931,"message":"Starting [101] plugins: [taskManager,licensing,globalSearch,globalSearchProviders,code,usageCollection,xpackLegacy,telemetryCollectionManager,telemetry,telemetryCollectionXpack,kibanaUsageCollection,securityOss,newsfeed,mapsLegacy,kibanaLegacy,translations,share,legacyExport,embeddable,uiActionsEnhanced,esUiShared,expressions,charts,bfetch,data,home,observability,console,consoleExtensions,apmOss,searchprofiler,painlessLab,grokdebugger,management,indexPatternManagement,advancedSettings,fileUpload,savedObjects,visualizations,visTypeVislib,visTypeTimeseries,visTypeTimeseriesEnhanced,visTypeTimelion,features,licenseManagement,dataEnhanced,watcher,canvas,visTypeVega,visTypeTable,visTypeMetric,visTypeTagcloud,visTypeMarkdown,tileMap,regionMap,mapsOss,lensOss,inputControlVis,graph,timelion,dashboard,dashboardEnhanced,visualize,discover,discoverEnhanced,savedObjectsManagement,spaces,security,savedObjectsTagging,maps,lens,reporting,lists,encryptedSavedObjects,dashboardMode,beatsManagement,cloud,upgradeAssistant,snapshotRestore,fleet,indexManagement,rollup,remoteClusters,crossClusterReplication,indexLifecycleManagement,enterpriseSearch,ml,transform,ingestPipelines,eventLog,actions,alerts,triggersActionsUi,stackAlerts,securitySolution,case,infra,monitoring,logstash,apm,uptime]"}
{"type":"log","@timestamp":"2021-03-12T01:01:36-06:00","tags":["info","plugins","watcher"],"pid":26931,"message":"Your basic license does not support watcher. Please upgrade your license."}
{"type":"log","@timestamp":"2021-03-12T01:01:36-06:00","tags":["info","plugins","crossClusterReplication"],"pid":26931,"message":"Your basic license does not support crossClusterReplication. Please upgrade your license."}
{"type":"log","@timestamp":"2021-03-12T01:01:36-06:00","tags":["info","plugins","monitoring","monitoring","kibana-monitoring"],"pid":26931,"message":"Starting monitoring stats collection"}
{"type":"log","@timestamp":"2021-03-12T01:01:39-06:00","tags":["info","http","server","Kibana"],"pid":26931,"message":"http server running at http://0.0.0.0:5601"}
{"type":"log","@timestamp":"2021-03-12T01:01:39-06:00","tags":["listening","info"],"pid":26931,"message":"Server running at http://0.0.0.0:5601"}
{"type":"log","@timestamp":"2021-03-12T01:01:43-06:00","tags":["warning","plugins","reporting"],"pid":26931,"message":"Enabling the Chromium sandbox provides an additional layer of protection."}
{"type":"response","@timestamp":"2021-03-12T01:22:42-06:00","tags":[],"pid":26931,"method":"get","statusCode":302,"req":{"url":"/","method":"get","headers":{"accept-encoding":"gzip;q=1.0,deflate;q=0.6,identity;q=0.3","accept":"*/*","user-agent":"Ruby","connection":"close","host":"localhost:5601"},"remoteAddress":"127.0.0.1","userAgent":"Ruby"},"res":{"statusCode":302,"responseTime":105,"contentLength":9},"message":"GET / 302 105ms - 9.0B"}

I had shared my elasticsearch.yml and kibana.yml earlier and have made only the mentioned changes to them; do let me know if I need to repost them.
Thanks Stephen for all your guidance. :slight_smile: