Hi,
I need help: I'm not able to access Kibana in the browser.
All containers (Kibana, Elasticsearch, and Logstash, as an ELK stack) are running.
docker logs KIBANA-CONTAINER
{"type":"log","@timestamp":"2022-09-27T13:52:24Z","tags":["info","optimize"],"pid":1,"message":"Optimizing and caching bundles for elastalert-kibana-plugin, kibana, stateSessionStorageRedirect, status_page and timelion. This may take a few minutes"}
Browserslist: caniuse-lite is outdated. Please run next command `npm update caniuse-lite browserslist`
{"type":"log","@timestamp":"2022-09-27T13:52:42Z","tags":["info","optimize"],"pid":1,"message":"Optimization of bundles for elastalert-kibana-plugin, kibana, stateSessionStorageRedirect, status_page and timelion complete in 18.00 seconds"}
{"type":"log","@timestamp":"2022-09-27T13:52:42Z","tags":["status","plugin:kibana@undefined","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2022-09-27T13:52:42Z","tags":["status","plugin:elasticsearch@undefined","info"],"pid":1,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2022-09-27T13:52:42Z","tags":["status","plugin:elastalert-kibana-plugin@1.0.4","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2022-09-27T13:52:42Z","tags":["status","plugin:apm_oss@undefined","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2022-09-27T13:52:42Z","tags":["status","plugin:console@undefined","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2022-09-27T13:52:42Z","tags":["status","plugin:interpreter@undefined","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2022-09-27T13:52:42Z","tags":["status","plugin:metrics@undefined","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2022-09-27T13:52:42Z","tags":["status","plugin:tile_map@undefined","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2022-09-27T13:52:43Z","tags":["status","plugin:timelion@undefined","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2022-09-27T13:52:43Z","tags":["status","plugin:elasticsearch@undefined","info"],"pid":1,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","@timestamp":"2022-09-27T13:52:43Z","tags":["listening","info"],"pid":1,"message":"Server running at http://0.0.0.0:5601"}
Inside the Elasticsearch container:
curl -XGET localhost:9200/_cluster/allocation/explain?pretty
{
  "index" : "logstash-2022.09.26-000001",
  "shard" : 0,
  "primary" : false,
  "current_state" : "unassigned",
  "unassigned_info" : {
    "reason" : "CLUSTER_RECOVERED",
    "at" : "2022-09-27T13:36:12.903Z",
    "last_allocation_status" : "no_attempt"
  },
  "can_allocate" : "no",
  "allocate_explanation" : "cannot allocate because allocation is not permitted to any of the nodes",
  "node_allocation_decisions" : [
    {
      "node_id" : "y_JNQazxRmCxgT-5jiE2Rw",
      "node_name" : "d3e699a3b689",
      "transport_address" : "10.0.0.182:9300",
      "node_attributes" : {
        "ml.machine_memory" : "32877465600",
        "xpack.installed" : "true",
        "ml.max_open_jobs" : "20"
      },
      "node_decision" : "no",
      "deciders" : [
        {
          "decider" : "same_shard",
          "decision" : "NO",
          "explanation" : "the shard cannot be allocated to the same node on which a copy of the shard already exists [[logstash-2022.09.26-000001][0], node[y_JNQazxRmCxgT-5jiE2Rw], [P], s[STARTED], a[id=_i_8ZtNjQpKI4tgG8wzlYg]]"
        }
      ]
    }
  ]
}
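If I read this right, the unassigned shard is just a replica that can never be allocated on a single-node cluster, since the primary already lives on the only node; that keeps the cluster yellow but, as far as I know, should not by itself block Kibana. A sketch of how the replicas could be dropped to clear it (the logstash-* index pattern is my assumption):

curl -XPUT "localhost:9200/logstash-*/_settings" -H 'Content-Type: application/json' -d '{"index":{"number_of_replicas":0}}'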
Here is the kibana.yml file inside the config folder:
#
server.name: kibana
server.host: "0.0.0.0"
#server.host: "0"
#elasticsearch.url: http://elasticsearch:9200
#elasticsearch.hosts: http://elasticsearch:9200
elasticsearch.hosts: http://nginx
#elasticsearch.username: "elastic"
elasticsearch.username: "test"
elasticsearch.password: "xxxxx"
#elastalert-kibana-plugin.serverHost: 123.0.0.1
#elastalert-kibana-plugin.serverPort: 9000
#timeout= 90000ms
#elasticsearch.timeout: "90000ms"
#readonly:
# cluster:
# - cluster:monitor/nodes/info
# - cluster:monitor/health
# indices:
# '*':
# privileges: indices:test/mappings/fields/get, indices:test/validate/query, indices:data/read/search, indices:data/read/msearch, indices:data/read/field_stats, indices:test/get
# '.kibana':
# privileges: indices:test/exists, indices:test/mappings/fields/get, indices:test/refresh, indices:test/validate/query, indices:data/read/get, indices:data/read/mget,>
#elastalert.enabled:
elastalert-kibana-plugin.serverHost: elastalert
elastalert-kibana-plugin.serverPort: 3030
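Note that elasticsearch.hosts points at an nginx proxy rather than at Elasticsearch directly. To rule the proxy out, Kibana could temporarily be pointed straight at the Elasticsearch container (sketch; assumes the service is reachable as elasticsearch on the Docker network, as in the commented-out line above):

elasticsearch.hosts: http://elasticsearch:9200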
Here are the Elasticsearch container logs:
{"type": "server", "timestamp": "2022-09-27T13:36:07,311+0000", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "d3e699a3b689", "message": "loaded module [x-pack-sql]" }
{"type": "server", "timestamp": "2022-09-27T13:36:07,312+0000", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "d3e699a3b689", "message": "loaded module [x-pack-watcher]" }
{"type": "server", "timestamp": "2022-09-27T13:36:07,312+0000", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "d3e699a3b689", "message": "no plugins loaded" }
ype": "deprecation", "timestamp": "2022-09-27T13:36:09,081+0000", "level": "WARN", "component": "o.e.d.c.s.Settings", "cluster.name": "docker-cluster", "node.name": "d3e699a3b689", "message": "[discovery.zen.minimum_master_nodes] setting was deprecated in Elasticsearch and will be removed in a future release! See the breaking changes documentation for the next major version." }
{"type": "server", "timestamp": "2022-09-27T13:36:10,248+0000", "level": "INFO", "component": "o.e.x.s.a.s.FileRolesStore", "cluster.name": "docker-cluster", "node.name": "d3e699a3b689", "message": "parsed [0] roles from file [/usr/share/elasticsearch/config/roles.yml]" }
{"type": "server", "timestamp": "2022-09-27T13:36:10,922+0000", "level": "INFO", "component": "o.e.x.m.p.l.CppLogMessageHandler", "cluster.name": "docker-cluster", "node.name": "d3e699a3b689", "message": "[controller/365] [Main.cc@109] controller (64 bit): Version 7.0.1 (Build 6a88928693d862) Copyright (c) 2019 Elasticsearch BV" }
{"type": "server", "timestamp": "2022-09-27T13:36:11,352+0000", "level": "DEBUG", "component": "o.e.a.ActionModule", "cluster.name": "docker-cluster", "node.name": "d3e699a3b689", "message": "Using REST wrapper from plugin org.elasticsearch.xpack.security.Security" }
{"type": "server", "timestamp": "2022-09-27T13:36:11,656+0000", "level": "INFO", "component": "o.e.d.DiscoveryModule", "cluster.name": "docker-cluster", "node.name": "d3e699a3b689", "message": "using discovery type [single-node] and seed hosts providers [settings]" }
{"type": "server", "timestamp": "2022-09-27T13:36:12,431+0000", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "d3e699a3b689", "message": "initialized" }
{"type": "server", "timestamp": "2022-09-27T13:36:12,431+0000", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "d3e699a3b689", "message": "starting ..." }
{"type": "server", "timestamp": "2022-09-27T13:36:12,606+0000", "level": "INFO", "component": "o.e.t.TransportService", "cluster.name": "docker-cluster", "node.name": "d3e699a3b689", "message": "publish_address {10.0.0.182:9300}, bound_addresses {0.0.0.0:9300}" }
{"type": "server", "timestamp": "2022-09-27T13:36:12,773+0000", "level": "INFO", "component": "o.e.c.s.MasterService", "cluster.name": "docker-cluster", "node.name": "d3e699a3b689", "message": "elected-as-master ([1] nodes joined)[{d3e699a3b689}{y_JNQazxRmCxgT-5jiE2Rw}{2SkuEQD-QDmOZSCBR9jhrQ}{10.0.0.182}{10.0.0.182:9300}{ml.machine_memory=32877465600, xpack.installed=true, ml.max_open_jobs=20} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 11, version: 81, reason: master node changed {previous [], current [{d3e699a3b689}{y_JNQazxRmCxgT-5jiE2Rw}{2SkuEQD-QDmOZSCBR9jhrQ}{10.0.0.182}{10.0.0.182:9300}{ml.machine_memory=32877465600, xpack.installed=true, ml.max_open_jobs=20}]}" }
{"type": "server", "timestamp": "2022-09-27T13:36:12,869+0000", "level": "INFO", "component": "o.e.c.s.ClusterApplierService", "cluster.name": "docker-cluster", "node.name": "d3e699a3b689", "message": "master node changed {previous [], current [{d3e699a3b689}{y_JNQazxRmCxgT-5jiE2Rw}{2SkuEQD-QDmOZSCBR9jhrQ}{10.0.0.182}{10.0.0.182:9300}{ml.machine_memory=32877465600, xpack.installed=true, ml.max_open_jobs=20}]}, term: 11, version: 81, reason: Publication{term=11, version=81}" }
{"type": "server", "timestamp": "2022-09-27T13:36:12,901+0000", "level": "INFO", "component": "o.e.h.AbstractHttpServerTransport", "cluster.name": "docker-cluster", "node.name": "d3e699a3b689", "cluster.uuid": "tNQ2MNtaQZue361eZfYlCA", "node.id": "y_JNQazxRmCxgT-5jiE2Rw", "message": "publish_address {10.0.0.182:9200}, bound_addresses {0.0.0.0:9200}" }
{"type": "server", "timestamp": "2022-09-27T13:36:12,902+0000", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "d3e699a3b689", "cluster.uuid": "tNQ2MNtaQZue361eZfYlCA", "node.id": "y_JNQazxRmCxgT-5jiE2Rw", "message": "started" }
{"type": "server", "timestamp": "2022-09-27T13:36:13,098+0000", "level": "WARN", "component": "o.e.x.s.a.s.m.NativeRoleMappingStore", "cluster.name": "docker-cluster", "node.name": "d3e699a3b689", "cluster.uuid": "tNQ2MNtaQZue361eZfYlCA", "node.id": "y_JNQazxRmCxgT-5jiE2Rw", "message": "Failed to clear cache for realms [[]]" }
{"type": "server", "timestamp": "2022-09-27T13:36:13,140+0000", "level": "INFO", "component": "o.e.l.LicenseService", "cluster.name": "docker-cluster", "node.name": "d3e699a3b689", "cluster.uuid": "tNQ2MNtaQZue361eZfYlCA", "node.id": "y_JNQazxRmCxgT-5jiE2Rw", "message": "license [1914ca97-6b8e-463d-bfd8-da570e22f2a6] mode [basic] - valid" }
{"type": "server", "timestamp": "2022-09-27T13:36:13,161+0000", "level": "INFO", "component": "o.e.g.GatewayService", "cluster.name": "docker-cluster", "node.name": "d3e699a3b689", "cluster.uuid": "tNQ2MNtaQZue361eZfYlCA", "node.id": "y_JNQazxRmCxgT-5jiE2Rw", "message": "recovered [2] indices into cluster_state" }
{"type": "server", "timestamp": "2022-09-27T13:36:13,691+0000", "level": "INFO", "component": "o.e.c.r.a.AllocationService", "cluster.name": "docker-cluster", "node.name": "d3e699a3b689", "cluster.uuid": "tNQ2MNtaQZue361eZfYlCA", "node.id": "y_JNQazxRmCxgT-5jiE2Rw", "message": "Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[logstash-2022.09.26-000001][0]] ...])." }
{"type": "server", "timestamp": "2022-09-27T14:01:31,625+0000", "level": "DEBUG", "component": "o.e.a.a.c.a.TransportClusterAllocationExplainAction", "cluster.name": "docker-cluster", "node.name": "d3e699a3b689", "cluster.uuid": "tNQ2MNtaQZue361eZfYlCA", "node.id": "y_JNQazxRmCxgT-5jiE2Rw", "message": "explaining the allocation for [ClusterAllocationExplainRequest[useAnyUnassignedShard=true,includeYesDecisions?=false], found shard [[logstash-2022.09.26-000001][0], node[null], [R], recovery_source[peer recovery], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2022-09-27T13:36:12.903Z], delayed=false, allocation_status[no_attempt]]]" }
But in the browser, at https://IP-ADDRESS:5601/, I'm getting:
503 Service Unavailable
No server is available to handle this request.
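Since Kibana's own logs say the server is running, this 503 page seems to come from whatever proxy is terminating HTTPS on port 5601, not from Kibana itself. A sketch of how the proxy-to-Kibana path could be checked (NGINX-CONTAINER and the kibana hostname are placeholders for my actual names, and this assumes curl is available in that container):

docker exec -it NGINX-CONTAINER curl -v http://kibana:5601/api/status

Any idea what I'm missing?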