Kibana server is not ready yet 7.6.1 in browser

Hello Juanjo sir,
Elasticsearch, Kibana, the Wazuh API, the Wazuh manager, and Filebeat are all running well,
but in the browser Kibana still shows "Kibana server is not ready yet". To solve this problem I already reinstalled the full OS; I am now using Ubuntu 18.04.

cat /var/log/elasticsearch/elasticsearch.log | grep -i -E "error|warn"

Output (the only match is the JVM arguments line; it matches because it contains "ErrorFile"):
[2020-03-16T03:59:12,934][INFO ][o.e.n.Node ] [node-1] JVM arguments [-Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.locale.providers=COMPAT, -Xms2g, -Xmx2g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Djava.io.tmpdir=/tmp/elasticsearch-13352531301192792164, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=/var/lib/elasticsearch, -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -XX:MaxDirectMemorySize=1073741824, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/etc/elasticsearch, -Des.distribution.flavor=default, -Des.distribution.type=deb, -Des.bundled_jdk=true]

When I try to run this command, the output is:
root@JARVICE:~# curl api_user:api_pass@api_url:55000/version
curl: (6) Could not resolve host: api_url
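Note: api_user, api_pass and api_url in that command are placeholders; curl cannot resolve "api_url" literally, so they have to be replaced with the real Wazuh API host and credentials first. A minimal sketch, assuming the API listens on the same machine and using hypothetical credentials foo/bar:

# hypothetical credentials and host -- substitute the values from your own Wazuh API configuration
curl -u foo:bar "http://localhost:55000/version"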

I also tried to check the indices:

root@JARVICE:~# curl "http://localhost:9200/_cat/indices?v"
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open .kibana_task_manager_1 p5selkBWQ2apfIwTF4D1WA 1 0 0 0 283b 283b
green open .apm-agent-configuration 8RYBT7AlTX6EBN-jz-o0vA 1 0 0 0 283b 283b
green open .kibana_1 FVPzhgDXRI-FDo3C0pGMNw 1 0 0 0 283b 283b
green open wazuh-alerts-3.x-2020.03.15 Pf9orgh4TFy4ofVdHFPT8w 3 0 551 0 990.5kb 990.5kb

After this output I deleted .kibana_1 and kibana* once, but when the system is restarted it generates the same error again.
I have already checked that all the versions match.
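For reference, deleting and regenerating the Kibana saved-objects indices is typically done along these lines; a minimal sketch assuming Elasticsearch on localhost:9200, not necessarily the exact commands used here:

# delete the Kibana saved-objects indices (Kibana rebuilds them on its next start)
curl -XDELETE "http://localhost:9200/.kibana_1"
curl -XDELETE "http://localhost:9200/.kibana*"
# restart Kibana so it recreates the indices
systemctl restart kibana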

Can you please share all of your elasticsearch and Kibana logs for when this happens?

This is the output of cat /var/log/elasticsearch/elasticsearch.log, part 1:

root@JARVICE:/home/hunt# cat /var/log/elasticsearch/elasticsearch.log
[2020-03-17T17:07:41,848][INFO ][o.e.e.NodeEnvironment ] [node-1] using [1] data paths, mounts [[/ (/dev/sda9)]], net usable_space [342.6gb], net total_space [371.2gb], types [ext4]
[2020-03-17T17:07:41,862][INFO ][o.e.e.NodeEnvironment ] [node-1] heap size [2.9gb], compressed ordinary object pointers [true]
[2020-03-17T17:07:42,506][INFO ][o.e.n.Node ] [node-1] node name [node-1], node ID [jeZkZeseTHG_XfyaYaaYGQ], cluster name [elasticsearch]
[2020-03-17T17:07:42,506][INFO ][o.e.n.Node ] [node-1] version[7.6.1], pid[988], build[default/deb/aa751e09be0a5072e8570670309b1f12348f023b/2020-02-29T00:15:25.529771Z], OS[Linux/4.15.0-20-generic/amd64], JVM[AdoptOpenJDK/OpenJDK 64-Bit Server VM/13.0.2/13.0.2+8]
[2020-03-17T17:07:42,507][INFO ][o.e.n.Node ] [node-1] JVM home [/usr/share/elasticsearch/jdk]
[2020-03-17T17:07:42,508][INFO ][o.e.n.Node ] [node-1] JVM arguments [-Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.locale.providers=COMPAT, -Xms3g, -Xmx3g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Djava.io.tmpdir=/tmp/elasticsearch-8091960405776381503, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=/var/lib/elasticsearch, -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -XX:MaxDirectMemorySize=1610612736, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/etc/elasticsearch, -Des.distribution.flavor=default, -Des.distribution.type=deb, -Des.bundled_jdk=true]
[2020-03-17T17:07:48,134][INFO ][o.e.p.PluginsService ] [node-1] loaded module [aggs-matrix-stats]
[2020-03-17T17:07:48,134][INFO ][o.e.p.PluginsService ] [node-1] loaded module [analysis-common]
[2020-03-17T17:07:48,135][INFO ][o.e.p.PluginsService ] [node-1] loaded module [flattened]
[2020-03-17T17:07:48,135][INFO ][o.e.p.PluginsService ] [node-1] loaded module [frozen-indices]
[2020-03-17T17:07:48,135][INFO ][o.e.p.PluginsService ] [node-1] loaded module [ingest-common]
[2020-03-17T17:07:48,135][INFO ][o.e.p.PluginsService ] [node-1] loaded module [ingest-geoip]
[2020-03-17T17:07:48,135][INFO ][o.e.p.PluginsService ] [node-1] loaded module [ingest-user-agent]
[2020-03-17T17:07:48,135][INFO ][o.e.p.PluginsService ] [node-1] loaded module [lang-expression]
[2020-03-17T17:07:48,136][INFO ][o.e.p.PluginsService ] [node-1] loaded module [lang-mustache]
[2020-03-17T17:07:48,136][INFO ][o.e.p.PluginsService ] [node-1] loaded module [lang-painless]
[2020-03-17T17:07:48,136][INFO ][o.e.p.PluginsService ] [node-1] loaded module [mapper-extras]
[2020-03-17T17:07:48,136][INFO ][o.e.p.PluginsService ] [node-1] loaded module [parent-join]
[2020-03-17T17:07:48,136][INFO ][o.e.p.PluginsService ] [node-1] loaded module [percolator]
[2020-03-17T17:07:48,136][INFO ][o.e.p.PluginsService ] [node-1] loaded module [rank-eval]
[2020-03-17T17:07:48,137][INFO ][o.e.p.PluginsService ] [node-1] loaded module [reindex]
[2020-03-17T17:07:48,137][INFO ][o.e.p.PluginsService ] [node-1] loaded module [repository-url]
[2020-03-17T17:07:48,137][INFO ][o.e.p.PluginsService ] [node-1] loaded module [search-business-rules]
[2020-03-17T17:07:48,137][INFO ][o.e.p.PluginsService ] [node-1] loaded module [spatial]
[2020-03-17T17:07:48,137][INFO ][o.e.p.PluginsService ] [node-1] loaded module [systemd]
[2020-03-17T17:07:48,137][INFO ][o.e.p.PluginsService ] [node-1] loaded module [transform]
[2020-03-17T17:07:48,138][INFO ][o.e.p.PluginsService ] [node-1] loaded module [transport-netty4]
[2020-03-17T17:07:48,138][INFO ][o.e.p.PluginsService ] [node-1] loaded module [vectors]
[2020-03-17T17:07:48,138][INFO ][o.e.p.PluginsService ] [node-1] loaded module [x-pack-analytics]
[2020-03-17T17:07:48,138][INFO ][o.e.p.PluginsService ] [node-1] loaded module [x-pack-ccr]
[2020-03-17T17:07:48,138][INFO ][o.e.p.PluginsService ] [node-1] loaded module [x-pack-core]
[2020-03-17T17:07:48,138][INFO ][o.e.p.PluginsService ] [node-1] loaded module [x-pack-deprecation]
[2020-03-17T17:07:48,138][INFO ][o.e.p.PluginsService ] [node-1] loaded module [x-pack-enrich]
[2020-03-17T17:07:48,139][INFO ][o.e.p.PluginsService ] [node-1] loaded module [x-pack-graph]
[2020-03-17T17:07:48,139][INFO ][o.e.p.PluginsService ] [node-1] loaded module [x-pack-ilm]
[2020-03-17T17:07:48,139][INFO ][o.e.p.PluginsService ] [node-1] loaded module [x-pack-logstash]
[2020-03-17T17:07:48,139][INFO ][o.e.p.PluginsService ] [node-1] loaded module [x-pack-ml]
[2020-03-17T17:07:48,139][INFO ][o.e.p.PluginsService ] [node-1] loaded module [x-pack-monitoring]
[2020-03-17T17:07:48,139][INFO ][o.e.p.PluginsService ] [node-1] loaded module [x-pack-rollup]
[2020-03-17T17:07:48,139][INFO ][o.e.p.PluginsService ] [node-1] loaded module [x-pack-security]
[2020-03-17T17:07:48,140][INFO ][o.e.p.PluginsService ] [node-1] loaded module [x-pack-sql]
[2020-03-17T17:07:48,140][INFO ][o.e.p.PluginsService ] [node-1] loaded module [x-pack-voting-only-node]
[2020-03-17T17:07:48,140][INFO ][o.e.p.PluginsService ] [node-1] loaded module [x-pack-watcher]
[2020-03-17T17:07:48,140][INFO ][o.e.p.PluginsService ] [node-1] no plugins loaded
[2020-03-17T17:07:56,921][INFO ][o.e.x.s.a.s.FileRolesStore] [node-1] parsed [0] roles from file [/etc/elasticsearch/roles.yml]

This is the output of cat /var/log/elasticsearch/elasticsearch.log, part 2:

[2020-03-17T17:07:56,921][INFO ][o.e.x.s.a.s.FileRolesStore] [node-1] parsed [0] roles from file [/etc/elasticsearch/roles.yml]
[2020-03-17T17:07:58,079][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [node-1] [controller/1732] [Main.cc@110] controller (64 bit): Version 7.6.1 (Build 6eb6e036390036) Copyright (c) 2020 Elasticsearch BV
[2020-03-17T17:07:59,904][DEBUG][o.e.a.ActionModule ] [node-1] Using REST wrapper from plugin org.elasticsearch.xpack.security.Security
[2020-03-17T17:08:01,010][INFO ][o.e.d.DiscoveryModule ] [node-1] using discovery type [zen] and seed hosts providers [settings]
[2020-03-17T17:08:02,760][INFO ][o.e.n.Node ] [node-1] initialized
[2020-03-17T17:08:02,761][INFO ][o.e.n.Node ] [node-1] starting ...
[2020-03-17T17:08:03,316][WARN ][i.n.u.i.MacAddressUtil ] [node-1] Failed to find a usable hardware address from the network interfaces; using random bytes: 5e:95:19:2a:42:65:47:95
[2020-03-17T17:08:04,049][INFO ][o.e.t.TransportService ] [node-1] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}
[2020-03-17T17:08:12,738][INFO ][o.e.c.c.Coordinator ] [node-1] cluster UUID [k5TSkJ3uTQKJZOjjoO3H4w]
[2020-03-17T17:08:18,823][INFO ][o.e.c.c.JoinHelper ] [node-1] failed to join {node-1}{jeZkZeseTHG_XfyaYaaYGQ}{r3K8ic26Tu6Kh1_t462Okw}{127.0.0.1}{127.0.0.1:9300}{dilm}{ml.machine_memory=8244793344, xpack.installed=true, ml.max_open_jobs=20} with JoinRequest{sourceNode={node-1}{jeZkZeseTHG_XfyaYaaYGQ}{r3K8ic26Tu6Kh1_t462Okw}{127.0.0.1}{127.0.0.1:9300}{dilm}{ml.machine_memory=8244793344, xpack.installed=true, ml.max_open_jobs=20}, optionalJoin=Optional[Join{term=9, lastAcceptedTerm=8, lastAcceptedVersion=145, sourceNode={node-1}{jeZkZeseTHG_XfyaYaaYGQ}{r3K8ic26Tu6Kh1_t462Okw}{127.0.0.1}{127.0.0.1:9300}{dilm}{ml.machine_memory=8244793344, xpack.installed=true, ml.max_open_jobs=20}, targetNode={node-1}{jeZkZeseTHG_XfyaYaaYGQ}{r3K8ic26Tu6Kh1_t462Okw}{127.0.0.1}{127.0.0.1:9300}{dilm}{ml.machine_memory=8244793344, xpack.installed=true, ml.max_open_jobs=20}}]}
org.elasticsearch.transport.RemoteTransportException: [node-1][127.0.0.1:9300][internal:cluster/coordination/join]
Caused by: org.elasticsearch.cluster.coordination.CoordinationStateRejectedException: incoming term 9 does not match current term 10
at org.elasticsearch.cluster.coordination.CoordinationState.handleJoin(CoordinationState.java:225) ~[elasticsearch-7.6.1.jar:7.6.1]
at org.elasticsearch.cluster.coordination.Coordinator.handleJoin(Coordinator.java:980) ~[elasticsearch-7.6.1.jar:7.6.1]
at java.util.Optional.ifPresent(Optional.java:176) ~[?:?]
at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:525) ~[elasticsearch-7.6.1.jar:7.6.1]
at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$7(Coordinator.java:491) ~[elasticsearch-7.6.1.jar:7.6.1]
at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) [elasticsearch-7.6.1.jar:7.6.1]
at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:368) [elasticsearch-7.6.1.jar:7.6.1]
at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:355) [elasticsearch-7.6.1.jar:7.6.1]
at org.elasticsearch.cluster.coordination.Coordinator.handleJoinRequest(Coordinator.java:478) [elasticsearch-7.6.1.jar:7.6.1]
at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$0(JoinHelper.java:125) [elasticsearch-7.6.1.jar:7.6.1]
at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler$1.doRun(SecurityServerTransportInterceptor.java:257) [x-pack-security-7.6.1.jar:7.6.1]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.6.1.jar:7.6.1]
at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler.messageReceived(SecurityServerTransportInterceptor.java:315) [x-pack-security-7.6.1.jar:7.6.1]
at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:63) [elasticsearch-7.6.1.jar:7.6.1]
at org.elasticsearch.transport.TransportService$7.doRun(TransportService.java:750) [elasticsearch-7.6.1.jar:7.6.1]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:692) [elasticsearch-7.6.1.jar:7.6.1]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.6.1.jar:7.6.1]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:830) [?:?]
[2020-03-17T17:08:18,849][INFO ][o.e.c.s.MasterService ] [node-1] elected-as-master ([1] nodes joined)[{node-1}{jeZkZeseTHG_XfyaYaaYGQ}{r3K8ic26Tu6Kh1_t462Okw}{127.0.0.1}{127.0.0.1:9300}{dilm}{ml.machine_memory=8244793344, xpack.installed=true, ml.max_open_jobs=20} elect leader, BECOME_MASTER_TASK, FINISH_ELECTION], term: 10, version: 146, delta: master node changed {previous , current [{node-1}{jeZkZeseTHG_XfyaYaaYGQ}{r3K8ic26Tu6Kh1_t462Okw}{127.0.0.1}{127.0.0.1:9300}{dilm}{ml.machine_memory=8244793344, xpack.installed=true, ml.max_open_jobs=20}]}
[2020-03-17T17:08:23,852][INFO ][o.e.c.s.ClusterApplierService] [node-1] master node changed {previous , current [{node-1}{jeZkZeseTHG_XfyaYaaYGQ}{r3K8ic26Tu6Kh1_t462Okw}{127.0.0.1}{127.0.0.1:9300}{dilm}{ml.machine_memory=8244793344, xpack.installed=true, ml.max_open_jobs=20}]}, term: 10, version: 146, reason: Publication{term=10, version=146}
[2020-03-17T17:08:24,214][INFO ][o.e.h.AbstractHttpServerTransport] [node-1] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}
[2020-03-17T17:08:24,214][INFO ][o.e.n.Node ] [node-1] started
[2020-03-17T17:08:26,934][INFO ][o.e.l.LicenseService ] [node-1] license [73a6222b-11e8-41d5-b479-421883f6a673] mode [basic] - valid
[2020-03-17T17:08:26,935][INFO ][o.e.x.s.s.SecurityStatusChangeListener] [node-1] Active license is now [BASIC]; Security is disabled
[2020-03-17T17:08:26,945][INFO ][o.e.g.GatewayService ] [node-1] recovered [1] indices into cluster_state
[2020-03-17T17:08:27,647][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-1] [.kibana_task_manager_1] creating index, cause [api], templates , shards [1]/[1], mappings [_doc]
[2020-03-17T17:08:27,651][INFO ][o.e.c.r.a.AllocationService] [node-1] updating number_of_replicas to [0] for indices [.kibana_task_manager_1]
[2020-03-17T17:08:29,942][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-1] [.kibana_1] creating index, cause [api], templates , shards [1]/[1], mappings [_doc]
[2020-03-17T17:08:29,943][INFO ][o.e.c.r.a.AllocationService] [node-1] updating number_of_replicas to [0] for indices [.kibana_1]
[2020-03-17T17:08:34,522][INFO ][o.e.c.r.a.AllocationService] [node-1] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[wazuh-alerts-3.x-2020.03.16][1], [wazuh-alerts-3.x-2020.03.16][2], [wazuh-alerts-3.x-2020.03.16][0]]]).
[2020-03-17T17:08:47,927][INFO ][o.e.c.r.a.AllocationService] [node-1] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.kibana_1][0]]]).
[2020-03-17T17:09:00,722][WARN ][o.e.g.PersistedClusterStateService] [node-1] writing cluster state took [12810ms] which is above the warn threshold of [10s]; wrote global metadata [false] and metadata for [1] indices and skipped [2] unchanged indices
[2020-03-17T17:09:00,722][INFO ][o.e.c.c.C.CoordinatorPublication] [node-1] after [12.8s] publication of cluster state version [155] is still waiting for {node-1}{jeZkZeseTHG_XfyaYaaYGQ}{r3K8ic26Tu6Kh1_t462Okw}{127.0.0.1}{127.0.0.1:9300}{dilm}{ml.machine_memory=8244793344, xpack.installed=true, ml.max_open_jobs=20} [SENT_PUBLISH_REQUEST]
[2020-03-17T17:09:09,901][INFO ][o.e.c.m.MetaDataIndexTemplateService] [node-1] adding template [wazuh] for index patterns [wazuh-alerts-3.x-, wazuh-archives-3.x-]
[2020-03-17T17:09:10,567][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-1] [wazuh-alerts-3.x-2020.03.17] creating index, cause [auto(bulk api)], templates [wazuh], shards [3]/[0], mappings [_doc]
[2020-03-17T17:09:14,059][INFO ][o.e.c.r.a.AllocationService] [node-1] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[wazuh-alerts-3.x-2020.03.17][2], [wazuh-alerts-3.x-2020.03.17][0]]]).
[2020-03-17T17:25:34,102][INFO ][o.e.c.m.MetaDataMappingService] [node-1] [wazuh-alerts-3.x-2020.03.17/36wfMY0tTk-gaJVdM77doA] update_mapping [_doc]

Sir, when I run this command:
cat /usr/share/kibana/optimize/wazuh-logs/wazuhapp.log | grep -i -E "error|warn"
the output is:
cat: /usr/share/kibana/optimize/wazuh-logs/wazuhapp.log: No such file or directory
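The Wazuh app log path can differ between plugin versions, so if that file does not exist it may simply live elsewhere; a minimal sketch for locating it, assuming the plugin is installed under /usr/share/kibana:

# search the Kibana install directory for any Wazuh app log file
find /usr/share/kibana -name "wazuhapp*.log" 2>/dev/null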

And when I ran this command:
root@JARVICE:/home/hunt# curl 127.0.0.1:9200/_cat/indices/
""""""""
the output is:
green open .kibana_task_manager_1 eaZ46z7lQJ-KfHIM7q1FtA 1 0 0 0 283b 283b
green open wazuh-alerts-3.x-2020.03.17 36wfMY0tTk-gaJVdM77doA 3 0 47 0 285.6kb 285.6kb
green open wazuh-alerts-3.x-2020.03.16 rspSTwA-RUSEe_w8aD8dPw 3 0 2 0 14kb 14kb
green open .kibana_1 yK8LVN5CRS-cFY5BQpJLfw 1 0 0 0 283b 283b

And when I try to delete kibana* or .kibana_1, the problem is not solved; it generates the same indices again after a restart.

curl api_user:api_pass@api_url:55000/version

The output is:
curl: (6) Could not resolve host: api_url

Without any Kibana logs it's hard to say why the Kibana server is not ready.

OK sir, so what commands should I run, or what else can I do, to collect the Kibana logs?
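A minimal sketch for collecting them, assuming Kibana was installed from the official deb package on Ubuntu 18.04 and therefore logs to the systemd journal by default:

# show recent Kibana service output and keep only errors and warnings
journalctl -u kibana.service --since "1 hour ago" | grep -i -E "error|warn"
# or dump the full Kibana log since boot into a file
journalctl -u kibana.service -b > kibana.log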

