Curl gets 'Kibana server is not ready yet'


(Ralf Kronen) #1

Hi,

when I run this curl command:
curl -XGET 'localhost:5601'
I get the answer 'Kibana server is not ready yet'.

curl -XGET 'localhost:9200/_cat/indices?v'

responds with:
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
red open .kibana_1 0UGV8n-MTPO1znprUjmV1Q 1 0

I deleted the .kibana_1 index and restarted Elasticsearch and Kibana, but I always get the same result. In the syslog I found the following entry:
Mar 18 16:18:52 riqata kibana[895]: {"type":"log","@timestamp":"2019-03-18T15:18:52Z","tags":["warning","migrations"],"pid":895,"message":"Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_1 and restarting Kibana."}

ps -ef | grep -i kibana

kibana 895 1 3 16:18 ? 00:00:22 /usr/share/kibana/bin/../node/bin/node --no-warnings --max-http-header-size=65536 /usr/share/kibana/bin/../src/cli -c /etc/kibana/kibana.yml
ralf 1404 944 0 16:29 pts/0 00:00:00 grep -i kibana

Does anybody have an idea?

Cheers
Ralf


#2

Hi @UnitFactory,

It seems you're hitting this issue. Would you mind going through the solutions explained there and in the Saved object migrations documentation?

Best,
Oleg


(Ralf Kronen) #3

Hi Oleg,

I read those issues and the documentation, but this is a new installation. I installed ES and Kibana on a new, blank server. Or did I misunderstand something?

Cheers
Ralf


(Tiago Costa) #4

Hi @UnitFactory,

Have you installed both Elasticsearch and Kibana at the latest version, 6.6.2, on a blank server? It is very strange that you're having this problem in that case. Are you sure you don't have another Kibana instance behind the same address, or one with access to your localhost Elasticsearch?

Cheers,


(Ralf Kronen) #5

Hi,

curl -XGET 'http://localhost:9200'
{
  "name" : "node-1",
  "cluster_name" : "my-cluster",
  "cluster_uuid" : "-p0ycCv1Q2aTnJ7VJ8nmuQ",
  "version" : {
    "number" : "6.6.2",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "3bd3e59",
    "build_date" : "2019-03-06T15:16:26.864148Z",
    "build_snapshot" : false,
    "lucene_version" : "7.6.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

and Kibana has the same version.

Ports:
netstat -tulpn | grep 5601
(No info could be read for "-p": geteuid()=1000 but you should be root.)
tcp 0 0 127.0.0.1:5601 0.0.0.0:* LISTEN -

sudo lsof -i -P -n | grep 5601
node        895        kibana   18u  IPv4  12892      0t0  TCP 127.0.0.1:5601 (LISTEN)

Service:
sudo systemctl status kibana

● kibana.service - Kibana
   Loaded: loaded (/etc/systemd/system/kibana.service; enabled)
   Active: active (running) since Mon 2019-03-18 21:05:00 CET; 3min 47s ago
 Main PID: 12807 (node)
   CGroup: /system.slice/kibana.service
           └─12807 /usr/share/kibana/bin/../node/bin/node --no-warnings --max-http-header-size=65536 /usr/share/kibana/bin/../src/cli -c /etc/kibana/kibana.yml

Mar 18 21:05:10 my.server kibana[12807]: {"type":"log","@timestamp":"2019-03-18T20:05:10Z","tags":["status","plugin:remote_clusters@6.6.2","info"],"pid":12807,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
Mar 18 21:05:10 my.server kibana[12807]: {"type":"log","@timestamp":"2019-03-18T20:05:10Z","tags":["status","plugin:cross_cluster_replication@6.6.2","info"],"pid":12807,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
Mar 18 21:05:10 my.server kibana[12807]: {"type":"log","@timestamp":"2019-03-18T20:05:10Z","tags":["info","monitoring-ui","kibana-monitoring"],"pid":12807,"message":"Starting monitoring stats collection"}
Mar 18 21:05:10 my.server kibana[12807]: {"type":"log","@timestamp":"2019-03-18T20:05:10Z","tags":["status","plugin:security@6.6.2","info"],"pid":12807,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
Mar 18 21:05:11 my.server kibana[12807]: {"type":"log","@timestamp":"2019-03-18T20:05:11Z","tags":["reporting","warning"],"pid":12807,"message":"Generating a random key for xpack.reporting.encryptionKey. To prevent pending reports from failing on restart, please set xpack.reporting.encryptionKey in kibana.yml"}
Mar 18 21:05:11 my.server kibana[12807]: {"type":"log","@timestamp":"2019-03-18T20:05:11Z","tags":["status","plugin:reporting@6.6.2","info"],"pid":12807,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
Mar 18 21:05:11 my.server kibana[12807]: {"type":"log","@timestamp":"2019-03-18T20:05:11Z","tags":["license","info","xpack"],"pid":12807,"message":"Imported license information from Elasticsearch for the [monitoring] cluster: mode: basic | status: active"}
Mar 18 21:05:12 my.server kibana[12807]: {"type":"log","@timestamp":"2019-03-18T20:05:12Z","tags":["reporting","browser-driver","warning"],"pid":12807,"message":"Enabling the Chromium sandbox provides an additional layer of protection."}
Mar 18 21:05:12 my.server kibana[12807]: {"type":"log","@timestamp":"2019-03-18T20:05:12Z","tags":["info","migrations"],"pid":12807,"message":"Creating index .kibana_1."}
Mar 18 21:05:12 my.server kibana[12807]: {"type":"log","@timestamp":"2019-03-18T20:05:12Z","tags":["warning","migrations"],"pid":12807,"message":"Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_1 and restarting Kibana."}

Cheers Ralf


(Tiago Costa) #6

Hi @UnitFactory,

I just downloaded Kibana and Elasticsearch 6.6.2 from our official page for a Mac, then started and initialised Elasticsearch and then Kibana. Everything worked fine!

If that is happening to you, the .kibana_1 index was somehow left in a bad state. Stop and kill every Kibana process you have running in the background, then delete the .kibana_1 index. Start Kibana and everything should be working.

curl -X DELETE "localhost:9200/.kibana_1"

and then

./bin/kibana to start it again, if you are in the Kibana root path.
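Since your ps and systemctl output show Kibana installed from the deb package and managed by systemd, the equivalent steps would look roughly like this (service name and port assumed to be the defaults):

```shell
# Assumed defaults: Elasticsearch on localhost:9200, Kibana installed
# from the deb package and managed by systemd.
ES_URL='localhost:9200'

# Stop Kibana so no instance is holding the migration in progress.
sudo systemctl stop kibana

# Delete the stuck saved-objects index.
curl -X DELETE "$ES_URL/.kibana_1"

# Start Kibana again; it recreates and migrates .kibana_1 on startup.
sudo systemctl start kibana
```

The important part is that Kibana is fully stopped before you delete the index; otherwise the running instance immediately recreates .kibana_1 and you hit the same migration warning again.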

Cheers
Tiago


(system) closed #7

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.