We had Kibana running, but a team member accidentally changed the size of our EC2 instance and Kibana went down with a BAD GATEWAY error.
Our first step was to restart the service:
sudo service kibana restart
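For completeness, this is one way to verify the service actually came back (assuming the default Kibana port 5601; on systemd hosts the equivalent of the first command is systemctl status kibana):
sudo service kibana status
curl -s localhost:5601/api/status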
With this we were able to bring it back up, but we aren't receiving new data, and all past data is unavailable.
I went to the 3 nodes of the cluster, ran curl -X GET "localhost:9200/_cluster/health?pretty" on each, and the status is green:
{
"cluster_name" : "elasticsearch-logs",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 3,
"number_of_data_nodes" : 3,
"active_primary_shards" : 1499,
"active_shards" : 2998,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 100.0
}
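Since Kibana keeps its own state in the .kibana index, an additional check (my assumption, using the default Elasticsearch port 9200) would be to look at that index directly:
curl -s "localhost:9200/_cat/indices/.kibana?v"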
So I went to the Kibana logs and searched for errors:
cat /var/log/kibana/kibana.stderr | grep -i -E "(error|warning)"
This message appears frequently. I'm assuming there's a service that isn't up, but I'm not sure; I'm basing that idea on this question:
"body":"{\"index\":{\"_type\":\"kibana_stats\"}}\n{\"kibana\":{\"uuid\":\"4a52783f-7d1c-404c-8c7f-2c2ccda9ab21\",\"name\":\"Kibana\",\"index\":\".kibana\",\"host\":\"0.0.0.0\",\"transport_address\":\"0.0.0.0:5601\",\"version\":\"6.5.1\",\"snapshot\":false,\"status\":\"red\"},\"cloud\":{\"name\":\"aws\",\"id\":\"i-007c06cb971c5bcad\",\"vm_type\":\"t3.micro\",\"region\":\"us-east-1\",\"zone\":\"us-east-1c\",\"metadata\":{\"marketplaceProductCodes\":null,\"version\":\"2017-09-30\",\"imageId\":\"ami-09479453c5cde9639\",\"pendingTime\":\"2019-03-11T14:30:32Z\",\"kernelId\":null,\"ramdiskId\":null,\"architecture\":\"x86_64\"}},\"usage\":{\"xpack\":{\"spaces\":{\"available\":false,\"enabled\":false}},\"infraops\":{\"last_24_hours\":{\"hits\":{\"infraops_hosts\":0,\"infraops_docker\":0,\"infraops_kubernetes\":0,\"logs\":0}}}}}\n",
"statusCode":503,
"response":"{\"error\":{\"root_cause\":[{\"type\":\"cluster_block_exception\",\"reason\":\"blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];\"}],\"type\":\"cluster_block_exception\",\"reason\":\"blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];\"},\"status\":503}"
}Unhandled rejection[
cluster_block_exception
]blocked by:[
SERVICE_UNAVAILABLE/1/state not recovered / initialized
];::{
"path":"/_xpack/monitoring/_bulk",
"query":{
"system_id":"kibana",
"system_api_version":"6",
"interval":"10000ms"
},
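Since the error is a cluster_block_exception, my understanding is that any blocks currently applied to the cluster can be listed with the cluster state API (again assuming Elasticsearch on localhost:9200):
curl -s "localhost:9200/_cluster/state/blocks?pretty"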