Kibana Error: 503 Service Unavailable

Hi, I'm new to Kibana. A client asked me to solve a problem with their Kibana setup.
They can't reach the web GUI; they get a 404 Not Found error from nginx.

I'm checking /var/log/messages, and this is what I see:

Sep 12 10:32:12 syslog kibana:

  {
    "type": "response",
    "@timestamp": "2019-09-12T13:32:12Z",
    "tags": [],
    "pid": 27714,
    "method": "get",
    "statusCode": 503,
    "req": {
      "url": "/app/kibana",
      "method": "get",
      "headers": {
        "host": "localhost:5601",
        "user-agent": "check_http/v2.1.1 (nagios-plugins 2.1.1)",
        "accept": "*/*",
        "x-forwarded-for": "172.16.220.231",
        "x-forwarded-host": "kibana.obiwan.company.com.py",
        "x-forwarded-server": "kibana.obiwan.customer.com.py",
        "connection": "Keep-Alive"
      },
      "remoteAddress": "127.0.0.1",
      "userAgent": "127.0.0.1"
    },
    "res": {
      "statusCode": 503,
      "responseTime": 10,
      "contentLength": 9
    },
    "message": "GET /app/kibana 503 10ms - 9.0B"
  }

This is running on CentOS Linux. It was working fine until yesterday.
I know this is very thin troubleshooting information, but I'm not familiar with this setup or this product.

I'd really appreciate any help.

Hmm, 503 is "Service Unavailable". Have you verified that Kibana is up and running? Could you provide the Kibana logs?
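For anyone following along, a quick way to check this on a systemd-based CentOS box might look like the following (assuming the service is named `kibana` and listens on the default port 5601):

```shell
# Is the Kibana service running? (service name assumed)
systemctl status kibana

# Kibana's own status endpoint; a healthy instance answers with HTTP 200.
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:5601/api/status

# Recent Kibana log lines via journald.
journalctl -u kibana --since "1 hour ago" | tail -n 50
```

If the service is up but `/api/status` is not returning 200, the problem usually sits behind Kibana, in Elasticsearch.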

Hi! Thanks for your answer. I spent some time troubleshooting and I think I've made some progress.
I suspect it's an Elasticsearch problem. After running curl -XGET localhost:9200/_cluster/allocation/explain?pretty, I found that there were unassigned shards.
This is a single-node environment. Every index has 5 shards and 1 replica (all primary shards are allocated, but the replica shards are not), and the explain API said a replica shard can never be allocated on the same node as its primary, so I set number_of_replicas to 0. But now I'm stuck again.
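For reference, the commands I ran looked roughly like this (applied to all indices via `_all`; adjust the index pattern if needed):

```shell
# Ask Elasticsearch why a shard is unassigned.
curl -XGET 'localhost:9200/_cluster/allocation/explain?pretty'

# Drop replicas to 0 on all indices; on a single-node cluster
# replica shards can never be assigned anyway.
curl -XPUT 'localhost:9200/_all/_settings' \
  -H 'Content-Type: application/json' \
  -d '{"index":{"number_of_replicas":0}}'
```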

This is the actual status:

{
  "index" : "un_indice",
  "shard" : 4,
  "primary" : true,
  "current_state" : "unassigned",
  "unassigned_info" : {
    "reason" : "CLUSTER_RECOVERED",
    "at" : "2019-09-12T17:34:12.622Z",
    "last_allocation_status" : "no"
  },
  "can_allocate" : "no",
  "allocate_explanation" : "cannot allocate because allocation is not permitted to any of the nodes",
  "node_allocation_decisions" : [
    {
      "node_id" : "identificador",
      "node_name" : "nombre",
      "transport_address" : "127.0.0.1:9300",
      "node_decision" : "no",
      "weight_ranking" : 1,
      "deciders" : [
        {
          "decider" : "enable",
          "decision" : "NO",
          "explanation" : "no allocations are allowed due to {}"
        }
      ]
    }
  ]
}
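As far as I can tell, the `enable` decider above reflects the `cluster.routing.allocation.enable` setting; if a previous maintenance or restart left it at `none`, inspecting and resetting it might look like this (a sketch, not verified against this cluster):

```shell
# Inspect cluster-level settings, including cluster.routing.allocation.enable.
curl -XGET 'localhost:9200/_cluster/settings?pretty&include_defaults=true'

# Reset the setting to its default ("all") by nulling it out,
# in both persistent and transient settings.
curl -XPUT 'localhost:9200/_cluster/settings' \
  -H 'Content-Type: application/json' \
  -d '{"persistent":{"cluster.routing.allocation.enable":null},"transient":{"cluster.routing.allocation.enable":null}}'
```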

This is the health command output:

{
  "cluster_name" : "elasticsearch",
  "status" : "red",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 13414,
  "active_shards" : 13414,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 1,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 99.9925456578457
}

Before setting number_of_replicas to 0, the health command showed 13414 unassigned shards.
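To pin down which index still holds the one unassigned primary, the `_cat` APIs can help (a minimal sketch):

```shell
# List unassigned shards with the reason they are unassigned.
curl -XGET 'localhost:9200/_cat/shards?v&h=index,shard,prirep,state,unassigned.reason' | grep UNASSIGNED

# List only the indices whose health is red.
curl -XGET 'localhost:9200/_cat/indices?v&health=red'
```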