Elasticsearch is initializing

Hi,
My ES version is 2.3.0.
My Kibana version is 4.5.0.
I can start ES, but when I start Kibana it does not work correctly, and there is no log file.
My error:

[root@localhost kibana-4.5.0-linux-x64]# bin/kibana
  log   [10:07:57.945] [info][status][plugin:kibana] Status changed from uninitialized to green - Ready
  log   [10:07:58.097] [info][status][plugin:elasticsearch] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [10:07:58.101] [info][status][plugin:marvel] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [10:07:58.467] [info][status][plugin:sense] Status changed from uninitialized to green - Ready
  log   [10:07:58.512] [info][status][plugin:kbn_vislib_vis_types] Status changed from uninitialized to green - Ready
  log   [10:07:58.515] [info][status][plugin:markdown_vis] Status changed from uninitialized to green - Ready
  log   [10:07:58.519] [info][status][plugin:metric_vis] Status changed from uninitialized to green - Ready
  log   [10:07:58.578] [info][status][plugin:spyModes] Status changed from uninitialized to green - Ready
  log   [10:07:58.582] [info][status][plugin:statusPage] Status changed from uninitialized to green - Ready
  log   [10:07:58.650] [info][status][plugin:table_vis] Status changed from uninitialized to green - Ready
  log   [10:08:07.591] [info][listening] Server running at http://0.0.0.0:5601
  log   [10:08:07.909] [error][status][plugin:elasticsearch] Status changed from yellow to red - Elasticsearch is still initializing the kibana index.

What is the problem?

Check your ES status and logs to find out what is happening.
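For example (the log location depends on how ES was installed; these are the usual defaults):

curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'
tail -n 50 logs/elasticsearch.log                          # tarball install, from the ES home directory
tail -n 50 /var/log/elasticsearch/elasticsearch.log        # RPM/DEB package install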

And when I run this command:

curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'

result:

{
  "cluster_name" : "elasticsearch",
  "status" : "red",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 4,
  "unassigned_shards" : 52,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 13,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 612,
  "active_shards_percent_as_number" : 0.0
}
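To see which shards are stuck and what is still recovering, the cat APIs can be used (both endpoints exist in ES 2.x):

curl -XGET 'http://localhost:9200/_cat/shards?v'
curl -XGET 'http://localhost:9200/_cat/recovery?v'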

I think my license has expired. What can I do?

An expired trial license won't stop shards from being assigned.
However, you can check by removing the plugin(s) and then restarting.
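On ES 2.x and Kibana 4.x that would be something like the following (run from the respective install directories; the plugin names are taken from the modules/plugins line in your log):

bin/plugin remove marvel-agent
bin/plugin remove license
bin/kibana plugin --remove marvel

and then restart Elasticsearch and Kibana.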

I did, but it didn't help; I still get the same error.

I checked my ES log file on 29.04.2016:

# License will expire on [Wednesday, May 04, 2016]. If you have a new license, please update it.
# Otherwise, please reach out to your support contact.
# 
# Commercial plugins operate with reduced functionality on license expiration:
# - marvel
#  - The agent will stop collecting cluster and indices metrics
#  - The agent will stop to automatically clean up indices older than [marvel.history.duration]
[2016-04-29 09:11:35,789][INFO ][gateway                  ] [Sayge] recovered [17] indices into cluster_state
[2016-04-29 09:11:40,175][INFO ][cluster.metadata         ] [Sayge] [.marvel-es-1-2016.04.29] creating index, cause [auto(bulk api)], templates [.marvel-es-1], shards [1]/[1], mappings [shards, node, _default_, index_stats, index_recovery, cluster_state, cluster_stats, indices_stats, node_stats]
[2016-04-29 09:11:55,241][INFO ][cluster.metadata         ] [Sayge] [.marvel-es-1-2016.04.29] update_mapping [indices_stats]
[2016-04-29 09:11:55,518][INFO ][cluster.metadata         ] [Sayge] [.marvel-es-1-2016.04.29] update_mapping [node_stats]
[2016-04-29 09:11:55,638][INFO ][cluster.metadata         ] [Sayge] [.marvel-es-1-2016.04.29] update_mapping [cluster_stats]
[2016-04-29 09:12:16,375][INFO ][cluster.routing.allocation] [Sayge] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.marvel-es-1-2016.04.28][0]] ...]).
[2016-04-29 09:18:16,051][INFO ][node                     ] [Zartra] version[2.3.0], pid[770], build[8371be8/2016-03-29T07:54:48Z]
[2016-04-29 09:18:16,081][INFO ][node                     ] [Zartra] initializing ...
[2016-04-29 09:18:20,610][INFO ][plugins                  ] [Zartra] modules [reindex, lang-expression, lang-groovy], plugins [license, marvel-agent], sites []
[2016-04-29 09:18:21,079][INFO ][env                      ] [Zartra] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [11.4gb], net total_space [28.7gb], spins? [unknown], types [rootfs]
[2016-04-29 09:18:21,079][INFO ][env                      ] [Zartra] heap size [1015.6mb], compressed ordinary object pointers [true]
[2016-04-29 09:18:21,079][WARN ][env                      ] [Zartra] max file descriptors [65535] for elasticsearch process likely too low, consider increasing to at least [65536]
[2016-04-29 09:18:34,014][INFO ][node                     ] [Zartra] initialized
[2016-04-29 09:18:34,014][INFO ][node                     ] [Zartra] starting ...
[2016-04-29 09:18:34,610][INFO ][transport                ] [Zartra] publish_address {127.0.0.1:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}, {[::2]:9300}
[2016-04-29 09:18:34,637][INFO ][discovery                ] [Zartra] elasticsearch/JFnFl2ICTae1jI-rk_Ps6g
[2016-04-29 09:18:37,844][INFO ][cluster.service          ] [Zartra] new_master {Zartra}{JFnFl2ICTae1jI-rk_Ps6g}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2016-04-29 09:18:37,925][INFO ][http                     ] [Zartra] publish_address {127.0.0.1:9200}, bound_addresses {[::1]:9200}, {127.0.0.1:9200}, {[::2]:9200}
[2016-04-29 09:18:37,925][INFO ][node                     ] [Zartra] started
[2016-04-29 09:18:38,686][INFO ][license.plugin.core      ] [Zartra] license [8772c2d4-dcac-493f-8483-646d8306ea87] - valid
[2016-04-29 09:18:38,696][ERROR][license.plugin.core      ] [Zartra] 
#
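The license status can also be checked directly; the license plugin in ES 2.x exposes a _license endpoint:

curl -XGET 'http://localhost:9200/_license?pretty'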

What state is your cluster in? It looks to be yellow there.

Thank you. I removed all of them and installed them again. That solved the problem.
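For anyone hitting the same issue: on ES 2.x / Kibana 4.5 the reinstall looks roughly like this (per the Marvel 2.x install steps):

bin/plugin install license
bin/plugin install marvel-agent
bin/kibana plugin --install elasticsearch/marvel/latest

followed by a restart of both Elasticsearch and Kibana.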