I am trying to modify the Kibana source code, starting from the Kibana 5.0 snapshot. This is what I get when I run:
npm run elasticsearch
kibana@5.0.0-alpha5 elasticsearch /home/sharon/kibana
grunt esvm:dev:keepalive
(node:15401) fs: re-evaluating native module sources is not supported. If you are using the graceful-fs module, please update it to a more recent version.
(node:15401) fs: re-evaluating native module sources is not supported. If you are using the graceful-fs module, please update it to a more recent version.
Running "esvm:dev:keepalive" (esvm) task
starting up "dev" cluster
Keeping elasticsearch alive, to shutdown press command/control+c
INFO - - cluster - Downloading & installing from "master" branch.
INFO - - cluster - Installing plugins
INFO - - cluster - Starting 1 nodes
INFO - ? - ? - [2016-08-11 17:38:16,301][INFO ][node ] initializing ...
INFO - ApJKe4s - env - using [1] data paths, mounts [[/home (/dev/mapper/centos-home)]], net usable_space [46.2gb], net total_space [47.4gb], spins? [possibly], types [xfs]
INFO - ApJKe4s - env - heap size [1.9gb], compressed ordinary object pointers [true]
INFO - ApJKe4s - node - node name [ApJKe4s] derived from node ID; set [node.name] to override
INFO - ApJKe4s - node - version[5.0.0-alpha5-SNAPSHOT], pid[15497], build[227463c/2016-08-02T16:59:22.724Z], OS[Linux/3.10.0-327.13.1.el7.x86_64/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_11/25.11-b03]
INFO - ? - ? - [2016-08-11 17:38:17,643][INFO ][io.netty.util.internal.PlatformDependent] Your platform does not provide complete low-level API for accessing direct buffers reliably. Unless explicitly requested, heap buffer will always be preferred to avoid potential system unstability.
...
WARN - ApJKe4s - bootstrap - initial heap size [268435456] not equal to maximum heap size [2147483648]; this can cause resize pauses and prevents mlockall from locking the entire heap
WARN - ApJKe4s - bootstrap - max file descriptors [4096] for elasticsearch process likely too low, increase to at least [65536]
WARN - ApJKe4s - bootstrap - please set [discovery.zen.minimum_master_nodes] to a majority of the number of master eligible nodes in your cluster
INFO - ApJKe4s - cluster.service - new_master {ApJKe4s}{ApJKe4sxTOKkn88ADNx6Ug}{iV2QdNUMS7i9hwRP6EgVIw}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
INFO - ApJKe4s - http - publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}
INFO - ApJKe4s - node - started
Started 1 Elasticsearch nodes.
...
INFO - ApJKe4s - gateway - recovered [2] indices into cluster_state
INFO - ApJKe4s - cluster.routing.allocation - Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[logstash-0][0], [logstash-0][0]] ...]).
If I use curl to access this mocked Elasticsearch instance, it only responds over loopback: using localhost or 127.0.0.1 works, but using the same server's own IP address gives connection refused.
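For reference, these are roughly the requests I am making (<server-ip> is just a placeholder for the machine's actual address):

curl http://localhost:9200
curl http://127.0.0.1:9200
curl http://<server-ip>:9200

The first two return the usual cluster info JSON, while the last one is refused. The log above shows publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}, so I suspect the dev cluster is bound only to the loopback interface. Is there a way to make the esvm dev cluster listen on other interfaces as well, i.e. the equivalent of setting network.host: 0.0.0.0 in elasticsearch.yml?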