... ES logs continued (bound to localhost):
[2016-06-20 14:59:02,820][INFO ][env ] [varun] using [1] data paths, mounts [[/ (/dev/sda6)]], net usable_space [65.8gb], net total_space [88.5gb], spins? [possibly], types [ext4]
[2016-06-20 14:59:02,820][INFO ][env ] [varun] heap size [990.7mb], compressed ordinary object pointers [true]
[2016-06-20 14:59:04,570][INFO ][node ] [varun] initialized
[2016-06-20 14:59:04,570][INFO ][node ] [varun] starting ...
[2016-06-20 14:59:04,629][INFO ][transport ] [varun] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}, {[::1]:9300}
[2016-06-20 14:59:04,634][INFO ][discovery ] [varun] elasticsearch/pYQs1HlhTKiZ6JDMGUm6Bw
[2016-06-20 14:59:07,692][INFO ][cluster.service ] [varun] new_master {varun}{pYQs1HlhTKiZ6JDMGUm6Bw}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2016-06-20 14:59:07,708][INFO ][http ] [varun] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}, {[::1]:9200}
[2016-06-20 14:59:07,708][INFO ][node ] [varun] started
[2016-06-20 14:59:07,894][INFO ][gateway ] [varun] recovered [1] indices into cluster_state
[2016-06-20 14:59:08,478][INFO ][cluster.routing.allocation] [varun] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.kibana][0]] ...]).
^C[2016-06-20 15:11:45,416][INFO ][node ] [varun] stopping ...
^C[2016-06-20 15:11:45,597][INFO ][node ] [varun] stopped
[2016-06-20 15:11:45,598][INFO ][node ] [varun] closing ...
[2016-06-20 15:11:45,661][INFO ][node ] [varun] closed
varun@varun-SVE14113ENW:~/elasticsearch-2.3.2/bin$ ./elasticsearch
[2016-06-20 15:12:49,245][WARN ][bootstrap ] Unable to lock JVM Memory: error=12,reason=Cannot allocate memory
[2016-06-20 15:12:49,245][WARN ][bootstrap ] This can result in part of the JVM being swapped out.
[2016-06-20 15:12:49,246][WARN ][bootstrap ] Increase RLIMIT_MEMLOCK, soft limit: 65536, hard limit: 65536
[2016-06-20 15:12:49,246][WARN ][bootstrap ] These can be adjusted by modifying /etc/security/limits.conf, for example:
# allow user 'varun' mlockall
varun soft memlock unlimited
varun hard memlock unlimited
[2016-06-20 15:12:49,246][WARN ][bootstrap ] If you are logged in interactively, you will have to re-login for the new limits to take effect.
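The "Unable to lock JVM Memory" warning typically means mlockall is enabled (bootstrap.mlockall in ES 2.x) but the memlock ulimit (65536 here) is too small, so the heap cannot be locked and parts of the JVM may be swapped out. A minimal sketch of the fix, assuming the node runs as the user 'varun' shown in the prompt above:

  # Append the limits the warning suggests (requires root), then log out and back in
  echo "varun soft memlock unlimited" | sudo tee -a /etc/security/limits.conf
  echo "varun hard memlock unlimited" | sudo tee -a /etc/security/limits.conf

  # In the new session, the limit should read "unlimited" instead of 65536
  ulimit -l

  # After restarting Elasticsearch, the node should report that memory locking succeeded
  curl -s 'http://localhost:9200/_nodes/process?pretty' | grep mlockall    # expect "mlockall" : true

The warning itself is non-fatal, and the node keeps starting up: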
[2016-06-20 15:12:49,796][INFO ][node ] [varun] version[2.3.2], pid[12042], build[b9e4a6a/2016-04-21T16:03:47Z]
[2016-06-20 15:12:49,796][INFO ][node ] [varun] initializing ...
[2016-06-20 15:12:51,025][INFO ][plugins ] [varun] modules [lang-groovy, reindex, lang-expression], plugins [hq, head], sites [head, hq]
[2016-06-20 15:12:51,065][INFO ][env ] [varun] using [1] data paths, mounts [[/ (/dev/sda6)]], net usable_space [65.8gb], net total_space [88.5gb], spins? [possibly], types [ext4]
[2016-06-20 15:12:51,085][INFO ][env ] [varun] heap size [990.7mb], compressed ordinary object pointers [true]
[2016-06-20 15:12:55,432][INFO ][node ] [varun] initialized
[2016-06-20 15:12:55,432][INFO ][node ] [varun] starting ...
[2016-06-20 15:12:55,573][INFO ][transport ] [varun] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}, {[::1]:9300}
[2016-06-20 15:12:55,578][INFO ][discovery ] [varun] elasticsearch/BZnOKoWXR8eyYX1GbO2LMg
[2016-06-20 15:12:58,643][INFO ][cluster.service ] [varun] new_master {varun}{BZnOKoWXR8eyYX1GbO2LMg}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2016-06-20 15:12:58,672][INFO ][http ] [varun] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}, {[::1]:9200}
[2016-06-20 15:12:58,672][INFO ][node ] [varun] started
[2016-06-20 15:12:59,011][INFO ][gateway ] [varun] recovered [1] indices into cluster_state
[2016-06-20 15:12:59,715][INFO ][cluster.routing.allocation] [varun] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.kibana][0]] ...]).
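The cluster comes up YELLOW rather than GREEN, which on a single-node setup usually just means that replica shards (here for the recovered .kibana index) have no second node to be allocated to. One way to confirm, assuming HTTP is still bound to localhost:9200 as above:

  # Overall health plus the unassigned-shard count
  curl -s 'http://localhost:9200/_cluster/health?pretty'

  # Optional: drop replicas on the single-node .kibana index so health turns GREEN
  curl -XPUT 'http://localhost:9200/.kibana/_settings' -d '{"index": {"number_of_replicas": 0}}'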