I got around this one; it was probably caused by insufficient memory being allocated to the locally installed VM.
However, the Elasticsearch process is also being killed on another machine with ~2 GB of system memory.
Below is the invocation log:
[admin@ELKTEST elasticsearch-5.4.0]$ ./bin/elasticsearch
[2017-05-17T05:52:41,135][INFO ][o.e.n.Node ] [] initializing ...
[2017-05-17T05:52:41,618][INFO ][o.e.e.NodeEnvironment ] [uzJHHwZ] using [1] data paths, mounts [[/home (/dev/mapper/rootvg-home_lv)]], net usable_space [2.3gb], net total_space [2.9gb], spins? [possibly], types [ext4]
[2017-05-17T05:52:41,619][INFO ][o.e.e.NodeEnvironment ] [uzJHHwZ] heap size [1.9gb], compressed ordinary object pointers [true]
[2017-05-17T05:52:41,623][INFO ][o.e.n.Node ] node name [uzJHHwZ] derived from node ID [uzJHHwZiSkyqddIMA9qKnw]; set [node.name] to override
[2017-05-17T05:52:41,624][INFO ][o.e.n.Node ] version[5.4.0], pid[6010], build[780f8c4/2017-04-28T17:43:27.229Z], OS[Linux/3.10.0-327.22.2.el7.x86_64/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_101/25.101-b13]
[2017-05-17T05:52:47,046][INFO ][o.e.p.PluginsService ] [uzJHHwZ] loaded module [aggs-matrix-stats]
[2017-05-17T05:52:47,047][INFO ][o.e.p.PluginsService ] [uzJHHwZ] loaded module [ingest-common]
[2017-05-17T05:52:47,047][INFO ][o.e.p.PluginsService ] [uzJHHwZ] loaded module [lang-expression]
[2017-05-17T05:52:47,047][INFO ][o.e.p.PluginsService ] [uzJHHwZ] loaded module [lang-groovy]
[2017-05-17T05:52:47,047][INFO ][o.e.p.PluginsService ] [uzJHHwZ] loaded module [lang-mustache]
[2017-05-17T05:52:47,047][INFO ][o.e.p.PluginsService ] [uzJHHwZ] loaded module [lang-painless]
[2017-05-17T05:52:47,048][INFO ][o.e.p.PluginsService ] [uzJHHwZ] loaded module [percolator]
[2017-05-17T05:52:47,048][INFO ][o.e.p.PluginsService ] [uzJHHwZ] loaded module [reindex]
[2017-05-17T05:52:47,048][INFO ][o.e.p.PluginsService ] [uzJHHwZ] loaded module [transport-netty3]
[2017-05-17T05:52:47,048][INFO ][o.e.p.PluginsService ] [uzJHHwZ] loaded module [transport-netty4]
[2017-05-17T05:52:47,049][INFO ][o.e.p.PluginsService ] [uzJHHwZ] no plugins loaded
[2017-05-17T05:52:54,932][INFO ][o.e.d.DiscoveryModule ] [uzJHHwZ] using discovery type [zen]
[2017-05-17T05:53:00,960][INFO ][o.e.n.Node ] initialized
[2017-05-17T05:53:00,963][INFO ][o.e.n.Node ] [uzJHHwZ] starting ...
[2017-05-17T05:53:02,538][INFO ][o.e.t.TransportService ] [uzJHHwZ] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}
[2017-05-17T05:53:05,514][INFO ][o.e.m.j.JvmGcMonitorService] [uzJHHwZ] [gc][4] overhead, spent [301ms] collecting in the last [1s]
[2017-05-17T05:53:07,437][INFO ][o.e.c.s.ClusterService ] [uzJHHwZ] new_master {uzJHHwZ}{uzJHHwZiSkyqddIMA9qKnw}{MkUr8HHoQ7mRC9fdq58Ktw}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2017-05-17T05:53:08,624][INFO ][o.e.h.n.Netty4HttpServerTransport] [uzJHHwZ] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}
[2017-05-17T05:53:09,171][INFO ][o.e.n.Node ] [uzJHHwZ] started
[2017-05-17T05:53:09,703][INFO ][o.e.g.GatewayService ] [uzJHHwZ] recovered [0] indices into cluster_state
Killed
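The startup itself completes (the node reaches started and recovers [0] indices), and the abrupt "Killed" with no Java stack trace points to an external signal rather than a JVM crash. On Linux that usually means the kernel OOM killer, which can be confirmed from the kernel log, for example with something like:

dmesg | grep -iE 'out of memory|killed process'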
Here are the trailing messages from dmesg:
[1124233.764011] [ 5997] 249 5997 36396 1 70 352 0 sshd
[1124233.765125] [ 5998] 249 5998 13175 0 29 137 0 sftp-server
[1124233.766194] [ 6010] 249 6010 1062691 386225 1173 182024 0 java
[1124233.767205] [ 6077] 0 6077 15852 95 31 0 0 sshd
[1124233.768178] Out of memory: Kill process 6010 (java) score 776 or sacrifice child
[1124233.769158] Killed process 6010 (java) total-vm:4250764kB, anon-rss:1544900kB, file-rss:0kB
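The dmesg line shows the java process holding ~1.5 GB of resident memory (anon-rss:1544900kB) out of a ~4.1 GB virtual size on a host with only ~2 GB of RAM, while the startup log reports the default heap of [1.9gb]. Shrinking the heap so it fits comfortably in physical memory should keep the process below the OOM killer's threshold. A minimal sketch, assuming the stock elasticsearch-5.4.0/config/jvm.options layout and an illustrative 512m heap (the exact value is an assumption; size it to roughly half of the available RAM):

# config/jvm.options -- replace the default -Xms2g / -Xmx2g entries
# 512m is an assumed example value, not a recommendation for every workload
-Xms512m
-Xmx512m

The same settings can also be passed for a single run through the environment, e.g. ES_JAVA_OPTS="-Xms512m -Xmx512m" ./bin/elasticsearch.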