So the logs are:
[2019-05-21T16:37:31,416][INFO ][o.e.e.NodeEnvironment ] [lYWaGy6] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [38.2gb], net total_space [49.9gb], types [rootfs]
[2019-05-21T16:37:31,420][INFO ][o.e.e.NodeEnvironment ] [lYWaGy6] heap size [989.8mb], compressed ordinary object pointers [true]
[2019-05-21T16:37:31,430][INFO ][o.e.n.Node ] [lYWaGy6] node name derived from node ID [lYWaGy6oQsaEQcgAyxSFFw]; set [node.name] to override
[2019-05-21T16:37:31,430][INFO ][o.e.n.Node ] [lYWaGy6] version[6.8.0], pid[90851], build[default/rpm/65b6179/2019-05-15T20:06:13.172855Z], OS[Linux/3.10.0-957.10.1.el7.x86_64/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_212/25.212-b04]
[2019-05-21T16:37:31,430][INFO ][o.e.n.Node ] [lYWaGy6] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch-4704341660773096082, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=/var/lib/elasticsearch, -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log, -XX:+PrintGCDetails, -XX:+PrintGCDateStamps, -XX:+PrintTenuringDistribution, -XX:+PrintGCApplicationStoppedTime, -Xloggc:/var/log/elasticsearch/gc.log, -XX:+UseGCLogFileRotation, -XX:NumberOfGCLogFiles=32, -XX:GCLogFileSize=64m, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/etc/elasticsearch, -Des.distribution.flavor=default, -Des.distribution.type=rpm]
...
[2019-05-21T16:37:35,951][INFO ][o.e.x.s.a.s.FileRolesStore] [lYWaGy6] parsed [0] roles from file [/etc/elasticsearch/roles.yml]
[2019-05-21T16:37:36,592][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [lYWaGy6] [controller/91239] [Main.cc@109] controller (64 bit): Version 6.8.0 (Build e6cf25e2acc5ec) Copyright (c) 2019 Elasticsearch BV
[2019-05-21T16:37:37,030][DEBUG][o.e.a.ActionModule ] [lYWaGy6] Using REST wrapper from plugin org.elasticsearch.xpack.security.Security
[2019-05-21T16:37:37,227][INFO ][o.e.d.DiscoveryModule ] [lYWaGy6] using discovery type [zen] and host providers [settings]
[2019-05-21T16:37:37,946][INFO ][o.e.n.Node ] [lYWaGy6] initialized
[2019-05-21T16:37:37,947][INFO ][o.e.n.Node ] [lYWaGy6] starting ...
[2019-05-21T16:37:38,107][INFO ][o.e.t.TransportService ] [lYWaGy6] publish_address {10.150.2.3:9300}, bound_addresses {[::]:9300}
[2019-05-21T16:37:38,128][INFO ][o.e.b.BootstrapChecks ] [lYWaGy6] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2019-05-21T16:37:41,221][INFO ][o.e.c.s.MasterService ] [lYWaGy6] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {lYWaGy6}{lYWaGy6oQsaEQcgAyxSFFw}{QirsHidvTIabVaH82WLuww}{10.150.2.3}{10.150.2.3:9300}{ml.machine_memory=269954039808, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}
[2019-05-21T16:37:41,240][INFO ][o.e.c.s.ClusterApplierService] [lYWaGy6] new_master {lYWaGy6}{lYWaGy6oQsaEQcgAyxSFFw}{QirsHidvTIabVaH82WLuww}{10.150.2.3}{10.150.2.3:9300}{ml.machine_memory=269954039808, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}, reason: apply cluster state (from master [master {lYWaGy6}{lYWaGy6oQsaEQcgAyxSFFw}{QirsHidvTIabVaH82WLuww}{10.150.2.3}{10.150.2.3:9300}{ml.machine_memory=269954039808, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
[2019-05-21T16:37:41,321][INFO ][o.e.h.n.Netty4HttpServerTransport] [lYWaGy6] publish_address {10.150.2.3:9200}, bound_addresses {[::]:9200}
[2019-05-21T16:37:41,322][INFO ][o.e.n.Node ] [lYWaGy6] started
[2019-05-21T16:37:41,652][WARN ][o.e.x.s.a.s.m.NativeRoleMappingStore] [lYWaGy6] Failed to clear cache for realms [[]]
[2019-05-21T16:37:41,697][INFO ][o.e.l.LicenseService ] [lYWaGy6] license [b0e46bca-dda0-4aea-af32-10693aab36f0] mode [basic] - valid
[2019-05-21T16:37:41,704][INFO ][o.e.g.GatewayService ] [lYWaGy6] recovered [1] indices into cluster_state
[2019-05-21T16:37:42,113][INFO ][o.e.c.r.a.AllocationService] [lYWaGy6] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[gene][1]] ...]).
[2019-05-21T16:47:35,397][INFO ][o.e.n.Node ] [lYWaGy6] stopping ...
[2019-05-21T16:47:35,480][INFO ][o.e.x.w.WatcherService ] [lYWaGy6] stopping watch service, reason [shutdown initiated]
[2019-05-21T16:47:35,908][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [lYWaGy6] [controller/91239] [Main.cc@148] Ml controller exiting
[2019-05-21T16:47:35,909][INFO ][o.e.x.m.p.NativeController] [lYWaGy6] Native controller process has stopped - no new native processes can be started
[2019-05-21T16:47:35,999][INFO ][o.e.n.Node ] [lYWaGy6] stopped
[2019-05-21T16:47:35,999][INFO ][o.e.n.Node ] [lYWaGy6] closing ...
[2019-05-21T16:47:36,015][INFO ][o.e.n.Node ] [lYWaGy6] closed
Something at 2019-05-21T16:47:35,397 is closing your node:

[2019-05-21T16:47:35,397][INFO ][o.e.n.Node ] [lYWaGy6] stopping ...
Any idea why this is happening?
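The shutdown in your log is a clean stop (Watcher, the ML controller, and the node itself all shut down in order), which usually means the process received a stop signal from outside rather than crashing. A common way to find out who sent it is to check the systemd journal and the kernel log around that timestamp. This is a sketch, assuming a systemd-managed install with the default unit name `elasticsearch` (which matches the `-Des.distribution.type=rpm` in your JVM arguments):

```shell
# Who stopped the service? Look at the systemd journal around the shutdown time.
sudo journalctl -u elasticsearch --since "2019-05-21 16:47:00" --until "2019-05-21 16:48:00"

# Did the kernel OOM killer terminate the JVM? (Unlikely with a 1 GB heap on
# a large machine, but cheap to rule out.)
sudo dmesg -T | grep -iE 'out of memory|oom|killed process'

# systemd also records the result of the unit's last exit.
systemctl status elasticsearch
```

If journalctl shows a "Stopping ..." entry at 16:47:35, something ran `systemctl stop` (or a package update/restart) at that moment; an OOM-killer line in dmesg would instead mean the kernel killed the JVM.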