Elasticsearch Status Active Failed

Hello,
OS: CentOS 7, installed in a VM.
Java version:

==========================================================================

openjdk version "1.8.0_102"
OpenJDK Runtime Environment (build 1.8.0_102-b14)
OpenJDK 64-Bit Server VM (build 25.102-b14, mixed mode)

==========================================================================

The first time I installed Elasticsearch it was successful, and its status was running (checked with curl -X GET 'http://localhost:9200'). I followed https://www.elastic.co/guide/en/elasticsearch/reference/current/rpm.html (Installing from the RPM repository).

==========================================================================

● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2017-09-11 18:02:52 EDT; 11min ago
Docs: http://www.elastic.co
Process: 1119 ExecStartPre=/usr/share/elasticsearch/bin/elasticsearch-systemd-pre-exec (code=exited, status=0/SUCCESS)
Main PID: 1130 (java)
CGroup: /system.slice/elasticsearch.service
└─1130 /bin/java -Xms2g -Xmx2g -XX:+UseConcMarkSweepGC -XX:CMSInit...

Sep 11 18:02:52 192.168.1.9 systemd[1]: Starting Elasticsearch...
Sep 11 18:02:52 192.168.1.9 systemd[1]: Started Elasticsearch.

[root@192 cenelk]# curl -X GET 'http://localhost:9200'
{
"name" : "bCTNIfd",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "eeS1QsUmQC69-vMZ8ZFftQ",
"version" : {
"number" : "5.6.0",
"build_hash" : "781a835",
"build_date" : "2017-09-07T03:09:58.087Z",
"build_snapshot" : false,
"lucene_version" : "6.6.0"
},
"tagline" : "You Know, for Search"
}

==============================================================================

Next I installed Kibana, which was also successful and running. Then I installed Logstash; the Logstash status is running, but now the Elasticsearch status is failed. Is there a problem? Can you help me?

==============================================================================
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Mon 2017-09-11 19:16:20 EDT; 11min ago
Docs: http://www.elastic.co
Process: 6121 ExecStart=/usr/share/elasticsearch/bin/elasticsearch -p ${PID_DIR}/elasticsearch.pid --quiet -Edefault.path.logs=${LOG_DIR} -Edefault.path.data=${DATA_DIR} -Edefault.path.conf=${CONF_DIR} (code=exited, status=1/FAILURE)
Process: 6117 ExecStartPre=/usr/share/elasticsearch/bin/elasticsearch-systemd-pre-exec (code=exited, status=0/SUCCESS)
Main PID: 6121 (code=exited, status=1/FAILURE)

Sep 11 19:16:18 192.168.1.9 systemd[1]: Started Elasticsearch.
Sep 11 19:16:19 192.168.1.9 elasticsearch[6121]: OpenJDK 64-Bit Server VM warning: INFO: os::com...12)
Sep 11 19:16:19 192.168.1.9 elasticsearch[6121]: #
Sep 11 19:16:19 192.168.1.9 elasticsearch[6121]: # There is insufficient memory for the Java Run...ue.
Sep 11 19:16:19 192.168.1.9 elasticsearch[6121]: # Native memory allocation (mmap) failed to map...ry.
Sep 11 19:16:19 192.168.1.9 elasticsearch[6121]: # An error report file with more information is...as:
Sep 11 19:16:19 192.168.1.9 elasticsearch[6121]: # /tmp/hs_err_pid6121.log
Sep 11 19:16:20 192.168.1.9 systemd[1]: elasticsearch.service: main process exited, code=exited...LURE
Sep 11 19:16:20 192.168.1.9 systemd[1]: Unit elasticsearch.service entered failed state.
Sep 11 19:16:20 192.168.1.9 systemd[1]: elasticsearch.service failed.
Hint: Some lines were ellipsized, use -l to show in full.

======================================================================

That's probably why.
How much memory does your host have, and how much did you give Elasticsearch?

Maybe the memory is unlimited.
I also have another VM, but its status there is still failed:

● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
Active: failed (Result: signal) since Mon 2017-09-11 14:36:42 WIB; 18h ago
Docs: http://www.elastic.co
Process: 8869 ExecStart=/usr/share/elasticsearch/bin/elasticsearch -p ${PID_DIR}/elasticsearch.pid --quiet -Edefault.path.logs=${LOG_DIR} -Edefault.path.data=${DATA_DIR} -Edefault.path.conf=${CONF_DIR} (code=killed, signal=KILL)
Process: 8865 ExecStartPre=/usr/share/elasticsearch/bin/elasticsearch-systemd-pre-exec (code=exited, status=0/SUCCESS)
Main PID: 8869 (code=killed, signal=KILL)

Sep 11 14:24:40 localhost.localdomain systemd[1]: Starting Elasticsearch...
Sep 11 14:24:40 localhost.localdomain systemd[1]: Started Elasticsearch.
Sep 11 14:36:42 localhost.localdomain systemd[1]: elasticsearch.service: main process exited, code=killed, status=9/KILL
Sep 11 14:36:42 localhost.localdomain systemd[1]: Unit elasticsearch.service entered failed state.
Sep 11 14:36:42 localhost.localdomain systemd[1]: elasticsearch.service failed.

Look at the actual Elasticsearch logs, they should tell you more.
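For example (these are the RPM-default paths, and the hs_err file name comes from the systemd output above — adjust them if your configuration differs):

```shell
# Where to look when the elasticsearch unit fails.
# Guards keep this runnable even where the files do not exist.
LOG=/var/log/elasticsearch/elasticsearch.log

# Last lines of the Elasticsearch log (default RPM location):
[ -f "$LOG" ] && tail -n 100 "$LOG" || true

# The JVM crash report that the systemd output pointed at:
[ -f /tmp/hs_err_pid6121.log ] && cat /tmp/hs_err_pid6121.log || true

# Full, non-ellipsized journal for the unit (the -l the status hint mentions):
command -v journalctl >/dev/null && journalctl -u elasticsearch -l --no-pager || true
```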

Is this the log? /var/log/elasticsearch/elasticsearch.log

[2017-09-11T09:33:52,646][WARN ][o.e.m.j.JvmGcMonitorService] [qs5K1Ku] [gc][young][335528][2079] duration [20.3s], collections [1]/[3.7s], total [20.3s$
[2017-09-11T09:35:45,970][WARN ][o.e.m.j.JvmGcMonitorService] [qs5K1Ku] [gc][335528] overhead, spent [20.3s] collecting in the last [3.7s]
[2017-09-11T14:26:54,605][INFO ][o.e.n.Node ] [] initializing ...
[2017-09-11T14:26:56,242][INFO ][o.e.e.NodeEnvironment ] [qs5K1Ku] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [16.6gb], net total_$
[2017-09-11T14:26:56,242][INFO ][o.e.e.NodeEnvironment ] [qs5K1Ku] heap size [1.9gb], compressed ordinary object pointers [true]
[2017-09-11T14:26:56,393][INFO ][o.e.n.Node ] node name [qs5K1Ku] derived from node ID [qs5K1Ku0S8ic_FQOuqI-YQ]; set [node.name] to overri$
[2017-09-11T14:26:56,393][INFO ][o.e.n.Node ] version[5.5.2], pid[8869], build[b2f0c09/2017-08-14T12:33:14.154Z], OS[Linux/3.10.0-514.el7.$
[2017-09-11T14:26:56,393][INFO ][o.e.n.Node ] JVM arguments [-Xms2g, -Xmx2g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=7$
[2017-09-11T14:27:07,974][INFO ][o.e.p.PluginsService ] [qs5K1Ku] loaded module [aggs-matrix-stats]
[2017-09-11T14:27:07,975][INFO ][o.e.p.PluginsService ] [qs5K1Ku] loaded module [ingest-common]
[2017-09-11T14:27:07,975][INFO ][o.e.p.PluginsService ] [qs5K1Ku] loaded module [lang-expression]
[2017-09-11T14:27:07,975][INFO ][o.e.p.PluginsService ] [qs5K1Ku] loaded module [lang-groovy]
[2017-09-11T14:27:07,975][INFO ][o.e.p.PluginsService ] [qs5K1Ku] loaded module [lang-mustache]
[2017-09-11T14:27:07,975][INFO ][o.e.p.PluginsService ] [qs5K1Ku] loaded module [lang-painless]
[2017-09-11T14:27:07,975][INFO ][o.e.p.PluginsService ] [qs5K1Ku] loaded module [parent-join]
[2017-09-11T14:27:07,975][INFO ][o.e.p.PluginsService ] [qs5K1Ku] loaded module [percolator]
[2017-09-11T14:27:07,975][INFO ][o.e.p.PluginsService ] [qs5K1Ku] loaded module [reindex]
[2017-09-11T14:27:07,975][INFO ][o.e.p.PluginsService ] [qs5K1Ku] loaded module [transport-netty3]
[2017-09-11T14:27:07,975][INFO ][o.e.p.PluginsService ] [qs5K1Ku] loaded module [transport-netty4]
[2017-09-11T14:27:07,976][INFO ][o.e.p.PluginsService ] [qs5K1Ku] no plugins loaded
[2017-09-11T14:27:23,757][INFO ][o.e.d.DiscoveryModule ] [qs5K1Ku] using discovery type [zen]
[2017-09-11T14:27:30,731][INFO ][o.e.n.Node ] initialized
[2017-09-11T14:27:30,735][INFO ][o.e.n.Node ] [qs5K1Ku] starting ...
[2017-09-11T14:27:32,324][INFO ][o.e.t.TransportService ] [qs5K1Ku] publish_address {127.0.0.1:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}
[2017-09-11T14:27:36,515][INFO ][o.e.c.s.ClusterService ] [qs5K1Ku] new_master {qs5K1Ku}{qs5K1Ku0S8ic_FQOuqI-YQ}{nlD3EGMVT--2SVoSx5bEDA}{127.0.0.1}{12$
[2017-09-11T14:27:37,961][INFO ][o.e.h.n.Netty4HttpServerTransport] [qs5K1Ku] publish_address {127.0.0.1:9200}, bound_addresses {[::1]:9200}, {127.0.0.1$
[2017-09-11T14:27:37,961][INFO ][o.e.n.Node ] [qs5K1Ku] started
[2017-09-11T14:27:40,351][INFO ][o.e.g.GatewayService ] [qs5K1Ku] recovered [1] indices into cluster_state
[2017-09-11T14:27:42,347][INFO ][o.e.m.j.JvmGcMonitorService] [qs5K1Ku] [gc][11] overhead, spent [628ms] collecting in the last [1.5s]
[2017-09-11T14:27:43,485][INFO ][o.e.c.r.a.AllocationService] [qs5K1Ku] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[$

Yes, but there's nothing there that suggests Elasticsearch is stopping.

Is this the log? /var/log/elasticsearch/elasticsearch.log

[2017-09-11T09:33:52,646][WARN ][o.e.m.j.JvmGcMonitorService] [qs5K1Ku] [gc][young][335528][2079] duration [20.3s], collections [1]/[3.7s], total [20.3s]/[1.8m], memory [128.2mb]->[128.2mb]/[1.9gb], all_pools {[young] [66.5mb]->[66.5mb]/[66.5mb]}{[survivor] [104kb]->[104kb]/[8.3mb]}{[old] [61.5mb]->[61.5mb]/[1.9gb]}
[2017-09-11T09:35:45,970][WARN ][o.e.m.j.JvmGcMonitorService] [qs5K1Ku] [gc][335528] overhead, spent [20.3s] collecting in the last [3.7s]
[2017-09-11T14:26:54,605][INFO ][o.e.n.Node ] [] initializing ...
[2017-09-11T14:26:56,242][INFO ][o.e.e.NodeEnvironment ] [qs5K1Ku] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [16.6gb], net total_space [21.9gb], spins? [unknown], types [rootfs]
[2017-09-11T14:26:56,242][INFO ][o.e.e.NodeEnvironment ] [qs5K1Ku] heap size [1.9gb], compressed ordinary object pointers [true]
[2017-09-11T14:26:56,393][INFO ][o.e.n.Node ] node name [qs5K1Ku] derived from node ID [qs5K1Ku0S8ic_FQOuqI-YQ]; set [node.name] to override
[2017-09-11T14:26:56,393][INFO ][o.e.n.Node ] version[5.5.2], pid[8869], build[b2f0c09/2017-08-14T12:33:14.154Z], OS[Linux/3.10.0-514.el7.x86_64/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_102/25.102-b14]
[2017-09-11T14:26:56,393][INFO ][o.e.n.Node ] JVM arguments [-Xms2g, -Xmx2g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSI$
[2017-09-11T14:27:07,974][INFO ][o.e.p.PluginsService ] [qs5K1Ku] loaded module [aggs-matrix-stats]
[2017-09-11T14:27:07,975][INFO ][o.e.p.PluginsService ] [qs5K1Ku] loaded module [ingest-common]
[2017-09-11T14:27:07,975][INFO ][o.e.p.PluginsService ] [qs5K1Ku] loaded module [lang-expression]
[2017-09-11T14:27:07,975][INFO ][o.e.p.PluginsService ] [qs5K1Ku] loaded module [lang-groovy]
[2017-09-11T14:27:07,975][INFO ][o.e.p.PluginsService ] [qs5K1Ku] loaded module [lang-mustache]
[2017-09-11T14:27:07,975][INFO ][o.e.p.PluginsService ] [qs5K1Ku] loaded module [lang-painless]
[2017-09-11T14:27:07,975][INFO ][o.e.p.PluginsService ] [qs5K1Ku] loaded module [parent-join]
[2017-09-11T14:27:07,975][INFO ][o.e.p.PluginsService ] [qs5K1Ku] loaded module [percolator]
[2017-09-11T14:27:07,975][INFO ][o.e.p.PluginsService ] [qs5K1Ku] loaded module [reindex]
[2017-09-11T14:27:07,975][INFO ][o.e.p.PluginsService ] [qs5K1Ku] loaded module [transport-netty3]
[2017-09-11T14:27:07,975][INFO ][o.e.p.PluginsService ] [qs5K1Ku] loaded module [transport-netty4]
[2017-09-11T14:27:07,976][INFO ][o.e.p.PluginsService ] [qs5K1Ku] no plugins loaded
[2017-09-11T14:27:23,757][INFO ][o.e.d.DiscoveryModule ] [qs5K1Ku] using discovery type [zen]
[2017-09-11T14:27:30,731][INFO ][o.e.n.Node ] initialized
[2017-09-11T14:27:30,735][INFO ][o.e.n.Node ] [qs5K1Ku] starting ...
[2017-09-11T14:27:32,324][INFO ][o.e.t.TransportService ] [qs5K1Ku] publish_address {127.0.0.1:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}
[2017-09-11T14:27:36,515][INFO ][o.e.c.s.ClusterService ] [qs5K1Ku] new_master {qs5K1Ku}{qs5K1Ku0S8ic_FQOuqI-YQ}{nlD3EGMVT--2SVoSx5bEDA}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2017-09-11T14:27:37,961][INFO ][o.e.h.n.Netty4HttpServerTransport] [qs5K1Ku] publish_address {127.0.0.1:9200}, bound_addresses {[::1]:9200}, {127.0.0.1:9200}
[2017-09-11T14:27:37,961][INFO ][o.e.n.Node ] [qs5K1Ku] started
[2017-09-11T14:27:40,351][INFO ][o.e.g.GatewayService ] [qs5K1Ku] recovered [1] indices into cluster_state
[2017-09-11T14:27:42,347][INFO ][o.e.m.j.JvmGcMonitorService] [qs5K1Ku] [gc][11] overhead, spent [628ms] collecting in the last [1.5s]
[2017-09-11T14:27:43,485][INFO ][o.e.c.r.a.AllocationService] [qs5K1Ku] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.kibana][0]] ...]).

So, why is the Elasticsearch status failed? Is there anything else I need to install?

Hi @yrizal,

Sep 11 19:16:19 192.168.1.9 elasticsearch[6121]: # There is insufficient memory for the Java Run...ue.
Sep 11 19:16:19 192.168.1.9 elasticsearch[6121]: # Native memory allocation (mmap) failed to map...ry.

Looking at this issue, it seems to be a memory problem: the VM does not have enough memory for the heap you have given Elasticsearch. If your VM is too small for the default settings, you can reduce the heap size by editing the "jvm.options" file of Elasticsearch.
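For example (the path and sizes below are assumptions for a default RPM install, not settings confirmed in this thread), lower the two heap lines in /etc/elasticsearch/jvm.options so they fit inside the VM's RAM:

```
# /etc/elasticsearch/jvm.options  (default RPM location)
# Replace the shipped -Xms2g / -Xmx2g; keep Xms and Xmx equal,
# and leave the rest of the RAM for the OS and other services.
-Xms512m
-Xmx512m
```

After saving, restart with "sudo systemctl restart elasticsearch" and check "systemctl status elasticsearch" again.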

Also, the second set of logs you provided is not complete. Can you please provide the full Elasticsearch log file so we can understand the issue better?
Also, make sure you installed a fresh Elasticsearch on the new VM.
Did you create a new VM or clone the previous one? Sometimes previous data is kept in the Elasticsearch data folder, and because of this the service cannot start.

OK, I will try. But what are the minimum requirements for Elasticsearch?

And I have another VM: the first time, after installing Elasticsearch, it worked, but after installing Logstash the Elasticsearch status is failed, with this problem:

Sep 11 14:24:40 localhost.localdomain systemd[1]: Starting Elasticsearch...
Sep 11 14:24:40 localhost.localdomain systemd[1]: Started Elasticsearch.
Sep 11 14:36:42 localhost.localdomain systemd[1]: elasticsearch.service: main process exited, code=killed, status=9/KILL
Sep 11 14:36:42 localhost.localdomain systemd[1]: Unit elasticsearch.service entered failed state.
Sep 11 14:36:42 localhost.localdomain systemd[1]: elasticsearch.service failed.

I will try to re-install.

What size is the VM? RAM, disk, CPU etc.

VMware ESXi
Host: HP ProLiant DL380 Gen9
RAM: 8 GB
Disk: 1 TB

I used one VM for Elasticsearch with this specification:
Disk: 30 GB, thin provisioning
RAM: 2 GB

The default heap for Elasticsearch is 2 GB; you will need to change that.
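That matches what we see above: with 2 GB of RAM and a 2 GB heap (-Xms2g -Xmx2g in the JVM arguments), nothing is left for the OS, so the JVM either fails its mmap allocation or gets OOM-killed. A rough sizing sketch (the halving rule is the usual guidance and the 2048 MB figure is this VM's RAM; both are assumptions, not settings taken from this thread):

```shell
# Give the heap roughly half of RAM, so the OS, page cache, and other
# services (Logstash, Kibana) keep the other half.
ram_mb=2048                # this VM's RAM; on a live host: free -m | awk '/^Mem:/{print $2}'
heap_mb=$((ram_mb / 2))
echo "-Xms${heap_mb}m"     # -> -Xms1024m; put both lines in jvm.options
echo "-Xmx${heap_mb}m"     # -> -Xmx1024m
```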

Make sure you delete all previous data folders before installing Elasticsearch again.
As you said Elasticsearch fails to start after installing Logstash, the best approach is to read the logs for both Logstash and Elasticsearch.
Also make sure the versions you are installing are compatible with each other.
To check version compatibility you can follow this link:
https://www.elastic.co/support/matrix#show_compatibility

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.