Elasticsearch Dies After 40 Seconds

I was having issues with an installation of Elasticsearch, so I uninstalled and re-installed the service using yum (v5.6.16). Now, when I start the service, it runs for 40 seconds and then dies. No elasticsearch.log file is created at all, and journalctl -f only shows the following:

Aug 16 09:07:15 ctl systemd[1]: Starting Elasticsearch...
Aug 16 09:07:15 ctl systemd[1]: Started Elasticsearch.
Aug 16 09:07:15 ctl polkitd[693]: Unregistered Authentication Agent for unix-process:12751:1295190471 (system bus name :1.552730, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Aug 16 09:07:55 ctl systemd[1]: elasticsearch.service: main process exited, code=exited, status=1/FAILURE
Aug 16 09:07:56 ctl systemd[1]: Unit elasticsearch.service entered failed state.
Aug 16 09:07:56 ctl systemd[1]: elasticsearch.service failed.

Any ideas as to what could be causing this or where I could look to find the issue?

Java Info:
openjdk version "1.8.0_161"
OpenJDK Runtime Environment (build 1.8.0_161-b14)
OpenJDK 64-Bit Server VM (build 25.161-b14, mixed mode)

Hey,

Can you check /var/log/elasticsearch and journalctl after trying to start Elasticsearch?

--Alex

There is no elasticsearch.log file, and the journalctl -f output is what I posted above. The elasticsearch user owns the elasticsearch log folder, so it should be able to write a file if it needs to.

Aug 16 09:07:15 ctl polkitd[693]: Unregistered Authentication Agent for unix-process:12751:1295190471 (system bus name :1.552730, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)

Sounds like you have an issue with SELinux. This is not an issue with Elasticsearch, but an issue with your environment.

Elasticsearch v6.5.4 was running on this machine but needed to be downgraded. I downgraded to 5.6.16 and could not make it start, so I removed it completely to start over. Why would v6.5.4 work and the lower version not?

Can you please provide the log files? Otherwise it is impossible to help.

That is my issue. There are no log files.

No elasticsearch.log file is created in /var/log/elasticsearch and I included the journalctl -f in the beginning of this post. Do I need to manually create an elasticsearch.log file so elasticsearch can write to it?

Is there another log file I should be looking for?

Is systemctl status for the elasticsearch service showing anything?

When I first start Elasticsearch, here is the systemctl status:

[root@ctl elasticsearch]# service elasticsearch status
● elasticsearch.service - Elasticsearch
   Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; disabled; vendor preset: disabled)
   Active: active (running) since Thu 2019-08-22 08:59:43 CDT; 15s ago
     Docs: http://www.elastic.co
  Process: 24798 ExecStartPre=/usr/share/elasticsearch/bin/elasticsearch-systemd-pre-exec (code=exited, status=0/SUCCESS)
 Main PID: 24800 (java)
   CGroup: /system.slice/elasticsearch.service
           └─24800 /bin/java -Xms2g -Xmx2g -XX:+UseConcMarkSweepGC -XX:CMSIni...

Aug 22 08:59:43 ctl systemd[1]: Starting Elasticsearch...
Aug 22 08:59:43 ctl systemd[1]: Started Elasticsearch.

Here is the systemctl status when I run it again 40+ seconds later:

[root@ctl elasticsearch]# service elasticsearch status
● elasticsearch.service - Elasticsearch
   Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; disabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Thu 2019-08-22 09:00:23 CDT; 30s ago
     Docs: http://www.elastic.co
  Process: 24800 ExecStart=/usr/share/elasticsearch/bin/elasticsearch -p ${PID_DIR}/elasticsearch.pid --quiet -Edefault.path.logs=${LOG_DIR} -Edefault.path.data=${DATA_DIR} -Edefault.path.conf=${CONF_DIR} (code=exited, status=1/FAILURE)
  Process: 24798 ExecStartPre=/usr/share/elasticsearch/bin/elasticsearch-systemd-pre-exec (code=exited, status=0/SUCCESS)
 Main PID: 24800 (code=exited, status=1/FAILURE)

Aug 22 08:59:43 ctl systemd[1]: Starting Elasticsearch...
Aug 22 08:59:43 ctl systemd[1]: Started Elasticsearch.
Aug 22 09:00:23 ctl systemd[1]: elasticsearch.service: main process exited,...RE
Aug 22 09:00:23 ctl systemd[1]: Unit elasticsearch.service entered failed state.
Aug 22 09:00:23 ctl systemd[1]: elasticsearch.service failed.
Hint: Some lines were ellipsized, use -l to show in full.

See the last line of the second snippet ("use -l to show in full"); maybe there is more information?

The previously ellipsized information was not helpful:

[root@ctl ~]# systemctl -l status elasticsearch.service
● elasticsearch.service - Elasticsearch
   Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; disabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Thu 2019-08-22 09:00:23 CDT; 7h ago
     Docs: http://www.elastic.co
  Process: 24800 ExecStart=/usr/share/elasticsearch/bin/elasticsearch -p ${PID_DIR}/elasticsearch.pid --quiet -Edefault.path.logs=${LOG_DIR} -Edefault.path.data=${DATA_DIR} -Edefault.path.conf=${CONF_DIR} (code=exited, status=1/FAILURE)
  Process: 24798 ExecStartPre=/usr/share/elasticsearch/bin/elasticsearch-systemd-pre-exec (code=exited, status=0/SUCCESS)
 Main PID: 24800 (code=exited, status=1/FAILURE)

Aug 22 08:59:43 ctl systemd[1]: Starting Elasticsearch...
Aug 22 08:59:43 ctl systemd[1]: Started Elasticsearch.
Aug 22 09:00:23 ctl systemd[1]: elasticsearch.service: main process exited, code=exited, status=1/FAILURE
Aug 22 09:00:23 ctl systemd[1]: Unit elasticsearch.service entered failed state.
Aug 22 09:00:23 ctl systemd[1]: elasticsearch.service failed.

Weird, there are no more log lines in there compared to the previous output.

Can you try running journalctl -f in one terminal, then start Elasticsearch in another terminal one more time, and share the output of date && find /var/log/elasticsearch -ls after that, plus the journalctl output? Still hoping for some more information...

I apologize for the delayed response. Security requires anyone logging into the server to be supervised, and we had trouble aligning our schedules around the holiday.

journalctl -f
Sep 06 08:43:10 ctl polkitd[693]: Registered Authentication Agent for unix-process:20662:1476485705 (system bus name :1.636844 [/usr/bin/pkttyagent --notify-fd 5 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Sep 06 08:43:10 ctl systemd[1]: Starting Elasticsearch...
Sep 06 08:43:10 ctl systemd[1]: Started Elasticsearch.
Sep 06 08:43:10 ctl polkitd[693]: Unregistered Authentication Agent for unix-process:20662:1476485705 (system bus name :1.636844, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Sep 06 08:43:50 ctl systemd[1]: elasticsearch.service: main process exited, code=exited, status=1/FAILURE
Sep 06 08:43:50 ctl systemd[1]: Unit elasticsearch.service entered failed state.
Sep 06 08:43:50 ctl systemd[1]: elasticsearch.service failed.

date && find /var/log/elasticsearch -ls
Fri Sep 6 08:45:14 CDT 2019
570449667 0 drwxr-x--- 2 elasticsearch elasticsearch 229 Sep 6 08:40 /var/log/elasticsearch
570449670 0 -rw-r--r-- 1 elasticsearch elasticsearch 0 Aug 16 08:51 /var/log/elasticsearch/sugarcrm_deprecation.log
570449671 0 -rw-r--r-- 1 elasticsearch elasticsearch 0 Aug 16 08:51 /var/log/elasticsearch/sugarcrm_index_search_slowlog.log
570449672 0 -rw-r--r-- 1 elasticsearch elasticsearch 0 Aug 16 08:51 /var/log/elasticsearch/sugarcrm_index_indexing_slowlog.log
570449669 392 -rw-r--r-- 1 elasticsearch elasticsearch 399875 Aug 16 09:07 /var/log/elasticsearch/sugarcrm-2019-08-16.log
570449675 4 -rw-r--r-- 1 elasticsearch elasticsearch 2 Aug 22 16:47 /var/log/elasticsearch/elasticsearch.log
570449673 160 -rw-r--r-- 1 elasticsearch elasticsearch 159949 Aug 22 16:48 /var/log/elasticsearch/sugarcrm-2019-08-22.log
570449674 384 -rw-r--r-- 1 elasticsearch elasticsearch 239925 Sep 6 08:43 /var/log/elasticsearch/sugarcrm.log

The /var/log/elasticsearch/sugarcrm.log file seems like a good candidate to look into, judging by its timestamp.

The following was at the beginning of the log file:

[2019-08-22T09:00:21,090][WARN ][o.e.b.JNANatives         ] Unable to lock JVM Memory: error=12, reason=Cannot allocate memory
[2019-08-22T09:00:21,096][WARN ][o.e.b.JNANatives         ] This can result in part of the JVM being swapped out.
[2019-08-22T09:00:21,096][WARN ][o.e.b.JNANatives         ] Increase RLIMIT_MEMLOCK, soft limit: 65536, hard limit: 65536
[2019-08-22T09:00:21,097][WARN ][o.e.b.JNANatives         ] These can be adjusted by modifying /etc/security/limits.conf, for example:
        # allow user 'elasticsearch' mlockall
        elasticsearch soft memlock unlimited
        elasticsearch hard memlock unlimited
[2019-08-22T09:00:21,097][WARN ][o.e.b.JNANatives         ] If you are logged in interactively, you will have to re-login for the new limits to take effect.

I entered
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited

into the /etc/security/limits.conf file, rebooted and got the same error. I then entered

* soft memlock unlimited
* hard memlock unlimited

into the /etc/security/limits.conf file, rebooted and got the same error.

Any thoughts on why implementing the fix that the log file suggested did not resolve the issue? It is as if the change is being ignored, or at least cannot be seen.
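For reference, the warnings quoted above concern the locked-memory limit (RLIMIT_MEMLOCK). A quick, hedged way to see whether a limits.conf change is visible at all is to check what a shell session actually reports (note this only reflects login sessions, not services):

```shell
# Show the locked-memory ulimit the current shell sees.
# Before any fix this is typically the small default (e.g. 64 KiB,
# matching the "soft limit: 65536" in the log); after a working fix
# for the elasticsearch user it should report "unlimited".
ulimit -l
```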

Sharing your exact setup and the error messages after that change could help a lot. Did you reboot the system and/or re-login?
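One environment detail that may explain the "ignored" change: /etc/security/limits.conf is applied by PAM to login sessions only, so a service started by systemd never sees it. For a systemd-managed Elasticsearch the limit is normally raised with a unit drop-in instead. A sketch, assuming the standard unit name elasticsearch.service (this is a config fragment to adapt, not something from this thread):

```shell
# Sketch: raise the locked-memory limit for the systemd-managed service.
# limits.conf is ignored by systemd units; a drop-in override is needed.
sudo mkdir -p /etc/systemd/system/elasticsearch.service.d
sudo tee /etc/systemd/system/elasticsearch.service.d/override.conf <<'EOF'
[Service]
LimitMEMLOCK=infinity
EOF
sudo systemctl daemon-reload
sudo systemctl restart elasticsearch
```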

I restarted the system after each of the changes.

What information files/statuses/configs would help you determine my setup?

Is the output in the /var/log/elasticsearch log the same?

I uninstalled v5.6.16 completely and installed v6.2.4. That did not resolve the issue, but by starting from scratch I was able to see errors, so I could dig deeper.

First, I had an incompatibility issue with the keystore file. Once I removed that, I was able to see that there was an issue with the Elasticsearch data files. I deleted the data files, since I knew they could be re-created, and Elasticsearch came up and stayed up.
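The recovery described above could be sketched roughly as follows. This is a hypothetical sequence assuming the default RPM layout (/etc/elasticsearch for config, /var/lib/elasticsearch for data); the paths on another install may differ, and removing the data directory is only safe when, as here, the indices can be rebuilt:

```shell
# Hypothetical sketch of the cleanup steps (default RPM paths assumed).
sudo systemctl stop elasticsearch
# Move the incompatible keystore aside rather than deleting it outright.
sudo mv /etc/elasticsearch/elasticsearch.keystore /root/elasticsearch.keystore.bak
# Remove the old data files: only safe because the indices can be re-created.
sudo rm -rf /var/lib/elasticsearch/nodes
sudo systemctl start elasticsearch
```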

Thanks for your help. It is greatly appreciated!