Elasticsearch fails to start on VM boot

I am running Ubuntu 20. Upon starting my machine, Kibana fails, saying it can't connect to localhost:9200. I check on Elasticsearch, and I see this:

ā— elasticsearch.service - Elasticsearch Loaded: loaded (/lib/systemd/system/elasticsearch.service; enabled; vendor preset: enabled) Active: failed (Result: timeout) since Sun 2021-01-10 09:32:32 EST; 1min 42s ago Docs: https://www.elastic.co Process: 824 ExecStart=/usr/share/elasticsearch/bin/systemd-entrypoint -p ${PID_DIR}/elasticsearch.pid --quiet (code=exited, status=143) Main PID: 824 (code=exited, status=143) Jan 10 09:31:16 elk systemd[1]: Starting Elasticsearch... Jan 10 09:32:31 elk systemd[1]: elasticsearch.service: start operation timed out. Terminating. Jan 10 09:32:32 elk systemd[1]: elasticsearch.service: Failed with result 'timeout'. Jan 10 09:32:32 elk systemd[1]: Failed to start Elasticsearch.

And upon shutdown, it gets stuck here for some reason:
(screenshot: 2021-01-10_09-29)
I have to force it to shut down.

If I then run `service elasticsearch start`, it starts up just fine and everything is resolved. But on the next reboot, the same thing happens again.

Please don't post pictures of text; they are difficult to read, impossible to search or replicate (if it's code), and some people may not even be able to see them šŸ™‚

Can you directly check the Elasticsearch logs?

Sure, I am checking elasticsearch.log. I did a reboot, but I don't see any errors logged between the stop and the start, and the service still fails to come up.
Here are some errors that I did find:

[2021-01-12T07:18:35,149][INFO ][o.e.x.i.IndexLifecycleRunner] [elk] policy [filebeat] for index [filebeat-7.10.0-2020.12.06] on an error step due to a transient error, moving back to the failed step [check-rollover-ready] for execution. retry attempt [1489]
[2021-01-12T07:18:35,149][INFO ][o.e.x.i.IndexLifecycleRunner] [elk] policy [filebeat] for index [filebeat-7.10.0-2020.12.18] on an error step due to a transient error, moving back to the failed step [check-rollover-ready] for execution. retry attempt [692]
[2021-01-12T07:18:35,150][ERROR][o.e.x.i.IndexLifecycleRunner] [elk] policy [filebeat] for index [filebeat-7.10.0-2021.01.02] failed on step [{"phase":"hot","action":"rollover","name":"check-rollover-ready"}]. Moving to ERROR step
java.lang.IllegalArgumentException: index.lifecycle.rollover_alias [filebeat-7.10.0] does not point to index [filebeat-7.10.0-2021.01.02]
        at org.elasticsearch.xpack.core.ilm.WaitForRolloverReadyStep.evaluateCondition(WaitForRolloverReadyStep.java:114) [x-pack-core-7.10.1.jar:7.10.1]
        at org.elasticsearch.xpack.ilm.IndexLifecycleRunner.runPeriodicStep(IndexLifecycleRunner.java:174) [x-pack-ilm-7.10.1.jar:7.10.1]
        at org.elasticsearch.xpack.ilm.IndexLifecycleService.triggerPolicies(IndexLifecycleService.java:327) [x-pack-ilm-7.10.1.jar:7.10.1]
        at org.elasticsearch.xpack.ilm.IndexLifecycleService.triggered(IndexLifecycleService.java:265) [x-pack-ilm-7.10.1.jar:7.10.1]
        at org.elasticsearch.xpack.core.scheduler.SchedulerEngine.notifyListeners(SchedulerEngine.java:183) [x-pack-core-7.10.1.jar:7.10.1]
        at org.elasticsearch.xpack.core.scheduler.SchedulerEngine$ActiveSchedule.run(SchedulerEngine.java:216) [x-pack-core-7.10.1.jar:7.10.1]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?]
        at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) [?:?]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]
        at java.lang.Thread.run(Thread.java:832) [?:?]
[2021-01-12T07:18:35,151][ERROR][o.e.x.i.IndexLifecycleRunner] [elk] policy [filebeat] for index [filebeat-7.10.0-2021.01.08] failed on step [{"phase":"hot","action":"rollover","name":"check-rollover-ready"}]. Moving to ERROR step
java.lang.IllegalArgumentException: index.lifecycle.rollover_alias [filebeat-7.10.0] does not point to index [filebeat-7.10.0-2021.01.08]
        at org.elasticsearch.xpack.core.ilm.WaitForRolloverReadyStep.evaluateCondition(WaitForRolloverReadyStep.java:114) [x-pack-core-7.10.1.jar:7.10.1]
        at org.elasticsearch.xpack.ilm.IndexLifecycleRunner.runPeriodicStep(IndexLifecycleRunner.java:174) [x-pack-ilm-7.10.1.jar:7.10.1]
        at org.elasticsearch.xpack.ilm.IndexLifecycleService.triggerPolicies(IndexLifecycleService.java:327) [x-pack-ilm-7.10.1.jar:7.10.1]
        at org.elasticsearch.xpack.ilm.IndexLifecycleService.triggered(IndexLifecycleService.java:265) [x-pack-ilm-7.10.1.jar:7.10.1]
        at org.elasticsearch.xpack.core.scheduler.SchedulerEngine.notifyListeners(SchedulerEngine.java:183) [x-pack-core-7.10.1.jar:7.10.1]
        at org.elasticsearch.xpack.core.scheduler.SchedulerEngine$ActiveSchedule.run(SchedulerEngine.java:216) [x-pack-core-7.10.1.jar:7.10.1]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?]
        at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) [?:?]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]
        at java.lang.Thread.run(Thread.java:832) [?:?]

I assume this is not related to the startup of the service, though.
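(For reference: that `rollover_alias [filebeat-7.10.0] does not point to index` error usually means the write alias never pointed at those date-named indices, so ILM's rollover step cannot proceed. A sketch of how one might inspect and retry, assuming Elasticsearch is reachable on localhost:9200 with no authentication; the index name is taken from the log above:)

```shell
# See which index the rollover alias currently points at
curl -s 'http://localhost:9200/_alias/filebeat-7.10.0?pretty'

# After correcting the alias (or the index's lifecycle settings),
# ask ILM to retry the failed step on an affected index
curl -s -X POST 'http://localhost:9200/filebeat-7.10.0-2021.01.02/_ilm/retry?pretty'
```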

Here is after the reboot:

[2021-01-12T07:20:20,450][INFO ][o.e.n.Node               ] [elk] stopped
[2021-01-12T07:20:20,451][INFO ][o.e.n.Node               ] [elk] closing ...
[2021-01-12T07:20:20,532][INFO ][o.e.n.Node               ] [elk] closed
[2021-01-12T07:21:49,176][INFO ][o.e.n.Node               ] [elk] version[7.10.1], pid[800], build[default/deb/1c34507e66d7db1211f66f3513706fdf548736aa/2020-12-05T01:00:33.671820Z], OS[Linux/5.4.0-60-generic/amd64], JVM[AdoptOpenJDK/OpenJDK 64-Bit Server VM/15.0.1/15.0.1+9]
[2021-01-12T07:21:49,213][INFO ][o.e.n.Node               ] [elk] JVM home [/usr/share/elasticsearch/jdk], using bundled JDK [true]
[2021-01-12T07:21:49,214][INFO ][o.e.n.Node               ] [elk] JVM arguments [-Xshare:auto, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -XX:+ShowCodeDetailsInExceptionMessages, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.locale.providers=SPI,COMPAT, -Xms1g, -Xmx1g, -XX:+UseG1GC, -XX:G1ReservePercent=25, -XX:InitiatingHeapOccupancyPercent=30, -Djava.io.tmpdir=/tmp/elasticsearch-17815586022187814373, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=/var/lib/elasticsearch, -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -XX:MaxDirectMemorySize=536870912, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/etc/elasticsearch, -Des.distribution.flavor=default, -Des.distribution.type=deb, -Des.bundled_jdk=true]
[2021-01-12T07:22:16,043][INFO ][o.e.p.PluginsService     ] [elk] loaded module [aggs-matrix-stats]
[2021-01-12T07:22:16,045][INFO ][o.e.p.PluginsService     ] [elk] loaded module [analysis-common]
[2021-01-12T07:22:16,045][INFO ][o.e.p.PluginsService     ] [elk] loaded module [constant-keyword]
[2021-01-12T07:22:16,046][INFO ][o.e.p.PluginsService     ] [elk] loaded module [flattened]
[2021-01-12T07:22:16,046][INFO ][o.e.p.PluginsService     ] [elk] loaded module [frozen-indices]
[2021-01-12T07:22:16,047][INFO ][o.e.p.PluginsService     ] [elk] loaded module [ingest-common]
[2021-01-12T07:22:16,047][INFO ][o.e.p.PluginsService     ] [elk] loaded module [ingest-geoip]
[2021-01-12T07:22:16,047][INFO ][o.e.p.PluginsService     ] [elk] loaded module [ingest-user-agent]
[2021-01-12T07:22:16,048][INFO ][o.e.p.PluginsService     ] [elk] loaded module [kibana]
...
[2021-01-12T07:22:16,070][INFO ][o.e.p.PluginsService     ] [elk] no plugins loaded
[2021-01-12T07:22:16,399][INFO ][o.e.e.NodeEnvironment    ] [elk] using [1] data paths, mounts [[/ (/dev/mapper/ubuntu--vg-ubuntu--lv)]], net usable_space [55.3gb], net total_space [79.5gb], types [ext4]
[2021-01-12T07:22:16,401][INFO ][o.e.e.NodeEnvironment    ] [elk] heap size [1gb], compressed ordinary object pointers [true]
[2021-01-12T07:22:20,449][INFO ][o.e.n.Node               ] [elk] node name [elk], node ID [hKWW9WC0TAKHr38HOxJIfA], cluster name [elasticsearch], roles [transform, master, remote_cluster_client, data, ml, data_content, data_hot, data_warm, data_cold, ingest]

Looks like the node was still starting up when systemd decided to time out and kill it (exit status 143 is SIGTERM). Try a longer start timeout.
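One way to do that is with a systemd drop-in override rather than editing the packaged unit file; a minimal sketch, assuming the standard Debian/Ubuntu service name and a timeout value chosen for illustration:

```shell
# Create a drop-in directory for the Elasticsearch unit
sudo mkdir -p /etc/systemd/system/elasticsearch.service.d

# Raise the start timeout (value in seconds; pick one that fits your node)
sudo tee /etc/systemd/system/elasticsearch.service.d/startup-timeout.conf <<'EOF'
[Service]
TimeoutStartSec=900
EOF

# Reload unit definitions and restart the service
sudo systemctl daemon-reload
sudo systemctl restart elasticsearch
```

`systemctl show elasticsearch -p TimeoutStartUSec` can confirm the new value took effect.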

Thank you, that fixed it. I increased the timeout from 75 to 6000 seconds. Do I also need to set a timeout for Logstash so it doesn't run with no limit? And how do I fix the hang on shutdown?


Your Logstash questions are probably best asked on the Logstash forum; I don't know how to answer them.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.