Elasticsearch service stops, site not available; fresh install on Ubuntu 18.04


#1

Hello all;

New to the Elastic Stack, and already running into trouble :frowning:

I have a fresh Ubuntu 18.04 VM just created a few days ago. Basic install, have only added SSH and Webmin functionality.

Yesterday I started installing the Elastic Stack, beginning with Elasticsearch as per https://www.elastic.co/guide/en/elasticsearch/reference/current/settings.html (installed via the repository).

Once I got to the part where cURL is used to test that everything is running, I get:

$ curl -X GET "localhost:9200/"
curl: (7) Failed to connect to localhost port 9200: Connection refused

So far in troubleshooting I have checked:

  • Also tried 127.0.0.1, as well as the locally assigned static IP
  • Firewall is disabled
  • I saw hints online (from a user on Ubuntu 16.04) that it might help to edit /etc/elasticsearch/elasticsearch.yml and set a specific IP in the network.host setting. I tried localhost, the loopback address, and the static IP; the cURL test returned the same error for each.
  • If I restart the service and immediately check its status, it shows green/started. However, checking the status again after that comes back as failed (the fuller journal output can be pulled as sketched after this status output):

$ service elasticsearch status
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Thu 2018-06-21 16:09:35 CDT; 39s ago
Docs: http://www.elastic.co
Process: 9809 ExecStart=/usr/share/elasticsearch/bin/elasticsearch -p ${PID_DIR}/elasticsearch.pid --quiet (code=exited, status=1/FAILURE)
Main PID: 9809 (code=exited, status=1/FAILURE)

Jun 21 16:09:35 DBSRVLOG01 elasticsearch[9809]: at org.yaml.snakeyaml.scanner.ScannerImpl.stalePossibleSimpleKeys(ScannerImpl.java:465)
Jun 21 16:09:35 DBSRVLOG01 elasticsearch[9809]: at org.yaml.snakeyaml.scanner.ScannerImpl.needMoreTokens(ScannerImpl.java:280)
Jun 21 16:09:35 DBSRVLOG01 elasticsearch[9809]: at org.yaml.snakeyaml.scanner.ScannerImpl.checkToken(ScannerImpl.java:225)
Jun 21 16:09:35 DBSRVLOG01 elasticsearch[9809]: at org.yaml.snakeyaml.parser.ParserImpl$ParseBlockMappingKey.produce(ParserImpl.java:557)
Jun 21 16:09:35 DBSRVLOG01 elasticsearch[9809]: at org.yaml.snakeyaml.parser.ParserImpl.peekEvent(ParserImpl.java:157)
Jun 21 16:09:35 DBSRVLOG01 elasticsearch[9809]: at org.yaml.snakeyaml.parser.ParserImpl.getEvent(ParserImpl.java:167)
Jun 21 16:09:35 DBSRVLOG01 elasticsearch[9809]: at com.fasterxml.jackson.dataformat.yaml.YAMLParser.nextToken(YAMLParser.java:340)
Jun 21 16:09:35 DBSRVLOG01 elasticsearch[9809]: ... 13 more
Jun 21 16:09:35 DBSRVLOG01 systemd[1]: elasticsearch.service: Main process exited, code=exited, status=1/FAILURE
Jun 21 16:09:35 DBSRVLOG01 systemd[1]: elasticsearch.service: Failed with result 'exit-code'.
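
In case it helps, the full startup error also shows up in the systemd journal; pulling it looks roughly like this (standard systemd tooling, nothing Elasticsearch-specific):

# Show the most recent entries for the elasticsearch unit, including the full stack trace
sudo journalctl -u elasticsearch.service --no-pager -n 200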

  • I saw in elasticsearch.yml that logs should be saved to /var/log/elasticsearch, but when I try to change into that directory, I get access denied (see the sketch after this list). The permissions on that subfolder inside /var/log are:

drwxr-x--- 2 elasticsearch elasticsearch 4096 Jun 21 16:09 elasticsearch

  • The installed Elasticsearch package is 6.3
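
For reference, getting into that directory looks like it has to be done as root (or as the elasticsearch user), given the drwxr-x--- mode; something along these lines:

# Confirm ownership and mode on the log directory itself
sudo ls -ld /var/log/elasticsearch

# See which log files have (or have not) been written
sudo ls -l /var/log/elasticsearch

# Read the main log without opening it in an editor
sudo less /var/log/elasticsearch/elasticsearch.log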

We're excited to try out the Elastic Stack as a possible replacement for our current solution, so any help would be appreciated!


(Thomas Dasch) #2

Did you install Java 8? How did you install Elasticsearch? Would it be possible for you to post your elasticsearch.yml file please?


(Mark Walkom) #3

What's in /var/log/elasticsearch/elasticsearch.log?


#4

@tdasch

Java was installed with

apt install default-jre

Checking the installed package, I get

$ java -version
openjdk version "10.0.1" 2018-04-17
OpenJDK Runtime Environment (build 10.0.1+10-Ubuntu-3ubuntu1)
OpenJDK 64-Bit Server VM (build 10.0.1+10-Ubuntu-3ubuntu1, mixed mode)
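
In case Java 8 does end up being required, I assume switching on Ubuntu 18.04 would look roughly like this (stock openjdk-8 packages; not tried here yet):

# Install the OpenJDK 8 runtime alongside the default JRE
sudo apt install openjdk-8-jre-headless

# Choose which java binary /usr/bin/java points to
sudo update-alternatives --config java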

As for installation, I followed https://www.elastic.co/guide/en/elasticsearch/reference/current/deb.html

In essence: downloaded the signing key, installed the apt-transport-https package, added the repository to a .list file, then installed the package.
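
For completeness, the commands from that page boiled down to roughly the following (6.x repository as documented at the time):

# Import the Elastic signing key
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

# apt needs HTTPS transport support for the Elastic repository
sudo apt-get install apt-transport-https

# Register the 6.x repository, then install the package
echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-6.x.list
sudo apt-get update && sudo apt-get install elasticsearch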

elasticsearch.yml:

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: <statically-assigned local IP>
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#
#discovery.zen.minimum_master_nodes:
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
#
# Allow automatic creating of required indexes
#
action.auto_create_index: *

#5

sudo nano /var/log/elasticsearch/elasticsearch.log

Gives me a blank file.


(Mark Walkom) #6

It's a good idea to use less or more on log files rather than opening them in an editor :slight_smile:


#7

True - I do that most of the time. I just try a few different approaches when I'm having issues, to check for matching results. In this case, same results.

Just doing less /var/log/elasticsearch/elasticsearch.log gives permission denied. Running it with sudo says no such file or directory.


(Thomas Dasch) #8

What is present in /var/log/elasticsearch?

Also, doesn't action.auto_create_index: need a + or - to go with the *? I was looking at this doc as a reference.


#9

I can't get into /var/log/elasticsearch. If I go straight to the file itself as sudo (/var/log/elasticsearch/elasticsearch.log), it has not been created.

That all makes me think it's a permissions thing. I posted the permissions I'm seeing on the /var/log/elasticsearch folder earlier, but I'm not sure whether those are correct or should be something different.

Thanks for the reference on the action.auto_create_index: piece. I added the * based on what I saw on the initial installation page, to let Elasticsearch create whatever indices it needs automatically, but it wasn't 100% clear whether the * alone was enough or whether something else should go there. To give it full autonomy, should I just remove that setting from the .yml?


#10

The action.auto_create_index: piece was it. Having that misconfigured was preventing the service from starting up correctly (and from even writing the log).

I removed that entry from my config file, restarted the service, and ran service elasticsearch status checks for 30 seconds or so. It stayed green/running every time instead of stopping after a few seconds.
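
In case anyone hits the same thing later: I suspect the bare * was tripping the YAML parser itself (an unquoted * starts a YAML alias, which would line up with the snakeyaml scanner errors in the status output above). If I do want the setting back, quoting the value or using the +/- patterns from the doc linked above should keep the file parseable; a sketch, not verified here, and the index name patterns are just placeholders:

# Allow every index to be auto-created; the value is quoted so YAML does not read * as an alias
action.auto_create_index: "*"

# Or allow/deny specific patterns, e.g. (placeholder names):
#action.auto_create_index: "+logs-*,-restricted-*,-*"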

I did a test of curl -X GET "10.49.65.27:9200/" and received the expected response:

{
  "name" : "4C-WLn3",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "QESOlcaeSVeqevcZFefOWg",
  "version" : {
    "number" : "6.3.0",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "424e937",
    "build_date" : "2018-06-11T23:38:03.357887Z",
    "build_snapshot" : false,
    "lucene_version" : "7.3.1",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

/var/log/elasticsearch/elasticsearch.log is now being created as well.
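
As a further sanity check, the standard cluster health endpoint can be hit the same way (same host and port as above):

curl -X GET "10.49.65.27:9200/_cluster/health?pretty"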

Thank you for being a second set of eyes and for pointing me down the right path!


(system) #11

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.