Failed to start Elasticsearch

Hello experts,

I plan to forward rsyslog to Elasticsearch.
I have installed rsyslog v8 and Elasticsearch v1.5.2, both on the same server.
But when I start Elasticsearch, I get this error message:
bin/elasticsearch start
Failed to configure logging...
org.elasticsearch.ElasticsearchException: Failed to load logging configuration
at org.elasticsearch.common.logging.log4j.LogConfigurator.resolveConfig(LogConfigurator.java:139)
at org.elasticsearch.common.logging.log4j.LogConfigurator.configure(LogConfigurator.java:89)
at org.elasticsearch.bootstrap.Bootstrap.setupLogging(Bootstrap.java:100)
at org.elasticsearch.bootstrap.Bootstrap.main(Bootstrap.java:184)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:32)
Caused by: java.nio.file.NoSuchFileException: /usr/share/elasticsearch/config
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:55)
at sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:144)
at sun.nio.fs.LinuxFileSystemProvider.readAttributes(LinuxFileSystemProvider.java:97)
at java.nio.file.Files.readAttributes(Files.java:1686)
at java.nio.file.FileTreeWalker.walk(FileTreeWalker.java:109)
at java.nio.file.FileTreeWalker.walk(FileTreeWalker.java:69)
at java.nio.file.Files.walkFileTree(Files.java:2602)
at org.elasticsearch.common.logging.log4j.LogConfigurator.resolveConfig(LogConfigurator.java:123)
... 4 more
log4j:WARN No appenders could be found for logger (node).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.

Here is some info about our system that may be useful:
curl localhost:9200/_nodes/process?pretty
{
  "cluster_name" : "elasticsearch",
  "nodes" : {
    "qzzvM8CTQdC1TvyqYkqxQg" : {
      "name" : "Lin Sun",
      "transport_address" : "inet[/10.126.122.28:9301]",
      "host" : "server01.global",
      "ip" : "10.126.122.28",
      "version" : "1.5.2",
      "build" : "62ff986",
      "http_address" : "inet[/10.126.122.28:9200]",
      "process" : {
        "refresh_interval_in_millis" : 1000,
        "id" : 9432,
        "max_file_descriptors" : 4096,
        "mlockall" : false
      }
    },
    "yU_EVRjCQHGatyrOUUQLmw" : {
      "name" : "logstash-server01.global-5661-2016",
      "transport_address" : "inet[/10.126.122.28:9300]",
      "host" : "server01.global",
      "ip" : "10.126.122.28",
      "version" : "1.1.1",
      "build" : "f1585f0",
      "attributes" : {
        "client" : "true",
        "data" : "false"
      },
      "process" : {
        "refresh_interval_in_millis" : 1000,
        "id" : 5661,
        "max_file_descriptors" : 16384,
        "mlockall" : false
      }
    }
  }
}

curl http://localhost:9200
{
  "status" : 200,
  "name" : "Lin Sun",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "1.5.2",
    "build_hash" : "62ff9868b4c8a0c45860bebb259e21980778ab1c",
    "build_timestamp" : "2015-04-27T09:21:06Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.4"
  },
  "tagline" : "You Know, for Search"
}

My elasticsearch.yml file is located in /etc/elasticsearch

Any help is very much appreciated.

Regards,

What about /etc/elasticsearch/logging.yml?
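For example (standard RPM config path assumed):

    ls -l /etc/elasticsearch/
    cat /etc/elasticsearch/logging.yml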

Hello,

It seems to be the default:
##################### Elasticsearch Configuration Example #####################

# This file contains an overview of various configuration settings,
# targeted at operations staff. Application developers should
# consult the guide at http://elasticsearch.org/guide.
#
# The installation procedure is covered at
# http://elasticsearch.org/guide/en/elasticsearch/reference/current/setup.html.
#
# Elasticsearch comes with reasonable defaults for most settings,
# so you can try it out without bothering with configuration.
#
# Most of the time, these defaults are just fine for running a production
# cluster. If you're fine-tuning your cluster, or wondering about the
# effect of certain configuration option, please do ask on the
# mailing list or IRC channel [http://elasticsearch.org/community].

# Any element in the configuration can be replaced with environment variables
# by placing them in ${...} notation. For example:
#
#node.rack: ${RACK_ENV_VAR}

# For information on supported formats and syntax for the config file, see
# http://elasticsearch.org/guide/en/elasticsearch/reference/current/setup-configuration.html

################################### Cluster ###################################

# Cluster name identifies your cluster for auto-discovery. If you're running
# multiple clusters on the same network, make sure you're using unique names.
#
#cluster.name: elasticsearch

#################################### Node #####################################

# Node names are generated dynamically on startup, so you're relieved
# from configuring them manually. You can tie this node to a specific name:
#
#node.name: "Franz Kafka"

# Every node can be configured to allow or deny being eligible as the master,
# and to allow or deny to store the data.
#
# Allow this node to be eligible as a master node (enabled by default):
#
#node.master: true
#
# Allow this node to store data (enabled by default):
#
#node.data: true

# You can exploit these settings to design advanced cluster topologies.
#
# 1. You want this node to never become a master node, only to hold data.
# This will be the "workhorse" of your cluster.
#
#node.master: false
#node.data: true
#
# 2. You want this node to only serve as a master: to not store any data and
# to have free resources. This will be the "coordinator" of your cluster.
#
#node.master: true
#node.data: false
#
# 3. You want this node to be neither master nor data node, but
# to act as a "search load balancer" (fetching data from nodes,
# aggregating results, etc.)
#
#node.master: false
#node.data: false

# Use the Cluster Health API [http://localhost:9200/_cluster/health], the
# Node Info API [http://localhost:9200/_nodes] or GUI tools
# such as http://www.elasticsearch.org/overview/marvel/,
# http://github.com/karmi/elasticsearch-paramedic,
# http://github.com/lukas-vlcek/bigdesk and
# http://mobz.github.com/elasticsearch-head to inspect the cluster state.

# A node can have generic attributes associated with it, which can later be used
# for customized shard allocation filtering, or allocation awareness. An attribute
# is a simple key value pair, similar to node.key: value, here is an example:
#
#node.rack: rack314

# By default, multiple nodes are allowed to start from the same installation location
# to disable it, set the following:
#node.max_local_storage_nodes: 1

#################################### Index ####################################

# You can set a number of options (such as shard/replica options, mapping
# or analyzer definitions, translog settings, ...) for indices globally,
# in this file.
#
# Note, that it makes more sense to configure index settings specifically for
# a certain index, either when creating it or by using the index templates API.
#
# See http://elasticsearch.org/guide/en/elasticsearch/reference/current/index-modules.html and
# http://elasticsearch.org/guide/en/elasticsearch/reference/current/indices-create-index.html
# for more information.

# Set the number of shards (splits) of an index (5 by default):
#
#index.number_of_shards: 5

# Set the number of replicas (additional copies) of an index (1 by default):
#
#index.number_of_replicas: 1

# Note, that for development on a local machine, with small indices, it usually
# makes sense to "disable" the distributed features:
#
#index.number_of_shards: 1
#index.number_of_replicas: 0

# These settings directly affect the performance of index and search operations
# in your cluster. Assuming you have enough machines to hold shards and
# replicas, the rule of thumb is:
#
# 1. Having more shards enhances the indexing performance and allows to
# distribute a big index across machines.
# 2. Having more replicas enhances the search performance and improves the
# cluster availability.
#
# The "number_of_shards" is a one-time setting for an index.
#
# The "number_of_replicas" can be increased or decreased anytime,
# by using the Index Update Settings API.
#
# Elasticsearch takes care about load balancing, relocating, gathering the
# results from nodes, etc. Experiment with different settings to fine-tune
# your setup.

# Use the Index Status API (http://localhost:9200/A/_status) to inspect
# the index status.

#################################### Paths ####################################

# Path to directory containing configuration (this file and logging.yml):
#
#path.conf: /path/to/conf

# Path to directory where to store index data allocated for this node.
#
#path.data: /path/to/data
#
# Can optionally include more than one location, causing data to be striped across
# the locations (a la RAID 0) on a file level, favouring locations with most free
# space on creation. For example:
#
#path.data: /path/to/data1,/path/to/data2

# Path to temporary files:
#
#path.work: /path/to/work

# Path to log files:
#
#path.logs: /path/to/logs

# Path to where plugins are installed:
#
#path.plugins: /path/to/plugins

#################################### Plugin ###################################

# If a plugin listed here is not installed for current node, the node will not start.
#
#plugin.mandatory: mapper-attachments,lang-groovy

################################### Memory ####################################

# Elasticsearch performs poorly when JVM starts swapping: you should ensure that
# it never swaps.
#
# Set this property to true to lock the memory:
#
#bootstrap.mlockall: true

# Make sure that the ES_MIN_MEM and ES_MAX_MEM environment variables are set
# to the same value, and that the machine has enough memory to allocate
# for Elasticsearch, leaving enough memory for the operating system itself.
#
# You should also make sure that the Elasticsearch process is allowed to lock
# the memory, eg. by using ulimit -l unlimited.

############################## Network And HTTP ###############################

# Elasticsearch, by default, binds itself to the 0.0.0.0 address, and listens
# on port [9200-9300] for HTTP traffic and on port [9300-9400] for node-to-node
# communication. (the range means that if the port is busy, it will automatically
# try the next port).

# Set the bind address specifically (IPv4 or IPv6):
#
#network.bind_host: 192.168.0.1

# Set the address other nodes will use to communicate with this node. If not
# set, it is automatically derived. It must point to an actual IP address.
#
#network.publish_host: 192.168.0.1

# Set both 'bind_host' and 'publish_host':
#
#network.host: 192.168.0.1

# Set a custom port for the node to node communication (9300 by default):
#
#transport.tcp.port: 9300

# Enable compression for all communication between nodes (disabled by default):
#
#transport.tcp.compress: true

# Set a custom port to listen for HTTP traffic:
#
#http.port: 9200

# Set a custom allowed content length:
#
#http.max_content_length: 100mb

# Disable HTTP completely:
#
#http.enabled: false

################################### Gateway ###################################

# The gateway allows for persisting the cluster state between full cluster
# restarts. Every change to the state (such as adding an index) will be stored
# in the gateway, and when the cluster starts up for the first time,
# it will read its state from the gateway.

# There are several types of gateway implementations. For more information, see
# http://elasticsearch.org/guide/en/elasticsearch/reference/current/modules-gateway.html.

# The default gateway type is the "local" gateway (recommended):
#
#gateway.type: local

# Settings below control how and when to start the initial recovery process on
# a full cluster restart (to reuse as much local data as possible when using shared
# gateway).

# Allow recovery process after N nodes in a cluster are up:
#
#gateway.recover_after_nodes: 1

# Set the timeout to initiate the recovery process, once the N nodes
# from previous setting are up (accepts time value):
#
#gateway.recover_after_time: 5m

# Set how many nodes are expected in this cluster. Once these N nodes
# are up (and recover_after_nodes is met), begin recovery process immediately
# (without waiting for recover_after_time to expire):
#
#gateway.expected_nodes: 2

############################# Recovery Throttling #############################

# These settings allow to control the process of shards allocation between
# nodes during initial recovery, replica allocation, rebalancing,
# or when adding and removing nodes.

# Set the number of concurrent recoveries happening on a node:
#
# 1. During the initial recovery
#
#cluster.routing.allocation.node_initial_primaries_recoveries: 4
#
# 2. During adding/removing nodes, rebalancing, etc
#
#cluster.routing.allocation.node_concurrent_recoveries: 2

# Set to throttle throughput when recovering (eg. 100mb, by default 20mb):
#
#indices.recovery.max_bytes_per_sec: 20mb

# Set to limit the number of open concurrent streams when
# recovering a shard from a peer:
#
#indices.recovery.concurrent_streams: 5

################################## Discovery ##################################

# Discovery infrastructure ensures nodes can be found within a cluster
# and master node is elected. Multicast discovery is the default.

# Set to ensure a node sees N other master eligible nodes to be considered
# operational within the cluster. This should be set to a quorum/majority of
# the master-eligible nodes in the cluster.
#
#discovery.zen.minimum_master_nodes: 1

# Set the time to wait for ping responses from other nodes when discovering.
# Set this option to a higher value on a slow or congested network
# to minimize discovery failures:
#
#discovery.zen.ping.timeout: 3s

# For more information, see
# http://elasticsearch.org/guide/en/elasticsearch/reference/current/modules-discovery-zen.html

# Unicast discovery allows to explicitly control which nodes will be used
# to discover the cluster. It can be used when multicast is not present,
# or to restrict the cluster communication-wise.
#
# 1. Disable multicast discovery (enabled by default):
#
#discovery.zen.ping.multicast.enabled: false
#
# 2. Configure an initial list of master nodes in the cluster
# to perform discovery when new nodes (master or data) are started:
#
#discovery.zen.ping.unicast.hosts: ["host1", "host2:port"]

# EC2 discovery allows to use AWS EC2 API in order to perform discovery.
#
# You have to install the cloud-aws plugin for enabling the EC2 discovery.
#
# For more information, see
# http://elasticsearch.org/guide/en/elasticsearch/reference/current/modules-discovery-ec2.html
#
# See http://elasticsearch.org/tutorials/elasticsearch-on-ec2/
# for a step-by-step tutorial.

# GCE discovery allows to use Google Compute Engine API in order to perform discovery.
#
# You have to install the cloud-gce plugin for enabling the GCE discovery.
#
# For more information, see https://github.com/elasticsearch/elasticsearch-cloud-gce.

# Azure discovery allows to use Azure API in order to perform discovery.
#
# You have to install the cloud-azure plugin for enabling the Azure discovery.
#
# For more information, see https://github.com/elasticsearch/elasticsearch-cloud-azure.

################################## Slow Log ##################################

# Shard level query and fetch threshold logging.

#index.search.slowlog.threshold.query.warn: 10s
#index.search.slowlog.threshold.query.info: 5s
#index.search.slowlog.threshold.query.debug: 2s
#index.search.slowlog.threshold.query.trace: 500ms

#index.search.slowlog.threshold.fetch.warn: 1s
#index.search.slowlog.threshold.fetch.info: 800ms
#index.search.slowlog.threshold.fetch.debug: 500ms
#index.search.slowlog.threshold.fetch.trace: 200ms

#index.indexing.slowlog.threshold.index.warn: 10s
#index.indexing.slowlog.threshold.index.info: 5s
#index.indexing.slowlog.threshold.index.debug: 2s
#index.indexing.slowlog.threshold.index.trace: 500ms

################################## GC Logging ################################

#monitor.jvm.gc.young.warn: 1000ms
#monitor.jvm.gc.young.info: 700ms
#monitor.jvm.gc.young.debug: 400ms

#monitor.jvm.gc.old.warn: 10s
#monitor.jvm.gc.old.info: 5s
#monitor.jvm.gc.old.debug: 2s

################################## Security ################################

# Uncomment if you want to enable JSONP as a valid return transport on the
# http server. With this enabled, it may pose a security risk, so disabling
# it unless you need it is recommended (it is disabled by default).
#
#http.jsonp.enable: true

I am a newbie; please point me in the right direction.

Thanks a lot

Never mind elasticsearch.yml. It's logging.yml I'm asking about. Do you have such a file in /etc/elasticsearch?

Hi,
Here is the file:
cat logging.yml
# you can override this using by setting a system property, for example -Des.logger.level=DEBUG
es.logger.level: INFO
rootLogger: ${es.logger.level}, console, file
logger:
# log action execution errors for easier debugging
action: DEBUG
# reduce the logging for aws, too much is logged under the default INFO
com.amazonaws: WARN

# gateway
#gateway: DEBUG
#index.gateway: DEBUG

# peer shard recovery
#indices.recovery: DEBUG

# discovery
#discovery: TRACE

index.search.slowlog: TRACE, index_search_slow_log_file
index.indexing.slowlog: TRACE, index_indexing_slow_log_file

additivity:
index.search.slowlog: false
index.indexing.slowlog: false

appender:
console:
type: console
layout:
type: consolePattern
conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"

file:
type: dailyRollingFile
file: ${path.logs}/${cluster.name}.log
datePattern: "'.'yyyy-MM-dd"
layout:
type: pattern
conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"

# Use the following log4j-extras RollingFileAppender to enable gzip compression of log files.
# For more information see https://logging.apache.org/log4j/extras/apidocs/org/apache/log4j/rolling/RollingFileAppender.html
#file:
#type: extrasRollingFile
#file: ${path.logs}/${cluster.name}.log
#rollingPolicy: timeBased
#rollingPolicy.FileNamePattern: ${path.logs}/${cluster.name}.log.%d{yyyy-MM-dd}.gz
#layout:
#type: pattern
#conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"

index_search_slow_log_file:
type: dailyRollingFile
file: ${path.logs}/${cluster.name}_index_search_slowlog.log
datePattern: "'.'yyyy-MM-dd"
layout:
type: pattern
conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"

index_indexing_slow_log_file:
type: dailyRollingFile
file: ${path.logs}/${cluster.name}_index_indexing_slowlog.log
datePattern: "'.'yyyy-MM-dd"
layout:
type: pattern
conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"

Any help is appreciated,

Is there really no indentation in the file? If not, the file isn't valid YAML, which would explain why Elasticsearch isn't able to load it. Where did you get the file from? It's not the stock one from the Elasticsearch distribution.
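For comparison, the stock logging.yml from the 1.x distribution indents the logger entries under their parent keys, roughly like this excerpt (check the file that ships with your exact version):

    es.logger.level: INFO
    rootLogger: ${es.logger.level}, console, file
    logger:
      # log action execution errors for easier debugging
      action: DEBUG
      # reduce the logging for aws, too much is logged under the default INFO
      com.amazonaws: WARN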

Hello,

I installed v1.5.2 from the repo, following the community guide. But before that I had installed Elasticsearch v1.1.
What should I do now?

Regards,

I installed v1.5.2 from the repo, following the community guide. But before that I had installed Elasticsearch v1.1.
What should I do now?

Sorry, I have no idea what you're asking. Please answer my previous questions about your logging.yml.

Hello,
I think my logging.yml is left over from v1.1. After I installed v1.5 it fails.
What can I do now?
Regards,

Using the logging.yml that ships with the ES version you're using would be a good way of getting back to a sane state. Once that's done you can start adding your customizations.
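One way to do that, as a sketch (it assumes the stock file for 1.5.2 is still published at this path in the source tree; verify the URL before relying on it):

    # back up the current file, then fetch the stock logging.yml for 1.5.2
    sudo cp /etc/elasticsearch/logging.yml /etc/elasticsearch/logging.yml.bak
    sudo curl -o /etc/elasticsearch/logging.yml https://raw.githubusercontent.com/elastic/elasticsearch/v1.5.2/config/logging.yml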

Hello,

What do you mean? Could you please show me exactly what to do? I am a newbie with ES.

Any suggestion is very much appreciated.

Regards,

How did you install Elasticsearch?

Hello,

I installed from the repo.

Here is the list of files that I have:
ls -l /etc/elasticsearch/
total 52
-rw-r--r-- 1 root root 13500 May 14 15:27 elasticsearch.yml
-rw-r--r-- 1 root root 13208 Nov 8 2014 elasticsearch.yml.org
-rw-r--r-- 1 root root 13260 Nov 8 2014 elasticsearch.yml.rpmsave
-rw-r--r-- 1 root root 2030 Apr 27 16:34 logging.yml

ls -l /usr/share/elasticsearch/
total 44
drwxr-xr-x 2 root root 4096 May 13 13:26 bin
drwxr-xr-x 3 root root 4096 May 13 13:50 data
drwxr-xr-x 3 root root 4096 May 13 13:26 lib
-rw-r--r-- 1 root root 11358 Apr 27 16:34 LICENSE.txt
-rw-r--r-- 1 root root 150 Apr 27 16:34 NOTICE.txt
drwxr-xr-x 3 elasticsearch elasticsearch 4096 May 13 13:49 plugins
-rw-r--r-- 1 root root 8499 Apr 27 16:34 README.textile

Any suggestion is very much appreciated.

Then you should be using service elasticsearch start instead of calling bin/elasticsearch.
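That is, something along these lines (assuming the standard RPM service layout and log path):

    sudo service elasticsearch start
    sudo service elasticsearch status
    # the service writes its logs here by default:
    sudo tail -n 50 /var/log/elasticsearch/elasticsearch.log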

Hello,

I am now using service elasticsearch start instead of calling bin/elasticsearch, and I changed permissions to 777 on /var/log/elasticsearch.
But I still get an error when checking the status:

[root@subsys]# service elasticsearch start
Starting elasticsearch:                                    [  OK  ]
[root@subsys]# service elasticsearch status
elasticsearch dead but subsys locked

I also tried going to /var/lock/subsys, deleting the elasticsearch lock file (rm elasticsearch), then stopping and starting Elasticsearch again.
But Elasticsearch is still dead.
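Written out as commands (run as root or with sudo), those steps were:

    sudo rm /var/lock/subsys/elasticsearch
    sudo service elasticsearch stop
    sudo service elasticsearch start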

I also checked the elasticsearch.log file in /var/log/elasticsearch:

[root@elasticsearch]# cat elasticsearch.log

    [2015-05-15 13:08:46,170][INFO ][node                     ] [Aldebron] version[1.5.2], pid[32730], build[62ff986/2015-04-27T09:21:06Z]
    [2015-05-15 13:08:46,171][INFO ][node                     ] [Aldebron] initializing ...
    [2015-05-15 13:08:46,175][INFO ][plugins                  ] [Aldebron] loaded [], sites [head]
    [2015-05-15 13:08:46,210][ERROR][bootstrap                ] Exception
    org.elasticsearch.ElasticsearchIllegalStateException: Failed to created node environment
            at org.elasticsearch.node.internal.InternalNode.<init>(InternalNode.java:162)
            at org.elasticsearch.node.NodeBuilder.build(NodeBuilder.java:159)
            at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:70)
            at org.elasticsearch.bootstrap.Bootstrap.main(Bootstrap.java:213)
            at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:32)
    Caused by: java.nio.file.AccessDeniedException: /var/lib/elasticsearch/elasticsearch/nodes/1
            at sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)
            at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
            at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
            at sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:383)
            at java.nio.file.Files.createDirectory(Files.java:630)
            at java.nio.file.Files.createAndCheckIsDirectory(Files.java:734)
            at java.nio.file.Files.createDirectories(Files.java:720)
            at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:105)
            at org.elasticsearch.node.internal.InternalNode.<init>(InternalNode.java:160)
            ... 4 more
    [root@elasticsearch]# java -version
    java version "1.7.0_79"
    OpenJDK Runtime Environment (rhel-2.5.5.1.el6_6-x86_64 u79-b14)
    OpenJDK 64-Bit Server VM (build 24.79-b02, mixed mode)

Where is my configuration wrong?

Regards,

That is the problem.
Did you do any manual permission changes after the install process?
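A quick way to check ownership of the data path from the stack trace above:

    ls -ld /var/lib/elasticsearch
    ls -lR /var/lib/elasticsearch/elasticsearch/nodes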

Hello there. I also installed from the PPA on Kubuntu and ran into the same problem: Elasticsearch wouldn't start, failing with the above access-denied error when run as a service with systemctl.
I think the problem is that the elasticsearch user does not have permission to write in /usr/share/elasticsearch, which belongs to root and is where it tries to create a data folder to set up the nodes. I got around it by creating the folder /usr/share/elasticsearch/data myself and chowning it to elasticsearch:elasticsearch, and then I could start the service! The log4j problem probably belongs to the same category.
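As a rough sketch of that workaround (paths as described above; the systemd service name is assumed to be elasticsearch):

    sudo mkdir -p /usr/share/elasticsearch/data
    sudo chown elasticsearch:elasticsearch /usr/share/elasticsearch/data
    sudo systemctl restart elasticsearch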

Regards, Thomas

Hello,

Now I can start Elasticsearch after changing the permissions for the elasticsearch user.

Thanks for all the support.

I know this is an old post, but I am facing the same error on CentOS and I just can't figure out how to use the newly created data folder.

Try:

chown elasticsearch:elasticsearch /path/to/newdatafolder

All folders, files, logs, etc. need to be owned by elasticsearch:elasticsearch (owner:group), as in the example below.
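For instance, on a typical RPM layout (paths assumed; adjust to your install):

    sudo chown -R elasticsearch:elasticsearch /var/lib/elasticsearch /var/log/elasticsearch
    # verify ownership afterwards:
    ls -ld /var/lib/elasticsearch /var/log/elasticsearch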