Facing a "failed to create repository" exception when registering a snapshot repository

Hi Team,

Can anyone help me resolve this issue?

I have set path.repo like this: path.repo: ["/usr/share/elasticsearch/esbackup"]. Elasticsearch is running on an AWS server, but while creating the repository I get the error below.

The curl command:

curl -XPUT '23.45.44.4:10090/_snapshot/productionjobsindexbackup' -d '{
  "type": "fs",
  "settings": {
    "location": "/usr/share/elasticsearch/esbackup",
    "compress": "true"
  }
}'

Caused by: org.elasticsearch.repositories.RepositoryException: [productionjobsindexbackup] location [/usr/share/elasticsearch/esbackup] doesn't match any of the locations specified by path.repo because this setting is empty
at org.elasticsearch.repositories.fs.FsRepository.<init>(FsRepository.java:91) ~[elasticsearch-5.6.2.jar:5.6.2]
at org.elasticsearch.repositories.RepositoriesModule.lambda$new$0(RepositoriesModule.java:49) ~[elasticsearch-5.6.2.jar:5.6.2]
at org.elasticsearch.repositories.RepositoriesService.createRepository(RepositoriesService.java:383) ~[elasticsearch-5.6.2.jar:5.6.2]
at org.elasticsearch.repositories.RepositoriesService.registerRepository(RepositoriesService.java:356) ~[elasticsearch-5.6.2.jar:5.6.2]
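
(As a side note, from Elasticsearch 6.0 onward this request also needs an explicit JSON content type; on 5.6 it is optional but harmless. The same call with the header would look roughly like this:

curl -XPUT '23.45.44.4:10090/_snapshot/productionjobsindexbackup' -H 'Content-Type: application/json' -d '{
  "type": "fs",
  "settings": {
    "location": "/usr/share/elasticsearch/esbackup",
    "compress": "true"
  }
}'
)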

Hi Team,
Any update on this issue? Can you help me resolve it?

Thanks in advance

It seems the setting is not found. Did you restart Elasticsearch after adding path.repo?
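
For reference, on a Debian package install the restart and a quick check that the service came back up would look roughly like this (a minimal sketch, assuming a systemd-managed system):

# restart the node so it re-reads /etc/elasticsearch/elasticsearch.yml
sudo systemctl restart elasticsearch
# confirm the node is running again
sudo systemctl status elasticsearch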


Yes @Christian_Dahlqvist, I have restarted Elasticsearch after adding path.repo.

How did you install Elasticsearch?

@Christian_Dahlqvist, I installed Elasticsearch using the Debian package (https://www.elastic.co/guide/en/elasticsearch/reference/current/deb.html#deb) on an AWS EC2 server. I access Elasticsearch from my local machine using the public IP.

Can you show your config file? Is it located in the default /etc/elasticsearch directory?
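
A rough way to double-check which configuration directory the running process actually uses (assuming a systemd-managed Debian install; the exact flags can vary by version) is to look at the process arguments and the service unit:

# the command line shows which configuration directory was passed to the JVM
ps aux | grep org.elasticsearch.bootstrap.Elasticsearch
# the unit file shows how the Debian package starts the service
sudo systemctl cat elasticsearch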

Yes @Christian_Dahlqvist, it is located at /etc/elasticsearch/elasticsearch.yml. On my local Elasticsearch, snapshot and restore work fine; I am only facing this issue on the AWS server. This is my configuration file for your reference.

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: tech-prod
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: ${HOSTNAME}
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
path.repo: ["/usr/share/elasticsearch/esbackup"]
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 172.30.0.198
#
# Set a custom port for HTTP:
#
http.port: 10090
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.zen.ping.unicast.hosts: ["172.30.0.198","172.30.0.199"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
discovery.zen.minimum_master_nodes: 2
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
gateway.recover_after_nodes: 2
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
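
It may also be worth confirming that the repository directory exists and is writable by the user the Debian package runs Elasticsearch as, on every node. A rough sketch (path taken from the thread; the user name is assumed to be the package default, elasticsearch):

# create the shared repository directory if it is missing
sudo mkdir -p /usr/share/elasticsearch/esbackup
# make it writable by the Elasticsearch process
sudo chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/esbackup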

Can you please format the config file correctly using the UI tools? It is hard to spot errors as it is currently formatted.

Yes @Christian_Dahlqvist, can you check now?

# ======================== Elasticsearch Configuration =========================
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: techfetch-prod
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: ${HOSTNAME}
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
path.repo: ["/usr/share/elasticsearch/esbackup"]
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 172.30.0.101
#
# Set a custom port for HTTP:
#
http.port: 10090
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.zen.ping.unicast.hosts: ["172.30.0.101","172.30.0.104"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
discovery.zen.minimum_master_nodes: 2
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
gateway.recover_after_nodes: 2
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true

That is not any better. Please format the config as code using the </> tool.

Is this OK for you @Christian_Dahlqvist, or does it still need formatting? Can you send me a link to the (</>) formatting tool?

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: techfetch-prod
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: ${HOSTNAME}
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
path.repo: ["/usr/share/elasticsearch/esbackup"]
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 172.30.0.101
#
# Set a custom port for HTTP:
#
http.port: 10090
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.zen.ping.unicast.hosts: ["172.30.0.101","172.30.0.104"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
discovery.zen.minimum_master_nodes: 2
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
gateway.recover_after_nodes: 2
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true

Hi @Christian_Dahlqvist,

Any update on this issue?

No, I have not spotted anything that looks wrong, so I am not sure what is going on.
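
One thing that might still be worth double-checking in a two-node cluster like this: path.repo has to be present (and the node restarted) on every master-eligible and data node, not just the one that receives the PUT request. The nodes info API shows what each node actually picked up; a rough check along these lines (host and port taken from the thread):

curl '23.45.44.4:10090/_nodes/settings?pretty&filter_path=nodes.*.settings.path.repo'

If a node is missing from the filtered output, that node started without the setting, which would match the "because this setting is empty" part of the error.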

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.