{"error":"RemoteTransportException[[servername][inet[/xx.xxx.xxx.xx:9300]][cluster:admin/repository/put]]; nested:
RepositoryException[[stgdevsnapshots] failed to create repository]; nested: ConfigurationException[Guice configuration e
rrors:\r\n\r\n1) No implementation for org.elasticsearch.repositories.Repository was bound.\r\n at org.elasticsearch.re
positories.Repository\r\n\r\n1 error]; ","status":500}
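For context, the request that produced this was the repository registration itself, roughly along the lines of the sketch below (the repository name comes from the error above; the container and base_path values are placeholders, since the storage account and key are configured in elasticsearch.yml):
PUT _snapshot/stgdevsnapshots
{
  "type": "azure",
  "settings": {
    "container": "placeholder-container",
    "base_path": "placeholder/path"
  }
}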
I found a thread where someone was getting the same error, and the suggestion there was to enable trace logging, which I did. I used this:
PUT _cluster/settings
{"transient" : {"logger.cloud.azure" : "TRACE", "logger.repositories.azure" : "TRACE"}}
Is that logging setting still valid? (The thread is from 2014.) I'm not seeing any relevant output in the log after running it. The issue that triggered that thread appears to have been an incorrectly configured .yml w/r/t the Azure storage account. I don't think that's the case here because 1) I've seen ES refuse to start at all if the cloud settings in the .yml are badly formed, 2) I've confirmed that the account name and key are correct, and 3) more generally, we've done this in two other environments without issue. Two differences come to mind between the environments that worked and this one:
In this case, the ES machines are on premises rather than in Azure.
In this case, we didn't run the install command ("plugin install elasticsearch/elasticsearch-cloud-azure/2.8.2") but instead only copied the cloud-azure directory over from machines that worked. We did that for a reason. Does running the plugin install command do anything beyond copying the plugin directory into place? If I run plugin --list, "cloud-azure" is returned.
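For reference, this is what I assume a clean reinstall on one of these nodes would look like (a sketch only; the plugin script is the one in the ES bin directory, and the remove/list flags are what I believe this 1.x-era script supports):
bin/plugin --remove cloud-azure
bin/plugin install elasticsearch/elasticsearch-cloud-azure/2.8.2
bin/plugin --list
# restart the node afterwards so the plugin is actually loaded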
Could you share your elasticsearch.yml here?
You can obviously replace your real credentials with XXXXX, but please keep the formatting intact and format your code with the </> icon.
Yes, we did restart them. I'm including the details you requested below. Please let me know what you see. Thanks for your help!
elasticsearch.yml
node.data: ${datanode}
cluster.routing.allocation.awareness.attributes: zone_id
cluster.name: xoistaging
indices.store.throttle.max_bytes_per_sec : 100mb
index.refresh_interval: 30s
index.translog.flush_threshold_size: 1gb
bootstrap.mlockall: true
http.enabled: true
path.data: ${pathdata}
path.work: ${pathwork}
path.logs: ${pathlogs}
discovery.zen.minimum_master_nodes: 2
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["Server1","Server2","Server3"]
discovery.zen.ping.timeout: 30s
marvel.agent.exporter.es.hosts: ["Marvel1:9200","Marvel2:9200","Marvel3:9200"]
# Defines the weight factor for shards allocated on a node (float). Defaults to 0.45f. Raising this raises the tendency to equalize the number of shards across all nodes in the cluster.
cluster.routing.allocation.balance.shard: 0.75f
#Defines a factor to the number of shards per index allocated on a specific node (float). Defaults to 0.55f. Raising this raises the tendency to equalize the number of shards per index across all nodes in the cluster.
cluster.routing.allocation.balance.index: 0.75f
# Block initial recovery after a full cluster restart until N nodes are started:
gateway.recover_after_nodes: 12
# Disable starting multiple nodes on a single system:
node.max_local_storage_nodes: 1
cloud:
    azure:
        storage:
            account: storageaccountname
            key: xxx
GET _cat/plugins?v:
name component version type url
Server1 marvel 1.3.1 j/s /_plugin/marvel/
Server1 cloud-azure 2.8.2 j
Server1 head NA s /_plugin/head/
Server2 marvel 1.3.1 j/s /_plugin/marvel/
Server2 cloud-azure 2.8.2 j
Server2 head NA s /_plugin/head/
Server3 marvel 1.3.1 j/s /_plugin/marvel/
Server3 cloud-azure 2.8.2 j
Server3 head NA s /_plugin/head/
Server4 marvel 1.3.1 j/s /_plugin/marvel/
Server4 cloud-azure 2.8.2 j
Server4 head NA s /_plugin/head/
Server5 marvel 1.3.1 j/s /_plugin/marvel/
Server5 cloud-azure 2.8.2 j
Server5 head NA s /_plugin/head/
Server6 marvel 1.3.1 j/s /_plugin/marvel/
Server6 cloud-azure 2.8.2 j
Server6 head NA s /_plugin/head/
Server7 marvel 1.3.1 j/s /_plugin/marvel/
Server7 cloud-azure 2.8.2 j
Server7 head NA s /_plugin/head/
Server8 marvel 1.3.1 j/s /_plugin/marvel/
Server8 cloud-azure 2.8.2 j
Server8 head NA s /_plugin/head/
Server9 marvel 1.3.1 j/s /_plugin/marvel/
Server9 cloud-azure 2.8.2 j
Server9 head NA s /_plugin/head/
Server10 marvel 1.3.1 j/s /_plugin/marvel/
Server10 cloud-azure 2.8.2 j
Server10 head NA s /_plugin/head/
Server11 marvel 1.3.1 j/s /_plugin/marvel/
Server11 cloud-azure 2.8.2 j
Server11 head NA s /_plugin/head/
Server12 marvel 1.3.1 j/s /_plugin/marvel/
Server12 cloud-azure 2.8.2 j
Server12 head NA s /_plugin/head/
Server13 marvel 1.3.1 j/s /_plugin/marvel/
Server13 cloud-azure 2.8.2 j
Server13 head NA s /_plugin/head/
Server14 marvel 1.3.1 j/s /_plugin/marvel/
Server14 cloud-azure 2.8.2 j
Server14 head NA s /_plugin/head/
Server15 marvel 1.3.1 j/s /_plugin/marvel/
Server15 cloud-azure 2.8.2 j
Server15 head NA s /_plugin/head/
Server16 marvel 1.3.1 j/s /_plugin/marvel/
Server16 cloud-azure 2.8.2 j
Server16 head NA s /_plugin/head/
Server17 marvel 1.3.1 j/s /_plugin/marvel/
Server17 cloud-azure 2.8.2 j
Server17 head NA s /_plugin/head/