Error creating a repository in Azure using the Azure cloud plugin?

I created a storage account in Azure, but when I try to create a repository it fails with the error below.

The command I ran:
PUT _snapshot/azurerepository
{
  "type": "azure"
}

Response

{
  "error": {
    "root_cause": [
      {
        "type": "repository_verification_exception",
        "reason": "[azurerepository] can not initialize container elasticsearch-snapshots"
      }
    ],
    "type": "repository_verification_exception",
    "reason": "[azurerepository] can not initialize container elasticsearch-snapshots",
    "caused_by": {
      "type": "storage_exception",
      "reason": "An unknown failure occurred : Connection refused: connect",
      "caused_by": {
        "type": "connect_exception",
        "reason": "Connection refused: connect"
      }
    }
  },
  "status": 500
}
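(For what it's worth, I believe the repository settings can also name the container and the configured account explicitly; the request below is just a sketch with placeholder values for base_path, based on the Azure repository settings described for the 2.x cloud-azure plugin.)

PUT _snapshot/azurerepository
{
  "type": "azure",
  "settings": {
    "account": "my_account",
    "container": "elasticsearch-snapshots",
    "base_path": "backups",
    "compress": true
  }
}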

Thanks

Any stacktrace in the logs? Maybe change the log level to debug?

You mean by running something like this:

curl -XPUT localhost:9200/_cluster/settings -d '{
  "transient" : {
    "logger.cloud.azure" : "DEBUG",
    "logger.repositories.azure" : "DEBUG"
  }
}'

Yes. Then look at the Elasticsearch logs.
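Once you are done debugging, the loggers can be put back to their normal level the same way; a minimal sketch using the same transient cluster settings API:

curl -XPUT localhost:9200/_cluster/settings -d '{
  "transient" : {
    "logger.cloud.azure" : "INFO",
    "logger.repositories.azure" : "INFO"
  }
}'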

Hi David,
Here are the logs. Can you help me sort this out?

[2017-02-09 14:12:56,693][INFO ][cluster.routing.allocation.decider] [Marvin Flumm] low disk watermark [85%] exceeded on [bbAWnyH3TpuTUTtTy38mog][Marvin Flumm][C:\ElasticSearch 2.4.0\elasticsearch-2.4.0\data\elasticsearch\nodes\0] free: 10.3gb[12.8%], replicas will not be assigned to this node
[2017-02-09 14:13:22,409][DEBUG][repositories.azure       ] [Marvin Flumm] using container [elasticsearch-snapshots], chunk_size [64mb], compress [false], base_path [null]
[2017-02-09 14:13:22,409][INFO ][repositories             ] [Marvin Flumm] update repository [azure-backup]
[2017-02-09 14:13:22,421][TRACE][cloud.azure.storage      ] [Marvin Flumm] selecting a client for account [null], mode [PRIMARY_ONLY]
[2017-02-09 14:13:26,600][ERROR][cloud.azure.storage      ] [Marvin Flumm] can not access container [elasticsearch-snapshots]
[2017-02-09 14:13:26,600][DEBUG][repositories.azure       ] [Marvin Flumm] container [elasticsearch-snapshots] does not exist. Creating...
[2017-02-09 14:13:26,600][TRACE][cloud.azure.storage      ] [Marvin Flumm] selecting a client for account [null], mode [PRIMARY_ONLY]
[2017-02-09 14:13:26,600][TRACE][cloud.azure.storage      ] [Marvin Flumm] creating container [elasticsearch-snapshots]
[2017-02-09 14:13:26,709][INFO ][cluster.routing.allocation.decider] [Marvin Flumm] low disk watermark [85%] exceeded on [bbAWnyH3TpuTUTtTy38mog][Marvin Flumm][C:\ElasticSearch 2.4.0\elasticsearch-2.4.0\data\elasticsearch\nodes\0] free: 10.3gb[12.8%], replicas will not be assigned to this node
[2017-02-09 14:13:30,679][WARN ][repositories.azure       ] [Marvin Flumm] can not initialize container [elasticsearch-snapshots]: [An unknown failure occurred : Connection refused: connect]
[2017-02-09 14:13:30,679][WARN ][rest.suppressed          ] path: /_snapshot/azure-backup, params: {repository=azure-backup}
RepositoryVerificationException[[azure-backup] can not initialize container elasticsearch-snapshots]; nested: StorageException[An unknown failure occurred : Connection refused: connect]; nested: ConnectException[Connection refused: connect];
	at org.elasticsearch.repositories.azure.AzureRepository.startVerification(AzureRepository.java:183)
	at org.elasticsearch.repositories.RepositoriesService.verifyRepository(RepositoriesService.java:211)
	at org.elasticsearch.repositories.RepositoriesService$VerifyingRegisterRepositoryListener.onResponse(RepositoriesService.java:436)
	at org.elasticsearch.repositories.RepositoriesService$VerifyingRegisterRepositoryListener.onResponse(RepositoriesService.java:421)
	at org.elasticsearch.cluster.AckedClusterStateUpdateTask.onAllNodesAcked(AckedClusterStateUpdateTask.java:63)
	at org.elasticsearch.cluster.service.InternalClusterService$SafeAckedClusterStateTaskListener.onAllNodesAcked(InternalClusterService.java:733)
	at org.elasticsearch.cluster.service.InternalClusterService$AckCountDownListener.onNodeAck(InternalClusterService.java:1013)
	at org.elasticsearch.cluster.service.InternalClusterService$DelegetingAckListener.onNodeAck(InternalClusterService.java:952)
	at org.elasticsearch.cluster.service.InternalClusterService.runTasksForExecutor(InternalClusterService.java:637)
	at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:772)
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231)
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Caused by: com.microsoft.azure.storage.StorageException: An unknown failure occurred : Connection refused: connect
	at com.microsoft.azure.storage.StorageException.translateException(StorageException.java:66)
	at com.microsoft.azure.storage.core.ExecutionEngine.executeWithRetry(ExecutionEngine.java:199)
	at com.microsoft.azure.storage.blob.CloudBlobContainer.exists(CloudBlobContainer.java:717)
	at com.microsoft.azure.storage.blob.CloudBlobContainer.createIfNotExists(CloudBlobContainer.java:328)
	at com.microsoft.azure.storage.blob.CloudBlobContainer.createIfNotExists(CloudBlobContainer.java:304)
	at org.elasticsearch.cloud.azure.storage.AzureStorageServiceImpl.createContainer(AzureStorageServiceImpl.java:165)
	at org.elasticsearch.cloud.azure.blobstore.AzureBlobStore.createContainer(AzureBlobStore.java:117)
	at org.elasticsearch.repositories.azure.AzureRepository.startVerification(AzureRepository.java:179)
	... 14 more
Caused by: java.net.ConnectException: Connection refused: connect
	at java.net.DualStackPlainSocketImpl.waitForConnect(Native Method)
	at java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:85)
	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
	at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:172)
	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
	at java.net.Socket.connect(Socket.java:589)
	at sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:668)
	at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
	at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
	at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
	at sun.net.www.protocol.https.HttpsClient.<init>(HttpsClient.java:264)
	at sun.net.www.protocol.https.HttpsClient.New(HttpsClient.java:367)
	at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.getNewHttpClient(AbstractDelegateHttpsURLConnection.java:191)
	at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1138)
	at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1032)
	at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:177)
	at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1546)
	at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1474)
	at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
	at sun.net.www.protocol.https.HttpsURLConnectionImpl.getResponseCode(HttpsURLConnectionImpl.java:338)
	at com.microsoft.azure.storage.core.ExecutionEngine.executeWithRetry(ExecutionEngine.java:119)
	... 20 more

This looks strange to me. Which versions are you using?

Also can you share your elasticsearch.yml file? Please replace all credentials before posting.

I am using ES 2.4.0.
My elasticsearch.yml file is:

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please see the documentation for further information on configuration options:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/setup-configuration.html>
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
# cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
# node.name: node-1
#
# Add custom attributes to the node:
#
# node.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
# path.data: /path/to/data
#
# Path to log files:
#
# path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
# bootstrap.memory_lock: true
#
# Make sure that the `ES_HEAP_SIZE` environment variable is set to about half the memory
# available on the system and that the owner of the process is allowed to use this limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
# network.host: 192.168.0.1
#
# Set a custom port for HTTP:
#
# http.port: 9200
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-network.html>
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
# discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of nodes / 2 + 1):
#
# discovery.zen.minimum_master_nodes: 3
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-discovery.html>
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
# gateway.recover_after_nodes: 3
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-gateway.html>
#
# ---------------------------------- Various -----------------------------------
#
# Disable starting multiple nodes on a single system:
#
# node.max_local_storage_nodes: 1
#
# Require explicit names when deleting indices:
#
# action.destructive_requires_name: true
script.engine.groovy.inline.search: on
script.engine.groovy.inline.aggs: on
script.groovy.sandbox.enabled: true
path.repo: ["/ElasticSearch 2.4.0/elasticsearch-2.4.0/Backup"]
cloud:
    azure:
        storage:
            my_account:
                account: myaccountname
                key: mykeyname

Thanks.

I wonder if you are behind a firewall or a proxy by any chance.
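One quick thing to check (a sketch; `myaccountname` is a placeholder for the account name from your yml) is whether the machine running the node can reach the blob endpoint over HTTPS at all:

curl -v https://myaccountname.blob.core.windows.net/

If that connection is also refused, the machine is probably blocked by a firewall or has to go through a proxy. As far as I know the 2.4 cloud-azure plugin has no proxy settings of its own, so if a proxy is required you would likely have to pass the standard JVM proxy properties to Elasticsearch, for example (placeholder host and port, set via ES_JAVA_OPTS or however you pass JVM options on your install):

ES_JAVA_OPTS="-Dhttps.proxyHost=proxy.example.com -Dhttps.proxyPort=8080"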

Thanks, David.

I will look into it and let you know.
