Elasticsearch Docker container cannot Snapshot to Google Cloud Storage (GCS)

Editor's note: this post is cross-posted on Stack Overflow. Please respond in whichever forum the Elasticsearch community finds most appropriate for debugging user issues. Please note that the Elastic forum does not allow this post to contain URLs pointing to the Stack Overflow post, nor the Medium or GitHub URLs referenced below. I will happily do whatever the Elastic community expects to help the next users who find this post as we solve the task together.

This simple task is starkly difficult to debug using the resources available online. Let me explain the use case, then the handful of commands expected to perform the task, then the errors and the permissions conflicts that appear to be inherent to the problem. Please advise.

Use Case

Elasticsearch provides one of the few competitors to Google Cloud Search and Google Search. Elasticsearch is a search and analytics engine built on top of the open-source Apache Lucene library (the same library that powers Solr), and Elastic offers a managed service stack around it.

Elasticsearch creates backups of its indices, called Snapshots. Using Google Cloud services and a standard Service Account permissions file, Elasticsearch provides a feature for these Snapshots to be uploaded to and downloaded from a Google Cloud Storage bucket instead of the machine running the Elasticsearch server. This GCS integration is configured by adding the Service Account JSON file through the bin/elasticsearch-keystore add-file mechanism and registering the repository using the PUT _snapshot/BACKUP_NAME/ mechanism. Then, Snapshots are created or restored using a similar series of PUT and GET calls to the Elasticsearch server.
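For concreteness, here is a minimal sketch of that registration sequence as I understand it; the container name es01, the key file key.json, and the bucket my-es-snapshots are placeholders:

  # Copy the Service Account key into the container.
  docker cp key.json es01:/usr/share/elasticsearch/key.json
  # Store the key in the Elasticsearch keystore under the default GCS client.
  docker exec -t es01 bin/elasticsearch-keystore add-file --force gcs.client.default.credentials_file /usr/share/elasticsearch/key.json
  # Register a GCS snapshot repository pointing at the bucket.
  curl -X PUT "http://localhost:9200/_snapshot/BACKUP_NAME" -H 'Content-Type: application/json' -d '{"type": "gcs", "settings": {"bucket": "my-es-snapshots", "client": "default"}}'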

Elastic conveniently publishes a Docker image so that the entire installation and launch process can be containerized: docker.elastic.co/elasticsearch/elasticsearch:8.8.2

This tutorial, despite being from 2019, describes the process with appropriate links to official Elasticsearch pages:

The process takes a handful of commands. The Docker container is running on my local laptop, and the same Service Account has been used successfully to make Google Cloud Storage client calls from my own stack of Google clients in other applications. The problem is locating appropriate documentation from Elasticsearch of the workflow necessary to launch this feature for software developers.

Problem

Attempting to build that workflow produces a read-permissions error, "java.security.AccessControlException": access denied "java.io.FilePermission", for a specific file, /usr/share/elasticsearch/.config/gcloud/active_config. This file does not exist within the Docker container; however, creating it does not resolve the error, and neither does setting the internal filesystem permissions to chmod 1777 (a small inspection sketch follows). The source of this error is the java.security FilePermission structure of the Java runtime hosting Elasticsearch inside the Docker container. Strangely, very few forums or search results on Google identify this problem or walk the user through a straightforward solution: that is the overarching purpose of this ticket as we solve this relatively simple, foundational piece of Elasticsearch troubleshooting. A few forums from years ago mention that user control over java.security has been disabled from Elasticsearch 5.0 forward, which leaves me confused and at a loss for moving ahead. I suspect the fix is one or two lines of configuration added after the Docker container is launched.
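For reference, this is roughly how I inspected and attempted to patch that path from the host; the container name es01 is a placeholder for the actual container id, and none of this resolved the error:

  # Confirm the gcloud config path named in the exception does not exist in the container.
  docker exec -t es01 ls -la /usr/share/elasticsearch/.config/gcloud/
  # Attempted workaround: create the file and open up its permissions (did not help).
  docker exec -t es01 mkdir -p /usr/share/elasticsearch/.config/gcloud
  docker exec -t es01 touch /usr/share/elasticsearch/.config/gcloud/active_config
  docker exec -t es01 chmod -R 1777 /usr/share/elasticsearch/.config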

Logs

{"@timestamp":"2023-07-22T17:41:34.353Z", "log.level": "INFO", "message":"put repository [cheese7]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[05b6814dda47][masterService#updateTask][T#1]","log.logger":"org.elasticsearch.repositories.RepositoriesService","elasticsearch.cluster.uuid":"034Ez40kSKiiocCGr8Gw3A","elasticsearch.node.id":"toF_prk7Thm3JYGtsBDBPw","elasticsearch.node.name":"05b6814dda47","elasticsearch.cluster.name":"docker-cluster"}
{"@timestamp":"2023-07-22T17:41:34.464Z", "log.level": "WARN", "message":"failed to load default project id", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[05b6814dda47][snapshot][T#1]","log.logger":"org.elasticsearch.repositories.gcs.GoogleCloudStorageService","elasticsearch.cluster.uuid":"034Ez40kSKiiocCGr8Gw3A","elasticsearch.node.id":"toF_prk7Thm3JYGtsBDBPw","elasticsearch.node.name":"05b6814dda47","elasticsearch.cluster.name":"docker-cluster","error.type":"java.security.AccessControlException","error.message":"access denied (\"java.io.FilePermission\" \"/usr/share/elasticsearch/.config/gcloud/active_config\" \"read\")","error.stack_trace":"java.security.AccessControlException: access denied (\"java.io.FilePermission\" \"/usr/share/elasticsearch/.config/gcloud/active_config\" \"read\")\n\tat java.base/java.security.AccessControlContext.checkPermission(AccessControlContext.java:488)\n\tat java.base/java.security.AccessController.checkPermission(AccessController.java:1071)\n\tat java.base/java.lang.SecurityManager.checkPermission(SecurityManager.java:411)\n\tat java.base/java.lang.SecurityManager.checkRead(SecurityManager.java:742)\n\tat java.base/java.io.FileInputStream.<init>(FileInputStream.java:147)\n\tat com.google.common.io.Files$FileByteSource.openStream(Files.java:132)\n\tat com.google.common.io.Files$FileByteSource.openStream(Files.java:122)\n\tat com.google.common.io.ByteSource$AsCharSource.openStream(ByteSource.java:474)\n\tat com.google.common.io.CharSource.openBufferedStream(CharSource.java:126)\n\tat com.google.common.io.CharSource.readFirstLine(CharSource.java:316)\n\tat com.google.cloud.ServiceOptions.getActiveGoogleCloudConfig(ServiceOptions.java:396)\n\tat com.google.cloud.ServiceOptions.getGoogleCloudProjectId(ServiceOptions.java:413)\n\tat com.google.cloud.ServiceOptions.getDefaultProjectId(ServiceOptions.java:384)\n\tat java.base/java.security.AccessController.doPrivileged(AccessController.java:571)\n\tat org.elasticsearch.repositories.gcs.SocketAccess.doPrivilegedIOException(SocketAccess.java:33)\n\tat org.elasticsearch.repositories.gcs.GoogleCloudStorageService.createStorageOptions(GoogleCloudStorageService.java:196)\n\tat org.elasticsearch.repositories.gcs.GoogleCloudStorageService.createClient(GoogleCloudStorageService.java:172)\n\tat org.elasticsearch.repositories.gcs.GoogleCloudStorageService.client(GoogleCloudStorageService.java:112)\n\tat org.elasticsearch.repositories.gcs.GoogleCloudStorageBlobStore.client(GoogleCloudStorageBlobStore.java:125)\n\tat org.elasticsearch.repositories.gcs.GoogleCloudStorageBlobStore.lambda$writeBlobMultipart$8(GoogleCloudStorageBlobStore.java:474)\n\tat org.elasticsearch.repositories.gcs.SocketAccess.lambda$doPrivilegedVoidIOException$0(SocketAccess.java:43)\n\tat java.base/java.security.AccessController.doPrivileged(AccessController.java:571)\n\tat org.elasticsearch.repositories.gcs.SocketAccess.doPrivilegedVoidIOException(SocketAccess.java:42)\n\tat org.elasticsearch.repositories.gcs.GoogleCloudStorageBlobStore.writeBlobMultipart(GoogleCloudStorageBlobStore.java:474)\n\tat org.elasticsearch.repositories.gcs.GoogleCloudStorageBlobStore.writeBlob(GoogleCloudStorageBlobStore.java:264)\n\tat org.elasticsearch.repositories.gcs.GoogleCloudStorageBlobContainer.writeBlob(GoogleCloudStorageBlobContainer.java:80)\n\tat org.elasticsearch.repositories.gcs.GoogleCloudStorageBlobContainer.writeBlobAtomic(GoogleCloudStorageBlobContainer.java:95)\n\tat 
org.elasticsearch.server@8.8.2/org.elasticsearch.repositories.blobstore.BlobStoreRepository.startVerification(BlobStoreRepository.java:1707)\n\tat org.elasticsearch.server@8.8.2/org.elasticsearch.repositories.RepositoriesService$4.doRun(RepositoriesService.java:481)\n\tat org.elasticsearch.server@8.8.2/org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:983)\n\tat org.elasticsearch.server@8.8.2/org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)\n\tat java.base/java.lang.Thread.run(Thread.java:1623)\n"}
{"@timestamp":"2023-07-22T17:41:34.466Z", "log.level": "WARN", "message":"failed to load default project id fallback", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[05b6814dda47][snapshot][T#1]","log.logger":"org.elasticsearch.repositories.gcs.GoogleCloudStorageService","elasticsearch.cluster.uuid":"034Ez40kSKiiocCGr8Gw3A","elasticsearch.node.id":"toF_prk7Thm3JYGtsBDBPw","elasticsearch.node.name":"05b6814dda47","elasticsearch.cluster.name":"docker-cluster","error.type":"java.net.UnknownHostException","error.message":"metadata.google.internal","error.stack_trace":"java.net.UnknownHostException: metadata.google.internal\n\tat java.base/sun.nio.ch.NioSocketImpl.connect(NioSocketImpl.java:560)\n\tat java.base/java.net.Socket.connect(Socket.java:666)\n\tat java.base/sun.net.NetworkClient.doConnect(NetworkClient.java:178)\n\tat java.base/sun.net.www.http.HttpClient.openServer(HttpClient.java:532)\n\tat java.base/sun.net.www.http.HttpClient.openServer(HttpClient.java:637)\n\tat java.base/sun.net.www.http.HttpClient.<init>(HttpClient.java:280)\n\tat java.base/sun.net.www.http.HttpClient.New(HttpClient.java:385)\n\tat java.base/sun.net.www.http.HttpClient.New(HttpClient.java:407)\n\tat java.base/sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1308)\n\tat java.base/sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1241)\n\tat java.base/sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1127)\n\tat java.base/sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:1056)\n\tat java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1657)\n\tat java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1581)\n\tat org.elasticsearch.repositories.gcs.GoogleCloudStorageService.getDefaultProjectId(GoogleCloudStorageService.java:252)\n\tat org.elasticsearch.repositories.gcs.GoogleCloudStorageService.lambda$createStorageOptions$4(GoogleCloudStorageService.java:208)\n\tat org.elasticsearch.repositories.gcs.SocketAccess.lambda$doPrivilegedVoidIOException$0(SocketAccess.java:43)\n\tat java.base/java.security.AccessController.doPrivileged(AccessController.java:571)\n\tat org.elasticsearch.repositories.gcs.SocketAccess.doPrivilegedVoidIOException(SocketAccess.java:42)\n\tat org.elasticsearch.repositories.gcs.GoogleCloudStorageService.createStorageOptions(GoogleCloudStorageService.java:207)\n\tat org.elasticsearch.repositories.gcs.GoogleCloudStorageService.createClient(GoogleCloudStorageService.java:172)\n\tat org.elasticsearch.repositories.gcs.GoogleCloudStorageService.client(GoogleCloudStorageService.java:112)\n\tat org.elasticsearch.repositories.gcs.GoogleCloudStorageBlobStore.client(GoogleCloudStorageBlobStore.java:125)\n\tat org.elasticsearch.repositories.gcs.GoogleCloudStorageBlobStore.lambda$writeBlobMultipart$8(GoogleCloudStorageBlobStore.java:474)\n\tat org.elasticsearch.repositories.gcs.SocketAccess.lambda$doPrivilegedVoidIOException$0(SocketAccess.java:43)\n\tat java.base/java.security.AccessController.doPrivileged(AccessController.java:571)\n\tat org.elasticsearch.repositories.gcs.SocketAccess.doPrivilegedVoidIOException(SocketAccess.java:42)\n\tat org.elasticsearch.repositories.gcs.GoogleCloudStorageBlobStore.writeBlobMultipart(GoogleCloudStorageBlobStore.java:474)\n\tat 
org.elasticsearch.repositories.gcs.GoogleCloudStorageBlobStore.writeBlob(GoogleCloudStorageBlobStore.java:264)\n\tat org.elasticsearch.repositories.gcs.GoogleCloudStorageBlobContainer.writeBlob(GoogleCloudStorageBlobContainer.java:80)\n\tat org.elasticsearch.repositories.gcs.GoogleCloudStorageBlobContainer.writeBlobAtomic(GoogleCloudStorageBlobContainer.java:95)\n\tat org.elasticsearch.server@8.8.2/org.elasticsearch.repositories.blobstore.BlobStoreRepository.startVerification(BlobStoreRepository.java:1707)\n\tat org.elasticsearch.server@8.8.2/org.elasticsearch.repositories.RepositoriesService$4.doRun(RepositoriesService.java:481)\n\tat org.elasticsearch.server@8.8.2/org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:983)\n\tat org.elasticsearch.server@8.8.2/org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)\n\tat java.base/java.lang.Thread.run(Thread.java:1623)\n"}
{"@timestamp":"2023-07-22T17:41:34.477Z", "log.level": "WARN", "message":"failed to load Application Default Credentials", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[05b6814dda47][snapshot][T#1]","log.logger":"org.elasticsearch.repositories.gcs.GoogleCloudStorageService","elasticsearch.cluster.uuid":"034Ez40kSKiiocCGr8Gw3A","elasticsearch.node.id":"toF_prk7Thm3JYGtsBDBPw","elasticsearch.node.name":"05b6814dda47","elasticsearch.cluster.name":"docker-cluster","error.type":"java.io.IOException","error.message":"The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.","error.stack_trace":"java.io.IOException: The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.\n\tat com.google.auth.oauth2.DefaultCredentialsProvider.getDefaultCredentials(DefaultCredentialsProvider.java:134)\n\tat com.google.auth.oauth2.GoogleCredentials.getApplicationDefault(GoogleCredentials.java:125)\n\tat com.google.auth.oauth2.GoogleCredentials.getApplicationDefault(GoogleCredentials.java:97)\n\tat java.base/java.security.AccessController.doPrivileged(AccessController.java:571)\n\tat org.elasticsearch.repositories.gcs.SocketAccess.doPrivilegedIOException(SocketAccess.java:33)\n\tat org.elasticsearch.repositories.gcs.GoogleCloudStorageService.createStorageOptions(GoogleCloudStorageService.java:220)\n\tat org.elasticsearch.repositories.gcs.GoogleCloudStorageService.createClient(GoogleCloudStorageService.java:172)\n\tat org.elasticsearch.repositories.gcs.GoogleCloudStorageService.client(GoogleCloudStorageService.java:112)\n\tat org.elasticsearch.repositories.gcs.GoogleCloudStorageBlobStore.client(GoogleCloudStorageBlobStore.java:125)\n\tat org.elasticsearch.repositories.gcs.GoogleCloudStorageBlobStore.lambda$writeBlobMultipart$8(GoogleCloudStorageBlobStore.java:474)\n\tat org.elasticsearch.repositories.gcs.SocketAccess.lambda$doPrivilegedVoidIOException$0(SocketAccess.java:43)\n\tat java.base/java.security.AccessController.doPrivileged(AccessController.java:571)\n\tat org.elasticsearch.repositories.gcs.SocketAccess.doPrivilegedVoidIOException(SocketAccess.java:42)\n\tat org.elasticsearch.repositories.gcs.GoogleCloudStorageBlobStore.writeBlobMultipart(GoogleCloudStorageBlobStore.java:474)\n\tat org.elasticsearch.repositories.gcs.GoogleCloudStorageBlobStore.writeBlob(GoogleCloudStorageBlobStore.java:264)\n\tat org.elasticsearch.repositories.gcs.GoogleCloudStorageBlobContainer.writeBlob(GoogleCloudStorageBlobContainer.java:80)\n\tat org.elasticsearch.repositories.gcs.GoogleCloudStorageBlobContainer.writeBlobAtomic(GoogleCloudStorageBlobContainer.java:95)\n\tat org.elasticsearch.server@8.8.2/org.elasticsearch.repositories.blobstore.BlobStoreRepository.startVerification(BlobStoreRepository.java:1707)\n\tat org.elasticsearch.server@8.8.2/org.elasticsearch.repositories.RepositoriesService$4.doRun(RepositoriesService.java:481)\n\tat 
org.elasticsearch.server@8.8.2/org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:983)\n\tat org.elasticsearch.server@8.8.2/org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)\n\tat java.base/java.lang.Thread.run(Thread.java:1623)\n"}
{"@timestamp":"2023-07-22T17:41:34.479Z", "log.level": "WARN", "message":"path: /_snapshot/cheese7, params: {repository=cheese7}", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[05b6814dda47][snapshot][T#1]","log.logger":"rest.suppressed","elasticsearch.cluster.uuid":"034Ez40kSKiiocCGr8Gw3A","elasticsearch.node.id":"toF_prk7Thm3JYGtsBDBPw","elasticsearch.node.name":"05b6814dda47","elasticsearch.cluster.name":"docker-cluster","error.type":"org.elasticsearch.repositories.RepositoryVerificationException","error.message":"[cheese7] path  is not accessible on master node","error.stack_trace":"org.elasticsearch.repositories.RepositoryVerificationException: [cheese7] path  is not accessible on master node\nCaused by: java.security.AccessControlException: access denied (\"java.io.FilePermission\" \"/usr/share/elasticsearch/.config/gcloud/active_config\" \"read\")\n\tat java.base/java.security.AccessControlContext.checkPermission(AccessControlContext.java:488)\n\tat java.base/java.security.AccessController.checkPermission(AccessController.java:1071)\n\tat java.base/java.lang.SecurityManager.checkPermission(SecurityManager.java:411)\n\tat java.base/java.lang.SecurityManager.checkRead(SecurityManager.java:742)\n\tat java.base/java.io.FileInputStream.<init>(FileInputStream.java:147)\n\tat com.google.common.io.Files$FileByteSource.openStream(Files.java:132)\n\tat com.google.common.io.Files$FileByteSource.openStream(Files.java:122)\n\tat com.google.common.io.ByteSource$AsCharSource.openStream(ByteSource.java:474)\n\tat com.google.common.io.CharSource.openBufferedStream(CharSource.java:126)\n\tat com.google.common.io.CharSource.readFirstLine(CharSource.java:316)\n\tat com.google.cloud.ServiceOptions.getActiveGoogleCloudConfig(ServiceOptions.java:396)\n\tat com.google.cloud.ServiceOptions.getGoogleCloudProjectId(ServiceOptions.java:413)\n\tat com.google.cloud.ServiceOptions.getDefaultProjectId(ServiceOptions.java:384)\n\tat com.google.cloud.ServiceOptions.getDefaultProject(ServiceOptions.java:356)\n\tat com.google.cloud.ServiceOptions.<init>(ServiceOptions.java:302)\n\tat com.google.cloud.storage.StorageOptions.<init>(StorageOptions.java:117)\n\tat com.google.cloud.storage.StorageOptions.<init>(StorageOptions.java:34)\n\tat com.google.cloud.storage.StorageOptions$Builder.build(StorageOptions.java:112)\n\tat org.elasticsearch.repositories.gcs.GoogleCloudStorageService.createStorageOptions(GoogleCloudStorageService.java:235)\n\tat org.elasticsearch.repositories.gcs.GoogleCloudStorageService.createClient(GoogleCloudStorageService.java:172)\n\tat org.elasticsearch.repositories.gcs.GoogleCloudStorageService.client(GoogleCloudStorageService.java:112)\n\tat org.elasticsearch.repositories.gcs.GoogleCloudStorageBlobStore.client(GoogleCloudStorageBlobStore.java:125)\n\tat org.elasticsearch.repositories.gcs.GoogleCloudStorageBlobStore.lambda$writeBlobMultipart$8(GoogleCloudStorageBlobStore.java:474)\n\tat org.elasticsearch.repositories.gcs.SocketAccess.lambda$doPrivilegedVoidIOException$0(SocketAccess.java:43)\n\tat java.base/java.security.AccessController.doPrivileged(AccessController.java:571)\n\tat org.elasticsearch.repositories.gcs.SocketAccess.doPrivilegedVoidIOException(SocketAccess.java:42)\n\tat org.elasticsearch.repositories.gcs.GoogleCloudStorageBlobStore.writeBlobMultipart(GoogleCloudStorageBlobStore.java:474)\n\tat org.elasticsearch.repositories.gcs.GoogleCloudStorageBlobStore.writeBlob(GoogleCloudStorageBlobStore.java:264)\n\tat 
org.elasticsearch.repositories.gcs.GoogleCloudStorageBlobContainer.writeBlob(GoogleCloudStorageBlobContainer.java:80)\n\tat org.elasticsearch.repositories.gcs.GoogleCloudStorageBlobContainer.writeBlobAtomic(GoogleCloudStorageBlobContainer.java:95)\n\tat org.elasticsearch.server@8.8.2/org.elasticsearch.repositories.blobstore.BlobStoreRepository.startVerification(BlobStoreRepository.java:1707)\n\tat org.elasticsearch.server@8.8.2/org.elasticsearch.repositories.RepositoriesService$4.doRun(RepositoriesService.java:481)\n\tat org.elasticsearch.server@8.8.2/org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:983)\n\tat org.elasticsearch.server@8.8.2/org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)\n\tat java.base/java.lang.Thread.run(Thread.java:1623)\n"}

I have deliberately reverted to earlier attempts here in order to provide the most detailed error logs.
There are solution attempts that I have tried that solve some of the problems, but that seem to be non-essential once the real solution is determined. These include adding the project id explicitly in the PUT _snapshot/BACKUP_NAME/ call (a sketch follows). Attempts to put the credentials file data directly in that call, which worked for some users in the past, do not work either. I suspect the error is a very simple permissions problem I introduced along the way.
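For reference, the project-id variant: my reading of the GCS client settings is that a default project id belongs in elasticsearch.yml rather than in the PUT body, but I am treating that as an assumption rather than a confirmed fix; the project name and container name below are placeholders:

  # Assumed client setting; changes to elasticsearch.yml require a node restart to take effect.
  docker exec -t es01 sh -c 'echo "gcs.client.default.project_id: my-gcp-project" >> /usr/share/elasticsearch/config/elasticsearch.yml'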

References

  1. This walks through the specific steps taken. The current Elasticsearch Docker image already includes the GCS repository plugin as a standard package, though I have tried with and without installing it explicitly. Strangely, instructions for restarting the Elasticsearch service without restarting the Docker container are also absent, and would help the user building search.
  2. This issue discusses the same core problem, though that user is able to embed the credentials file information directly in the PUT request. This did not work for me, and it is a non-standard, insecure approach.
  3. This discusses the general problem of Java security and file reading in Elasticsearch.
  4. This addresses how Elasticsearch 5.0+ services are no longer allowed to access arbitrary local files. This seems to imply that the keystore mechanism is supposed to read the local file and integrate its contents directly into the Elasticsearch keystore. Diagnosing this confirms that the keystore is aware of the setting (see the verification sketch after this list). It may be that I made an error in this step.
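The verification mentioned in item 4, roughly: the keystore tooling can list its stored settings, which is how I confirmed the credentials entry exists (container name is a placeholder).

  # List keystore entries; the GCS credentials should appear as a secure setting.
  docker exec -t es01 bin/elasticsearch-keystore list
  # Expected to include, among other entries:
  #   gcs.client.default.credentials_file
  #   keystore.seed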

My Build

This is the docker run command:
docker run --rm -p 9200:9200 -p 9300:9300 -e "xpack.security.enabled=false" -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:8.8.2
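The later docker cp and docker exec calls need the container id; a variant of the same command that assigns a fixed name (es01 is just a placeholder) makes those follow-up commands easier to script:

  docker run --rm --name es01 -p 9200:9200 -p 9300:9300 -e "xpack.security.enabled=false" -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:8.8.2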

This is the sequence of client calls being made (commands are displayed in my logs):

INFO:elasticsearch:GET http://localhost:9200/ [status:200 request:0.002s]
INFO:__main__:Cluster connection information: {'name': '69a39f48d8da', 'cluster_name': 'docker-cluster', 'cluster_uuid': 'd4c_Vr9wQdapS99Tozhwhw', 'version': {'number': '8.8.2', 'build_flavor': 'default', 'build_type': 'docker', 'build_hash': '98e1271edf932a480e4262a471281f1ee295ce6b', 'build_date': '2023-06-26T05:16:16.196344851Z', 'build_snapshot': False, 'lucene_version': '9.6.0', 'minimum_wire_compatibility_version': '7.17.0', 'minimum_index_compatibility_version': '7.0.0'}, 'tagline': 'You Know, for Search'}
INFO:elasticsearch:GET http://localhost:9200/_cluster/health [status:200 request:0.012s]
INFO:__main__:Cluster health: {'cluster_name': 'docker-cluster', 'status': 'green', 'timed_out': False, 'number_of_nodes': 1, 'number_of_data_nodes': 1, 'active_primary_shards': 0, 'active_shards': 0, 'relocating_shards': 0, 'initializing_shards': 0, 'unassigned_shards': 0, 'delayed_unassigned_shards': 0, 'number_of_pending_tasks': 0, 'number_of_in_flight_fetch': 0, 'task_max_waiting_in_queue_millis': 0, 'active_shards_percent_as_number': 100.0}
INFO:base.core.lightweight_utilities.process:Executing CMD shell OFF: docker cp base/_db/credentials/BASENAME_REMOVED.json 69a39f48d8da:/usr/share/elasticsearch/BASENAME_REMOVED.json
INFO:base.core.lightweight_utilities.process:Executing CMD complete: 0.033 seconds.
INFO:base.core.lightweight_utilities.process:Executing CMD shell OFF: docker exec -t 69a39f48d8da bin/elasticsearch-keystore add-file --force gcs.client.default.credentials_file /usr/share/elasticsearch/BASENAME_REMOVED.json
INFO:base.core.lightweight_utilities.process:Executing CMD complete: 0.753 seconds.
INFO:base.core.lightweight_utilities.process:Executing CMD shell OFF: docker exec -t 69a39f48d8da bin/elasticsearch-plugin install repository-gcs
INFO:base.core.lightweight_utilities.process:Executing CMD complete: 0.589 seconds.
WARNING:elasticsearch:PUT http://localhost:9200/_snapshot/cheese7 [status:500 request:0.213s]
Traceback (most recent call last):
  File "/Users/USERNAME_REMOVED/miniconda3/envs/ML/lib/python3.10/site-packages/elasticsearch/transport.py", line 466, in perform_request
    raise e
  File "/Users/USERNAME_REMOVED/miniconda3/envs/ML/lib/python3.10/site-packages/elasticsearch/transport.py", line 427, in perform_request
    status, headers_response, data = connection.perform_request(
  File "/Users/USERNAME_REMOVED/miniconda3/envs/ML/lib/python3.10/site-packages/elasticsearch/connection/http_urllib3.py", line 291, in perform_request
    self._raise_error(response.status, raw_data)
  File "/Users/USERNAME_REMOVED/miniconda3/envs/ML/lib/python3.10/site-packages/elasticsearch/connection/base.py", line 328, in _raise_error
    raise HTTP_EXCEPTIONS.get(status_code, TransportError)(
elasticsearch.exceptions.TransportError: TransportError(500, 'repository_verification_exception', '[cheese7] path  is not accessible on master node')

Who Am I?

I have been a software engineer for ten years with thousands of hours of development in these sectors. I am totally befuddled by the lack of search results for novice users, which has driven me to create this online post. Developers working on similar problems in search or other areas of development are invited to email me at my starlight email.

Hello and welcome,

I do not use Docker, but I can comment on a couple of things about the sequence of commands you are using:

This is not needed on version 8.x; the GCS repository plugin is already built into Elasticsearch. The old documentation about the plugin mentions this and links to the configuration documentation.

This is correct, but from that command log it seems that you added the credentials after Elasticsearch was started, so the credentials will not work until you reload the secure settings. You need to make the following request to your cluster:

POST /_nodes/reload_secure_settings

This will refresh the authentication details.
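For example, from the command line against the cluster above (assuming no keystore password has been set):

  curl -X POST "http://localhost:9200/_nodes/reload_secure_settings"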

This is related to the previous comment: the master node cannot access Google Cloud Storage because the credentials were added after Elasticsearch was already running. You need to refresh the secure settings before trying to create the repository.

To summarize, the steps needed to create a snapshot are (a consolidated sketch follows this list):

  1. Add the json with the credentials to the keystore on every master and data node.
  2. Make a request to reload the secure settings.
  3. Register the repository.
  4. Create the snapshot.
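A consolidated sketch of those four steps, using placeholder names for the container, key file, bucket, and snapshot; adapt them to your own setup:

  # 1. Add the Service Account JSON to the keystore on every master and data node.
  docker exec -t es01 bin/elasticsearch-keystore add-file --force gcs.client.default.credentials_file /usr/share/elasticsearch/key.json
  # 2. Reload the secure settings so the running nodes pick up the new credentials.
  curl -X POST "http://localhost:9200/_nodes/reload_secure_settings"
  # 3. Register the repository.
  curl -X PUT "http://localhost:9200/_snapshot/cheese7" -H 'Content-Type: application/json' -d '{"type": "gcs", "settings": {"bucket": "my-es-snapshots", "client": "default"}}'
  # 4. Create the snapshot.
  curl -X PUT "http://localhost:9200/_snapshot/cheese7/snapshot_1?wait_for_completion=true"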
