Hi.
Previously, I had two nodes in the cluster (node1 and node2), plus a backup repository shared over NFS between the two machines. Everything worked perfectly: backups completed correctly and I could restore them.
Now I have added a new node to the cluster (node3). The new node itself is working correctly, but when I verify the repository that already existed, I get an error.
Can anybody help me?
This is the elasticsearch.yml of node3:
cluster.name: my_cluster
node.name: node3
node.master: true
node.data: true
node.ingest: true
path.data: /mnt/almacenamiento/elasticsearch_data/data
path.logs: /mnt/almacenamiento/elasticsearch_data/logs
bootstrap.memory_lock: true
network.host: 0.0.0.0
transport.host: 0.0.0.0
transport.tcp.port: 9300
path.repo: ["/media/backup01"]
discovery.seed_hosts: ["ip-host1:9300", "ip-host2:9300", "ip-host3:9300"]
cluster.initial_master_nodes: ["node1", "node2", "node3"]
The path in "path.repo" is set correctly, exactly as on the other nodes (where it does work), and with the same permissions: "chmod -R 777 /media/backup01".
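A quick way to check the path outside Elasticsearch is a small shell sketch like the one below (a rough check only: it runs as the current shell user, while Elasticsearch runs as its own service user, which could behave differently on NFS):

```shell
# check_repo: report whether a directory exists and is writable
# by the *current* user (a rough proxy for the service user).
check_repo() {
  if [ -d "$1" ] && [ -w "$1" ]; then
    echo "accessible"
  else
    echo "not accessible"
  fi
}

check_repo /media/backup01
```

On node3 this reports "accessible" for my shell user, which is why the error surprises me.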
Here is the error that appears when I verify the repository:
{
  "error": {
    "root_cause": [
      {
        "type": "repository_verification_exception",
        "reason": "[Repo_backups] [[dvRkZ9ZMQ0OfFM5WqrYwlA, 'RemoteTransportException[[node3][ip-host3:9300][internal:admin/repository/verify]]; nested: RepositoryVerificationException[[Repo_backups] store location [/media/backup01] is not accessible on the node [{node3}{dvRkZ9ZMQ0OfFM5WqrYwlA}{JFTCdFhNTk2CjrCv48bxEA}{ip-host3}{ip-host3:9300}{dilm}{ml.machine_memory=16344829952, xpack.installed=true, ml.max_open_jobs=20}]]; nested: AccessDeniedException[/media/backup01/tests-U8q9fZQmRJ2nQf_y0-OigA/data-dvRkZ9ZMQ0OfFM5WqrYwlA.dat];']]"
      }
    ],
    "type": "repository_verification_exception",
    "reason": "[Repo_backups] [[dvRkZ9ZMQ0OfFM5WqrYwlA, 'RemoteTransportException[[node3][ip-host3:9300][internal:admin/repository/verify]]; nested: RepositoryVerificationException[[Repo_backups] store location [/media/backup01] is not accessible on the node [{node3}{dvRkZ9ZMQ0OfFM5WqrYwlA}{JFTCdFhNTk2CjrCv48bxEA}{ip-host3}{ip-host3:9300}{dilm}{ml.machine_memory=16344829952, xpack.installed=true, ml.max_open_jobs=20}]]; nested: AccessDeniedException[/media/backup01/tests-U8q9fZQmRJ2nQf_y0-OigA/data-dvRkZ9ZMQ0OfFM5WqrYwlA.dat];']]"
  },
  "status": 500
}
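The AccessDeniedException points at a temporary data file that the verify step tries to create inside the repository. A rough way to mimic that write outside Elasticsearch (a sketch only; the real check runs as the Elasticsearch service user, and the file name here is made up):

```shell
# probe_repo: try to create and remove a small file in the repo
# directory, similar to what repository verification does.
probe_repo() {
  f="$1/verify-probe-$$.dat"   # hypothetical file name, not the real one
  if echo probe > "$f" 2>/dev/null; then
    rm -f "$f"
    echo "write ok"
  else
    echo "write failed"
  fi
}

probe_repo /media/backup01
```

On node3 this succeeds for my shell user, so I suspect it is something specific to how the Elasticsearch process sees the NFS mount.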
I know it looks like a permissions problem, but I assure you that node3 has the same permissions and the same path as the other nodes (where it does work). Thanks for the help.
Regards.