I would recommend that both nodes have that config.
If you ever want to add a third node, for example to replace the first one, it's better to have everything in place.
One thing you can do is prepare a clean machine with all settings but no data, then create an image.
If you want to scale out, just launch new instances from that ready-to-use image.
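Baking the image and scaling out can be scripted; a rough sketch with the AWS CLI (the instance ID, AMI ID, and instance type below are placeholders, not values from this thread):

```shell
# Bake an AMI from the prepared instance (all settings in place, no data).
aws ec2 create-image --instance-id i-0123456789abcdef0 --name "es-node-template"

# Scale out later by launching new nodes from that ready-to-use image.
aws ec2 run-instances --image-id ami-xxxxxxxx --count 1 --instance-type m3.large
```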
My 2 cents
--
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet | @elasticsearchfr
Le 4 juillet 2014 à 17:39:02, sabdalla80 (sabdalla80@gmail.com) a écrit:
David, great, that helped. At first I added the spaces in the "node 1" config, but that didn't do anything. Then I commented out the credentials section in the "node 2" config, and that worked. So I am a bit confused: do I need to maintain both configs on the two instances, so they both have credentials in them, or do I just put them in "node 1", and since the cluster is configured there will be no need for credentials in "node 2"?
Thanks
On Friday, July 4, 2014 11:03:45 AM UTC-4, David Pilato wrote:
Try to add some spaces before cloud, so like this…

cloud:
    aws:
        access_key: XXXXX
        secret_key: YYYYYYYYYYY

discovery:
    type: ec2
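Once both nodes pick up the corrected file, you can sanity-check that they found each other over EC2 discovery (a sketch, assuming the default HTTP port 9200):

```shell
# Both instances should show up in the node list.
curl -s 'http://localhost:9200/_cat/nodes?v'

# Cluster health should report "number_of_nodes" : 2.
curl -s 'http://localhost:9200/_cluster/health?pretty'
```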
--
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet | @elasticsearchfr
Le 4 juillet 2014 à 16:47:03, sabdalla80 (sabda...@gmail.com) a écrit:
My cluster has two instances, I have the same setup on both instances. To answer Ross's question, I use the curl command to register the repository from the instance(s) itself. Do I need to do anything else on the instance(s) as far as AWS credentials?
Here is what I have:
##################### Elasticsearch Configuration Example #####################

# This file contains an overview of various configuration settings,
# targeted at operations staff. Application developers should
#
# The installation procedure is covered at
#
# Elasticsearch comes with reasonable defaults for most settings,
# so you can try it out without bothering with configuration.
#
# Most of the time, these defaults are just fine for running a production
# cluster. If you're fine-tuning your cluster, or wondering about the
# effect of certain configuration option, please do ask on the
#
# Any element in the configuration can be replaced with environment variables
# by placing them in ${...} notation. For example:
#
# node.rack: ${RACK_ENV_VAR}
#
# For information on supported formats and syntax for the config file, see

################################### Cluster ###################################

# Cluster name identifies your cluster for auto-discovery. If you're running
# multiple clusters on the same network, make sure you're using unique names.
#
cluster.name: rexCluster

#################################### Node #####################################

# Node names are generated dynamically on startup, so you're relieved
# from configuring them manually. You can tie this node to a specific name:
#
node.name: "node 1"

# Every node can be configured to allow or deny being eligible as the master,
# and to allow or deny to store the data.
#
# Allow this node to be eligible as a master node (enabled by default):
#
# node.master: true
#
# Allow this node to store data (enabled by default):
#
# node.data: true

# You can exploit these settings to design advanced cluster topologies.
#
# 1. You want this node to never become a master node, only to hold data.
#    This will be the "workhorse" of your cluster.
#
# node.master: false
# node.data: true
#
# 2. You want this node to only serve as a master: to not store any data and
#    to have free resources. This will be the "coordinator" of your cluster.
#
# node.master: true
# node.data: false
#
# 3. You want this node to be neither master nor data node, but
#    to act as a "search load balancer" (fetching data from nodes,
#    aggregating results, etc.)
#
# node.master: false
# node.data: false

# A node can have generic attributes associated with it, which can later be used
# for customized shard allocation filtering, or allocation awareness. An attribute

# Path to directory containing configuration (this file and logging.yml):
#
# path.conf: /etc/elasticsearch

# Path to directory where to store index data allocated for this node.
#
# path.data: /path/to/data
#
# Can optionally include more than one location, causing data to be striped across
# the locations (a la RAID 0) on a file level, favouring locations with most free
# space on creation. For example:
#
path.data: /opt/cores/elasticsearch/data

# Path to temporary files:
#
path.work: /opt/cores/elasticsearch/work

# Path to log files:
#
path.logs: /opt/cores/elasticsearch/logs

# Path to where plugins are installed:
#
# path.plugins: /path/to/plugins

################################### Memory ####################################

# Elasticsearch performs poorly when JVM starts swapping: you should ensure that
# it never swaps.
#
# Set this property to true to lock the memory:
#
# bootstrap.mlockall: true

################################## Discovery ##################################

# Discovery infrastructure ensures nodes can be found within a cluster
# and master node is elected. Multicast discovery is the default.

# Set to ensure a node sees N other master eligible nodes to be considered
# operational within the cluster. Its recommended to set it to a higher value
# than 1 when running more than 2 nodes in the cluster.
#
discovery.zen.minimum_master_nodes: 2

# Set the time to wait for ping responses from other nodes when discovering.
# Set this option to a higher value on a slow or congested network
# to minimize discovery failures:
#
# discovery.zen.ping.timeout: 3s

# For more information, see

# Unicast discovery allows to explicitly control which nodes will be used
# to discover the cluster. It can be used when multicast is not present,
# or to restrict the cluster communication-wise.
#
# 1. Disable multicast discovery (enabled by default):
#
# discovery.zen.ping.multicast.enabled: false
#
# 2. Configure an initial list of master nodes in the cluster
#    to perform discovery when new nodes (master or data) are started:
#
# discovery.zen.ping.unicast.hosts: ["IP1","IP2"]

# EC2 discovery allows to use AWS EC2 API in order to perform discovery.
# You have to install the cloud-aws plugin for enabling the EC2 discovery.
# For more information, see
# for a step-by-step tutorial.

cloud:
aws:
access_key: XXXXX
secret_key: YYYYYYYYYYY
discovery:
type: ec2
On Friday, July 4, 2014 2:27:48 AM UTC-4, David Pilato wrote:
Agreed. Could you share your elasticsearch.yml file without touching anything but only replacing Key/secret?
Keep the formatting.
--
David
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
Le 4 juil. 2014 à 06:51, Ross Simpson simp...@gmail.com a écrit :
That specific exception (com.amazonaws.AmazonClientException) is thrown by the AWS client libraries, and it means the library couldn't find your AWS credentials. I'm not sure why, as the details in your original post look correct.
FWIW, S3 snapshots are working well for me. Here's my setup:
ES 1.1.1
AWS cloud plugin 2.1.0
elasticsearch.yml:
cloud.aws.access_key: ...
cloud.aws.secret_key: ......
Repo registration:
$ curl -XPUT 'http://localhost:9200/_snapshot/es-backups' -d '{"type":"s3","settings":{"compress":"true","base_path":"prod_backups","region":"us-east","bucket":"..."}}'
{"acknowledged":true}
In your latest post, it looks like you're running the command on a remote ES host (10.211.154.24). Does that specific host have the AWS credentials in its ES config? Snapshotting will require that all nodes in the cluster have the AWS credentials, because they will each be writing to S3.
Are there any relevant entries in the ES logs from startup?
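Once the repository registration is acknowledged as above, the snapshot APIs can be exercised directly; a sketch (the snapshot name snap_1 is an arbitrary example):

```shell
# Take a snapshot; wait_for_completion=true blocks until it finishes.
curl -XPUT 'http://localhost:9200/_snapshot/es-backups/snap_1?wait_for_completion=true'

# List all snapshots in the repository to confirm they reached S3.
curl -XGET 'http://localhost:9200/_snapshot/es-backups/_all'
```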
On Friday, 4 July 2014 04:25:44 UTC+10, sabdalla80 wrote:
I installed the latest ES version, 1.2.1, and am still getting the same error:
{
"error": "RemoteTransportException[[node 2][inet[/10.211.154.24:9300]][cluster/repository/put]]; nested: RepositoryException[[es_repository] failed to create repository]; nested: CreationException[Guice creation errors:\n\n1) Error injecting constructor, com.amazonaws.AmazonClientException: Unable to load AWS credentials from any provider in the chain\n at org.elasticsearch.repositories.s3.S3Repository.()\n at org.elasticsearch.repositories.s3.S3Repository\n at Key[type=org.elasticsearch.repositories.Repository, annotation=[none]]\n\n1 error]; nested: AmazonClientException[Unable to load AWS credentials from any provider in the chain]; ",
"status": 500
}
Any ideas? I would appreciate some feedback on how to figure out this problem because I would like to backup our index to S3.
On Wednesday, July 2, 2014 3:36:58 PM UTC-4, sabdalla80 wrote:
Unfortunately, I tried with and without the region setting, no difference.
On Tuesday, July 1, 2014 7:43:21 PM UTC-4, Glen Smith wrote:
I'm not sure it matters, but I noticed you aren't setting a region in either your config or when registering your repo.
On Tuesday, July 1, 2014 7:08:28 PM UTC-4, sabdalla80 wrote:
I am not sure the version is the problem; I guess I can upgrade from v1.1 to the latest.
As for the "Unable to load AWS credentials from any provider in the chain" error, any idea how it is generated? Is there any other place my credentials need to be besides the .yml file?
Note, I am able to write/read to S3 remotely, so I don't have any privilege problems that I can think of.
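One way to rule out the key pair itself is to test it outside Elasticsearch; a sketch assuming the AWS CLI is installed (key values and bucket name are placeholders):

```shell
# Use the same key pair that is in elasticsearch.yml.
export AWS_ACCESS_KEY_ID=XXXXX
export AWS_SECRET_ACCESS_KEY=YYYYYYYYYYY

# If this listing succeeds, the credentials are valid and the problem is
# how Elasticsearch reads them from the config file.
aws s3 ls s3://esbucket
```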
On Tuesday, July 1, 2014 4:44:17 PM UTC-4, David Pilato wrote:
I think 2.1.1 should work fine as well.
That said, you should upgrade to latest 1.1 (or 1.2)...
--
David
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
Le 1 juil. 2014 à 22:13, Glen Smith gl...@smithsrock.com a écrit :
According to
you should use v2.1.0 of the plugin with ES 1.1.0.
On Tuesday, July 1, 2014 9:03:04 AM UTC-4, sabdalla80 wrote:
I am having a problem setting up the backup and restore part to AWS S3.
I have the AWS plugin 2.1.1 and Elasticsearch v1.1.0.
My yml:
cloud:
aws:
access_key: #########
secret_key: #####################
discovery:
type: ec2
When I try to register a repository:
PUT /_snapshot/es_repository
{
  "type": "s3",
  "settings": {
    "bucket": "esbucket"
  }
}
I get this error; it complains about loading my credentials. Is this an Elasticsearch problem or an AWS one?
Note I am running as user "ubuntu" on EC2 and using root AWS credentials as opposed to an IAM role; not sure if that's a problem or not.
"error": "RepositoryException[[es_repository] failed to create repository]; nested: CreationException[Guice creation errors:\n\n1) Error injecting constructor, com.amazonaws.AmazonClientException: Unable to load AWS credentials from any provider in the chain\n at org.elasticsearch.repositories.s3.S3Repository.(Unknown Source)\n while locating org.elasticsearch.repositories.s3.S3Repository\n while locating org.elasticsearch.repositories.Repository\n\n1 error]; nested: AmazonClientException[Unable to load AWS credentials from any provider in the chain]; ",
"status": 500
}ode here...
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/db55bb02-b5a1-44c3-8692-01d2bb7efbae%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.