With one node you can never allocate replicas, which is why it's in a
yellow state.
I'd go with setup #1 personally, but you probably want more RAM, say 4GB
heap. Then set all to be master eligible so that you get some level of
protection against node loss.
Make sure you also use curator for managing retention and snapshot+restore.
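To make the master-eligible part concrete, here is a minimal sketch of the
per-node settings, assuming Elasticsearch 1.x and hypothetical names; with 3
master-eligible nodes you also want minimum_master_nodes at 2 (a quorum) so
you don't get split brain:

    # /etc/elasticsearch/elasticsearch.yml (same on all 3 nodes)
    cluster.name: my-cluster                 # hypothetical cluster name
    node.master: true                        # eligible to be elected master
    node.data: true                          # also holds and serves data
    discovery.zen.minimum_master_nodes: 2    # quorum of 3 master-eligible nodes

    # the heap is set via the environment, e.g. in /etc/default/elasticsearch
    ES_HEAP_SIZE=4g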
So option 1 is preferable. Can I get clarification on the points you have
raised, just so I understand things properly?

1. Increase the heap to 4 GB: instead of using 3 instances, could I switch
to 2 instances with 7.5 GB RAM each and assign a 6 GB heap on both? Would
that be the same?
2. All ES instances will be set as master eligible; does that mean there
will be identical data on all instances?
3. In that case, can I set my load balancer to round-robin across all the
related ES nodes, and they will sync between themselves?
Thanks for helping out.
On 15 February 2015 at 04:01, dna lor <dnal...@gmail.com> wrote:
I have been evaluating ELK for the past 2 weeks in a testing environment,
and I am very pleased with the result.
Right now I want to move it to staging, so I want to make sure I have a
best-practice/advised setup, which I hope I can get your feedback/opinion on.

Expected usage:
- Up to 20 GB of logs are sent from Logstash to Elasticsearch every day
(continuously, 24/7).
- 15 days' worth of data should be stored in Elasticsearch for
search/graphing.
- Logs older than 15 days should be deleted (see the snapshot/retention
sketch after this list).
- Daily incremental backup to AWS S3.
- 7 Kibana users with an average of 9 graphs per page/saved templates,
always on 9/7.
- 1 Kibana user with no graphs, just a live "tail" of specific types, 24/7.
- Cron jobs curl directly to Elasticsearch to perform different tasks
(these are negligible).
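For the retention and S3 backup items, a hedged sketch using the plain REST
API via curl (the approach your cron jobs already take); the repository
name, bucket, and index names here are hypothetical, and on ES 1.x the s3
repository type requires the AWS cloud plugin:

    # register an S3 snapshot repository once (needs elasticsearch-cloud-aws installed)
    curl -XPUT 'http://localhost:9200/_snapshot/s3_backup' -d '{
      "type": "s3",
      "settings": { "bucket": "my-es-backups", "region": "us-east-1" }
    }'

    # daily snapshot; snapshots are incremental against what the repository already holds
    curl -XPUT 'http://localhost:9200/_snapshot/s3_backup/snapshot-2015.02.16'

    # drop an index older than 15 days (Logstash writes one index per day by default)
    curl -XDELETE 'http://localhost:9200/logstash-2015.01.31'

Curator automates the last two steps, so you don't have to compute index and
snapshot names in cron yourself.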
I am considering the setup below; please correct me if I am wrong:
1 - You want to have 50% of system memory for the heap and the other 50%
for caching. So a 4 GB heap can be done with 7.5 GB in the system, but you
don't really want to go higher.
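To put numbers on that, using the 7.5 GB figure from your question: 50% of
7.5 GB is ~3.75 GB, so a 4 GB heap is already at the ceiling for that
instance size, while a 6 GB heap would leave only 1.5 GB for the OS
filesystem cache that Lucene leans on heavily.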
2 - No. See the node settings docs: being master eligible only controls
which nodes can be elected as the cluster master; it doesn't mean every
node holds identical data (that's governed by shard and replica counts).
3 - But if all nodes are in the same cluster, then you can round-robin and
it will shard the data between them.
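Once the nodes have joined one cluster, a quick way to verify that the data
really is spread across them (hypothetical host name):

    curl 'http://es-node-1:9200/_cluster/health?pretty'   # number_of_nodes, green/yellow status
    curl 'http://es-node-1:9200/_cat/shards?v'            # which node each shard lives on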