Hi,
We are building an application hosted on AWS EKS. We would like to use the Elastic Stack, along with Metricbeat and APM, to monitor our infrastructure, application and logs. As a whole, the ingested data might range from 100 GB to 500 GB. Can someone please help me find the best configuration for my scenario?
As Kibana, Logstash and APM Server don't take many resources, I would like to go with 8 GB machines for them. For Elasticsearch, I am not sure how many 64 GB machines will suffice (how many data and master nodes).
I also considered the AWS Elasticsearch Service, but found that it doesn't support APM or the Metricbeat modules. Any ideas on that, please?
First, if you are looking for a service that includes all the features (including APM and the Metricbeat modules) from the creators of the software, there is https://cloud.elastic.co
For sizing, it will depend on a couple of additional points:
How long is your planned retention of the 100 to 500 GB? And does that figure already include replication or not?
I'd assume that the load from Metricbeat is pretty even, but APM or Filebeat could include spikes — you'll need to plan for the maximum ingestion rate.
For querying: showing dashboards for the last 1h will behave quite differently from searching everything with a wildcard pattern; it also depends on how patient you're willing to be.
Disk size should be the easiest thing to get started with; the rest you might have to adjust with experience (or as usage changes). Also, if you're unsure, you can scale Elastic Cloud up and down with a button and it will take care of the right hardware profile, (dedicated) master nodes, etc., so that might come in handy.
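To make the disk part concrete, here is a rough back-of-the-envelope sketch in Python. Every input is an assumption for illustration (daily ingest, retention, replica count, the ~15% index overhead and the free-space headroom are rules of thumb, not exact figures); plug in your own numbers once you know your retention and replication settings.

```python
import math

# Rough Elasticsearch disk-sizing estimate. All values below are
# assumptions for illustration, not recommendations.
daily_ingest_gb = 100    # assumed raw ingest per day (you mentioned 100-500 GB)
retention_days = 30      # assumed retention period
replicas = 1             # 1 replica roughly doubles the stored size
index_overhead = 1.15    # ~15% extra for index structures (rule of thumb)
headroom = 0.75          # keep ~25% of disk free for merges and watermarks

primary_gb = daily_ingest_gb * retention_days * index_overhead
total_stored_gb = primary_gb * (1 + replicas)
required_disk_gb = total_stored_gb / headroom

print(f"Primary data:      {primary_gb:,.0f} GB")
print(f"With replicas:     {total_stored_gb:,.0f} GB")
print(f"Disk to provision: {required_disk_gb:,.0f} GB")

# Hypothetical split across 64 GB-RAM data nodes; disk per node
# depends entirely on your instance/EBS choice.
disk_per_node_gb = 1500
data_nodes = max(2, math.ceil(required_disk_gb / disk_per_node_gb))
print(f"Data nodes needed: {data_nodes} (plus 3 small dedicated masters for HA)")
```

This only covers disk; CPU and heap for peak ingestion (the APM/Filebeat spikes mentioned above) and query load are the parts you will most likely need to adjust from experience.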