In our company, we chose Elasticsearch Cloud on Google Cloud Platform (GCP) because of how easy it makes updating and scaling our application.
However, GCP charges for network traffic leaving our VMs.
Since our other VMs in GCP (in the same region) connect to Elasticsearch through the endpoint provided by Elasticsearch, is there any data transfer cost for that access?
Data egress is charged per the rates in the table above.
As for other billing dimensions (GCP network charges, intra-zone / intra-region traffic, etc.) that depend on your GCP account and your architecture, we will not be able to readily comment on those.
You will not be able to measure this per dashboard unless you do specific testing...
As shown in the docs, you can go to the billing panel, select a deployment and a time range, and view the billing charges.
The granularity is daily.
You will be able to see the Data Out charges, but they are not broken down per dashboard.
In general, dashboards only create a very small amount of outbound data... direct API access for search results generally accounts for a much larger share...
@stephenb
I use the machine model "GCP.ES.DATAHOT.N2.68X10X45 - Gold". Say that every hour I consume an average of 2 GB of data (I am basing this on the size of the returned JSON).
By the end of the day I would have read 48 GB.
So the cost for that day would be around US$1.52/day, or about US$47/month, right?
1st, the size of the node / cluster is independent of the data transfer costs... you could have a little cluster / node with a lot of data transfer, or a big node / cluster with little data transfer... the charges are independent.
So the size / type of the node or cluster has no effect on the data transfer charges.
So that looks about right...
2 GB an hour from dashboards sounds like a lot... but if that is what you are measuring.
On the other hand, responses are probably compressed, so the billed amount may be less than that.
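The back-of-the-envelope arithmetic above can be sketched out like this. Note that the per-GB rate is an assumption inferred from the thread's own figures (~US$1.52 for 48 GB), not an official Elastic Cloud or GCP price; check your own rate table before relying on it:

```python
# Rough Data Out cost estimate, mirroring the calculation in this thread.
GB_PER_HOUR = 2        # average measured volume of returned JSON
HOURS_PER_DAY = 24
RATE_PER_GB = 0.032    # assumed US$/GB egress rate (hypothetical, from ~$1.52 / 48 GB)

daily_gb = GB_PER_HOUR * HOURS_PER_DAY       # 48 GB per day
daily_cost = daily_gb * RATE_PER_GB          # roughly US$1.5 per day
monthly_cost = daily_cost * 30               # roughly US$46 per month

print(f"{daily_gb} GB/day -> ~US${daily_cost:.2f}/day, ~US${monthly_cost:.2f}/month")
```

Because transfer is billed on bytes actually sent, HTTP compression would shrink `daily_gb` and everything downstream of it proportionally.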