I've started playing with Elastic serverless Security in my homelab. I'm a bit worried about unexpected costs, though.
Can I get an alert if my current usage exceeds a certain amount?
Is there a way to set a limit on usage and sort of pause the deployment if it exceeds a certain cost? I noticed you can prepay credits, but I read that once they are used up, it automatically switches to the configured payment method.
Out of curiosity, which use case are you testing? The generic one, observability, or security?
I was planning to test the generic one but gave up when I saw it would be something close to $1k per month.
There was a thread on Slack about the pricing being so expensive that it doesn't make sense for small deployments, and they said they were looking into improving it.
I'm testing the Security use case. It has been running since 17/11, and I'm sending Windows, System, Network Traffic, and Elastic Defend logs for one host, plus my pfSense logs (hardware appliance). I'm currently at $27.23.
I'm working on tuning the sources generating the most data, which are the network integration and the process and network metrics. I disabled flows and now only collect HTTP and TLS, and also changed the default metrics interval from 10s to 15s.
What I want most are my pfSense logs. But I fear some kind of DDoS resulting in a huge cost spike...
Thanks for your feedback. I really hope Elastic lets us set some kind of max cost cap, for example $100/month. If usage goes above that, I would like it to pause the deployment/ingest. If that were possible, Elastic Security serverless would really be an option for me, but without cost-control features it's a little too risky...
Also, I'm not sure of the cost impact of enabling the SIEM rules. I've enabled about 10 rules at the moment, mostly related to Defend.
Yeah, firewall logs can be pretty brutal on the bill; any spike in traffic or a misconfigured rule can lead to a cost spike.
From what I read, I don't think you can pause ingest on Elastic's side. It wouldn't make much sense for them, as they would still need to receive your data and then drop it, which would still use computational resources.
If you can monitor the billing through some API, you could maybe implement a way to stop the ingest yourself, e.g. change the Elasticsearch Fleet endpoint to something that will not work, so at least the agents would not be able to send any data.
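To make the idea concrete, here is a minimal sketch of that kill-switch logic. The billing fetch is mocked with a callable, because the actual Elastic Cloud billing endpoint, authentication, and response shape are assumptions I haven't verified; in a real script you would plug in an API call there.

```python
# Hedged sketch: decide whether to cut off ingest based on month-to-date cost.
# fetch_cost stands in for a real billing-API call (endpoint not verified here).

def should_pause_ingest(current_cost_usd: float, monthly_cap_usd: float) -> bool:
    """Return True once the observed month-to-date cost reaches the cap."""
    return current_cost_usd >= monthly_cap_usd

def check_and_act(fetch_cost, monthly_cap_usd: float) -> str:
    """fetch_cost: any callable returning month-to-date cost in USD.
    Returns "pause" when the cap is hit, else "ok"."""
    cost = fetch_cost()
    if should_pause_ingest(cost, monthly_cap_usd):
        # Here you would e.g. repoint the Fleet output at an unreachable
        # host so agents stop shipping data, as suggested above.
        return "pause"
    return "ok"

if __name__ == "__main__":
    # Simulated costs instead of a live API call.
    print(check_and_act(lambda: 27.23, 100.0))   # ok
    print(check_and_act(lambda: 104.50, 100.0))  # pause
```

You would run something like this on a schedule (cron, a scheduled Lambda, etc.); the tricky part is how fresh the billing data is, since a spike could outrun a daily check.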
Being honest, I'm not sure who the target for this serverless offering is. It's not people with small budgets, as it's pretty expensive, and from what I see a lot of enterprise people are moving from SaaS tools to on-prem/self-hosted to avoid surprise costs and have more control over spending.
Another issue I have is that you are billed on the uncompressed data. For a security use case with lots of logs, compression is normally pretty good (and even better on newer versions), but that doesn't matter because you are billed on the raw size.
It's also not clear whether you are billed for the internal data created by the stack itself.
I was really curious to test the serverless offering, but unless the pricing changes, I will probably stay away from it.
I understand it's not an easy problem. Maybe the control should be implemented on the integration policy side then, with some sort of auto-disable option when an ingest threshold is reached.
Personally, I can live with $100/month to get all the Enterprise features to play with, but at the moment there is just too much risk of losing control and being stuck with a huge bill.
I did not know they bill on the raw data. For now I left the policy enabled but disabled the "Collect pfSense Logs" option in the policy. I will keep trying to reduce the amount of data for my one test host to see how low I can get the total cost. Thanks again for your honest, constructive feedback.