With the upcoming Elasticsearch support in Grafana and ES 2.0, ES is starting to look interesting as a metrics backend.
Is anyone using it already?
What kind of performance do you see? I plan on having around 500,000-1,000,000 metrics/s plus another 2,000 logs/s. Is that something ES can handle, or is it too much? My cluster is quite small: 3 servers, each with 2x6 cores (with HT), 64 GB of memory (I can add more), and SAS hard drives.
How do you send data to ES? Via Logstash or directly?
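For the direct option, here is a rough sketch of what I have in mind, using the official Python client's bulk helper (index name, mapping type and field names below are just placeholders, nothing I've settled on):

```python
# Rough sketch of direct ingestion with the official Python client's bulk
# helper. Index name, mapping type and field names are placeholders.
from datetime import datetime, timezone

from elasticsearch import Elasticsearch
from elasticsearch.helpers import bulk

es = Elasticsearch(["http://localhost:9200"])

def metric_docs(samples):
    """Turn (name, value) samples from the collector into bulk actions."""
    now = datetime.now(timezone.utc).isoformat()
    for name, value in samples:
        yield {
            "_index": "metrics-2015.10.01",  # daily index, name is an example
            "_type": "metric",               # required on ES 2.x, drop on newer versions
            "_source": {"@timestamp": now, "name": name, "value": value},
        }

# bulk() batches the actions into _bulk requests under the hood
success, errors = bulk(es, metric_docs([("cpu.user", 12.5), ("cpu.system", 3.1)]))
print(success, errors)
```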
Is some kind of metrics downsampling planned for ES?
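If nothing is built in, I assume we'd have to roll up old data ourselves, something along these lines: a date_histogram + avg aggregation over the raw index, with the hourly averages written back to a rollup index. Again just a sketch, with made-up index and field names:

```python
# Sketch of manual downsampling: aggregate raw points into hourly averages
# and index them into a separate rollup index. Names are assumptions; the
# "interval" parameter is the ES 2.x form of date_histogram.
from elasticsearch import Elasticsearch
from elasticsearch.helpers import bulk

es = Elasticsearch(["http://localhost:9200"])

resp = es.search(
    index="metrics-2015.10.01",
    body={
        "size": 0,
        "query": {"term": {"name": "cpu.user"}},
        "aggs": {
            "per_hour": {
                "date_histogram": {"field": "@timestamp", "interval": "1h"},
                "aggs": {"avg_value": {"avg": {"field": "value"}}},
            }
        },
    },
)

rollups = (
    {
        "_index": "metrics-rollup-1h",
        "_type": "metric",  # required on ES 2.x
        "_source": {
            "@timestamp": bucket["key_as_string"],
            "name": "cpu.user",
            "avg_value": bucket["avg_value"]["value"],
        },
    }
    for bucket in resp["aggregations"]["per_hour"]["buckets"]
)
bulk(es, rollups)
```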
Is that 500K-1M metrics per second?
If so, your cluster is definitely too small.
2K logs/s is not a problem, at least not for ingestion; queries are the bigger question, as is retention. For the data volume you're referencing, you'll need to think seriously about architecture/design, whether you use ES or InfluxDB (does it support sharding and replication now?).
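To give an idea of the kind of design work involved: a common pattern is time-based indices created from a template that pins shard and replica counts, so retention becomes dropping whole indices instead of deleting documents. A hypothetical sketch (the values are examples, not recommendations for your hardware):

```python
# Hypothetical index template for time-based metric indices. The "template"
# key is the ES 2.x form (later versions use "index_patterns"); the numbers
# are examples, not tuned recommendations.
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

es.indices.put_template(
    name="metrics",
    body={
        "template": "metrics-*",               # matches the daily indices
        "settings": {
            "number_of_shards": 3,             # e.g. one primary per node
            "number_of_replicas": 1,
            "index.refresh_interval": "30s",   # relax refresh for heavy ingest
        },
    },
)

# Retention then becomes: delete indices older than N days, e.g.
# es.indices.delete(index="metrics-2015.09.*")
```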