At the moment we have a myriad of microservices running on-prem. However, we are preparing to move everything to the cloud (k8s on AWS). As a result I am redesigning our Elastic platform, since this will move to AWS as well.
Now, we have already figured out that the most convenient way is sidecars to the main app containers. However, I am in doubt. The most conventional way seems to be to run Filebeat in the sidecars and have it push to Logstash (also running on AWS), which in turn ingests into Elasticsearch.
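For context, a minimal sketch of what that conventional Filebeat sidecar might look like. The log path, Service hostname, and port are placeholders, not our actual setup:

```yaml
# filebeat.yml in the sidecar container (illustrative values only)
filebeat.inputs:
  - type: log
    paths:
      - /var/log/app/*.log   # shared emptyDir volume with the app container
output.logstash:
  hosts: ["logstash.logging.svc.cluster.local:5044"]
```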
But I was wondering: why not run a separate Logstash per app, directly in a sidecar, since Logstash is capable of using file inputs as well?
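In other words, each per-app sidecar Logstash would run something like this pipeline. Again, the paths, hosts, and index name are hypothetical:

```conf
# pipeline.conf for a per-app Logstash sidecar (illustrative values only)
input {
  file {
    path => "/var/log/app/*.log"       # shared volume with the app container
    start_position => "beginning"
  }
}
output {
  elasticsearch {
    hosts => ["https://elasticsearch.internal:9200"]
    index => "app-logs-%{+YYYY.MM.dd}"
  }
}
```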
Are there drawbacks to this approach?
We have a number of microservices (some of which are not really micro), and the ingest varies between 30,000 and 75,000 lines/sec in total across all of them. So sending everything to a central Logstash would make it rather big, while running one per app might keep each instance a bit smaller?
I understand, but since the centralised Logstash instances will also be handling quite a lot of traffic... will sidecarred Logstashes be more costly? I mean... way more costly?
My idea was that by eliminating one piece of the chain, it would reduce latency a little, and also scale more easily, since every new pod would also mean a new sidecarred Logstash.