Basic Architecture Question

New to Elasticsearch, so hoping someone can clear this up.
I need to use Winlogbeat and Filebeat to collect logs from servers, and I plan on having an Elasticsearch instance in Amazon. For various reasons, the clients (Winlogbeat/Filebeat) cannot send directly to Amazon. I was thinking of using a single, centralized syslog server that receives data from each client and forwards it to Elasticsearch in Amazon.
However, it looks like Winlogbeat does not support syslog output; its only outputs are Elasticsearch, Logstash, Kafka, Redis, File, Console, and Cloud.

Will one of these products, in concept, replace the syslog server in my plan? I.e., is there a product that can collect logs from each endpoint and act as the single source that sends logs out to Elasticsearch in the cloud?

You could run a few Elasticsearch "coordinating nodes" locally; your Beats clients would point to them, and they would forward the traffic on to your cloud servers. Run more than one for redundancy :slight_smile:
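For reference, a coordinating-only node is just a node with no other roles. A minimal sketch of its `elasticsearch.yml`, assuming a recent Elasticsearch version where `node.roles` is supported (the hostnames and cluster name here are placeholders, not from the thread):

```yaml
# elasticsearch.yml on the local coordinating-only node
cluster.name: my-cloud-cluster        # must match the cluster running in Amazon
node.name: local-coordinator-1
node.roles: []                        # an empty list makes this a coordinating-only node
discovery.seed_hosts:
  - es-node-1.example.internal        # hypothetical addresses of the cloud nodes
  - es-node-2.example.internal
```

The Beats clients would then list the local coordinators in `output.elasticsearch.hosts`.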

You could do something similar with Logstash; it depends on what you need.

So if I understand correctly, my servers (with Filebeat/Winlogbeat) could log to a local centralized Logstash (or Elasticsearch coordinating node), which in turn forwards the logs up to Elasticsearch in the cloud? Is that the best architecture?

Logstash is a pretty common pattern for your requirement. For HA, you could run a load balancer in front of two Logstash instances; that is also a common pattern.
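A sketch of that pattern, with placeholder hostnames (5044 is the conventional Beats port; the Beats side can also balance across multiple Logstash hosts itself):

```yaml
# winlogbeat.yml / filebeat.yml on each server
output.logstash:
  hosts: ["logstash-1.internal:5044", "logstash-2.internal:5044"]
  loadbalance: true
```

```conf
# Logstash pipeline: receive from Beats locally, forward to Elasticsearch in the cloud
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["https://es.example.internal:9200"]
    # add credentials (user/password) or cloud settings as appropriate
  }
}
```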

If you already have Kafka or another technology you could use that as well.
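If Kafka were the intermediary, the Beats side would be a small config change, roughly like this (broker address and topic name are hypothetical):

```yaml
# winlogbeat.yml: ship to a local Kafka cluster instead of Logstash
output.kafka:
  hosts: ["kafka-1.internal:9092"]
  topic: "winlogbeat"
```

Something downstream (e.g. Logstash with a Kafka input) would then consume the topic and write to Elasticsearch in the cloud.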

Perhaps I am missing something, but I am not sure it makes sense to have a local coordinating node join an ES cluster in Amazon, since the resulting cluster may not be stable. Spreading the nodes of a single ES cluster across a WAN is typically an anti-pattern.