I'm designing a new Elasticsearch cluster. Normally I'd have some sort of queue (e.g., Kafka, Redis, SNS/SQS) where all indexing requests would be queued, and a separate process would pull from the queue and feed them into ES via a data node. With the new Ingest node type, I'm curious about the following:
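For context, the consumer side of that queue-based architecture looks roughly like the sketch below: drain the queue into batches and ship each batch to ES's `_bulk` endpoint. This is a minimal, self-contained Python sketch; sending to ES is left out, and the batch sizes are just illustrative.

```python
import queue

def drain_batches(q, batch_size):
    """Drain a queue of indexing requests into batches of at most batch_size.

    In the real pipeline each batch would be POSTed to Elasticsearch's _bulk
    endpoint; here the batches are just collected so the logic stands alone.
    """
    batches = []
    batch = []
    while True:
        try:
            item = q.get_nowait()
        except queue.Empty:
            break
        batch.append(item)
        if len(batch) == batch_size:
            batches.append(batch)
            batch = []
    if batch:
        batches.append(batch)
    return batches

# Example: 7 queued documents drained in batches of up to 3.
q = queue.Queue()
for i in range(7):
    q.put({"index": {"_id": i}})
batches = drain_batches(q, 3)
print([len(b) for b in batches])  # → [3, 3, 1]
```

The queue is what gives this design durability: documents survive a crash of the consumer because they stay queued until the bulk request succeeds, which is exactly the property I'm asking about below.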
- Can the ingest node act as a queue, simplifying the architecture so that something like Kafka, Redis, or SNS/SQS is no longer needed?
- Does the ingest node persist requests before they are transformed, so it can recover if it crashes or the data needs to be re-indexed?
- If so, how many requests per second can it handle? I realize this depends on system specs and the pipeline rules applied, but is there a general sense of the throughput?