You need a node with an ingest role
i.e. you should already have one in Cloud.
# GET _cat/nodes
127.0.0.1 70 80 1 0.00 0.03 0.05 dlm - node1
127.0.0.1 46 80 1 0.00 0.03 0.05 dlmt * node2
127.0.0.1 63 80 1 0.00 0.03 0.05 dilmt - node3

In the node.role column, i denotes the ingest role - here only node3 has it.
With respect to load, please benchmark locally to see if there is any measurable performance impact. You don't really need the Painless script part from the blog example, and I doubt that a single set processor will make a noticeable difference - but it depends on your data and use case.
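For reference, a set processor on its own is a one-liner. A minimal sketch (the pipeline name and the target field are my own choices, not from the blog; `{{_ingest.timestamp}}` is the built-in ingest metadata field):

```json
PUT _ingest/pipeline/add_ingest_timestamp
{
  "description": "Stamp each document with the time it was ingested",
  "processors": [
    {
      "set": {
        "field": "event.ingested",
        "value": "{{_ingest.timestamp}}"
      }
    }
  ]
}
```

You would then reference this pipeline via the index's `index.default_pipeline` setting or the `pipeline` parameter on your indexing requests.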
Before you change your ingest, as a first step you could also try a manual validation that you have delayed data, by running a search that replicates what the delayed data check is trying to achieve. Assuming you have a bucket_span and a query_delay, create a date histogram search, e.g. a count of events every 15m from now-90m, say. Manually refresh this periodically over the course of the next 90m and see if the counts change as time elapses. Pay particular attention to the counts from time buckets that are greater than 30m ago. If these are changing, it suggests ingest latency.
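Such a search could look like the following sketch (the index name and timestamp field are placeholders for your own):

```json
GET my-index/_search
{
  "size": 0,
  "query": {
    "range": {
      "@timestamp": { "gte": "now-90m" }
    }
  },
  "aggs": {
    "events_per_bucket": {
      "date_histogram": {
        "field": "@timestamp",
        "fixed_interval": "15m"
      }
    }
  }
}
```

Re-run this a few times and compare the per-bucket doc counts; counts growing in buckets well in the past are the signature of documents arriving late.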