I have the following continuous `latest` transform running.
The source index has 10 unique key values. The destination index, at various times, shows between 1 and 10 docs when I view (and repeatedly refresh) the documents in the Kibana Discover view.
I want to be able to query the destination index and reliably get all 10 key results.
I have tried adding and removing the `max_page_search_size` setting on the transform, but to no avail.
I have the transform re-running and updating the destination at a high frequency, and during busy times the source index has new docs at each checkpoint.
Presumably, at the time the destination index docs are being updated, there is some brief period when they're unavailable to query. Is this right?
Is there any way to reliably always be able to query 10 docs in the destination index?
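For context, here is a minimal sketch of the kind of transform in question. The index names, key field, frequency, and delay are placeholders, not my actual config; the sync and sort fields match the ones discussed in this thread:

```
PUT _transform/latest-example
{
  "source": { "index": "source-index" },
  "dest":   { "index": "dest-index" },
  "frequency": "1m",
  "sync": {
    "time": { "field": "ingest_timestamp", "delay": "60s" }
  },
  "latest": {
    "unique_key": ["my_key_field"],
    "sort": "exchange_timestamp"
  }
}
```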
There should always be 10 results. They might not be recent if the transform isn't able to update them quickly, but if you don't get 10 results at all, I rather suspect you are running into a search failure. Did you create the destination index yourself or via the transform? Does it have more than 1 shard?
I suggest running the query via the REST API and checking whether you are getting search errors.
The other reason you might not see it in Discover is the time picker: it internally runs a range query, so you won't get a result if a doc is outside of that range. Can you expand the time range and see if that helps? As the destination index has only 10 docs, you don't actually need a range query.
Because you use `ingest_timestamp` for sync but `exchange_timestamp` for latest, I wonder if you have an out-of-order problem in your data.
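For example (index name is a placeholder), a plain search without any range filter, run in Dev Tools or via curl, would show whether all 10 docs are there and whether any shards failed:

```
GET dest-index/_search
{
  "size": 10,
  "query": { "match_all": {} }
}
```

In the response, check `hits.total` and also the `_shards` section: a non-zero `_shards.failed` would point to the kind of search failure I mean.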
This setting controls the number of search results the transform retrieves per call; I don't understand the motivation to tweak it for this use case.
You should always get results; there is no unavailability, but you might see an old version if the latest changes haven't been flushed. The transform refreshes the destination index after every checkpoint, so you should at least see the latest documents from the last finished checkpoint.
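You can see where the transform is in its checkpointing cycle via the stats API (transform id is a placeholder):

```
GET _transform/latest-example/_stats
```

The `checkpointing` section of the response shows the last completed checkpoint and any in-progress one, which tells you how recent the data in the destination index should be.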
Unfortunately, I didn't get a chance to run that on the original transform and destination index.
This morning I deleted the transform and destination index, then recreated the transform and PUT the index and its mapping manually (as dynamic mapping didn't work for me this time for some fields).
The other thing to note is that I had an ingest pipeline on the transform destination index, although I wouldn't expect this to have caused any issue.
I have eliminated that pipeline by using an alias of an existing field (in the pipeline it was setting a new field based on the existing field's value). I was also setting an ingest timestamp when transformed docs were indexed into the destination, but I have chosen to simply use the existing `ingest_timestamp` value from the source doc, so I am no longer overwriting it with an updated value.
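For anyone finding this later, the field alias replacing the pipeline looks roughly like this (field names are placeholders for mine):

```
PUT dest-index/_mapping
{
  "properties": {
    "new_field_name": {
      "type": "alias",
      "path": "existing_field"
    }
  }
}
```

Queries and aggregations against `new_field_name` then resolve to `existing_field` at search time, so no ingest-time copy is needed.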
Running the following is working reliably now. Happy days! I will continue to run and test it through the day, as this issue appeared to occur more often later in the afternoon, when our ingest rate increases.
Thanks @Hendrik_Muhs I'll post here if I have any further issues.