I use a scripted metric aggregation to build a summary of all entities in an index and write the results to a new index using a multi-document index action. If I limit the index action to around 10k inserts it works reasonably well, but if I scale up to 10M inserts the elected master node fails and crashes.
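For reference, a minimal sketch of the kind of watch I mean (the watch id, index names, field names, and scripts are placeholders for illustration; the index action treats a `_doc` array in the payload as multiple documents):

```
PUT _watcher/watch/entity_summary_watch
{
  "trigger": {
    "schedule": { "interval": "1h" }
  },
  "input": {
    "search": {
      "request": {
        "indices": ["source-index"],
        "body": {
          "size": 0,
          "aggs": {
            "entity_summary": {
              "scripted_metric": {
                "init_script": "state.docs = []",
                "map_script": "state.docs.add(['entity': doc['entity.keyword'].value])",
                "combine_script": "return state.docs",
                "reduce_script": "def all = []; for (s in states) { all.addAll(s) } return all"
              }
            }
          }
        }
      }
    }
  },
  "transform": {
    "script": "return ['_doc': ctx.payload.aggregations.entity_summary.value]"
  },
  "actions": {
    "index_summaries": {
      "index": {
        "index": "summary-index"
      }
    }
  }
}
```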
It seems to me that a single elected node inserts all the documents at once. It also appears that this node writes to a BytesStreamOutput, which can only hold 2GB. This is quite inefficient, since my nodes all have at least 64GB of memory.
Can anyone please elaborate on how the multi-document index action works and how I can distribute the load over the cluster? How can I tune my watcher so that it can handle big index actions?
Update: the multi-document index action seems to use the Bulk API, which returns the status of every inserted document. That response is then stored in the watcher history, which means I write the data to the cluster roughly twice! I coded another watcher that deletes the watcher history directly, because if you open a watch in Kibana it tends to load the watcher history, which causes Kibana to crash. Is there a solution for this problem? Maybe disabling the watcher history, or suppressing the Bulk API response?
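For reference, my cleanup boils down to something like this (Console syntax; the exact `.watcher-history-*` index pattern depends on your version, and wildcard deletes may require `action.destructive_requires_name: false` on newer clusters):

```
# Delete all watcher history indices -- use with care,
# this removes the execution record for every watch.
DELETE .watcher-history-*
```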