I've got a fairly straightforward use case: I'd like to email a CSV of data from Discover at a certain time every day.
I've got several other emails already automated and sending successfully this way (both PDF reports and CSVs from Discover), but now I'm running into a problem where certain Watchers aren't sending emails: the error claims the file size limit is exceeded, even though the attachments are definitely under that limit.
Small bump before the weekend: any suggestions on why this happens, or how to fix it? It makes no sense that it won't send attachments that are well under 10 MB when the error seems to say the problem is that the file is over 10 MB...
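In case it helps, the watches look roughly like this (a trimmed sketch of the standard reporting-attachment setup; the watch name, schedule, recipient, URL, and credentials below are placeholders rather than our real values):

```
PUT _watcher/watch/daily_discover_csv
{
  "trigger": {
    "schedule": { "daily": { "at": "06:00" } }
  },
  "actions": {
    "email_admin": {
      "email": {
        "to": "reports@example.com",
        "subject": "Daily Discover CSV",
        "attachments": {
          "data.csv": {
            "reporting": {
              "url": "<CSV report POST URL copied from Discover's Share menu>",
              "retries": 6,
              "interval": "30s",
              "auth": {
                "basic": {
                  "username": "reporting_user",
                  "password": "<password>"
                }
              }
            }
          }
        }
      }
    }
  }
}
```

The URL is the POST URL Kibana gives you when you share a CSV report from Discover.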
Is there a stack trace corresponding to an IOException in any of your Elasticsearch logs? It looks like you are hitting the limit configured by xpack.http.max_response_size (Watcher settings in Elasticsearch | Elasticsearch Guide [8.12] | Elastic), which has a default of 10 MB and a maximum of 50 MB. I am guessing that it is the HTTP response Watcher is fetching that is so large, but a stack trace might help to narrow that down.
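If that setting does turn out to be the culprit and you manage your own nodes, it goes in elasticsearch.yml, something like this (illustrative value only; the hard maximum is 50mb):

```
# elasticsearch.yml - raise Watcher's HTTP response cap (default 10mb, max 50mb)
xpack.http.max_response_size: 25mb
```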
Hey Keith, thanks for the reply. We're looking into that. It seemed like a good suggestion to chase down, and possibly something to do with a size limitation also imposed by our email server. Will circle back with how it goes.
Hi again Keith. Yep, definitely were getting that IOException you mentioned:
```
[instance-0000000006] failed to execute action [size_test_5/email_admin]
java.io.IOException: Maximum limit of [26214400] bytes reached
    at org.elasticsearch.xpack.watcher.common.http.SizeLimitInputStream.checkMaximumLengthReached(SizeLimitInputStream.java:68) ~[?:?]
    at org.elasticsearch.xpack.watcher.common.http.SizeLimitInputStream.read(SizeLimitInputStream.java:47) ~[?:?]
    at java.io.FilterInputStream.read(FilterInputStream.java:95)
```
Now, Elastic has in the last couple days upped our size limit to 25 MB. But now, when I create a Watcher to send an attachment that is, for example, 9.5 MB, we get this new, updated error from the watcher log:
So I'm not sure what this 25 MB limit is actually counting... Does it mean some other part of the email action is generating/adding 15.5 MB of... something, causing the Watcher to fail to send?
Attachments of 8 MB seem to be the new threshold for us: email watches with 8 MB or less in attachments send fine.
The xpack.http.max_response_size setting is not dynamic, meaning you have to restart the node to have it take effect. Have you updated that setting on all nodes and restarted them all? That appears to be the setting that controls the IOException in your stack trace.
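If you want to confirm what each node actually started with, the node info API will show any value that was explicitly configured (it does not list defaults); for example:

```
GET _nodes/settings?filter_path=nodes.*.settings.xpack.http.max_response_size
```

Nodes that started without the override simply won't appear in the filtered output.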