I have an Elastic Agent:
elastic-agent version
Binary: 8.9.0 (build: dc443bc2427920a26141b05f9c07a52191881af5 at 2023-07-19 20:55:16 +0000 UTC)
Daemon: 8.9.0 (build: dc443bc2427920a26141b05f9c07a52191881af5 at 2023-07-19 20:55:16 +0000 UTC)
elastic-agent status
┌─ fleet
│ └─ status: (HEALTHY) Connected
└─ elastic-agent
└─ status: (HEALTHY) Running
If I run:
elastic-agent logs | jq
I see errors like the following:
{
  "log.level": "error",
  "@timestamp": "2023-08-29T22:51:52.377Z",
  "message": "failed to publish events: Post \"https://myelasticsearch.example:443/_bulk\": write tcp [redacted]:33430->[redacted]:443: write: broken pipe",
  "component": {
    "binary": "filebeat",
    "dataset": "elastic_agent.filebeat",
    "id": "filestream-default",
    "type": "filestream"
  },
  "log": {
    "source": "filestream-default"
  },
  "log.origin": {
    "file.line": 174,
    "file.name": "pipeline/client_worker.go"
  },
  "service.name": "filebeat",
  "ecs.version": "1.6.0",
  "log.logger": "publisher_pipeline_output"
}
{
  "log.level": "error",
  "@timestamp": "2023-08-29T22:51:53.079Z",
  "message": "failed to perform any bulk index operations: Post \"https://myelasticsearch.example:443/_bulk\": write tcp [redacted]:33436->[redacted]:443: write: connection reset by peer",
  "component": {
    "binary": "filebeat",
    "dataset": "elastic_agent.filebeat",
    "id": "filestream-default",
    "type": "filestream"
  },
  "log": {
    "source": "filestream-default"
  },
  "log.origin": {
    "file.line": 258,
    "file.name": "elasticsearch/client.go"
  },
  "service.name": "filebeat",
  "ecs.version": "1.6.0",
  "log.logger": "elasticsearch"
}
If I use a protocol debugger, I can see that the backend is returning HTTP 413 (Entity Too Large) errors.
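For reference, I assume the same 413 could be reproduced outside of elastic-agent by POSTing an oversized body straight at the _bulk endpoint, roughly like this (the payload file and credentials below are placeholders, not from my setup):

curl -sk -o /dev/null -w "HTTP %{http_code}\n" \
  -u elastic:changeme \
  -H "Content-Type: application/x-ndjson" \
  -X POST "https://myelasticsearch.example:443/_bulk" \
  --data-binary @oversized-bulk.ndjson

which should print HTTP 413 once the body exceeds http.max_content_length.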
My Elasticsearch server is configured with "http.max_content_length": "100mb" (as seen via the _cluster/settings API).
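For completeness, this is roughly the query I mean; since http.max_content_length is a node-level setting, include_defaults=true seems to be needed to see it, and the credentials here are placeholders:

curl -sk -u elastic:changeme \
  "https://myelasticsearch.example:443/_cluster/settings?include_defaults=true&filter_path=**.http.max_content_length"

which should return something like {"defaults":{"http":{"max_content_length":"100mb"}}}.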
How can I tell how large the payloads that elastic-agent is sending to the server are?
How can I throttle the size of the payloads that elastic-agent sends to the server?
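In case it is relevant: from the Beats output.elasticsearch documentation I suspect bulk_max_size (and possibly compression_level) is the closest knob, set through the output's "Advanced YAML configuration" box in Fleet, e.g. (values purely illustrative; note that bulk_max_size counts events rather than bytes, so it would only indirectly bound the request size):

bulk_max_size: 500
compression_level: 1

but I am not sure whether that is the supported way to do this with elastic-agent, or how to confirm what the resulting request sizes actually are, hence the two questions above.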