Http.max_content_length update on cluster

Hello.
On Logstash, I am getting this error:

[ERROR][logstash.outputs.elasticsearch][main][4662344eb1eeab4baf336e2996a14ddadf8c61b8943c6e31c68cb582d77f72de] Encountered a retryable error (will retry with exponential backoff) {:code=>413, :url=>"http://elasticsearch-server:9200/_bulk", :content_length=>121168835}

How can I change the value "http.max_content_length: 200mb" on the whole cluster?
Thanks.

Hey,

This can only be configured statically, in the configuration file or via system properties at startup. Is there any chance you could send smaller bulk requests? Out of curiosity: why did you pick this value? Was it based on testing?
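For reference, a static setting like this lives in elasticsearch.yml and only takes effect after a node restart (a sketch using the 200mb value from the original post; note that static settings cannot be changed through the cluster settings API):

```
# elasticsearch.yml — static setting, requires a node restart to take effect
http.max_content_length: 200mb
```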

--Alex

Hello, Alex.
We picked this value because I kept seeing the error: Encountered a retryable error (will retry with exponential backoff) {:code=>413, ...}

So you suggest leaving this value at its default and instead looking at reducing the amount of data sent from Logstash, because raising it could affect cluster performance in the future?

We also set this value because the data was not reaching Elasticsearch. After restarting Logstash, the data was uploaded correctly.

And should «http.max_content_length: 200mb» be specified on all nodes of the cluster (master and data), or only on the nodes that the Logstash output points to?

So, Logstash will by default only send bulk requests up to a maximum of about 20MB... unless you are sending massive single documents.

Do you have a single document exceeding 100MB in size?
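If the goal is smaller bulk requests rather than a larger Elasticsearch limit, one knob that exists in logstash.yml is pipeline.batch.size (a sketch; 125 is the default, and the right value depends on the size of your events):

```
# logstash.yml — fewer events per batch generally means smaller _bulk requests
pipeline.batch.size: 50   # default is 125
```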

Hello, Alex
I think so, because we have 8 inputs on Logstash:
4 json_lines
3 json
1 beats

input {
    beats {
        port => 5044
        ssl => false
    }
}

input {
    tcp {
        port => 5045
        codec => json_lines
        type => "name-1-log"
    }
}

input {
    tcp {
        port => 5046
        codec => json_lines
        type => "name-2-log-demo"
    }
}

input {
    tcp {
        port => 5047
        codec => json_lines
        type => "name-3"
    }
}

input {
    tcp {
        port => 5048
        codec => json_lines
        type => "name-4"
    }
}

input {
    tcp {
        port => 5049
        codec => json
        type => "name-5"
    }
}

input {
    tcp {
        port => 5050
        codec => json
        type => "name-6"
    }
}

input {
    tcp {
        port => 5051
        codec => json
        type => "name-7"
    }
}

Since they all accept connections at the same time, the combined data exceeds 100 megabytes in total.
Or am I wrong?
Thanks
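The arithmetic behind the original 413 can be sketched like this (a minimal illustration; the content_length value comes from the error message at the top of the thread, and 100mb is Elasticsearch's default http.max_content_length — note the limit applies to each individual HTTP request, not to the sum of concurrent connections):

```python
# Sketch: why Elasticsearch returned HTTP 413 for the bulk request.
# content_length is taken from the Logstash error message above;
# 100mb is the default http.max_content_length in Elasticsearch.
DEFAULT_MAX_CONTENT_LENGTH = 100 * 1024 * 1024  # "100mb" default
content_length = 121_168_835  # bytes, from the 413 error

print(content_length > DEFAULT_MAX_CONTENT_LENGTH)  # True: the request exceeds the limit
print(f"{content_length / (1024 * 1024):.1f} MB")
```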

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.