Socket hang up while generating a CSV report

Hi.

I'm trying to generate a not-so-huge CSV report, less than 300 thousand hits, but I get a socket hang up error in Kibana, which eventually causes the report to fail.

The recent Kibana logs are as follows:

sudo tailf /var/log/kibana/kibana.log
{"type":"log","@timestamp":"2018-02-22T10:06:27Z","tags":["error","elasticsearch","admin"],"pid":1748,"message":"Request error, retrying\nPOST http://10.50.40.185:9200/.kibana/_search?size=0&from=0 => socket hang up"}
{"type":"log","@timestamp":"2018-02-22T10:06:27Z","tags":["error","elasticsearch","admin"],"pid":1748,"message":"Request error, retrying\nPOST http://10.50.40.185:9200/.kibana/_search?size=0&from=0 => socket hang up"}
{"type":"log","@timestamp":"2018-02-22T10:06:28Z","tags":["error","elasticsearch","admin"],"pid":1748,"message":"Request error, retrying\nPOST http://10.50.40.185:9200/.reporting-*/esqueue/_search?version=true => socket hang up"}
{"type":"log","@timestamp":"2018-02-22T10:06:28Z","tags":["error","elasticsearch","admin"],"pid":1748,"message":"Request error, retrying\nPOST http://10.50.40.185:9200/.reporting-*/esqueue/_search?version=true => socket hang up"}

And here is my Kibana config file:

cat /etc/kibana/kibana.yml 
server.host: 10.50.30.150
elasticsearch.url: http://10.50.30.150:9200
elasticsearch.requestTimeout: 3600000
pid.file: "/var/run/kibana/kibana.pid"
logging.dest: /var/log/kibana/kibana.log
logging.quiet: true
xpack.ml.enabled: false
xpack.graph.enabled: false
xpack.apm.ui.enabled: false
xpack.watcher.enabled: false
xpack.security.enabled: false
xpack.reporting.queue.timeout: 3600000
xpack.reporting.encryptionKey: "4322378651"
xpack.reporting.csv.maxSizeBytes: 104857600

BTW, the AWS load balancer idle timeout is set to 3600 seconds!
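
For what it's worth, a rough way to tell whether the hang up comes from the load balancer or from Elasticsearch itself could be to send the same request directly from the Kibana host. The endpoint below is copied from the log lines above; the rest is only a sketch, not something from this setup:

# Sketch: POST the failing search straight to the Elasticsearch node and print
# the HTTP status and total time. If this succeeds while Kibana's request hangs
# up, the load balancer (or its idle timeout) is the more likely culprit.
curl -s -o /dev/null -w '%{http_code} %{time_total}s\n' \
  -H 'Content-Type: application/json' \
  'http://10.50.40.185:9200/.reporting-*/esqueue/_search?version=true' \
  -d '{"query":{"match_all":{}}}'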

Any help will be appreciated

How far do you have to scale back the number of hits to get it to work? For example, if you set your time span to something shorter so that you get 200k hits, does the CSV work, and how long does it take?

Also, is there anything in your Elasticsearch log for it? Errors, etc?

Thank you for your reply.

There are no error/warn logs on any of the servers (master, ingest, data)!

It also fails for 200k. For 50k it works!

It took almost eight minutes.

Have you changed the max size for the CSV reporting?

The reason I ask is that if I have a saved search with only a couple of columns of data selected, I can still only get about 150k docs before I hit the max size of 10 MB. And for me that takes less than 1 minute.

For mine, the Reporting page shows "completed - max size reached" for the report.

Thanks for your reply LeeDr.

Yeah, I already set that to 100 MB!
The problem is the socket hang up, which should be related to a timeout misconfiguration or ...? Not sure exactly!

Please let me know if you have any suggestions, or if I should upload some more config files.

@hari the way that Reporting currently works, the export (in this situation the CSV) is indexed into Elasticsearch as a single document so the user can download it later. The operation that stores the export in Elasticsearch is a single HTTP request. Elasticsearch enforces a maximum HTTP request size of 100 MB by default, and the additional overhead of the HTTP request itself causes us to exceed this threshold. You'll want to lower xpack.reporting.csv.maxSizeBytes to something like 99 MB to no longer hit this limit, or adjust Elasticsearch's http.max_content_length (documented here) to something higher.
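
For reference, a minimal sketch of the two options above; the specific values are only examples, not recommendations:

# kibana.yml — keep the stored CSV safely under Elasticsearch's default 100 MB request limit
xpack.reporting.csv.maxSizeBytes: 99000000

# elasticsearch.yml — or instead raise the HTTP request size limit on the Elasticsearch side
http.max_content_length: 200mb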

With all this being said, storing really large documents (even 100 MB) in Elasticsearch is not advised; it hasn't been tested and it's not supported, so you are doing so at your own risk. We intend to address this limitation by allowing the exported CSV to no longer be stored as a single document, but to be "chunked" into multiple documents and combined when streaming them to the end user.


Thank you @Brandon_Kobel. After lowering the limit, it works now.
