Kibana Refresh Fields fails with Error: socket hang up

Hello All,
We need to run "Refresh Fields" on an index pattern in Kibana 5.4.1 (running with ES 5.4.1) and are running into issues getting the operation to complete. I'm not really sure when the problem started, but it has become an issue we need to resolve: we need the refresh to work so we can change the type of a field from "string" to "number".

The error case: run "Refresh Fields" - wait about 3 minutes and eventually get one of these errors:

  • If using the nginx proxy: Error 504 (Gateway Time-out) - see the nginx timeout note after this list
  • If communicating with K5 directly: "Error 503 Service Unavailable: Error: Error: socket hang up"
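
The 504 from nginx is usually just nginx giving up before Kibana answers. If the proxy timeouts are at their defaults (60s), raising them at least rules nginx out as the thing dropping the connection. A minimal sketch, assuming a standard reverse-proxy location block for Kibana (paths and values are assumptions, not our actual config):

  # nginx site config for the Kibana proxy
  location / {
      proxy_pass          http://127.0.0.1:5601;
      proxy_read_timeout  900s;   # allow long-running field_capabilities calls
      proxy_send_timeout  900s;
  }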

The actual call that comes back with the 503 is

"http://:5601/api/kibana/<index_name>-*/field_capabilities"

From turning on Kibana verbose logging, I can see that we're actually getting this error back every 30 seconds while Kibana is talking to Elasticsearch:

{"type":"log","@timestamp":"2017-08-21T22:09:46Z","tags":["error","elasticsearch","data"],"pid":27431,"message":"Request error, retrying\nPOST http://:9200/<index_name>-/_field_stats?fields=&allow_no_indices=false =>
socket hang up"}

It seems that something is timing out and hanging up on the Elasticsearch side every 30 seconds, and then Kibana retries.

From other threads on the forum, I've tried adding "http.max_header_size: 16kb" and moving that up to 32kb without any difference. I've also tried setting "http.compression: false" with no change. (I have only been making these changes to the ES configuration on the Kibana node, which is not a data node or a master node.)
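
For reference, this is where those settings ended up, in elasticsearch.yml on the ES instance Kibana points at (the restart note is an assumption about how the node is managed):

  # elasticsearch.yml on the Kibana-facing (non-data, non-master) node
  http.max_header_size: 32kb    # bumped from the 8kb default, then from 16kb
  http.compression: false
  # restart the node after changing these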

Any ideas on how to deal with this socket hang up condition? Or any ideas on how to manually update the index pattern to meet our needs? (We already set the mapping for the field to be a long.)
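
On the manual-update side: in Kibana 5.x the index pattern is stored as an "index-pattern" document in the .kibana index, with the field list serialized as a JSON string in its "fields" property. A rough sketch of inspecting it directly (host, index name, and document id are assumptions for a default install; back up .kibana before writing anything back):

  # The _id is the index pattern title, e.g. "<index_name>-*"
  curl -s 'http://localhost:9200/.kibana/index-pattern/<index_name>-*?pretty'
  # Edit the offending field's "type" inside the JSON string in "fields",
  # then PUT the whole document back to the same URL.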

There are only 2234 fields in the indices that are having this issue. Perhaps I just need to keep pushing up max_header_size?

(CORS settings are already set, and kibana.yml is already set with elasticsearch.requestTimeout: 900000.)
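
For reference, the kibana.yml side looks like this; elasticsearch.requestTimeout defaults to 30000 ms, which lines up with the 30-second retry cadence above, so it's worth confirming the running instance has actually picked up the new value (the pingTimeout line is an assumption that it should be raised alongside):

  # kibana.yml - timeouts Kibana applies to its own calls to Elasticsearch
  elasticsearch.requestTimeout: 900000   # ms; default 30000
  elasticsearch.pingTimeout: 900000      # ms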

Have you checked the ES logs? Anything you can provide there would be helpful.

How many fields do you have for this index in particular?

Sorry for the delayed response.
(I'll check my notification settings to make sure I'm seeing replies in email.)

I didn't spot anything in the ES logs unfortunately.

2234 fields in the specific index.
-Jeff

Hello, do you have any news on this topic? I have the same issue and have tried the same things as @JeffParsons did, without success.

@tsmalley - any ideas on what to do with this issue?

I'm suspecting that moving off of 5.4.1 may help resolve the issue. Any thoughts on that approach?

-Jeff

@JeffParsons + @Jua, is there a proxy between Kibana and ES? I am thinking the connection is being dropped there, or possibly being timed out in ES.

The number of fields is most likely the culprit for the field_capabilities request taking so long.

Hey @tsmalley,
We do have nginx in front of K5 normally, but even if I go directly to Kibana (which talks directly to a local, non-data-node ES instance), we experience this issue.

Any ideas on how to get K5 to set a useful timeout on the call to Elasticsearch for "_field_stats"?
-Jeff

Hi @tsmalley, I don't have any proxy between Kibana and ES and still get the timeout.

In previous Elasticsearch versions we had the dotkibana tool, which refreshed indexes without the timeout problem. Do you have something similar that can refresh the index pattern without running into the timeout?

Hi, this seems to be solved in version 5.6.1. The index refresh operation is very quick.

Thanks @tsmalley!

That's great to hear. We were planning to start testing upgrades as a solution. Thanks @Jua

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.