Elasticsearch 6.2 bulk indexing ends in Bad Gateway


So I'm currently running an Elasticsearch 6.2 instance in a Docker container using the official image. (Our CTO did the configuration, so I don't have all the info.)

I'm trying to bulk index documents, but it always ends in a Bad Gateway error.
There are 1357 documents to index. The total size of the JSON is around 12 MB.
I tried to bulk index the documents in batches of 250 to keep the JSON size reasonable (2.4 MB), but the indexing crashes and returns "Bad Gateway" on the third bulk request. I tried reducing the batch size to 50 and it got worse: it now crashes on the second bulk request.
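For reference, here's a minimal sketch of how documents can be split into bulk batches; the index name, document shape, and helper name are placeholders for illustration, not the actual setup described above:

```python
import json


def make_bulk_bodies(docs, index, batch_size=250):
    """Split docs into NDJSON bodies for the _bulk API, batch_size docs each."""
    bodies = []
    for start in range(0, len(docs), batch_size):
        lines = []
        for doc in docs[start:start + batch_size]:
            # In Elasticsearch 6.x, bulk index actions still carry a type (e.g. "_doc").
            lines.append(json.dumps({"index": {"_index": index, "_type": "_doc"}}))
            lines.append(json.dumps(doc))
        # The _bulk API requires the body to end with a newline.
        bodies.append("\n".join(lines) + "\n")
    return bodies


docs = [{"id": i} for i in range(1357)]
bodies = make_bulk_bodies(docs, "my-index")
print(len(bodies))  # 1357 docs in batches of 250 -> 6 requests
```

Each body would then be POSTed to `/_bulk` with `Content-Type: application/x-ndjson`.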

I thought it was an issue with the refresh_interval, so I set it to -1 to disable it. I also set number_of_replicas to 0, just for the test. It still ends in the same "Bad Gateway".
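For what it's worth, the settings change described above would be a request body along these lines (the index name `my-index` is a placeholder):

```python
import json

# Settings body for: PUT /my-index/_settings
settings = {
    "index": {
        "refresh_interval": "-1",   # disable periodic refresh during bulk loading
        "number_of_replicas": 0     # no replica writes, just for the test
    }
}
body = json.dumps(settings)
print(body)
```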

I went inside the Docker container and ran htop to see what was going on. The memory is just fine, but the CPU skyrockets to 100% and then the container crashes.

Honestly, I'm clueless here. I've got no idea what's going on: I use Elasticsearch (who doesn't? :smiley:), but I don't usually configure it.

I hope I provided all the necessary information; don't hesitate to ask for more if this is not the case.

Thanks for your help!

Elasticsearch doesn't itself return a Bad Gateway error, so I think this is coming from something between Elasticsearch and your client, such as a reverse proxy.

When you say "and then the container crashes" what does this mean? Does Elasticsearch exit? If so, what exceptions is it logging? Or is something else shutting the container down?

Hello, thanks for your answer.

Well, the container is perfectly fine and running right before the bulk indexing. When I run the bulk indexing I get disconnected from the container (in my shell), and docker ps shows that the container is rebooting. I don't think Elasticsearch has a chance to exit, considering the container is rebooting.

I'm sorry but I have no idea where to find Elasticsearch logs...

I don't think anything is shutting down the container. It is a local architecture and rather basic. We have several containers: PHP, MariaDB, Elasticsearch, Kibana, Redis and Traefik. We don't use anything like Rancher so there is no supervision of containers.

I don't understand what "rebooting" means in this context. Do you mean that the container is restarting? Is your Docker environment set to restart a container if it stops?

If the container has stopped then Elasticsearch must have exited, and it will almost certainly have logged a reason for that. The only time it doesn't log messages on exit is when the exit happens very suddenly due to some external force (e.g. SIGKILL or the OOM killer). The OOM killer logs messages that are readable with dmesg.

The location of the logs will depend a bit on how your environment is configured. They normally go to stdout, and I think they are also written to /usr/share/elasticsearch/logs within the container, which might be mounted somewhere persistent.
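Assuming the container is named `elasticsearch` (a placeholder; yours may differ), checking the logs and the kernel's OOM-killer messages would look roughly like this:

```shell
# The official image sends Elasticsearch logs to stdout, so Docker captures them:
docker logs --tail 100 elasticsearch

# On the Docker host (here, inside the VM), look for OOM-killer activity:
dmesg | grep -iE 'out of memory|killed process'
```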


Yes, I mean restarting. Yes, Docker will restart any container that stops.

Currently we are running Docker inside a virtual machine. I allocated another CPU to the virtual machine and that fixed it. I assume Elasticsearch couldn't run properly with only 2 CPUs; I don't have a very powerful CPU on my local machine.

I consider my issue solved. Thanks a lot for your help!


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.