Error: Request error, retrying - connect EMFILE

I have 240k requests to insert into Elasticsearch, but I get this error:

How can I fix this?

Hi @mcolla Welcome to the community.

First, please do not post images of text; they are often hard or impossible to read, can not be searched, and can not be copied for debugging.

Please paste in text... select the text and format with the </> button.

Second, we are going to need more information.

What version are you using?

What are you using to ingest: Logstash, the Elasticsearch _bulk API?

What commands are you running?

What does the file / data look like?

Can you provide more of the error message?

Looks like you might be trying to use old syntax and/or types; if so, See Here

In order to help, you are going to need to provide more complete information... there is not enough here to help.

Hi @stephenb , thanks!
I use a queue on RabbitMQ where the APIs register their logs. In one day, 240k logs accumulated.
When I run my worker (in TypeScript), which is my consumer (file below), this error explodes in the terminal:

Elasticsearch ERROR: 2022-07-01T16:58:54Z
  Error: Request error, retrying
  POST http://localhost:9200/enterprise/_doc?type=log => connect EMFILE - Local (undefined:undefined)
      at Log.error (/home/...."

I need to persist these 240k messages (logs) from the queue into Elasticsearch...

I am not that familiar with RabbitMQ; you should really have put that in your subject line to get help with it.

What version of Elasticsearch are you running?

We have no clue what context that error is coming from... what is configured, how you are trying to do this... more info is needed, otherwise we can not help.

240K events ... should be no problem....

Are RabbitMQ and Elasticsearch on the same host? Otherwise that localhost will not work.

Not sure how / what you have configured to try to POST the data from RabbitMQ to Elasticsearch.

Are you using Logstash? That is a pretty common method to ingest from RabbitMQ into Elasticsearch:

RabbitMQ -> Logstash with RabbitMQ input -> Elasticsearch
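That flow can be sketched as a minimal Logstash pipeline. This is only an illustration: the host, queue name, and index below are hypothetical placeholders, and the `rabbitmq` input plugin must be available in your Logstash install.

```conf
input {
  rabbitmq {
    host  => "localhost"      # RabbitMQ broker (assumed local)
    queue => "my-log-queue"   # hypothetical queue name
    ack   => true             # acknowledge messages once read
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]  # assumed local Elasticsearch
    index => "enterprise-logs"          # hypothetical index name
  }
}
```

Logstash batches events internally, so draining a large queue this way does not open one connection per message.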

240K events ... should be no problem.... but here, it is a problem :S
We are running Elasticsearch version 7.9.2.

Both (RabbitMQ and Elasticsearch) are on the same host.
I'm not using Logstash, because the logs are already on the RabbitMQ queue; we just need to insert the 240k stored messages into Elasticsearch, that's it!

The context is: we have many APIs, and each of them calls RabbitMQ directly to insert a log on the queue. The worker.js is responsible for listening to this queue and persisting each log in Elasticsearch. And it works well!
But last night worker.js stayed offline, and 240k messages accumulated in the queue. When I ran worker.js again, it tried to get all the messages from the queue (240k), and Elasticsearch showed the mentioned error.
The impression it gives is that for each message in the queue, it opens a new connection to Elasticsearch and can't keep so many connections open.
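If that is the case, one common pattern is to create the client once and reuse it for every message. A library-free sketch of the idea, where `EsClient` and `createClient` are hypothetical stand-ins for whatever `./client/elasticsearch` actually builds:

```typescript
// Memoize the factory so every call to getClient() returns the same
// instance (and thus one connection pool), instead of opening a new
// socket per message.
type EsClient = { index: (params: object) => Promise<unknown> };

let cached: EsClient | null = null;

function createClient(): EsClient {
  // in the real module this would construct the Elasticsearch client once
  return { index: async () => ({ result: 'created' }) };
}

function getClient(): EsClient {
  if (cached === null) {
    cached = createClient(); // first call: build the client
  }
  return cached; // every later call: same instance
}
```

With this shape, calling `getClient()` inside the consumer callback is harmless, because it always hands back the same underlying client.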

My worker.js:

import dotenv from 'dotenv';
import getClient from './client/elasticsearch';
import * as Amqp from 'amqp-ts';

dotenv.config();

try {
  const connection = new Amqp.Connection(process.env.RABBITMQ_HOST);
  const exchange = connection.declareExchange('ExchangeEnterprise');
  const queue = connection.declareQueue('EnterpriseLogQueue');
  queue.bind(exchange);

  queue.activateConsumer(
    function (message) {

      // fake a second of work for every dot in the message
      const content = message.getContent();
      const contentParsed = JSON.parse(content);

      const seconds = content.split('.').length - 1;
      console.log(` [x] received message: ${contentParsed}`);

      setTimeout(async function () {
        const log = {
          scope: contentParsed.scope,
          type: contentParsed.type,
          app_id: contentParsed.app_id,
          route: contentParsed.route,
          payload: contentParsed.payload,
        };

        const client = getClient();

        // persist on elasticsearch
        const result = await client.index({
          index: 'enterprise',
          type: 'log',
          body: {
            app_id: log.app_id,
            route: log.route,
            payload: log.payload,
            created_at: new Date(),
          },
        });

        console.log(' [x] Done');
        // acknowledge that the message has been received (and processed)
        message.ack();
      }, seconds * 1000);
    },
    { noAck: false },
  );

  console.log(`[*] ${Date()}`);
  console.log(`[*] Waiting for messages. To exit press CTRL+C`);
} catch (err) {
  console.error(err);
}

Elasticsearch version 7.9 is EOL and no longer supported. Please upgrade ASAP.

(This is an automated response from your friendly Elastic bot. Please report this post if you have any suggestions or concerns :elasticheart: )

Hmm, I think you are going to need to figure out how to read the queue in batches / loop / pause etc... I do not know what the underlying code is doing.

It is possible / probable, yes, that you are overwhelming Elasticsearch, but as far as I know that is not an Elasticsearch error... That is a Node.js error related to opening file descriptors / sockets, so this is on the Node side, not really the Elastic side. Either read the queue in batches, or in a loop, etc. If you are correct, the OS / Elasticsearch is not going to be happy trying to open 240K client connections.
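The batching idea can be sketched without any Elasticsearch specifics: buffer the documents and flush them in fixed-size groups, so 240K messages become a few hundred requests instead of 240K sockets. The flush callback is injected; in a real worker it could call the _bulk API. `Batcher` and the sizes below are hypothetical, not anything from the thread's actual code:

```typescript
// Accumulate documents and flush them in fixed-size batches through an
// injected async callback.
type FlushFn<T> = (batch: T[]) => Promise<void>;

class Batcher<T> {
  private buffer: T[] = [];

  constructor(private batchSize: number, private flush: FlushFn<T>) {}

  // add one document; flush automatically when the buffer fills up
  async add(doc: T): Promise<void> {
    this.buffer.push(doc);
    if (this.buffer.length >= this.batchSize) {
      await this.drain();
    }
  }

  // flush whatever is buffered (call once the queue is empty)
  async drain(): Promise<void> {
    if (this.buffer.length === 0) return;
    const batch = this.buffer;
    this.buffer = [];
    await this.flush(batch);
  }
}

// usage: 1,250 docs with a batch size of 500 → three flushes
async function demo(): Promise<number[]> {
  const sizes: number[] = [];
  const batcher = new Batcher<number>(500, async (batch) => {
    sizes.push(batch.length); // real worker: send `batch` via the bulk API
  });
  for (let i = 0; i < 1250; i++) await batcher.add(i);
  await batcher.drain(); // flush the final partial batch
  return sizes; // [500, 500, 250]
}
```

Acknowledging the RabbitMQ messages only after their batch flushes successfully would keep the at-least-once guarantee intact.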

Again, you could probably install Logstash locally in a few minutes, drain the queue in another few minutes, then get rid of it and go about your business... Otherwise you are going to need to work on the code on your worker side.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.