[ConnectionError]: connect ECONNREFUSED 127.0.0.1:9200 and Server is not ready

After launching the docker-compose file I have this issue: when I try to see the dashboard on localhost:5601, the response is "Kibana server is not ready yet". What's the real problem? I'm able to query Elasticsearch directly. I report the log output below.

    kibana           | {"type":"log","@timestamp":"2021-04-30T12:41:43+00:00","tags":["debug","metrics","ops"],"pid":7,"ecs":{"version":"1.7.0"},"event":{"kind":"metric","category":["process","host"],"type":"info"},"process":{"uptime":98,"memory":{"heap":{"usedInBytes":131746896}},"eventLoopDelay":0.8419680000515655},"host":{"os":{"load":{"1m":9.51,"5m":4.49,"15m":1.96}}},"message":"memory: 125.6MB uptime: 0:01:38 load: [9.51,4.49,1.96] delay: 0.842"}
    kibana           | {"type":"log","@timestamp":"2021-04-30T12:41:45+00:00","tags":["debug","elasticsearch","query","data"],"pid":7,"message":"[ConnectionError]: connect ECONNREFUSED 127.0.0.1:9200"}

I report below the docker-compose file.

    version: '2'

    services: 
      zookeeper:
        image: wurstmeister/zookeeper:3.4.6
        ports:
         - "2181:2181"
      kafka:
        build: .
        ports:
         - "9092:9092"
        expose:
         - "9093"
        environment:
          KAFKA_ADVERTISED_LISTENERS: INSIDE://kafka:9093,OUTSIDE://localhost:9092
          KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
          KAFKA_LISTENERS: INSIDE://0.0.0.0:9093,OUTSIDE://0.0.0.0:9092
          KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
          KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
          KAFKA_CREATE_TOPICS: "streaming_stream:1:1,batch_stream:1:1,output_batch:1:1,output_streaming:1:1"
        volumes:
         - /var/run/docker.sock:/var/run/docker.sock
     
      jobmanager:
        image: pyflink/playgrounds:1.10.0
        volumes:
          - ./examples:/opt/examples
        hostname: "jobmanager"
        expose:
          - "6123"
        ports:
          - "8088:8088"
        command: jobmanager
        environment:
         - |
            FLINK_PROPERTIES=
            jobmanager.rpc.address: jobmanager
      taskmanager:
        image: pyflink/playgrounds:1.10.0
        volumes:
          - ./examples:/opt/examples
        expose:
          - "6121"
          - "6122"
        depends_on:
          - jobmanager
        command: taskmanager
        links:
          - jobmanager:jobmanager
        environment:
         - |
           FLINK_PROPERTIES=
           jobmanager.rpc.address: jobmanager
           taskmanager.numberOfTaskSlots: 2
      elasticsearch:
        restart: always
        image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.4.0
        container_name: elasticsearch
        ulimits:
         memlock:
          soft: -1
          hard: -1
        volumes:
         - vibhuviesdata:/usr/share/elasticsearch/data    
        ports:
         - 9200:9200
        networks:
         - es-net
        environment:
         - discovery.type=single-node
         - bootstrap.memory_lock=true
         - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
      kibana:
        image: docker.elastic.co/kibana/kibana:7.12.0
        mem_limit: 5096m
        mem_reservation: 4096m
        container_name: kibana
        restart: always
        networks:
         - es-net
        environment:
          ELASTICSEARCH_URL: "http://localhost:9200"
          ELASTICSEARCH_HOSTS: "http://localhost:9200"  
          elasticsearch.ssl.verificationMode: none  
          LOGGING_VERBOSE: "true"
        ports:
          - 5601:5601
    volumes:
      vibhuviesdata:
        driver: local
    networks:
      es-net:
        driver: bridge 

I'm not sure localhost resolves correctly inside the Kibana container. Would you mind testing whether using the container name, as recommended in Running the Elastic Stack on Docker | Getting Started [7.15] | Elastic, addresses the problem?
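As a sketch (assuming the `elasticsearch` container name and `es-net` network from the compose file above), the Kibana service would point at the container by name rather than at localhost:

```yaml
  kibana:
    image: docker.elastic.co/kibana/kibana:7.12.0
    environment:
      # Use the Elasticsearch container name here; inside the Kibana container,
      # "localhost" refers to the Kibana container itself, not to Elasticsearch.
      ELASTICSEARCH_HOSTS: "http://elasticsearch:9200"
    networks:
      - es-net
```

Both containers must be attached to the same Docker network for the name to resolve.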

Thanks for the suggestion. I managed to solve the connect ECONNREFUSED log error, but I still can't visualize the dashboard; the page keeps showing "Kibana server is not ready yet".
How can I solve it?

It has started flushing this snippet in the log:

    kibana           | {"type":"log","@timestamp":"2021-04-30T13:34:20+00:00","tags":["debug","elasticsearch","query","data"],"pid":7,"message":"200\nGET /_nodes?filter_path=nodes.*.version%2Cnodes.*.http.publish_address%2Cnodes.*.ip"}
    kibana           | {"type":"log","@timestamp":"2021-04-30T13:34:22+00:00","tags":["debug","elasticsearch","query","monitoring"],"pid":7,"message":"400\nGET /_xpack?accept_enterprise=true\n"}
    kibana           | {"type":"log","@timestamp":"2021-04-30T13:32:22+00:00","tags":["warning","plugins","licensing"],"pid":7,"message":"License information could not be obtained from Elasticsearch due to [illegal_argument_exception] request [/_xpack] contains unrecognized parameter: [accept_enterprise] :: {\"path\":\"/_xpack?accept_enterprise=true\",\"statusCode\":400,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[{\\\"type\\\":\\\"illegal_argument_exception\\\",\\\"reason\\\":\\\"request [/_xpack] contains unrecognized parameter: [accept_enterprise]\\\"}],\\\"type\\\":\\\"illegal_argument_exception\\\",\\\"reason\\\":\\\"request [/_xpack] contains unrecognized parameter: [accept_enterprise]\\\"},\\\"status\\\":400}\"} error"}
    kibana           | {"type":"log","@timestamp":"2021-04-30T13:32:23+00:00","tags":["debug","elasticsearch","query","data"],"pid":7,"message":"200\nGET /_nodes?filter_path=nodes.*.version%2Cnodes.*.http.publish_address%2Cnodes.*.ip"}

Can you see any activity in the Kibana logs? Maybe it is running migrations?

It's because you're running Elasticsearch v7.4.0 with Kibana v7.12.0. They should both be on the same version; see the stack compatibility matrix.
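For example, pinning both images to a common release (7.12.1 here, one of the versions mentioned in this thread, chosen as an illustration) would look like:

```yaml
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.12.1
  kibana:
    # Kibana should match the Elasticsearch version exactly
    image: docker.elastic.co/kibana/kibana:7.12.1
```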

I've upgraded Elasticsearch to 7.12.1, but now it returns this issue:

    elasticsearch exited with code 137
    kibana           | {"type":"log","@timestamp":"2021-04-30T14:35:13+00:00","tags":["debug","metrics","ops"],"pid":7,"ecs":{"version":"1.7.0"},"event":{"kind":"metric","category":["process","host"],"type":"info"},"process":{"uptime":170,"memory":{"heap":{"usedInBytes":148955224}},"eventLoopDelay":10.207666999660432},"host":{"os":{"load":{"1m":26.21,"5m":12.52,"15m":5.39}}},"message":"memory: 142.1MB uptime: 0:02:50 load: [26.21,12.52,5.39] delay: 10.208"}
    kibana           | {"type":"log","@timestamp":"2021-04-30T14:35:13+00:00","tags":["debug","elasticsearch","query","data"],"pid":7,"message":"[ConnectionError]: connect ECONNREFUSED 127.0.0.1:9200"}
    kibana           | {"type":"log","@timestamp":"2021-04-30T14:35:16+00:00","tags":["debug","elasticsearch","query","data"],"pid":7,"message":"[ConnectionError]: connect ECONNREFUSED 172.20.0.3:9200"}
    kibana           | {"type":"log","@timestamp":"2021-04-30T14:35:18+00:00","tags":["debug","metrics","ops"],"pid":7,"ecs":{"version":"1.7.0"},"event":{"kind":"metric","category":["process","host"],"type":"info"},"process":{"uptime":175,"memory":{"heap":{"usedInBytes":149510224}},"eventLoopDelay":1.4643759997561574},"host":{"os":{"load":{"1m":24.43,"5m":12.38,"15m":5.39}}},"message":"memory: 142.6MB uptime: 0:02:55 load: [24.43,12.38,5.39] delay: 1.464"}
    kibana           | {"type":"log","@timestamp":"2021-04-30T14:35:18+00:00","tags":["debug","elasticsearch","query","data"],"pid":7,"message":"[ConnectionError]: connect ECONNREFUSED 172.20.0.3:9200"}

The modification made to the docker-compose file is the following, for elasticsearch:

    image: docker.elastic.co/elasticsearch/elasticsearch:7.12.1

It seems your Elasticsearch container cannot start due to an OOM kill. Could you increase the memory size?
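In a version-2 compose file, a per-container cap can be set with `mem_limit` (the sizes below are assumptions; pick values that fit your host, keeping the JVM heap well below the container limit):

```yaml
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.12.1
    # Hard cap on container memory (compose file format v2 syntax)
    mem_limit: 2g
    environment:
      - discovery.type=single-node
      # JVM heap sizing; note the '=' sign and the space between the options
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
```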

How can I increase the memory of just the elasticsearch container? I've tried to insert the same lines used for kibana:

    mem_limit: 5096m
    mem_reservation: 4096m

but it doesn't work. It keeps giving me the same error.

    elasticsearch:
        restart: always
        image: docker.elastic.co/elasticsearch/elasticsearch:7.12.1
        container_name: elasticsearch
        mem_limit: 5096m
        mem_reservation: 4096m
        ulimits:
          memlock:
            soft: -1
            hard: -1
        volumes:
          - vibhuviesdata:/usr/share/elasticsearch/data
        ports:
          - 9200:9200
        networks:
          - es-net
        environment:
          - discovery.type=single-node
          - bootstrap.memory_lock=true
          - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
    kibana:
        image: docker.elastic.co/kibana/kibana:7.12.0
        mem_limit: 5096m
        mem_reservation: 4096m
        container_name: kibana
        restart: always
        networks:
          - es-net
        environment:
          ELASTICSEARCH_URL: "http://localhost:9200"
          ELASTICSEARCH_HOSTS: "http://elasticsearch:9200"
          elasticsearch.ssl.verificationMode: none
          xpack.monitoring.ui.container.elasticsearch.enabled: "true"
          LOGGING_VERBOSE: "true"
        ports:
          - 5601:5601
    volumes:
      vibhuviesdata:
        driver: local
    networks:
      es-net:
        driver: bridge

This is the relevant part.

Have you tried to increase the available memory size for the Docker app? Docker Container exited with code 137

I'm using a virtual machine on Linux (Ubuntu); I haven't installed Docker as an app. Sorry if I said something strange or wrong, I'm a beginner with all these technologies. (The link you sent me uses compose file version 3, while my docker-compose file is on version 2.)

I managed to solve it by removing the images of different versions that I had installed. Thanks for the support @Mikhail_Shustov


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.