Exception in thread "main" BindTransportException[Failed to bind to [9300-9400]]; nested: ChannelException[Failed to bind to: /167.99.56.190:9400]; when running on a DigitalOcean cloud server

I want to host my multi-container Docker application, and Elasticsearch is one of the services in it. I use a docker-compose.yml file to deploy it on a DigitalOcean cloud server with 2 GB of memory.

I am just trying to host a single node in production, and I am not sure if that is possible or not.
I run into a binding error: I am not able to bind network.bind_host to the DigitalOcean IP address in elasticsearch.yml.

Here is my elasticsearch.yml file:

# path:
# data: /var/data/elasticsearch
# logs: /var/log/elasticsearch
# plugins: /data/plugins
# work: /data/work

# Cluster information
cluster.name: production
node.name: mooc-search-docker-single-node
# xpack.security.enabled: false
# xpack.license.self_generated.type: basic
# xpack.security.transport.ssl.enabled: true
# xpack.security.transport.ssl.verification_mode: certificate
# xpack.security.transport.ssl.key: /home/es/config/x-pack/node01.key
# xpack.security.transport.ssl.certificate: /home/es/config/x-pack/node01.crt
# xpack.security.transport.ssl.certificate_authorities: [ "/home/es/config/x-pack/ca.crt" ]


# Security settings
# script.disable_dynamic: true
# script.inline: on
# script.indexed: on

# Index settings
# action.auto_create_index: false

# network settings
network.host: _local_
network.bind_host: 167.99.56.190  # this causes the error; if I put 0 instead it starts, but then I am not able to access it from my local machine
# network.tcp.reuse_address: true
http.port: 9200

http.cors.enabled: true
http.cors.allow-origin: "*"


# Turn off swap to get a big speed increase. This will prevent the ES server
# from swapping memory on the node. See:
# https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-configuration.html#setup-configuration-memory
bootstrap.memory_lock: true

# In order to communicate and to form a cluster with nodes on other servers,
# your node will need to bind to a non-loopback address. While there are many
# network settings, usually all you need to configure is network.host:
# network.host: 192.168.1.10

# Disable deleting all indices from the api.
action.disable_delete_all_indices: true

# shield.transport.filter.enabled: false
# shield.http.filter.enabled: true
# shield.transport.filter.allow: [ "192.168.0.1", "192.168.0.2", "192.168.0.3", "192.168.0.4", "167.99.56.190" ]
# shield.transport.filter.deny: _all



# When the moment comes to form a cluster with nodes on other servers, you have
# to provide a seed list of other nodes in the cluster that are likely to be live
# and contactable
discovery.zen.ping.unicast.hosts:
  - 192.168.1.10:9300
  - 192.168.1.11

# prevent data loss 
discovery.zen.minimum_master_nodes: 1


# indices.cluster.send_refresh_mapping: false
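For comparison, the only version of the network section that starts without the bind error for me is the one below, although then I cannot reach the node from my local machine:

network.host: _local_
network.bind_host: 0   # shorthand for 0.0.0.0; starts fine, but not reachable from outside
http.port: 9200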

This is my Dockerfile for elasticsearch:
FROM docker.elastic.co/elasticsearch/elasticsearch:6.2.2
COPY --chown=elasticsearch:elasticsearch elasticsearch.yml /usr/share/elasticsearch/config/
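The image itself is built by docker-compose from the ./config/production context, which is roughly equivalent to this manual build:

docker build -t edwardhuang/mooc_search:es-image ./config/production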

Lastly, this is the error log:
Exception in thread "main" BindTransportException[Failed to bind to [9300-9400]];
nested: ChannelException[Failed to bind to: /167.99.56.190:9400]; nested:
BindException[Cannot assign requested address];
Likely root cause: java.net.BindException: Cannot assign requested address
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
	at org.jboss.netty.channel.socket.nio.NioServerBoss$RegisterTask.run(NioServerBoss.java:193)
	at org.jboss.netty.channel.socket.nio.AbstractNioSelector.processTaskQueue(AbstractNioSelector.java:391)
	at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:315)
	at org.jboss.netty.channel.socket.nio.NioServerBoss.run(NioServerBoss.java:42)
	at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
	at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

All I want is for the elasticsearch service to be accessible by the other containers, such as search-services.

Thank you so much!

What happens if you don't set network.bind_host at all?

Hi David,

If I do not set network.bind_host it works perfectly, but I am not able to access it from outside. However, I later figured out that I was using docker-compose run instead of docker-compose up. That is why I was not able to reach the IP address of the machine: the elasticsearch instance was never really up.

I then configured network.host to the DigitalOcean IP address, and network.bind_host to 0.0.0.0.

I thought network.bind_host was what makes the Elasticsearch instance accessible from outside, since the docs say that a node can bind to multiple interfaces, e.g. two network cards, or a site-local address and a local address.
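In other words, I expected a split like the following to work, with bind_host controlling which interfaces the node listens on and publish_host the address it advertises (a sketch using my droplet's IP):

network.bind_host: 0.0.0.0            # listen on every interface inside the container
network.publish_host: 167.99.56.190   # address advertised to clients and other nodes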

What does your docker-compose file look like?

This is what it looks like.

version: '3'

services:
  get-course-services:
    container_name: courses-db
    image: edwardhuang/mooc_search:courses-db # youruser/repo:tag
    build:
      context: ./Services/GetCourseServices
    env_file:
      - ./Services/GetCourseServices/.env
    ports:
      - "5433:5432"
    healthcheck:
      test: ["CMD","curl","-f","http://localhost || exit 1"]
    # command: 'done'

  search-services: #node.js app for backend application logic
    container_name: search-services
    image: edwardhuang/mooc_search:search-services
    build:
      context: ./Services/SearchServices
    ports:
      - "3000:3000" # expose API port
      - "9229:9229" # Expose node process debug port (disable in production)
    # deploy:
    #   replicas: 5
    #   resources:
    #     limits:
    #       cpus: "0.1"
    #       memory: 50M
    #   restart_policy:
    #     condition: on-failure
    # volumes: # Attach local data directory to persist(save) data
      # - "./Services/SearchServices:/usr/src/app"
      # - "./Services/GetCourseServices/data:/usr/src/app/data" # your hostFilePath:containerFilePath
      # - "./Services/SearchServices/package.json:/usr/src/package.json"
    depends_on: # tell docker-compose to also start these services when this service is started
      - elasticsearch
      - get-course-services
    environment: # Set ENV vars
      - NODE_ENV=local
      - ES_HOST=elasticsearch
      - PORT=3000
      - jsonlocFile=../data/course.json
      - jsonTestFile=../data/course_test.json


  web-services: # Nginx server for frontend and backend is express
    container_name: web-services
    image: edwardhuang/mooc_search:nginx
    build: ./Services/WebServices
    # deploy:
    #   replicas: 5
    #   resources:
    #     limits:
    #       cpus: "0.1"
    #       memory: 50M
    #   restart_policy:
    #     condition: on-failure
    # volumes: #Serve local public dir
    #   - ./Services/WebServices/public:/usr/share/nginx/html
    ports:
      - "8080:80" # forward frontend side for localhost:8080
#   visualizer:
#     image: dockersamples/visualizer:stable
#     ports:
#       - "5000:5080"
#     volumes:
#       - "/var/run/docker.sock:/var/run/docker.sock"
#     deploy:
#       placement:
#         constraints: [node.role == manager]
#
  elasticsearch: # elasticsearch instance
    container_name: esSearch
    image: edwardhuang/mooc_search:es-image
    build:
      context: ./config/production
    volumes: # Persist es data in the separate "esdata" volume
      # - ./config:/usr/share/elasticsearch/config
      - esdata:/usr/share/elasticsearch/data
    environment:
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - discovery.type=single-node
    # control the limits for JVM mlock all
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    cap_add:
      - IPC_LOCK
    ports: #expose elasticsearch ports
      - "9300:9300"
      - "9200:9200"

volumes: # to share data between containers
    esdata:
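
As far as I can tell, all of these services share the default network that docker-compose creates, so search-services can reach Elasticsearch by its service name instead of the droplet's IP, which is why I set ES_HOST=elasticsearch. For example, from inside the search-services container:

curl http://elasticsearch:9200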

It was my misconception, but once I ran docker-compose up I got it to work. This thread can be closed.
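For anyone who hits the same thing: docker-compose run spins up a one-off container for a single service and by default does not publish the ports defined in the compose file (unless you pass --service-ports), while docker-compose up starts every service with its port mappings:

# one-off container; compose port mappings are not published by default
docker-compose run elasticsearch

# start all services with the ports defined in docker-compose.yml
docker-compose up -d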

Thank you so much!
