java.net.UnknownHostException/Name or service not known/failed to resolve host

Hi

I'm trying to deploy a self-managed, containerized multi-node Elasticsearch cluster on AWS ECS. I was able to deploy and use a single-node Elasticsearch cluster on ECS, but when I try to host the same setup as a multi-node cluster, the master and data nodes throw the errors mentioned above and I'm not sure what I'm missing. I have tried running multiple containers in the same service as well as two separate services for the different nodes. Can anyone help me understand where I'm going wrong so I can get this working?

Please find the task definition I'm using below:

{
"family": "elasticsearchmultinode",
"containerDefinitions": [
{
"name": "elasticsearch-master",
"image": "docker.elastic.co/elasticsearch/elasticsearch:7.10.2",
"cpu": 1024,
"memory": 2048,
"portMappings": [
{
"name": "elasticsearch-master-9200-tcp",
"containerPort": 9200,
"hostPort": 9200,
"protocol": "tcp"
}
],
"essential": true,
"environment": [
{
"name": "discovery.seed_hosts",
"value": "es02"
},
{
"name": "cluster.name",
"value": "docker-cluster"
},
{
"name": "cluster.initial_master_nodes",
"value": "es01,es02"
},
{
"name": "ES_JAVA_OPTS",
"value": "-Xms2g -Xmx2g"
},
{
"name": "node.name",
"value": "es01"
},
{
"name": "bootstrap.memory_lock",
"value": "true"
},
{
"name": "node.roles",
"value": "master"
}
],
"mountPoints": ,
"volumesFrom": ,
"ulimits": [
{
"name": "nofile",
"softLimit": 65535,
"hardLimit": 65535
},
{
"name": "memlock",
"softLimit": -1,
"hardLimit": -1
}
],
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "c2dev1pop1-esmultinode-logs",
"awslogs-region": "us-east-1",
"awslogs-stream-prefix": "elasticsearch-master"
}
}
},
{
"name": "elasticsearch-data",
"image": "docker.elastic.co/elasticsearch/elasticsearch:7.10.2",
"cpu": 1024,
"memory": 2048,
"portMappings": ,
"essential": true,
"environment": [
{
"name": "discovery.seed_hosts",
"value": "es01"
},
{
"name": "cluster.name",
"value": "docker-cluster"
},
{
"name": "cluster.initial_master_nodes",
"value": "es01,es02"
},
{
"name": "ES_JAVA_OPTS",
"value": "-Xms2g -Xmx2g"
},
{
"name": "node.name",
"value": "es02"
},
{
"name": "bootstrap.memory_lock",
"value": "true"
},
{
"name": "node.roles",
"value": "data"
}
],
"mountPoints": ,
"volumesFrom": ,
"dependsOn": [
{
"containerName": "elasticsearch-master",
"condition": "START"
}
],
"ulimits": [
{
"name": "nofile",
"softLimit": 65535,
"hardLimit": 65535
},
{
"name": "memlock",
"softLimit": -1,
"hardLimit": -1
}
],
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "c2dev1pop1-esmultinode-logs",
"awslogs-region": "us-east-1",
"awslogs-stream-prefix": "elasticsearch-data"
}
}
}
],
"executionRoleArn": "arn:aws:iam::xxxxxxxxxxxx:role/ecsTaskExecutionRole",
"networkMode": "awsvpc",
"requiresCompatibilities": [
"EC2"
],
"cpu": "2048",
"memory": "4096"
}
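
For what it's worth, in awsvpc network mode ECS does not make container names like es01/es02 resolvable (the way docker-compose's embedded DNS does), so discovery.seed_hosts pointing at those names is a likely cause of the UnknownHostException. Containers in the same task share one network namespace and ENI, so they can reach each other over 127.0.0.1. Below is a minimal sketch of the environment changes for the single-task layout, under the assumptions that both nodes stay in this one task, that transport ports are pinned explicitly (9300 for es01, 9301 for es02) so the seed addresses are deterministic, and that cluster.initial_master_nodes lists only es01, since es02 has node.roles set to data and is not master-eligible. Treat the exact port values and setting list as a starting point to verify against your logs, not a drop-in fix:

{
  "name": "elasticsearch-master",
  "environment": [
    { "name": "node.name", "value": "es01" },
    { "name": "node.roles", "value": "master" },
    { "name": "transport.port", "value": "9300" },
    { "name": "http.port", "value": "9200" },
    { "name": "discovery.seed_hosts", "value": "127.0.0.1:9301" },
    { "name": "cluster.initial_master_nodes", "value": "es01" }
  ]
},
{
  "name": "elasticsearch-data",
  "environment": [
    { "name": "node.name", "value": "es02" },
    { "name": "node.roles", "value": "data" },
    { "name": "transport.port", "value": "9301" },
    { "name": "http.port", "value": "9201" },
    { "name": "discovery.seed_hosts", "value": "127.0.0.1:9300" },
    { "name": "cluster.initial_master_nodes", "value": "es01" }
  ]
}

If you instead run the two nodes as separate ECS services, the container name still won't resolve on its own; ECS Service Discovery (AWS Cloud Map) can register a DNS name per service, and that name is what would go into discovery.seed_hosts.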

If you are deploying a new cluster, why are you using such an old version that has been EOL for a long time?
