Can't curl from host to elasticsearch docker

docker

(Roberto Morávia) #1

Hi,

I'm trying to set up a "lab" dev environment with 3 Elasticsearch nodes and a Kibana in order to practice managing a cluster.

However, even with a single node, I can't access the Elasticsearch node from the host machine. I would prefer to have these nodes start in development mode (so I don't need to change vm.max_map_count on the host), but I didn't manage to do that either: instead of defaulting to host=127.0.0.1, Elasticsearch inside the container selects a different IP.

I tried setting network.host, network.bind_host and network.publish_host to 127.0.0.1, as well as to 0.0.0.0, _local_ and _no_loopback_, without success.

My docker-compose configuration:

version: '2.2'
services:

  es1:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.5.0
    container_name: es1
    labels:
      - "identifier=es-training"
    environment:
      - node.name=es01
      - cluster.name=docker-cluster
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    cpu_count: 1
    mem_limit: 1g
    memswap_limit: 1g
    mem_swappiness: 0
    ports: ['127.0.0.1:9200:9200', '127.0.0.1:9300:9300']
    healthcheck:
      test: ["CMD-SHELL", "curl --silent --fail localhost:9200/_cluster/health || exit 1"]
      interval: 30s
      timeout: 30s
      retries: 3
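
One thing I haven't tested yet: the Elasticsearch Docker docs mention a `discovery.type=single-node` setting, which makes the node elect itself master without zen discovery and is supposed to keep it in development mode (so bootstrap checks such as vm.max_map_count should only produce warnings). A hypothetical, untested variant of the service above:

```yaml
  es1:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.5.0
    container_name: es1
    environment:
      - node.name=es01
      # Single-node discovery: the node elects itself master and should
      # stay in development mode (bootstrap checks not enforced).
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ports: ['127.0.0.1:9200:9200']
```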

Elasticsearch log:

OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
[2019-04-14T08:11:47,835][INFO ][o.e.x.m.j.p.NativeController] [es01] Native controller process has stopped - no new native processes can be started
[2019-04-14T08:14:22,751][WARN ][o.e.c.l.LogConfigurator  ] [es01] Some logging configurations have %marker but don't have %node_name. We will automatically add %node_name to the pattern to ease the migration for users who customize log4j2.properties but will stop this behavior in 7.0. You should manually replace `%node_name` with `[%node_name]%marker ` in these locations:  /usr/share/elasticsearch/config/log4j2.properties
[2019-04-14T08:14:23,017][INFO ][o.e.e.NodeEnvironment    ] [es01] using [1] data paths, mounts [[/ (overlay)]], net usable_space [68.8gb], net total_space [456.9gb], types [overlay]
[2019-04-14T08:14:23,018][INFO ][o.e.e.NodeEnvironment    ] [es01] heap size [494.9mb], compressed ordinary object pointers [true]
[2019-04-14T08:14:23,019][INFO ][o.e.n.Node               ] [es01] node name [es01], node ID [Y9joe9chSC22F0lHu8FF2Q]
[2019-04-14T08:14:23,019][INFO ][o.e.n.Node               ] [es01] version[6.5.0], pid[1], build[default/tar/816e6f6/2018-11-09T18:58:36.352602Z], OS[Linux/4.15.0-47-generic/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/11.0.1/11.0.1+13]
[2019-04-14T08:14:23,019][INFO ][o.e.n.Node               ] [es01] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch.Ns7iCAv2, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Djava.locale.providers=COMPAT, -XX:UseAVX=2, -Des.cgroups.hierarchy.override=/, -Xms512m, -Xmx512m, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/usr/share/elasticsearch/config, -Des.distribution.flavor=default, -Des.distribution.type=tar]
[2019-04-14T08:14:24,176][INFO ][o.e.p.PluginsService     ] [es01] loaded module [aggs-matrix-stats]
...
[2019-04-14T08:14:24,178][INFO ][o.e.p.PluginsService     ] [es01] loaded plugin [ingest-user-agent]
[2019-04-14T08:14:26,781][INFO ][o.e.x.s.a.s.FileRolesStore] [es01] parsed [0] roles from file [/usr/share/elasticsearch/config/roles.yml]
[2019-04-14T08:14:27,127][INFO ][o.e.x.m.j.p.l.CppLogMessageHandler] [es01] [controller/65] [Main.cc@109] controller (64 bit): Version 6.5.0 (Build 71882a589e5556) Copyright (c) 2018 Elasticsearch BV
[2019-04-14T08:14:27,577][INFO ][o.e.d.DiscoveryModule    ] [es01] using discovery type [zen] and host providers [settings]
[2019-04-14T08:14:28,225][INFO ][o.e.n.Node               ] [es01] initialized
[2019-04-14T08:14:28,225][INFO ][o.e.n.Node               ] [es01] starting ...
[2019-04-14T08:14:28,346][INFO ][o.e.t.TransportService   ] [es01] publish_address {172.30.0.3:9300}, bound_addresses {0.0.0.0:9300}
[2019-04-14T08:14:28,356][INFO ][o.e.b.BootstrapChecks    ] [es01] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2019-04-14T08:14:31,403][INFO ][o.e.c.s.MasterService    ] [es01] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {es01}{Y9joe9chSC22F0lHu8FF2Q}{_uHZ-sodQv-jljg7xf4eQA}{172.30.0.3}{172.30.0.3:9300}{ml.machine_memory=1073741824, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}
[2019-04-14T08:14:31,410][INFO ][o.e.c.s.ClusterApplierService] [es01] new_master {es01}{Y9joe9chSC22F0lHu8FF2Q}{_uHZ-sodQv-jljg7xf4eQA}{172.30.0.3}{172.30.0.3:9300}{ml.machine_memory=1073741824, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}, reason: apply cluster state (from master [master {es01}{Y9joe9chSC22F0lHu8FF2Q}{_uHZ-sodQv-jljg7xf4eQA}{172.30.0.3}{172.30.0.3:9300}{ml.machine_memory=1073741824, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
[2019-04-14T08:14:31,433][INFO ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [es01] publish_address {172.30.0.3:9200}, bound_addresses {0.0.0.0:9200}
[2019-04-14T08:14:31,434][INFO ][o.e.n.Node               ] [es01] started
[2019-04-14T08:14:31,457][WARN ][o.e.x.s.a.s.m.NativeRoleMappingStore] [es01] Failed to clear cache for realms [[]]
[2019-04-14T08:14:31,503][INFO ][o.e.g.GatewayService     ] [es01] recovered [0] indices into cluster_state
[2019-04-14T08:14:31,607][INFO ][o.e.c.m.MetaDataIndexTemplateService] [es01] adding template [.triggered_watches] for index patterns [.triggered_watches*]
...
[2019-04-14T08:14:31,805][INFO ][o.e.c.m.MetaDataIndexTemplateService] [es01] adding template [.monitoring-kibana] for index patterns [.monitoring-kibana-6-*]
[2019-04-14T08:14:31,872][INFO ][o.e.l.LicenseService     ] [es01] license [11dcf203-4b2a-4bfa-84c4-1b6aa8c65f62] mode [basic] - valid

Any help is highly appreciated!
Thanks!


(David Pilato) #2

Here is the one I'm using. Might help:


(Roberto Morávia) #3

Hi! Thanks!
But I think yours is running in production mode, right?
Is there any way to run it in dev mode inside Docker?


(David Pilato) #4

Why not use the one I shared?


(Roberto Morávia) #5
ERROR: [1] bootstrap checks failed
[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

I get this error. And I'm not sure of the consequences of applying the fix Elasticsearch suggests, since this is a work machine.
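
(For reference: on a Linux host the limit can be raised only until the next reboot, which may ease the work-machine concern. A sketch, assuming root access:)

```shell
# Check the current value (65530 is the usual default).
sysctl vm.max_map_count

# Raise it for the running kernel only -- this does NOT survive a reboot,
# so the machine returns to its previous state after a restart.
sudo sysctl -w vm.max_map_count=262144

# Only if you want it permanent, add this line to /etc/sysctl.conf:
#   vm.max_map_count=262144
```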


(David Pilato) #6

I don't know. Not a system expert here. :confused:


(Roberto Morávia) #7

Thanks all the same!