"codebase property already set" when running ESIntegTestCase

That's good to know. This particular project uses gradle, but one of our other ones is maven, so I'll definitely need that. Are you going to release a new version to maven central?

Are you going to release a new version to maven central?

Definitely yes. Question is when? :smiley:

For now I can build and deploy it to our local repo.

I'm having a strange issue at the moment whereby it's starting two containers. The first one doesn't respond on any ports (`curl: (52) Empty reply from server`); the second one does, but the ElasticsearchResource is pointing at the first one, which is why I can't connect.

[Test worker] INFO 🐳 [alpine:3.5] - Starting an elasticsearch container using version [6.2.4] from [docker.elastic.co/elasticsearch/elasticsearch-oss]
[Test worker] INFO 🐳 [testcontainers/imqt7qlwihxlzacq] - Creating container for image: testcontainers/imqt7qlwihxlzacq
[Test worker] INFO 🐳 [testcontainers/imqt7qlwihxlzacq] - Starting container with ID: afd6bba62d2846d49d74dd502cf4c63e31f6e0101ad497df0174c9ed4d852574
[Test worker] INFO 🐳 [testcontainers/imqt7qlwihxlzacq] - Container testcontainers/imqt7qlwihxlzacq is starting: afd6bba62d2846d49d74dd502cf4c63e31f6e0101ad497df0174c9ed4d852574
[Test worker] INFO 🐳 [testcontainers/imqt7qlwihxlzacq] - Container testcontainers/imqt7qlwihxlzacq started
[Test worker] INFO 🐳 [testcontainers/imqt7qlwihxlzacq] - Starting an elasticsearch container using version [6.2.4] from [docker.elastic.co/elasticsearch/elasticsearch-oss]
[Test worker] INFO 🐳 [testcontainers/rhayznm8q37v9ng6] - Creating container for image: testcontainers/rhayznm8q37v9ng6
[Test worker] INFO 🐳 [testcontainers/rhayznm8q37v9ng6] - Starting container with ID: 2ba9d3ba1465eafb74f923a17ecf165a118e5545e3602a7b9aba0e7d26a436da
[Test worker] INFO 🐳 [testcontainers/rhayznm8q37v9ng6] - Container testcontainers/rhayznm8q37v9ng6 is starting: 2ba9d3ba1465eafb74f923a17ecf165a118e5545e3602a7b9aba0e7d26a436da
[Test worker] INFO 🐳 [testcontainers/rhayznm8q37v9ng6] - Container testcontainers/rhayznm8q37v9ng6 started
[Test worker] INFO com.cameraforensics.elasticsearch.ElasticSearchMonitor - Checking availability of nodes for connection: ElasticSearchConnection{ restHost=http://localhost:32815, transportHost=http://localhost:32814, clusterName=elasticsearch}

docker ps:

CONTAINER ID        IMAGE                             COMMAND                  CREATED                  STATUS              PORTS                                              NAMES
2ba9d3ba1465        testcontainers/rhayznm8q37v9ng6   "/usr/local/bin/dock…"   Less than a second ago   Up 4 seconds        0.0.0.0:32815->9200/tcp, 0.0.0.0:32814->9300/tcp   naughty_blackwell
afd6bba62d28        testcontainers/imqt7qlwihxlzacq   "/usr/local/bin/dock…"   19 seconds ago           Up 24 seconds       0.0.0.0:32813->9200/tcp, 0.0.0.0:32812->9300/tcp   loving_vaughan
0db246b7c0a8        bsideup/moby-ryuk:0.2.2           "/app"                   21 seconds ago           Up 25 seconds       0.0.0.0:32811->8080/tcp                            testcontainers-ryuk-f5d95521-3921-4c03-b597-471727ff80cb

`curl localhost:32814` = `curl: (52) Empty reply from server`

`curl localhost:32812` = `This is not a HTTP port%`

What am I doing wrong?

More helpful information:

    @Rule
    public ElasticsearchResource elasticsearch = new ElasticsearchResource(
            "docker.elastic.co/elasticsearch/elasticsearch-oss",
            "6.2.4",
            null,
            new ArrayList<>(),
            new HashMap<>(),
            null);

    public ElasticSearchConnection elasticSearchConnection;

    public String clusterName = "elasticsearch";

    @Before
    public void setup() {
        elasticsearch.getContainer().start();
        HttpHost transportHost = new HttpHost(elasticsearch.getContainer().getContainerIpAddress(), elasticsearch.getContainer().getMappedPort(DEFAULT_TRANSPORT_PORT));


        elasticSearchConnection = new ElasticSearchConnection(elasticsearch.getHost(), transportHost, clusterName);
    }

    @After
    public void shutdown() {
        elasticSearchConnection.close();
        elasticSearchConnection = null;
        elasticsearch.getContainer().stop();
    }

Is it because I'm calling `start()`? Is that even necessary?

Update: if I remove the `start()` call, then only one container gets started, but it still won't respond on any ports.

REST client works...

I think it's related to this: No node available: Elasticsearch Transport Client can't connect to Docker container

and that I need to check the container's ES config for network.host/bind_host?

Yeah. It's not needed. See GitHub - dadoonet/testcontainers-java-module-elasticsearch: Dockerized Elasticsearch container for testing under Testcontainers

but still won't respond on any ports.

Could you get the logs?

The Docker container shuts down when the test exits, so any suggestions on how? The REST client works, just not the transport client...

Add a breakpoint in your tests, or a sleep long enough for you to run `docker ps` and `docker logs`.

Oh yeah :woman_facepalming:

[2018-06-14T12:06:39,807][INFO ][o.e.n.Node               ] [] initializing ...
[2018-06-14T12:06:39,902][INFO ][o.e.e.NodeEnvironment    ] [zXCixBy] using [1] data paths, mounts [[/ (overlay)]], net usable_space [57.8gb], net total_space [62.7gb], types [overlay]
[2018-06-14T12:06:39,903][INFO ][o.e.e.NodeEnvironment    ] [zXCixBy] heap size [1007.3mb], compressed ordinary object pointers [true]
[2018-06-14T12:06:39,905][INFO ][o.e.n.Node               ] node name [zXCixBy] derived from node ID [zXCixByNTHSbNB09Bx5L-A]; set [node.name] to override
[2018-06-14T12:06:39,905][INFO ][o.e.n.Node               ] version[6.2.4], pid[1], build[ccec39f/2018-04-12T20:37:28.497551Z], OS[Linux/4.9.87-linuxkit-aufs/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_161/25.161-b14]
[2018-06-14T12:06:39,905][INFO ][o.e.n.Node               ] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch.x74V4bHC, -XX:+HeapDumpOnOutOfMemoryError, -XX:+PrintGCDetails, -XX:+PrintGCDateStamps, -XX:+PrintTenuringDistribution, -XX:+PrintGCApplicationStoppedTime, -Xloggc:logs/gc.log, -XX:+UseGCLogFileRotation, -XX:NumberOfGCLogFiles=32, -XX:GCLogFileSize=64m, -Des.cgroups.hierarchy.override=/, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/usr/share/elasticsearch/config]
[2018-06-14T12:06:43,490][INFO ][o.e.p.PluginsService     ] [zXCixBy] loaded module [aggs-matrix-stats]
[2018-06-14T12:06:43,490][INFO ][o.e.p.PluginsService     ] [zXCixBy] loaded module [analysis-common]
[2018-06-14T12:06:43,490][INFO ][o.e.p.PluginsService     ] [zXCixBy] loaded module [ingest-common]
[2018-06-14T12:06:43,491][INFO ][o.e.p.PluginsService     ] [zXCixBy] loaded module [lang-expression]
[2018-06-14T12:06:43,491][INFO ][o.e.p.PluginsService     ] [zXCixBy] loaded module [lang-mustache]
[2018-06-14T12:06:43,491][INFO ][o.e.p.PluginsService     ] [zXCixBy] loaded module [lang-painless]
[2018-06-14T12:06:43,491][INFO ][o.e.p.PluginsService     ] [zXCixBy] loaded module [mapper-extras]
[2018-06-14T12:06:43,491][INFO ][o.e.p.PluginsService     ] [zXCixBy] loaded module [parent-join]
[2018-06-14T12:06:43,491][INFO ][o.e.p.PluginsService     ] [zXCixBy] loaded module [percolator]
[2018-06-14T12:06:43,491][INFO ][o.e.p.PluginsService     ] [zXCixBy] loaded module [rank-eval]
[2018-06-14T12:06:43,491][INFO ][o.e.p.PluginsService     ] [zXCixBy] loaded module [reindex]
[2018-06-14T12:06:43,491][INFO ][o.e.p.PluginsService     ] [zXCixBy] loaded module [repository-url]
[2018-06-14T12:06:43,492][INFO ][o.e.p.PluginsService     ] [zXCixBy] loaded module [transport-netty4]
[2018-06-14T12:06:43,492][INFO ][o.e.p.PluginsService     ] [zXCixBy] loaded module [tribe]
[2018-06-14T12:06:43,492][INFO ][o.e.p.PluginsService     ] [zXCixBy] loaded plugin [ingest-geoip]
[2018-06-14T12:06:43,492][INFO ][o.e.p.PluginsService     ] [zXCixBy] loaded plugin [ingest-user-agent]
[2018-06-14T12:06:43,492][INFO ][o.e.p.PluginsService     ] [zXCixBy] loaded plugin [x-pack-core]
[2018-06-14T12:06:43,493][INFO ][o.e.p.PluginsService     ] [zXCixBy] loaded plugin [x-pack-deprecation]
[2018-06-14T12:06:43,493][INFO ][o.e.p.PluginsService     ] [zXCixBy] loaded plugin [x-pack-graph]
[2018-06-14T12:06:43,493][INFO ][o.e.p.PluginsService     ] [zXCixBy] loaded plugin [x-pack-logstash]
[2018-06-14T12:06:43,493][INFO ][o.e.p.PluginsService     ] [zXCixBy] loaded plugin [x-pack-ml]
[2018-06-14T12:06:43,493][INFO ][o.e.p.PluginsService     ] [zXCixBy] loaded plugin [x-pack-monitoring]
[2018-06-14T12:06:43,493][INFO ][o.e.p.PluginsService     ] [zXCixBy] loaded plugin [x-pack-security]
[2018-06-14T12:06:43,494][INFO ][o.e.p.PluginsService     ] [zXCixBy] loaded plugin [x-pack-upgrade]
[2018-06-14T12:06:43,494][INFO ][o.e.p.PluginsService     ] [zXCixBy] loaded plugin [x-pack-watcher]
[2018-06-14T12:06:50,083][INFO ][o.e.x.m.j.p.l.CppLogMessageHandler] [controller/204] [Main.cc@128] controller (64 bit): Version 6.2.4 (Build 524e7fe231abc1) Copyright (c) 2018 Elasticsearch BV
[2018-06-14T12:06:52,790][INFO ][o.e.d.DiscoveryModule    ] [zXCixBy] using discovery type [zen]
[2018-06-14T12:06:54,928][INFO ][o.e.n.Node               ] initialized
[2018-06-14T12:06:54,929][INFO ][o.e.n.Node               ] [zXCixBy] starting ...
[2018-06-14T12:06:55,257][INFO ][o.e.t.TransportService   ] [zXCixBy] publish_address {172.17.0.3:9300}, bound_addresses {0.0.0.0:9300}
[2018-06-14T12:06:55,291][INFO ][o.e.b.BootstrapChecks    ] [zXCixBy] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2018-06-14T12:06:58,368][INFO ][o.e.c.s.MasterService    ] [zXCixBy] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {zXCixBy}{zXCixByNTHSbNB09Bx5L-A}{rZ-BGNgCQ6-KX2Bfu3epdg}{172.17.0.3}{172.17.0.3:9300}{ml.machine_memory=2096066560, ml.max_open_jobs=20, ml.enabled=true}
[2018-06-14T12:06:58,379][INFO ][o.e.c.s.ClusterApplierService] [zXCixBy] new_master {zXCixBy}{zXCixByNTHSbNB09Bx5L-A}{rZ-BGNgCQ6-KX2Bfu3epdg}{172.17.0.3}{172.17.0.3:9300}{ml.machine_memory=2096066560, ml.max_open_jobs=20, ml.enabled=true}, reason: apply cluster state (from master [master {zXCixBy}{zXCixByNTHSbNB09Bx5L-A}{rZ-BGNgCQ6-KX2Bfu3epdg}{172.17.0.3}{172.17.0.3:9300}{ml.machine_memory=2096066560, ml.max_open_jobs=20, ml.enabled=true} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
[2018-06-14T12:06:58,436][INFO ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [zXCixBy] publish_address {172.17.0.3:9200}, bound_addresses {0.0.0.0:9200}
[2018-06-14T12:06:58,436][INFO ][o.e.n.Node               ] [zXCixBy] started
[2018-06-14T12:06:58,514][INFO ][o.e.g.GatewayService     ] [zXCixBy] recovered [0] indices into cluster_state
[2018-06-14T12:06:59,617][INFO ][o.e.l.LicenseService     ] [zXCixBy] license [25bb1a8c-40c7-4278-aafc-a867b5454666] mode [basic] - valid
[2018-06-14T12:07:05,751][INFO ][o.e.c.m.MetaDataCreateIndexService] [zXCixBy] [.monitoring-es-6-2018.06.14] creating index, cause [auto(bulk api)], templates [.monitoring-es], shards [1]/[0], mappings [doc]
[2018-06-14T12:07:06,767][INFO ][o.e.c.r.a.AllocationService] [zXCixBy] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.monitoring-es-6-2018.06.14][0]] ...]).

It's ok. I've figured it out :man_facepalming:t3::man_facepalming:t3::man_facepalming:t3::man_facepalming:t3::man_facepalming:t3::man_facepalming:t3::man_facepalming:t3:

I'll get my coat.

PS: Thanks for all your help. This is totally the best way to do it, and way better than what we were previously doing with the embedded node. :+1:


For the record, could you share your findings here? That could help other readers.

This is totally the best way to do it, and way better than what we were previously doing with the embedded node.

I'm glad you agree on that!

Sure, my problem was that I was expecting the ES cluster to be called `elasticsearch`, when the Dockerised cluster is actually called `docker-cluster`.

If you specify the wrong cluster name in your TransportClient connection settings, it won't connect. If you need to confirm it, just `curl -XGET localhost:[REST PORT]` and see what the `cluster_name` property is.
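To illustrate the check, here's a minimal, self-contained sketch of pulling `cluster_name` out of the root-endpoint JSON. The class name, the sample response, and the hardcoded values are illustrative only; in a real run you'd feed it the body returned by `curl` against the live container's mapped REST port:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ClusterNameCheck {

    // Extract the cluster_name field from the root-endpoint JSON response.
    static String clusterName(String rootResponse) {
        Matcher m = Pattern
                .compile("\"cluster_name\"\\s*:\\s*\"([^\"]+)\"")
                .matcher(rootResponse);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        // Sample response shaped like what the root endpoint returns
        // (values are illustrative, not captured from a real container).
        String sample = "{\"name\":\"zXCixBy\",\"cluster_name\":\"docker-cluster\","
                + "\"version\":{\"number\":\"6.2.4\"}}";
        System.out.println(clusterName(sample)); // prints docker-cluster
    }
}
```

Whatever that prints is the value the TransportClient's `cluster.name` setting has to match.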

TBH, I'm just glad I can still integration test my stuff without having to manage an integration-testing ES cluster myself as well. :+1:

Great! I added some documentation based on that. Thanks!


Is there currently a way to tell the container which port to map to? One of my tests checks that a reconnection is made if the connection is lost at any point.

I'm currently testing this by stopping and starting the container, but when it comes back up again, the ports have all changed. In reality that wouldn't happen.

Any ideas?

Also, if I do `elasticsearch.getContainer().stop()` and `elasticsearch.getContainer().start()` inside a test, this happens when the test exits:

[dockerjava-netty-1-5] ERROR com.github.dockerjava.core.async.ResultCallbackTemplate - Error during callback
com.github.dockerjava.api.exception.NotFoundException: {"message":"No such container: 5cd80164de9986021cc59442425380afc2eab9b7efc669c9841674ca0dc02d23"}

I'm assuming that when the @Rule ElasticsearchResource is torn down, it expects the container reference it obtained at startup to still be valid...?

I believe you'd like to have that?

In that case I'd not use @Rule annotation but control all that manually.

Like what I did here:

Yes! (also for Transport port, obviously :wink: )