Is the RestHighLevelClient not supported in the Java testing framework?

After spending quite a bit of time fighting jar hell issues trying to set up the Java testing framework, and tweaking IntelliJ settings to accommodate it, I ended up not being able to get my RestHighLevelClient to connect to an Elasticsearch cluster started with ESSingleNodeTestCase.

I'm now unhappy, having come across this thread, which basically explains that the high level rest client isn't expected to work with the Java testing framework. Am I reading that correctly? The preferred client going forward isn't supported in the test framework? Is the test framework only for people developing Elasticsearch itself?

I'm posting this in hopes of finding out that I've misunderstood what I'm reading, or that there is new information that can be shared, and that someone can provide an example of how to:

  1. Start up a single-node cluster using the Java test framework
  2. Create an index and a document using the high level rest client (the very client Elastic says to use for application development)
  3. Shut the server down

I've read the links to alternatives referenced from the above topic: firing up Docker, using a real cluster, etc. Some of my testing will happen at the integration level. I really don't want to debate what should or should not be a unit test here. I want to focus on the intent of this doc, which had me pretty sold on "[using] the same testing infrastructure we do in the Elasticsearch core" for my application.

Hi @dg1,

the problem with running high level rest client tests against the single node created by ESSingleNodeTestCase isn't so much that the HLRC doesn't work in tests per se, but rather that the single node is started without an HTTP layer. If you really want it to start up the HTTP layer as well, you'll need to override the plugins that are loaded for that node to include the Netty 4 transport, and also disable the mock HTTP layer.

The latter you can do by adding:

    @Override
    protected boolean addMockHttpTransport() {
        return false; // enable http
    }

to your test class, as is done here for example (those tests use the low level rest client, but the HLRC will work just the same): https://github.com/elastic/elasticsearch/blob/master/modules/transport-netty4/src/test/java/org/elasticsearch/rest/discovery/Zen2RestApiIT.java

The Netty 4 plugin you can load by adding:

    @Override
    protected Collection<Class<? extends Plugin>> nodePlugins() {
        return Collections.singletonList(Netty4Plugin.class);
    }

as is done e.g. in our Netty ITs here: https://github.com/elastic/elasticsearch/blob/master/modules/transport-netty4/src/test/java/org/elasticsearch/ESNetty4IntegTestCase.java#L57

Provided the dependencies are set up correctly (i.e. you have the Netty 4 transport dependency on your test class path), this should work fine (it does in some of our own tests in the linked examples).

Armin,

This is a huge relief to me. Thank you! I'll try this out today and report back here.

Thanks again,

Damon

Hi Armin,
I just got around to trying this; I got sidetracked on something else until now. ESSingleNodeTestCase didn't let me override nodePlugins, so I switched to ESIntegTestCase to get around that.

So I've imported this:

        <dependency>
            <groupId>org.elasticsearch.plugin</groupId>
            <artifactId>transport-netty4-client</artifactId>
            <version>7.4.2</version>
            <scope>test</scope>
        </dependency>

I guessed that this version should match my Elasticsearch test framework version, because using the latest version failed.

My test class looks like this:

    package com.ctct.amplifier.schema;

    import org.elasticsearch.plugins.Plugin;
    import org.elasticsearch.test.ESIntegTestCase;
    import org.junit.Test;
    import java.util.*;
    import org.elasticsearch.transport.Netty4Plugin;

    public class ElasticClientITTests extends ESIntegTestCase
    {

        @Override
        protected boolean addMockHttpTransport() {
            return false; // enable http
        }

        @Override
        protected Collection<Class<? extends Plugin>> nodePlugins() {
            return Collections.singletonList(Netty4Plugin.class);
        }

        @Test
        public void sampleTest() {

        }

    }

And it intermittently works. Obviously there's no content in my test case yet, but I figured I should iron out issues with startup and shutdown first. The numbers in the "available processors value [x] did not match current value [y]" error change every time.

What might I be doing wrong?
Also, I see examples of using the RestClient class. I assume I can use the RestHighLevelClient in its place just the same?

Here is the stack trace:

java.lang.IllegalStateException: available processors value [2] did not match current value [3]

at org.elasticsearch.transport.netty4.Netty4Utils.setAvailableProcessors(Netty4Utils.java:71)
at org.elasticsearch.transport.netty4.Netty4Transport.<init>(Netty4Transport.java:115)
at org.elasticsearch.transport.Netty4Plugin.lambda$getTransports$0(Netty4Plugin.java:78)
at org.elasticsearch.node.Node.<init>(Node.java:478)
at org.elasticsearch.node.MockNode.<init>(MockNode.java:95)
at org.elasticsearch.node.MockNode.<init>(MockNode.java:85)
at org.elasticsearch.test.InternalTestCluster.buildNode(InternalTestCluster.java:708)
at org.elasticsearch.test.InternalTestCluster.reset(InternalTestCluster.java:1219)
at org.elasticsearch.test.InternalTestCluster.beforeTest(InternalTestCluster.java:1119)
at org.elasticsearch.test.ESIntegTestCase.lambda$beforeInternal$0(ESIntegTestCase.java:369)
at com.carrotsearch.randomizedtesting.RandomizedContext.runWithPrivateRandomness(RandomizedContext.java:187)
at com.carrotsearch.randomizedtesting.RandomizedContext.runWithPrivateRandomness(RandomizedContext.java:211)
at org.elasticsearch.test.ESIntegTestCase.beforeInternal(ESIntegTestCase.java:378)
at org.elasticsearch.test.ESIntegTestCase.setupTestCluster(ESIntegTestCase.java:2111)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:972)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)

Hi @dg1

That's good news then, I think. Fortunately, your issue is easy to fix :slight_smile:
Simply run your tests with the system property -Des.set.netty.runtime.available.processors=false and it will go away.

Hi Armin,
Adding that system property worked! Now I'm unsure how to use the RestHighLevelClient, though. I see there is a getRestClient() method that returns a RestClient, which doesn't seem to be a RestHighLevelClient exactly. I suspect I need to figure out how to programmatically retrieve the port that was assigned for HTTP traffic and pass that into my application's instance of the RestHighLevelClient. If that's correct, where might I retrieve that port number from? Or is the port number the same every time, so I can just hard-code it to a specific value?

Thanks,
Damon

Hi @dg1

glad it worked. The port is dynamic though, so you can't hard-code it. What you can do is something like this:

    RestHighLevelClient client = new RestHighLevelClient(RestClient.builder(getRestClient().getNodes().toArray(new Node[0])));

to build a new high level rest client from the low level client. That should work fine :slight_smile:
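
For completeness, something along these lines should then cover the create-index and create-document steps from your list (an untested sketch; the index name and document contents are just placeholders):

    // build the HLRC from the low level client's nodes and exercise it;
    // CreateIndexRequest here is org.elasticsearch.client.indices.CreateIndexRequest
    try (RestHighLevelClient client = new RestHighLevelClient(
            RestClient.builder(getRestClient().getNodes().toArray(new Node[0])))) {
        // create an index
        client.indices().create(new CreateIndexRequest("foo"), RequestOptions.DEFAULT);
        // index a single document into it
        client.index(new IndexRequest("foo").id("1").source("field", "value"), RequestOptions.DEFAULT);
    }

Starting and stopping the node(s) is handled by the test framework itself, so there's nothing extra to do for that part.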

It feels like I'm getting close!

That worked for setting up the rest client, but I got another error on the line where I try to create an index.

My sample test class looks the same as above, but the test method now has this body:

    @Test
    public void sampleTest() throws IOException {

        RestHighLevelClient restHighLevelClient = new RestHighLevelClient(RestClient.builder(getRestClient().getNodes().toArray(new Node[0])));
        CreateIndexRequest request = new CreateIndexRequest("foo");
        restHighLevelClient.indices().create(request, RequestOptions.DEFAULT);
        restHighLevelClient.close();
    }

This is the test error I get:

AVVERTENZA: Uncaught exception in thread: Thread[Thread-2,5,TGRP-ElasticClientITTests]
java.lang.AssertionError: Buffer must have heap array
	at __randomizedtesting.SeedInfo.seed([3CCBFC4E0D0661F]:0)
	at org.elasticsearch.transport.CopyBytesSocketChannel.copyBytes(CopyBytesSocketChannel.java:162)
	at org.elasticsearch.transport.CopyBytesSocketChannel.doWrite(CopyBytesSocketChannel.java:98)
	at io.netty.channel.AbstractChannel$AbstractUnsafe.flush0(AbstractChannel.java:928)
	at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.flush0(AbstractNioChannel.java:356)
	at io.netty.channel.AbstractChannel$AbstractUnsafe.flush(AbstractChannel.java:895)
	at io.netty.channel.DefaultChannelPipeline$HeadContext.flush(DefaultChannelPipeline.java:1383)
	at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:749)
	at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:741)
	at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:727)
	at io.netty.channel.ChannelDuplexHandler.flush(ChannelDuplexHandler.java:127)
	at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:749)
	at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:741)
	at io.netty.channel.AbstractChannelHandlerContext.access$2100(AbstractChannelHandlerContext.java:56)
	at io.netty.channel.AbstractChannelHandlerContext$WriteAndFlushTask.write(AbstractChannelHandlerContext.java:1150)
	at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.run(AbstractChannelHandlerContext.java:1073)
	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:416)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:515)
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:918)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at java.lang.Thread.run(Thread.java:748)

[2020-02-20T00:43:23,048][ERROR][o.e.ExceptionsHelper     ] [node_s2] fatal error
    at org.elasticsearch.ExceptionsHelper.lambda$maybeDieOnAnotherThread$4(ExceptionsHelper.java:300)
    at java.util.Optional.ifPresent(Optional.java:159)
    at org.elasticsearch.ExceptionsHelper.maybeDieOnAnotherThread(ExceptionsHelper.java:290)
    at org.elasticsearch.transport.netty4.Netty4TcpChannel.lambda$addPromise$1(Netty4TcpChannel.java:88)
    at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:500)
    at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:474)
    at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:413)
    at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:538)
    at io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:531)
    at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:111)
    at io.netty.util.concurrent.PromiseCombiner.tryPromise(PromiseCombiner.java:170)
    at io.netty.util.concurrent.PromiseCombiner.access$600(PromiseCombiner.java:35)
    at io.netty.util.concurrent.PromiseCombiner$1.operationComplete0(PromiseCombiner.java:62)
    at io.netty.util.concurrent.PromiseCombiner$1.operationComplete(PromiseCombiner.java:44)
    at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:500)
    at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:474)
    at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:413)
    at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:538)
    at io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:531)
    at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:111)
    at io.netty.util.internal.PromiseNotificationUtil.tryFailure(PromiseNotificationUtil.java:64)
    at io.netty.channel.ChannelOutboundBuffer.safeFail(ChannelOutboundBuffer.java:721)
    at io.netty.channel.ChannelOutboundBuffer.remove0(ChannelOutboundBuffer.java:306)
    at io.netty.channel.ChannelOutboundBuffer.failFlushed(ChannelOutboundBuffer.java:658)
    at io.netty.channel.AbstractChannel$AbstractUnsafe.closeOutboundBufferForShutdown(AbstractChannel.java:675)
    at io.netty.channel.AbstractChannel$AbstractUnsafe.shutdownOutput(AbstractChannel.java:668)
    at io.netty.channel.AbstractChannel$AbstractUnsafe.flush0(AbstractChannel.java:943)
    at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.flush0(AbstractNioChannel.java:356)
    at io.netty.channel.AbstractChannel$AbstractUnsafe.flush(AbstractChannel.java:895)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.flush(DefaultChannelPipeline.java:1383)
    at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:749)
    at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:741)
    at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:727)
    at io.netty.channel.ChannelDuplexHandler.flush(ChannelDuplexHandler.java:127)
    at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:749)
    at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:741)
    at io.netty.channel.AbstractChannelHandlerContext.access$2100(AbstractChannelHandlerContext.java:56)
    at io.netty.channel.AbstractChannelHandlerContext$WriteAndFlushTask.write(AbstractChannelHandlerContext.java:1150)
    at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.run(AbstractChannelHandlerContext.java:1073)
    at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
    at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:416)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:515)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:918)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at java.lang.Thread.run(Thread.java:748)

I'm guessing that maybe it's because it fires up multiple nodes by default and I need additional config? It mentions node_s2 with the "fatal error". Maybe I could go down to one node somehow. Just guessing.
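
(If limiting the node count would help, I assume an annotation along these lines on the test class would pin it to a single data node, though I haven't tried it:)

    // untested guess: force the randomized test cluster down to one data node
    @ESIntegTestCase.ClusterScope(scope = ESIntegTestCase.Scope.TEST, numDataNodes = 1)
    public class ElasticClientITTests extends ESIntegTestCase {
        // ... same overrides as above ...
    }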

@dg1

I forgot you're on 7.4.x. This can probably be fixed by adding the following sysprop as well:

    -Dio.netty.allocator.numDirectArenas=0

7.4.x depends on this sysprop being set, as Netty otherwise uses the wrong memory allocator and trips our assertion for that.

Thanks Armin,
I've added that sysprop to Surefire as well. Unfortunately, the new error appears to be similar, although now the fatal error comes before the buffer assertion.

[2020-02-19T15:51:07,082][INFO ][o.e.c.m.MetaDataCreateIndexService] [node_sm0] [foo] creating index, cause [api], templates [random_index_template], shards [5]/[1], mappings []
[2020-02-19T15:51:07,478][ERROR][o.e.ExceptionsHelper     ] [node_sm2] fatal error
    at org.elasticsearch.ExceptionsHelper.lambda$maybeDieOnAnotherThread$4(ExceptionsHelper.java:300)
    at java.util.Optional.ifPresent(Optional.java:159)
    at org.elasticsearch.ExceptionsHelper.maybeDieOnAnotherThread(ExceptionsHelper.java:290)
    at org.elasticsearch.transport.netty4.Netty4TcpChannel.lambda$addPromise$1(Netty4TcpChannel.java:88)
    at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:500)
    at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:474)
    at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:413)
    at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:538)
    at io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:531)
    at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:111)
    at io.netty.util.internal.PromiseNotificationUtil.tryFailure(PromiseNotificationUtil.java:64)
    at io.netty.channel.ChannelOutboundBuffer.safeFail(ChannelOutboundBuffer.java:721)
    at io.netty.channel.ChannelOutboundBuffer.remove0(ChannelOutboundBuffer.java:306)
    at io.netty.channel.ChannelOutboundBuffer.failFlushed(ChannelOutboundBuffer.java:658)
    at io.netty.channel.AbstractChannel$AbstractUnsafe.closeOutboundBufferForShutdown(AbstractChannel.java:675)
    at io.netty.channel.AbstractChannel$AbstractUnsafe.shutdownOutput(AbstractChannel.java:668)
    at io.netty.channel.AbstractChannel$AbstractUnsafe.flush0(AbstractChannel.java:943)
    at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.flush0(AbstractNioChannel.java:356)
    at io.netty.channel.AbstractChannel$AbstractUnsafe.flush(AbstractChannel.java:895)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.flush(DefaultChannelPipeline.java:1383)
    at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:749)
    at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:741)
    at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:727)
    at io.netty.channel.ChannelDuplexHandler.flush(ChannelDuplexHandler.java:127)
    at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:749)
    at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:741)
    at io.netty.channel.AbstractChannelHandlerContext.access$2100(AbstractChannelHandlerContext.java:56)
    at io.netty.channel.AbstractChannelHandlerContext$WriteAndFlushTask.write(AbstractChannelHandlerContext.java:1150)
    at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.run(AbstractChannelHandlerContext.java:1073)
    at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
    at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:416)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:515)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:918)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at java.lang.Thread.run(Thread.java:748)

@dg1 do you have the option of testing the same code against 7.6? I'd like to make sure this is not some dependency conflict; 7.6 should work fine out of the box.
I double-checked what we use as the environment when testing 7.4, and it seems you should be in line with 7.4 now, so I'm a little surprised by this behaviour.

Hi Armin,
That pretty much worked! Thank you! Upgrading to 7.6.0 (as well as bumping my lucene-test-framework version to 8.4.0 to align) successfully created an index with the RestHighLevelClient. I did get a couple of warnings, but they may be expected for this version.

    feb 21, 2020 11:14:21 AM org.elasticsearch.client.RestClient logResponse
    AVVERTENZA: request [PUT http://127.0.0.1:63355/foo?master_timeout=30s&timeout=30s] returned 1 warnings: [299 Elasticsearch-7.6.0-7f634e9f44834fbc12724506cc1da681b0c3b1e3 "[index.force_memory_term_dictionary] setting was deprecated in Elasticsearch and will be removed in a future release! See the breaking changes documentation for the next major version."]

and

    Feb 20, 2020 5:40:25 PM com.carrotsearch.randomizedtesting.ThreadLeakControl checkThreadLeaks
    WARNING: Will linger awaiting termination of 1 leaked thread(s).

After the warning it still said it disconnected from the target VM and finished with exit code zero, so maybe that's ok?

Hi @dg1

The second warning is nothing to worry about, I think.

The first warning, though, is a little confusing to me (assuming you're still using exactly the code you pasted above). Could it be that you're using different versions for the high level rest client and ES in your tests? Or maybe you have a different ES version on your class path by accident somehow? That would also explain why 7.4.x picked up the wrong settings for some unexplained reason, IMO.
A stock 7.6.0 client shouldn't be using default index settings that raise deprecation warnings.
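
If it helps narrow it down, one quick (untested) way to sanity-check this from inside the test is to compare the version the cluster reports to the client against the ES version on your test class path:

    // ask the cluster for its version via the HLRC (using the restHighLevelClient from your test above)
    // and log it next to the version of the elasticsearch jar on the class path;
    // MainResponse here is org.elasticsearch.client.core.MainResponse
    MainResponse info = restHighLevelClient.info(RequestOptions.DEFAULT);
    logger.info("server version [{}], class path version [{}]",
        info.getVersion().getNumber(), org.elasticsearch.Version.CURRENT);

If those two don't line up, that would point to a dependency mix-up.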
