Performance concerns about removing the transport client

I want to know why the transport client was removed. HTTP/1.1, as far as I know, uses a text protocol and does not support multiplexing, so I think there is a huge performance difference between the Jest client and the transport client in high-concurrency scenarios.

Elastic (the company) does not maintain the Jest client; maybe others in the community who have had performance issues with it can chime in?

There are a few reasons it has been removed, but the primary one is that there is now a High Level REST Client, which is implemented over HTTP. Maintaining two clients that do functionally the same thing is difficult.


Thank you very much for your reply. I think I didn't express myself clearly; that is my fault.
Let's set the Jest client aside and discuss the difference between the High Level REST Client and the Transport Client.
The High Level REST Client is implemented over HTTP, while the Transport Client is implemented over Netty. I have read the official performance comparison between the two: Benchmarking REST client and transport client.
All of those benchmarks use only a single client thread.
We know that HTTP/1.1 does not support multiplexing, so under high concurrency the latency of the High Level REST Client, being HTTP-based, will be large.
Thank you again for your reply.
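As an aside, the multiplexing point can be made concrete with nothing but the JDK. The sketch below (class name, `/` path, and the 300 ms delay are all made up for illustration; no Elasticsearch is involved) forces HTTP/1.1 and fires four requests at once. Because HTTP/1.1 cannot interleave responses on one connection, the client has to open a separate TCP connection per in-flight request, which the stub server observes as distinct client ports.

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;
import java.util.Set;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class NoMultiplexDemo {
    public static void main(String[] args) throws Exception {
        // Each in-flight HTTP/1.1 request needs its own TCP connection,
        // so record the distinct client-side ports the server sees.
        Set<Integer> clientPorts = ConcurrentHashMap.newKeySet();

        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/", exchange -> {
            clientPorts.add(exchange.getRemoteAddress().getPort());
            try { Thread.sleep(300); } catch (InterruptedException ignored) {}
            byte[] body = "ok".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.setExecutor(Executors.newFixedThreadPool(8)); // handle requests in parallel
        server.start();

        HttpClient client = HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_1_1) // force HTTP/1.1: no multiplexing
                .build();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://localhost:" + server.getAddress().getPort() + "/"))
                .GET().build();

        // Fire four requests at once; the 300 ms delay keeps them in flight together.
        List<CompletableFuture<HttpResponse<String>>> inFlight = IntStream.range(0, 4)
                .mapToObj(i -> client.sendAsync(request, HttpResponse.BodyHandlers.ofString()))
                .collect(Collectors.toList());
        long ok = inFlight.stream().map(CompletableFuture::join)
                .filter(r -> r.statusCode() == 200).count();

        server.stop(0);
        System.out.println("completed=" + ok);
        System.out.println("connections=" + clientPorts.size());
    }
}
```

The flip side is that HTTP/1.1 clients routinely compensate for the missing multiplexing with exactly this kind of connection pooling, so "no multiplexing" does not automatically mean "requests queue up behind each other".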

What is your use case? What is the ratio between indexing and querying? How many concurrent queries per second does your cluster need to support? What is your average query latency and how much data do you return?

Have you run any tests or benchmarks based on your specific use case to quantify the difference in performance? Even if parsing is slower over HTTP, how much latency does that add once you factor in all the work required to process the query? I suspect this depends a lot on the types of queries and how much data is returned.
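The kind of concurrent-latency benchmark being suggested can be sketched with the JDK alone. In this illustration the `/_search` path, the stub response body, and the thread counts are all stand-ins; in a real test you would point the request at an actual Elasticsearch node and use queries representative of your workload.

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ConcurrencyBench {
    public static void main(String[] args) throws Exception {
        // Tiny in-process HTTP stub standing in for an Elasticsearch node.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        byte[] body = "{\"took\":1}".getBytes();
        server.createContext("/_search", exchange -> {
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.setExecutor(Executors.newFixedThreadPool(8));
        server.start();

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://localhost:" + server.getAddress().getPort() + "/_search"))
                .GET().build();

        // 32 client threads issuing 256 requests total, timing each one.
        int concurrency = 32, total = 256;
        ExecutorService pool = Executors.newFixedThreadPool(concurrency);
        List<Future<Long>> futures = new ArrayList<>();
        for (int i = 0; i < total; i++) {
            futures.add(pool.submit(() -> {
                long start = System.nanoTime();
                HttpResponse<String> r = client.send(request, HttpResponse.BodyHandlers.ofString());
                return r.statusCode() == 200 ? (System.nanoTime() - start) / 1_000 : -1L;
            }));
        }
        long ok = 0, totalMicros = 0;
        for (Future<Long> f : futures) {
            long micros = f.get();
            if (micros >= 0) { ok++; totalMicros += micros; }
        }
        pool.shutdown();
        server.stop(0);
        System.out.println("completed=" + ok);
        System.out.println("meanLatencyMicros=" + (totalMicros / ok));
    }
}
```

Against a real cluster the interesting comparison is how the mean and tail latencies change as the thread count grows, since that is where any HTTP/1.1 connection-handling overhead would show up.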


First of all, thank you for your reply. I have been paying close attention to Elasticsearch because it is a very good search engine. I really haven't done any benchmarking. At the same time, I also pay great attention to Netty, so I am confused about removing the Transport Client. I think the Elasticsearch team lacks a full-scenario stress test comparing the performance of the Transport Client and the REST Client, because the official benchmark only stress-tests a single-threaded client.

One of the drawbacks of the transport client is that it requires much tighter coupling of Java and Elasticsearch versions than an HTTP client does. This means an upgrade requires both client and server to be upgraded at exactly the same time, which is not necessarily the case with the HTTP interface. Even if the transport client is a bit faster, any overhead of the HTTP interface is typically quite small compared to the overall processing required to serve a request. The number of concurrent queries that can be served is likely to be limited by the resources required to serve the requests rather than by any serialization overhead. The exception might be very small and cheap requests, but that is not necessarily a typical use case.


OK. I really enjoy chatting with you; it lets me learn something. But I am a questioning person. I agree that the Transport Client makes rolling upgrades difficult. You also mentioned serialization. I believe both the Transport Client and the REST Client serialize using JSON, so there should be no performance difference between the two in serialization; the difference should be in the network I/O. I have been researching HTTP/2 recently, and the drawback of HTTP/1.1 is that it doesn't support multiplexing. What do you think? I look forward to your reply.

I do not think these differences matter much in most real systems. All the other language clients use HTTP, and I have seen high-performance applications not written in Java. Rather than making theoretical assumptions, try running some benchmarks with non-trivial queries and see for yourself.


Well, I understand what you mean. I plan to do a benchmark test and then compare the results of the two. After that, let's talk again. Finally, thank you for your patience.