Getting an error when running an API command

http://192.168.1.211:9000/api/sources?range=36000&size=5000
Running this command returns the following error.

"ElasticsearchException{message=Unable to perform search query\n\n[parent] Data too large, data for [<transport_request>] would be [11946545236/11.1gb], which is larger than the limit of [11946544332/11.1gb], usages [request=0/0b, fielddata=210765501/201mb, in_flight_requests=901/901b, accounting=11735778834/10.9gb][parent] Data too large, data for [<transport_request>] would be [11946578116/11.1gb], which is larger than the limit of [11946544332/11.1gb], usages [request=0/0b, fielddata=210765501/201mb, in_flight_requests=33781/32.9kb, accounting=11735778834/10.9gb][parent] Data too large, data for [<transport_request>] would be [11946578116/11.1gb], which is larger than the limit of [11946544332/11.1gb], usages [request=0/0b, fielddata=210765501/201mb, in_flight_requests=33781/32.9kb, accounting=11735778834/10.9gb][parent] Data too large, data for [<transport_request>] would be [11946562577/11.1gb], which is larger than the limit of [11946544332/11.1gb], usages [request=0/0b, fielddata=210765501/201mb, in_flight_requests=18242/17.8kb, accounting=11735778834/10.9gb][parent] Data too large, data for [<transport_request>] would be [11946561676/11.1gb], which is larger than the limit of [11946544332/11.1gb], usages [request=0/0b, fielddata=210765501/201mb, in_flight_requests=17341/16.9kb, accounting=11735778834/10.9gb][parent] Data too large, data for [<transport_request>] would be [11946545236/11.1gb], which is larger than the limit of [11946544332/11.1gb], usages [request=0/0b, fielddata=210765501/201mb, in_flight_requests=17341/16.9kb, accounting=11735778834/10.9gb][parent] Data too large, data for [<transport_request>] would be [11946578116/11.1gb], which is larger than the limit of [11946544332/11.1gb], usages [request=0/0b, fielddata=210765501/201mb, in_flight_requests=33781/32.9kb, accounting=11735778834/10.9gb][parent] Data too large, data for [<transport_request>] would be [11946545236/11.1gb], which is larger than the limit of [11946544332/11.1gb], usages [request=0/0b, fielddata=210765501/201mb, in_flight_requests=901/901b, accounting=11735778834/10.9gb][parent] Data too large, data for [<transport_request>] would be [11946578116/11.1gb], which is larger than the limit of [11946544332/11.1gb], usages [request=0/0b, fielddata=210765501/201mb, in_flight_requests=33781/32.9kb, accounting=11735778834/10.9gb][parent] Data too large, data for [<transport_request>] would be [11946561676/11.1gb], which is larger than the limit of [11946544332/11.1gb], usages [request=0/0b, fielddata=210765501/201mb, in_flight_requests=17341/16.9kb, accounting=11735778834/10.9gb][parent] Data too large, data for [<transport_request>] would be [11946561676/11.1gb], which is larger than the limit of [11946544332/11.1gb], usages [request=0/0b, fielddata=210765501/201mb, in_flight_requests=17341/16.9kb, accounting=11735778834/10.9gb][parent] Data too large, data for [<transport_request>] would be [11946545236/11.1gb], which is larger than the limit of [11946544332/11.1gb], usages [request=0/0b, fielddata=210765501/201mb, in_flight_requests=901/901b, accounting=11735778834/10.9gb][parent] Data too large, data for [<transport_request>] would be [11946545236/11.1gb], which is larger than the limit of [11946544332/11.1gb], usages [request=0/0b, fielddata=210765501/201mb, in_flight_requests=901/901b, accounting=11735778834/10.9gb][parent] Data too large, data for [<transport_request>] would be [11946545236/11.1gb], which is larger than the limit of [11946544332/11.1gb], usages [request=0/0b, 
fielddata=210765501/201mb, in_flight_requests=901/901b, accounting=11735778834/10.9gb][parent] Data too large, data for [<transport_request>] would be [11946545236/11.1gb], which is larger than the limit of [11946544332/11.1gb], usages [request=0/0b,

The error message itself is quite clear:

ElasticsearchException{message=Unable to perform search query\n\n[parent] Data too large, data for [<transport_request>] would be [11946545236/11.1gb], which is larger than the limit of [11946544332/11.1gb]

You need to use a smaller value for size. Try, say, 100 or 500 instead of 5000 and see if that works.
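
For example, the same request with a smaller page size (the URL is yours from above; depending on your setup you may also need to authenticate with -u):

curl -XGET "http://192.168.1.211:9000/api/sources?range=36000&size=500"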

@Linuxuser Did you try with a smaller size? Let us know if you are still getting the error.

@DineshNaik Sorry for the late reply; I was out of town.
I am getting the same issue.

{"type":"ApiError","message":"ElasticsearchException{message=Unable to perform search query\n\n[parent] Data too large, data for [<transport_request>] would be [11946545236/11.1gb], which is larger than the limit of [11946544332/11.1gb], usages [request=0/0b, fielddata=210765501/201mb, in_flight_requests=901/901b, accounting=11735778834/10.9gb][parent] Data too large, data for [<transport_request>] would be [11946561676/11.1gb], which is larger than the limit of [11946544332/11.1gb], usages [request=0/0b, fielddata=210765501/201mb, in_flight_requests=17341/16.9kb, accounting=11735778834/10.9gb][parent] Data too large, data for [<transport_request>] would be [11946578116/11.1gb], which is larger than the limit of [11946544332/11.1gb], usages

Also, the Elasticsearch cluster status is red.

OK, fixing the cluster status would be the first step.

Check the cluster status and try to make it green first.
You can use these commands to get an idea of what's going on in the cluster.

curl -XGET "http://localhost:9200/_cluster/stats"
curl -XGET "http://localhost:9200/_cat/shards"
curl -XGET "http://localhost:9200/_nodes/hot_threads"
curl -XGET "http://localhost:9200/_cat/nodes"
curl -XGET "http://localhost:9200/_cluster/health"

Please read this page to learn more about the issue and to diagnose further:

Fix common cluster issues | Elasticsearch Guide [7.14] | Elastic

  1. curl -XGET "http://localhost:9200/_cluster/stats"
    output:

{"_nodes":{"total":2,"successful":2,"failed":0},"cluster_name":"graylog","cluster_uuid":"zdnt0uYMR-Gz77tlPsmzMQ","timestamp":1638437591202,"status":"red","indices":{"count":404,"shards":{"total":1191,"primaries":1191,"replication":0.0,"index":{"shards":{"min":1,"max":3,"avg":2.948019801980198},"primaries":{"min":1,"max":3,"avg":2.948019801980198},"replication":{"min":0.0,"max":0.0,"avg":0.0}}},"docs":{"count":15794533747,"deleted":4608},"store":{"size_in_bytes":6983526621389},"fielddata":{"memory_size_in_bytes":2023118460,"evictions":0},"query_cache":{"memory_size_in_bytes":0,"total_count":1601,"hit_count":0,"miss_count":1601,"cache_size":0,"cache_count":0,"evictions":0},"completion":{"size_in_bytes":0},"segments":{"count":2319,"memory_in_bytes":12334309640,"terms_memory_in_bytes":7913977192,"stored_fields_memory_in_bytes":4184825744,"term_vectors_memory_in_bytes":0,"norms_memory_in_bytes":444928,"points_memory_in_bytes":233055564,"doc_values_memory_in_bytes":2006212,"index_writer_memory_in_bytes":8340588,"version_map_memory_in_bytes":15194,"fixed_bit_set_memory_in_bytes":0,"max_unsafe_auto_id_timestamp":-1,"file_sizes":{}}},"nodes":{"count":{"total":2,"data":2,"coordinating_only":0,"master":1,"ingest":2},"versions":["6.8.10","6.8.18"],"os":{"available_processors":32,"allocated_processors":32,"names":[{"name":"Linux","count":2}],"pretty_names":[{"pretty_name":"Ubuntu 16.04.7 LTS","count":1},{"pretty_name":"Ubuntu 16.04.6 LTS","count":1}],"mem":{"total_in_bytes":67408044032,"free_in_bytes":2580172800,"used_in_bytes":64827871232,"free_percent":4,"used_percent":96}},"process":{"cpu":{"percent":7},"open_file_descriptors":{"min":2439,"max":4498,"avg":3468}},"jvm":{"max_uptime_in_millis":1796489630,"versions":[{"version":"1.8.0_252","vm_name":"OpenJDK 64-Bit Server VM","vm_version":"25.252-b09","vm_vendor":"Private Build","count":1},{"version":"1.8.0_292","vm_name":"OpenJDK 64-Bit Server VM","vm_version":"25.292-b10","vm_vendor":"Private Build","count":1}],"mem":{"heap_used_in_bytes":22818459960,"heap_max_in_bytes":34132983808},"threads":349},"fs":{"total_in_bytes":13958664310784,"free_in_bytes":2358992531456,"available_in_bytes":1687792795648},"plugins":,"network_types":{"transport_types":{"security4":2},"http_types":{"security4":2}}}}

  2. curl -XGET "http://localhost:9200/_cat/shards"
    output:

graylog_232 2 r UNASSIGNED
graylog_232 0 p STARTED 6668518 3.4gb 172.20.17.47 DC-ELASTIC-01
graylog_232 0 r UNASSIGNED
graylog_232 0 r UNASSIGNED
graylog_160 1 p STARTED 94093797 59.2gb 172.20.24.163 DR-ELASTIC-01
graylog_160 1 r UNASSIGNED
graylog_160 1 r UNASSIGNED
graylog_160 2 p STARTED 94124970 59.3gb 172.20.24.163 DR-ELASTIC-01
graylog_160 2 r UNASSIGNED
graylog_160 2 r UNASSIGNED
graylog_160 0 p STARTED 94114348 59.3gb 172.20.24.163 DR-ELASTIC-01
graylog_160 0 r UNASSIGNED
graylog_160 0 r UNASSIGNED
graylog_330 1 p STARTED 6688107 2.1gb 172.20.17.47 DC-ELASTIC-01
graylog_330 1 r UNASSIGNED
graylog_330 1 r UNASSIGNED
graylog_330 2 p STARTED 6692207 2.1gb 172.20.17.47 DC-ELASTIC-01
graylog_330 2 r UNASSIGNED
graylog_330 2 r UNASSIGNED
graylog_330 0 p STARTED 6688802 2.1gb 172.20.17.47 DC-ELASTIC-01
graylog_330 0 r UNASSIGNED
graylog_330 0 r UNASSIGNED
graylog_436 1 p STARTED 6679283 2gb 172.20.17.47 DC-ELASTIC-01
graylog_436 1 r UNASSIGNED
graylog_436 1 r UNASSIGNED
graylog_436 2 p STARTED 6681739 2gb 172.20.17.47 DC-ELASTIC-01
graylog_436 2 r UNASSIGNED
graylog_436 2 r UNASSIGNED
graylog_436 0 p STARTED 6678108 2gb 172.20.17.47 DC-ELASTIC-01
graylog_436 0 r UNASSIGNED
graylog_436 0 r UNASSIGNED
graylog_235 1 p STARTED 6669703 3.3gb 172.20.24.163 DR-ELASTIC-01
graylog_235 1 r UNASSIGNED
graylog_235 1 r UNASSIGNED
graylog_235 2 p STARTED 6670671 3.3gb 172.20.24.163 DR-ELASTIC-01
graylog_235 2 r UNASSIGNED
graylog_235 2 r UNASSIGNED
graylog_235 0 p STARTED 6671232 3.3gb 172.20.24.163 DR-ELASTIC-01
graylog_235 0 r UNASSIGNED
graylog_235 0 r UNASSIGNED
graylog_452 1 p STARTED 6671516 2gb 172.20.17.47 DC-ELASTIC-01
graylog_452 1 r UNASSIGNED
graylog_452 1 r UNASSIGNED
graylog_452 2 p STARTED 6672535 2gb 172.20.17.47 DC-ELASTIC-01
graylog_452 2 r UNASSIGNED
graylog_452 2 r UNASSIGNED
graylog_452 0 p STARTED 6669030 2gb 172.20.17.47 DC-ELASTIC-01
graylog_452 0 r UNASSIGNED
graylog_452 0 r UNASSIGNED
graylog_429 1 p STARTED 6684044 1.8gb 172.20.17.47 DC-ELASTIC-01
graylog_429 1 r UNASSIGNED
graylog_429 1 r UNASSIGNED
graylog_429 2 p STARTED 6687401 1.8gb 172.20.17.47 DC-ELASTIC-01
graylog_429 2 r UNASSIGNED
graylog_429 2 r UNASSIGNED
graylog_429 0 p STARTED 6684144 1.8gb 172.20.17.47 DC-ELASTIC-01
graylog_429 0 r UNASSIGNED
graylog_429 0 r UNASSIGNED

  3. curl -XGET "http://localhost:9200/_nodes/hot_threads"
    output:

::: {DC-ELASTIC-01}{GtMH3ewUQDaF6Xb4V6vHbg}{vEz4TtNlT2i9qnxS50kPeQ}{172.20.17.47}{172.20.17.47:9300}{ml.machine_memory=33704022016, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}
Hot threads at 2021-12-02T09:37:57.101Z, interval=500ms, busiestThreads=3, ignoreIdleThreads=true:

::: {DR-ELASTIC-01}{VcirmtS6SBGRhWBDycXAVw}{kSKZM06XSL-XjFehLbECbg}{172.20.24.163}{172.20.24.163:9300}{ml.machine_memory=33704022016, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true}
Hot threads at 2021-12-02T09:37:57.128Z, interval=500ms, busiestThreads=3, ignoreIdleThreads=true:

2.4% (11.8ms out of 500ms) cpu usage by thread 'elasticsearch[DR-ELASTIC-01][transport_worker][T#8]'
 2/10 snapshots sharing following 48 elements
   java.lang.Throwable.fillInStackTrace(Native Method)
   java.lang.Throwable.fillInStackTrace(Throwable.java:784)
   java.lang.Throwable.<init>(Throwable.java:288)
   java.lang.Exception.<init>(Exception.java:84)
   java.lang.RuntimeException.<init>(RuntimeException.java:80)
   java.lang.IllegalArgumentException.<init>(IllegalArgumentException.java:72)
   org.elasticsearch.common.io.stream.StreamInput.readException(StreamInput.java:826)
   org.elasticsearch.ElasticsearchException.<init>(ElasticsearchException.java:137)
   org.elasticsearch.index.mapper.MapperException.<init>(MapperException.java:29)
   org.elasticsearch.index.mapper.MapperParsingException.<init>(MapperParsingException.java:30)
   org.elasticsearch.ElasticsearchException$ElasticsearchExceptionHandle$$Lambda$1042/2090142523.apply(Unknown Source)
   org.elasticsearch.ElasticsearchException.readException(ElasticsearchException.java:306)
   org.elasticsearch.common.io.stream.StreamInput.readException(StreamInput.java:799)
   org.elasticsearch.action.bulk.BulkItemResponse$Failure.<init>(BulkItemResponse.java:239)
   org.elasticsearch.action.bulk.BulkItemResponse.readFrom(BulkItemResponse.java:495)
   org.elasticsearch.action.bulk.BulkItemResponse.readBulkItem(BulkItemResponse.java:468)
   org.elasticsearch.action.bulk.BulkShardResponse.readFrom(BulkShardResponse.java:73)
   org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase$1.read(TransportReplicationAction.java:886)
   org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase$1.read(TransportReplicationAction.java:881)
   org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.read(TransportService.java:1107)
   org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.read(TransportService.java:1094)
   org.elasticsearch.transport.TcpTransport.handleResponse(TcpTransport.java:970)
   org.elasticsearch.transport.TcpTransport.messageReceived(TcpTransport.java:952)
   org.elasticsearch.transport.TcpTransport.inboundMessage(TcpTransport.java:763)
   org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:53)
   io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
   io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
   io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
   io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:323)
   io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:297)
   io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
   io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
   io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
   io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:241)
   io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
   io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
   io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
   io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434)
   io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
   io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
   io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965)
   io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
   io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:656)
   io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:556)
   io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:510)
   io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:470)
   io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:909)
   java.lang.Thread.run(Thread.java:748)
 8/10 snapshots sharing following 2 elements
   io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:909)
   java.lang.Thread.run(Thread.java:748)
  4. curl -XGET "http://localhost:9200/_cat/nodes"
    output:

172.20.17.47 58 96 3 0.59 0.67 0.59 mdi * DC-ELASTIC-01
172.20.24.163 75 96 4 1.08 0.91 0.77 di - DR-ELASTIC-01

  5. curl -XGET "http://localhost:9200/_cluster/health"
    output:

{"cluster_name":"graylog","status":"red","timed_out":false,"number_of_nodes":2,"number_of_data_nodes":2,"active_primary_shards":1191,"active_shards":1191,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":2343,"delayed_unassigned_shards":0,"number_of_pending_tasks":0,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":0,"active_shards_percent_as_number":33.70118845500849}

These are the outputs of the given commands. Kindly take a look and tell me what I can change to make the cluster green.

Your two data nodes are not running exactly the same version, which prevents primary shards on the newer node from replicating to the older one. Upgrade the older node to exactly the same version as the newer node, and I suspect you will see the replicas getting allocated.

You mean there is a difference in the Elasticsearch versions?

Yes.

"versions":["6.8.10","6.8.18"]

Thanks for the guidance. I will do as you suggested and then let you know.

@Linuxuser You can quickly verify this by running
curl localhost:9200 on both nodes.
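
The root endpoint returns a small JSON document that includes the node's Elasticsearch version, along these lines (illustrative output; your values will differ):

{
  "name" : "DC-ELASTIC-01",
  "cluster_name" : "graylog",
  "version" : {
    "number" : "6.8.10",
    ...
  },
  "tagline" : "You Know, for Search"
}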

As @Christian_Dahlqvist rightly said, you need to have the same version on both nodes/VMs.

Also, having thousands of shards per VM is not a very good idea unless you have done thorough performance testing; see the sketch after the list below for one way to reduce the count.

Remember that there is an additional cost for each shard that you allocate:

  • Since a shard is essentially a Lucene index, it consumes file handles, memory, and CPU resources. Although many small shards can speed up processing per shard, they may also form query queues that compromise the cluster performance and decrease query throughput.

  • Each search request will touch a copy of every shard in the index, which isn’t a problem when the shards are spread across several nodes. However, contention arises and performance decreases when the shards are competing for the same hardware resources.
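
Once the cluster is green again, one way to reduce the shard count is to lower the shards-per-index setting in Graylog's index set configuration, or to shrink existing read-only indices with Elasticsearch's shrink API. A rough sketch, using an index name and node name taken from your output (verify the preconditions in the Elasticsearch docs before running anything like this):

# 1. Move a copy of every shard to one node and block writes (required before shrinking)
curl -XPUT "http://localhost:9200/graylog_232/_settings" -H 'Content-Type: application/json' -d '{"index.routing.allocation.require._name": "DC-ELASTIC-01", "index.blocks.write": true}'

# 2. Shrink into a new single-shard index
curl -XPOST "http://localhost:9200/graylog_232/_shrink/graylog_232_shrunk" -H 'Content-Type: application/json' -d '{"settings": {"index.number_of_shards": 1, "index.number_of_replicas": 0}}'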

Try to bring the cluster up to green status first, and then look into optimizations.

