HTTP Poller [API] input in Logstash - Does it support "Request_data" on top of "Headers"?

Thanks, Leandro for your input!
I have shared the output and conf file below.

I just wanted to flatten the [0], [1], [2], [3], [4] entries.

input {
  http_poller {
    urls => {
      ussite => {
        method => "POST"
        url => "XXXXX"
        headers => {
          "x-xdr-auth-id" => "9"
          "Authorization" => "XXXXXXX"
        }
        body => '{ "request_data":{} }'
      }
    }
    tags => "us_site"
    request_timeout => 30
    schedule => { cron => "* * * * * UTC" }
  }
}
filter {
  split { field => "[reply][alerts]" }
  mutate {
    add_field => {
      "agent.ip" => "%{[reply][alerts][host_ip][0]}"
      "event.action" => "%{[reply][alerts][action]}"
      "event.id" => "%{[reply][alerts][alert_id]}"
      "host.ip" => "%{[reply][alerts][host_ip]}"
      "host.name" => "%{[reply][alerts][host_name]}"
      "host.os.sub_type" => "%{[reply][alerts][action]}"
      "host.os.type" => "%{[reply][alerts][action]}"
    }
    remove_field => [ "[reply][alerts]" ]
  }
}
output {
	....
}

Error from Logstash

docker-elk-logstash-1       | java.lang.OutOfMemoryError: Java heap space
docker-elk-logstash-1       | Dumping heap to java_pid1.hprof ...
docker-elk-logstash-1       | Heap dump file created [298260349 bytes in 1.188 secs]
docker-elk-logstash-1       | [2022-10-14T06:07:16,153][FATAL][org.logstash.Logstash    ][main] uncaught error (in thread [main]>worker2)
docker-elk-logstash-1       | java.lang.OutOfMemoryError: Java heap space
docker-elk-logstash-1       | 	at java.nio.HeapCharBuffer.<init>(java/nio/HeapCharBuffer.java:64) ~[?:?]
docker-elk-logstash-1       | 	at java.nio.CharBuffer.allocate(java/nio/CharBuffer.java:363) ~[?:?]
docker-elk-logstash-1       | 	at java.nio.charset.CharsetDecoder.decode(java/nio/charset/CharsetDecoder.java:799) ~[?:?]
docker-elk-logstash-1       | 	at java.nio.charset.Charset.decode(java/nio/charset/Charset.java:814) ~[?:?]
docker-elk-logstash-1       | 	at org.jruby.RubyEncoding.decodeUTF8(org/jruby/RubyEncoding.java:308) ~[jruby.jar:?]
docker-elk-logstash-1       | 	at org.jruby.RubyString.decodeString(org/jruby/RubyString.java:814) ~[jruby.jar:?]
docker-elk-logstash-1       | 	at org.jruby.RubyString.toString(org/jruby/RubyString.java:805) ~[jruby.jar:?]
docker-elk-logstash-1       | 	at org.jruby.RubyString.asJavaString(org/jruby/RubyString.java:1353) ~[jruby.jar:?]
docker-elk-logstash-1       | 	at org.logstash.ObjectMappers$RubyStringSerializer.serialize(org/logstash/ObjectMappers.java:140) ~[logstash-core.jar:?]
docker-elk-logstash-1       | 	at org.logstash.ObjectMappers$RubyStringSerializer.serialize(org/logstash/ObjectMappers.java:128) ~[logstash-core.jar:?]
docker-elk-logstash-1       | 	at com.fasterxml.jackson.databind.ser.std.MapSerializer.serializeFields(com/fasterxml/jackson/databind/ser/std/MapSerializer.java:808) ~[jackson-databind-2.13.3.jar:2.13.3]
docker-elk-logstash-1       | 	at com.fasterxml.jackson.databind.ser.std.MapSerializer.serializeWithoutTypeInfo(com/fasterxml/jackson/databind/ser/std/MapSerializer.java:764) ~[jackson-databind-2.13.3.jar:2.13.3]
docker-elk-logstash-1       | 	at com.fasterxml.jackson.databind.ser.std.MapSerializer.serialize(com/fasterxml/jackson/databind/ser/std/MapSerializer.java:720) ~[jackson-databind-2.13.3.jar:2.13.3]
docker-elk-logstash-1       | 	at com.fasterxml.jackson.databind.ser.std.MapSerializer.serialize(com/fasterxml/jackson/databind/ser/std/MapSerializer.java:35) ~[jackson-databind-2.13.3.jar:2.13.3]
docker-elk-logstash-1       | 	at com.fasterxml.jackson.databind.ser.std.MapSerializer.serializeFields(com/fasterxml/jackson/databind/ser/std/MapSerializer.java:808) ~[jackson-databind-2.13.3.jar:2.13.3]
docker-elk-logstash-1       | 	at com.fasterxml.jackson.databind.ser.std.MapSerializer.serializeWithoutTypeInfo(com/fasterxml/jackson/databind/ser/std/MapSerializer.java:764) ~[jackson-databind-2.13.3.jar:2.13.3]
docker-elk-logstash-1       | 	at com.fasterxml.jackson.databind.ser.std.MapSerializer.serialize(com/fasterxml/jackson/databind/ser/std/MapSerializer.java:720) ~[jackson-databind-2.13.3.jar:2.13.3]
docker-elk-logstash-1       | 	at com.fasterxml.jackson.databind.ser.std.MapSerializer.serialize(com/fasterxml/jackson/databind/ser/std/MapSerializer.java:35) ~[jackson-databind-2.13.3.jar:2.13.3]
docker-elk-logstash-1       | 	at com.fasterxml.jackson.databind.ser.DefaultSerializerProvider._serialize(com/fasterxml/jackson/databind/ser/DefaultSerializerProvider.java:480) ~[jackson-databind-2.13.3.jar:2.13.3]
docker-elk-logstash-1       | 	at com.fasterxml.jackson.databind.ser.DefaultSerializerProvider.serializeValue(com/fasterxml/jackson/databind/ser/DefaultSerializerProvider.java:319) ~[jackson-databind-2.13.3.jar:2.13.3]
docker-elk-logstash-1       | 	at com.fasterxml.jackson.databind.ObjectMapper._writeValueAndClose(com/fasterxml/jackson/databind/ObjectMapper.java:4568) ~[jackson-databind-2.13.3.jar:2.13.3]
docker-elk-logstash-1       | 	at com.fasterxml.jackson.databind.ObjectMapper.writeValueAsString(com/fasterxml/jackson/databind/ObjectMapper.java:3821) ~[jackson-databind-2.13.3.jar:2.13.3]
docker-elk-logstash-1       | 	at org.logstash.Event.toJson(org/logstash/Event.java:251) ~[logstash-core.jar:?]
docker-elk-logstash-1       | 	at org.logstash.ext.JrubyEventExtLibrary$RubyEvent.ruby_to_json(org/logstash/ext/JrubyEventExtLibrary.java:239) ~[logstash-core.jar:?]

Use the ruby debug mode in the output. It seems that either you are getting too much data or you are doing something wrong in the transformation logic.

    stdout {
        codec => rubydebug { metadata => true }
    }

Looking at the amount of data stored under one object: if I flatten [0], [1], [2], [3], [4], could this potentially create a huge load of data and cause the overflow?

Still exploring. Is there any other workaround for this?

Does anyone know the difference between the "http" and "http_poller" input plugins? In my situation, which should I use?

4 fields transformed, and around 50-100 fields in total, is not much.
LS works like a pipeline: it receives data per event, transforms it, and sends it to ES.

Add an id to the http_poller input, and check the execution time:

input {
  http_poller {
    urls => {
      ussite => {
        method => "POST"
        url => "XXXXX"
        headers => {
          "x-xdr-auth-id" => "9"
          "Authorization" => "XXXXXXX"
        }
        body => '{ "request_data":{} }'
      }
    }
    tags => "us_site"
    request_timeout => 30
    schedule => { cron => "* * * * * UTC" }
    id => "http_pa"
  }
}

http://localhost:9600/_node/stats/pipelines?pretty


Still cannot get past this parsing. There is always only 1 hit, with the latest timestamp, after it fetches. I have removed the entire "filter" section for debugging.
How do I get the nested fields (shown in red) into records in Kibana?

If possible, copy the event.original here as text, and remove any classified info.

You need to share the output of the HTTP request you are making; share the content of the event.original field.

Have you tried to split on [reply][alerts] as I explained in the previous answer?

You have a response with multiple alerts in the reply.alerts field; it is on this field that you need to apply the split filter.

filter {
    split {
        field => "[reply][alerts]"
    }
}

You can't rely on the array indices of 0 to 4; what if you have an array with 5 items, or 10, or 100 in the future?


Echoing Leandro here… if the desire is to have a 1:1 reply.alerts[alert]-to-Elastic-doc mapping, then you need to split on reply.alerts.

Usually when applying such splits on initial ingestion POCs we'll specify a target to make things easier to track, and then drop the original array of objects (reply.alerts, which will contain duplicate/extraneous data after a successful split):

filter {
  split {
    field => "[reply][alerts]"
    target => "my_arbitrary_fieldname"
    remove_field => "[reply][alerts]"
  }
}

and after a successful split, your previous fields like

reply.alerts.is_pcap

will be found under

my_arbitrary_fieldname.is_pcap
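
For illustration, a hedged sketch of how the earlier add_field mappings could then reference the target instead of the original array (my_arbitrary_fieldname is just the placeholder from the split above, and the alert field names are the ones from the original filter; adjust both to the actual payload):

filter {
  mutate {
    add_field => {
      "event.action" => "%{[my_arbitrary_fieldname][action]}"
      "event.id"     => "%{[my_arbitrary_fieldname][alert_id]}"
      "host.ip"      => "%{[my_arbitrary_fieldname][host_ip]}"
      "host.name"    => "%{[my_arbitrary_fieldname][host_name]}"
    }
  }
}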

Executing a one-line split generated this error. Losing hope on this "split" thingy.

filter{
		split { field => "[reply][alerts]" }
}
output {
...
}
"tags" => [
        [0] "_http_request_failure",
        [1] "_split_type_failure"
    ]
}
[2022-10-16T09:03:31,458][INFO ][logstash.outputs.file    ][main][64a88add8f32dca0d184380e90f5de1d2d318a835b8d679a9eea8712f8197028] Closing file /usr/share/logstash/elogs/output/getusalerts_logs.txt
java.lang.OutOfMemoryError: Java heap space
Dumping heap to java_pid1.hprof ...
Heap dump file created [271942270 bytes in 1.354 secs]
[2022-10-16T09:04:10,386][FATAL][org.logstash.Logstash    ][main] uncaught error (in thread [main]>worker0)
java.lang.OutOfMemoryError: Java heap space
        at java.nio.HeapCharBuffer.<init>(java/nio/HeapCharBuffer.java:64) ~[?:?]
        at java.nio.CharBuffer.allocate(java/nio/CharBuffer.java:363) ~[?:?]
        at java.nio.charset.CharsetDecoder.decode(java/nio/charset/CharsetDecoder.java:799) ~[?:?]
        at java.nio.charset.Charset.decode(java/nio/charset/Charset.java:814) ~[?:?]
        at org.jruby.RubyEncoding.decodeUTF8(org/jruby/RubyEncoding.java:308) ~[jruby.jar:?]
        at org.jruby.RubyString.decodeString(org/jruby/RubyString.java:814) ~[jruby.jar:?]
        at org.jruby.RubyString.toJava(org/jruby/RubyString.java:6627) ~[jruby.jar:?]
        at org.jruby.RubyClass.new(org/jruby/RubyClass.java:895) ~[jruby.jar:?]
        at org.jruby.RubyClass$INVOKER$i$newInstance.call(org/jruby/RubyClass$INVOKER$i$newInstance.gen) ~[jruby.jar:?]
        at java.lang.invoke.LambdaForm$DMH/0x00000008012d3400.invokeVirtual(java/lang/invoke/LambdaForm$DMH) ~[?:?]
        at java.lang.invoke.LambdaForm$MH/0x00000008012d4c00.invoke(java/lang/invoke/LambdaForm$MH) ~[?:?]
        at java.lang.invoke.LambdaForm$MH/0x00000008012bb800.reinvoke(java/lang/invoke/LambdaForm$MH) ~[?:?]
        at java.lang.invoke.LambdaForm$MH/0x00000008012bbc00.guard(java/lang/invoke/LambdaForm$MH) ~[?:?]
        at java.lang.invoke.LambdaForm$MH/0x00000008012bb800.reinvoke(java/lang/invoke/LambdaForm$MH) ~[?:?]
        at java.lang.invoke.LambdaForm$MH/0x00000008012bbc00.guard(java/lang/invoke/LambdaForm$MH) ~[?:?]
        at java.lang.invoke.Invokers$Holder.linkToCallSite(java/lang/invoke/Invokers$Holder) ~[?:?]
        at usr.share.logstash.vendor.bundle.jruby.$2_dot_6_dot_0.gems.manticore_minus_0_dot_9_dot_1_minus_java.lib.manticore.client.request_from_options(/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/manticore-0.9.1-java/lib/manticore/client.rb:536) ~[?:?]
        at java.lang.invoke.DirectMethodHandle$Holder.invokeStatic(java/lang/invoke/DirectMethodHandle$Holder) ~[?:?]
        at java.lang.invoke.LambdaForm$MH/0x0000000801360400.invoke(java/lang/invoke/LambdaForm$MH) ~[?:?]
        at java.lang.invoke.LambdaForm$MH/0x00000008012be000.reinvoke(java/lang/invoke/LambdaForm$MH) ~[?:?]
        at java.lang.invoke.LambdaForm$MH/0x00000008012be400.guard(java/lang/invoke/LambdaForm$MH) ~[?:?]
        at java.lang.invoke.LambdaForm$MH/0x00000008012be000.reinvoke(java/lang/invoke/LambdaForm$MH) ~[?:?]
        at java.lang.invoke.LambdaForm$MH/0x00000008012be400.guard(java/lang/invoke/LambdaForm$MH) ~[?:?]

Agreed. I see the JSON streaming in is not just 0 to 4 but 0 to 99.
I cannot paste the event.original and the HTTP response here, as the JSON is super lengthy with 99 alerts chained together.
But the schema is as posted in the previous post above.

Schema: reply > alerts > 0 to 99 > fields...

java.lang.OutOfMemoryError: Java heap space

What is the heap size you are using for Logstash? Maybe you will need to increase it.

Also, as I said before, unless you share the original event you are receiving, it is impossible to help you further, as there is no sample data to try to replicate your issue.

"tags" => [
[0] "_http_request_failure",
[1] "_split_type_failure"
]

This means that your HTTP request failed, and since it failed, the field [reply][alerts] either does not exist or is not an array.
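
One possible way to keep the split failure tag out of otherwise healthy events is to guard the split with a conditional. A minimal sketch, assuming failed requests are tagged with _http_request_failure as in your output above:

filter {
  # Only attempt the split when the request succeeded and the alerts array is present
  if "_http_request_failure" not in [tags] and [reply][alerts] {
    split { field => "[reply][alerts]" }
  }
}

This won't fix the failing request itself, but it avoids the _split_type_failure on events where there is nothing to split.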

Oooo. It works!

After I adjusted the heap size from the default 512m to 4g for Logstash and Kibana, Logstash managed to process the split without the heap space error:

split { field => "[reply][alerts]" }

From one record to more than 100 records now in Kibana. Then another error hit:

handleSearchError@http://localhost:5601/55434/bundles/plugin/data/kibana/data.plugin.js:1:409692
search/</<@http://localhost:5601/55434/bundles/plugin/data/kibana/data.plugin.js:1:412433
a/</s<@http://localhost:5601/55434/bundles/kbn-ui-shared-deps-npm/kbn-ui-shared-deps-npm.dll.js:364:106267
t/s._error<@http://localhost:5601/55434/bundles/kbn-ui-shared-deps-npm/kbn-ui-shared-deps-npm.dll.js:6:29525
__kbnSharedDeps_npm__</f</t.prototype.error@http://localhost:5601/55434/bundles/kbn-ui-shared-deps-npm/kbn-ui-shared-deps-npm.dll.js:334:88987
__kbnSharedDeps_npm__</c</t.prototype.error/<@http://localhost:5601/55434/bundles/kbn-ui-shared-deps-npm/kbn-ui-shared-deps-npm.dll.js:329:46099
o@http://localhost:5601/55434/bundles/kbn-ui-shared-deps-npm/kbn-ui-shared-deps-npm.dll.js:334:252982
__kbnSharedDeps_npm__</c</t.prototype.error@http://localhost:5601/55434/bundles/kbn-ui-shared-deps-npm/kbn-ui-shared-deps-npm.dll.js:329:45966
error@http://localhost:5601/55434/bundles/kbn-ui-shared-deps-npm/kbn-ui-shared-deps-npm.dll.js:334:430606
__kbnSharedDeps_npm__</m</e.prototype.error@http://localhost:5601/55434/bundles/kbn-ui-shared-deps-npm/kbn-ui-shared-deps-npm.dll.js:334:89793
__kbnSharedDeps_npm__</f</t.prototype._error@http://localhost:5601/55434/bundles/kbn-ui-shared-deps-npm/kbn-ui-shared-deps-npm.dll.js:334:89350
__kbnSharedDeps_npm__</f</t.prototype.error@http://localhost:5601/55434/bundles/kbn-ui-shared-deps-npm/kbn-ui-shared-deps-npm.dll.js:334:88987
o@http://localhost:5601/55434/bundles/kbn-ui-shared-deps-npm/kbn-ui-shared-deps-npm.dll.js:364:254482
__kbnSharedDeps_npm__</l</e.prototype._trySubscribe@http://localhost:5601/55434/bundles/kbn-ui-shared-deps-npm/kbn-ui-shared-deps-npm.dll.js:272:15794
__kbnSharedDeps_npm__</l</e.prototype.subscribe/<@http://localhost:5601/55434/bundles/kbn-ui-shared-deps-npm/kbn-ui-shared-deps-npm.dll.js:272:15716
o@http://localhost:5601/55434/bundles/kbn-ui-shared-deps-npm/kbn-ui-shared-deps-npm.dll.js:334:252982
__kbnSharedDeps_npm__</l</e.prototype.subscribe@http://localhost:5601/55434/bundles/kbn-ui-shared-deps-npm/kbn-ui-shared-deps-npm.dll.js:272:15631
a/</s<@http://localhost:5601/55434/bundles/kbn-ui-shared-deps-npm/kbn-ui-shared-deps-npm.dll.js:364:106309
t/s._error<@http://localhost:5601/55434/bundles/kbn-ui-shared-deps-npm/kbn-ui-shared-deps-npm.dll.js:6:29525
__kbnSharedDeps_npm__</f</t.prototype.error@http://localhost:5601/55434/bundles/kbn-ui-shared-deps-npm/kbn-ui-shared-deps-npm.dll.js:334:88987
s/</<@http://localhost:5601/55434/bundles/kbn-ui-shared-deps-npm/kbn-ui-shared-deps-npm.dll.js:364:125124
t/s._error<@http://localhost:5601/55434/bundles/kbn-ui-shared-deps-npm/kbn-ui-shared-deps-npm.dll.js:6:29525
__kbnSharedDeps_npm__</f</t.prototype.error@http://localhost:5601/55434/bundles/kbn-ui-shared-deps-npm/kbn-ui-shared-deps-npm.dll.js:334:88987
__kbnSharedDeps_npm__</f</t.prototype._error@http://localhost:5601/55434/bundles/kbn-ui-shared-deps-npm/kbn-ui-shared-deps-npm.dll.js:334:89350
__kbnSharedDeps_npm__</f</t.prototype.error@http://localhost:5601/55434/bundles/kbn-ui-shared-deps-npm/kbn-ui-shared-deps-npm.dll.js:334:88987
s/</<@http://localhost:5601/55434/bundles/kbn-ui-shared-deps-npm/kbn-ui-shared-deps-npm.dll.js:364:125124
t/s._error<@http://localhost:5601/55434/bundles/kbn-ui-shared-deps-npm/kbn-ui-shared-deps-npm.dll.js:6:29525
__kbnSharedDeps_npm__</f</t.prototype.error@http://localhost:5601/55434/bundles/kbn-ui-shared-deps-npm/kbn-ui-shared-deps-npm.dll.js:334:88987
__kbnSharedDeps_npm__</f</t.prototype._error@http://localhost:5601/55434/bundles/kbn-ui-shared-deps-npm/kbn-ui-shared-deps-npm.dll.js:334:89350
__kbnSharedDeps_npm__</f</t.prototype.error@http://localhost:5601/55434/bundles/kbn-ui-shared-deps-npm/kbn-ui-shared-deps-npm.dll.js:334:88987
__kbnSharedDeps_npm__</f</t.prototype._error@http://localhost:5601/55434/bundles/kbn-ui-shared-deps-npm/kbn-ui-shared-deps-npm.dll.js:334:89350
__kbnSharedDeps_npm__</f</t.prototype.error@http://localhost:5601/55434/bundles/kbn-ui-shared-deps-npm/kbn-ui-shared-deps-npm.dll.js:334:88987
v/</<@http://localhost:5601/55434/bundles/kbn-ui-shared-deps-npm/kbn-ui-shared-deps-npm.dll.js:272:7745

Another licensing error

[2022-10-21T01:45:35.849+00:00][WARN ][plugins.licensing] License information could not be obtained from Elasticsearch due to ConnectionError: getaddrinfo ENOTFOUND elasticsearch error

Troubleshooting in progress. Trying to poll the data in batches instead of "everything".
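
If the API being polled supports paging parameters inside request_data (for example, hypothetical search_from / search_to offsets; the exact names depend on the API and are an assumption here), the request body could be narrowed to fetch one batch per poll. A rough sketch:

input {
  http_poller {
    urls => {
      ussite => {
        method => "POST"
        url => "XXXXX"
        headers => {
          "x-xdr-auth-id" => "9"
          "Authorization" => "XXXXXXX"
        }
        # search_from / search_to are assumed paging parameters; verify against the API docs
        body => '{ "request_data": { "search_from": 0, "search_to": 50 } }'
      }
    }
    tags => "us_site"
    request_timeout => 30
    schedule => { cron => "* * * * * UTC" }
  }
}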

Realized it is ES that crashed.

{"@timestamp":"2022-10-21T01:41:07.129Z", "log.level":"ERROR", "message":"failed to store async-search [cmdP7NPVRFuS8Dd2jGQoqw]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[9fd38c1674f8][search][T#3]","log.logger":"org.elasticsearch.xpack.core.async.AsyncTaskIndexService","trace.id":"7a077afe67cf75ecb79542d811d080bc","elasticsearch.cluster.uuid":"OdZ2nhiUS0eRTsqm7LyMpA","elasticsearch.node.id":"lrho6hODTkK4zqCFfCK7mw","elasticsearch.node.name":"9fd38c1674f8","elasticsearch.cluster.name":"docker-cluster","error.type":"java.lang.IllegalArgumentException","error.message":"Can't store an async search response larger than [10485760] bytes. This limit can be set by changing the [search.max_async_search_response_size] setting.","error.stack_trace":"java.lang.IllegalArgumentException: Can't store an async search response larger than [10485760] bytes. This limit can be set by changing the [search.max_async_search_response_size] setting.\n\tat org.elasticsearch.xcore@8.4.1/



org.elasticsearch.xpack.core.async.AsyncTaskIndexService$ReleasableBytesStreamOutputWithLimit.ensureCapacity(AsyncTaskIndexService.java:634)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.common.io.stream.BytesStreamOutput.writeBytes(BytesStreamOutput.java:86)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.common.io.stream.StreamOutput.write(StreamOutput.java:504)\n\tat org.elasticsearch.base@8.4.1/org.elasticsearch.core.Streams$1.write(Streams.java:92)\n\tat java.base/java.util.Base64$EncOutputStream.write(Base64.java:973)\n\tat org.elasticsearch.base@8.4.1/org.elasticsearch.core.Streams$1.write(Streams.java:92)\n\tat java.base/java.util.zip.DeflaterOutputStream.deflate(DeflaterOutputStream.java:261)\n\tat java.base/java.util.zip.DeflaterOutputStream.write(DeflaterOutputStream.java:210)\n\tat java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:81)\n\tat java.base/java.io.BufferedOutputStream.write(BufferedOutputStream.java:127)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.common.io.stream.OutputStreamStreamOutput.writeBytes(OutputStreamStreamOutput.java:29)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.common.io.stream.StreamOutput.writeBytes(StreamOutput.java:121)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.common.io.stream.StreamOutput.writeString(StreamOutput.java:433)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.common.io.stream.StreamOutput.writeGenericString(StreamOutput.java:782)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.common.io.stream.StreamOutput.lambda$static$6(StreamOutput.java:649)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.common.io.stream.StreamOutput.writeGenericValue(StreamOutput.java:820)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.common.io.stream.StreamOutput.writeCollection(StreamOutput.java:1160)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.common.document.DocumentField.writeTo(DocumentField.java:118)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.search.SearchHit.lambda$writeTo$1(SearchHit.java:259)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.common.io.stream.StreamOutput.writeMap(StreamOutput.java:624)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.search.SearchHit.writeTo(SearchHit.java:259)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.common.io.stream.StreamOutput.lambda$writeArray$31(StreamOutput.java:939)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.common.io.stream.StreamOutput.writeArray(StreamOutput.java:916)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.common.io.stream.StreamOutput.writeArray(StreamOutput.java:939)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.search.SearchHits.writeTo(SearchHits.java:100)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.search.internal.InternalSearchResponse.writeTo(InternalSearchResponse.java:73)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.action.search.SearchResponse.writeTo(SearchResponse.java:434)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.common.io.stream.StreamOutput.writeOptionalWriteable(StreamOutput.java:953)\n\tat org.elasticsearch.xcore@8.4.1/org.elasticsearch.xpack.core.search.action.AsyncSearchResponse.writeTo(AsyncSearchResponse.java:96)\n\tat org.elasticsearch.xcore@8.4.1/org.elasticsearch.xpack.core.async.AsyncTaskIndexService.writeResponse(AsyncTaskIndexService.java:578)\n\tat 
org.elasticsearch.xcore@8.4.1/org.elasticsearch.xpack.core.async.AsyncTaskIndexService.lambda$updateResponse$3(AsyncTaskIndexService.java:291)\n\tat org.elasticsearch.xcontent@8.4.1/org.elasticsearch.xcontent.XContentBuilder.lambda$directFieldAsBase64$24(XContentBuilder.java:1212)\n\tat org.elasticsearch.xcontent.impl@8.4.1/org.elasticsearch.xcontent.provider.json.JsonXContentGenerator.writeDirectField(JsonXContentGenerator.java:557)\n\tat org.elasticsearch.xcontent@8.4.1/org.elasticsearch.xcontent.XContentBuilder.directFieldAsBase64(XContentBuilder.java:1206)\n\tat org.elasticsearch.xcore@8.4.1/org.elasticsearch.xpack.core.async.AsyncTaskIndexService.updateResponse(AsyncTaskIndexService.java:291)\n\tat org.elasticsearch.xcore@8.4.1/org.elasticsearch.xpack.core.async.AsyncTaskIndexService.updateResponse(AsyncTaskIndexService.java:270)\n\tat org.elasticsearch.xpack.search.TransportSubmitAsyncSearchAction.onFinalResponse(TransportSubmitAsyncSearchAction.java:204)\n\tat org.elasticsearch.xpack.search.TransportSubmitAsyncSearchAction$1$1.lambda$onResponse$1(TransportSubmitAsyncSearchAction.java:106)\n\tat org.elasticsearch.xpack.search.AsyncSearchTask.executeCompletionListeners(AsyncSearchTask.java:307)\n\tat org.elasticsearch.xpack.search.AsyncSearchTask$Listener.onResponse(AsyncSearchTask.java:446)\n\tat org.elasticsearch.xpack.search.AsyncSearchTask$Listener.onResponse(AsyncSearchTask.java:367)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.action.support.ContextPreservingActionListener.onResponse(ContextPreservingActionListener.java:31)\n\tat org.elasticsearch.security@8.4.1/org.elasticsearch.xpack.security.action.filter.SecurityActionFilter.lambda$applyInternal$2(SecurityActionFilter.java:165)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.action.ActionListener$DelegatingFailureActionListener.onResponse(ActionListener.java:245)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.action.ActionListener$RunAfterActionListener.onResponse(ActionListener.java:367)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.action.search.AbstractSearchAsyncAction.sendSearchResponse(AbstractSearchAsyncAction.java:722)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.action.search.FetchLookupFieldsPhase.run(FetchLookupFieldsPhase.java:75)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.action.search.AbstractSearchAsyncAction.executePhase(AbstractSearchAsyncAction.java:469)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.action.search.AbstractSearchAsyncAction.executeNextPhase(AbstractSearchAsyncAction.java:463)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.action.search.ExpandSearchPhase.onPhaseDone(ExpandSearchPhase.java:151)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.action.search.ExpandSearchPhase.run(ExpandSearchPhase.java:105)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.action.search.AbstractSearchAsyncAction.executePhase(AbstractSearchAsyncAction.java:469)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.action.search.AbstractSearchAsyncAction.executeNextPhase(AbstractSearchAsyncAction.java:463)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.action.search.FetchSearchPhase.moveToNextPhase(FetchSearchPhase.java:271)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.action.search.FetchSearchPhase.lambda$innerRun$2(FetchSearchPhase.java:108)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.action.search.FetchSearchPhase.innerRun(FetchSearchPhase.java:117)\n\tat 
org.elasticsearch.server@8.4.1/org.elasticsearch.action.search.FetchSearchPhase$1.doRun(FetchSearchPhase.java:90)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:33)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:769)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)\n\tat java.base/java.lang.Thread.run(Thread.java:833)\n\tSuppressed: java.lang.IllegalArgumentException: Can't store an async search response larger than [10485760] bytes. This limit can be set by changing the [search.max_async_search_response_size] setting.\n\t\tat org.elasticsearch.xcore@8.4.1/org.elasticsearch.xpack.core.async.AsyncTaskIndexService$ReleasableBytesStreamOutputWithLimit.ensureCapacity(AsyncTaskIndexService.java:634)\n\t\tat org.elasticsearch.server@8.4.1/org.elasticsearch.common.io.stream.BytesStreamOutput.writeBytes(BytesStreamOutput.java:86)\n\t\tat org.elasticsearch.server@8.4.1/org.elasticsearch.common.io.stream.StreamOutput.write(StreamOutput.java:504)\n\t\tat org.elasticsearch.base@8.4.1/org.elasticsearch.core.Streams$1.write(Streams.java:92)\n\t\tat java.base/java.util.Base64$EncOutputStream.write(Base64.java:973)\n\t\tat org.elasticsearch.base@8.4.1/org.elasticsearch.core.Streams$1.write(Streams.java:92)\n\t\tat java.base/java.util.zip.DeflaterOutputStream.deflate(DeflaterOutputStream.java:261)\n\t\tat java.base/java.util.zip.DeflaterOutputStream.finish(DeflaterOutputStream.java:226)\n\t\tat java.base/java.util.zip.DeflaterOutputStream.close(DeflaterOutputStream.java:244)\n\t\tat org.elasticsearch.server@8.4.1/org.elasticsearch.common.compress.DeflateCompressor$2.close(DeflateCompressor.java:186)\n\t\tat java.base/java.io.FilterOutputStream.close(FilterOutputStream.java:191)\n\t\tat org.elasticsearch.server@8.4.1/org.elasticsearch.common.io.stream.OutputStreamStreamOutput.close(OutputStreamStreamOutput.java:39)\n\t\tat org.elasticsearch.xcore@8.4.1/org.elasticsearch.xpack.core.async.AsyncTaskIndexService.writeResponse(AsyncTaskIndexService.java:576)\n\t\t... 34 more\n\t\tSuppressed: java.lang.IllegalArgumentException: Can't store an async search response larger than [10485760] bytes. 
This limit can be set by changing the [search.max_async_search_response_size] setting.\n\t\t\tat org.elasticsearch.xcore@8.4.1/org.elasticsearch.xpack.core.async.AsyncTaskIndexService$ReleasableBytesStreamOutputWithLimit.ensureCapacity(AsyncTaskIndexService.java:634)\n\t\t\tat org.elasticsearch.server@8.4.1/org.elasticsearch.common.io.stream.BytesStreamOutput.writeBytes(BytesStreamOutput.java:86)\n\t\t\tat org.elasticsearch.server@8.4.1/org.elasticsearch.common.io.stream.StreamOutput.write(StreamOutput.java:504)\n\t\t\tat org.elasticsearch.base@8.4.1/org.elasticsearch.core.Streams$1.write(Streams.java:92)\n\t\t\tat java.base/java.util.Base64$EncOutputStream.write(Base64.java:973)\n\t\t\tat org.elasticsearch.base@8.4.1/org.elasticsearch.core.Streams$1.write(Streams.java:92)\n\t\t\tat java.base/java.util.zip.DeflaterOutputStream.deflate(DeflaterOutputStream.java:261)\n\t\t\tat java.base/java.util.zip.DeflaterOutputStream.write(DeflaterOutputStream.java:210)\n\t\t\tat java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:81)\n\t\t\tat java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:142)\n\t\t\tat java.base/java.io.FilterOutputStream.close(FilterOutputStream.java:182)\n\t\t\t... 36 more\n"}

ERROR: Elasticsearch exited unexpectedly

Can't store an async search response larger than [10485760] bytes. This limit can be set by changing the [search.max_async_search_response_size] setting.

From the Elasticsearch documentation: https://www.elastic.co/guide/en/elasticsearch/reference/current/async-search.html

After some research, I removed this code and the async error is gone. From 100 hits to 200 hits.

id => "http_pa"

But now I'm faced with another error. Still resolving this...

The content length (612290936) is bigger than the maximum allowed string (536870888)

