Looking at the amount of data stored under one object: if I flatten [0], [1], [2], [3], [4], could this potentially create a huge amount of data and cause an overflow?
Still exploring whether there is any other workaround for this.
Still cannot get past this parsing. There will always be only 1 hit, with the latest timestamp, after the fetch. I have removed the entire "filter" section for debugging.
How do I get the nested part in red into records in Kibana?
Echoing Leandro here… if the desire is a 1:1 mapping of reply.alerts[alert] to Elasticsearch documents, then you need to split on reply.alerts.
Usually, when applying such splits in initial ingestion POCs, we'll specify a target to make things easier to track, and then drop the original array of objects (reply.alerts, which will contain duplicate/extraneous data after a successful split).
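A minimal sketch of that approach, assuming the array lives at `[reply][alerts]` as described in this thread (field names are illustrative):

```
filter {
  split {
    # Emit one event per element of the reply.alerts array
    field  => "[reply][alerts]"
    # Place each element under its own top-level field for easier tracking
    target => "alert"
  }
  mutate {
    # Drop the original array, which is redundant after a successful split
    remove_field => ["[reply][alerts]"]
  }
}
```

With `target` set, each resulting event carries a single alert under `[alert]`, and removing the source array keeps per-event size (and heap usage) down.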
"tags" => [
[0] "_http_request_failure",
[1] "_split_type_failure"
]
}
[2022-10-16T09:03:31,458][INFO ][logstash.outputs.file ][main][64a88add8f32dca0d184380e90f5de1d2d318a835b8d679a9eea8712f8197028] Closing file /usr/share/logstash/elogs/output/getusalerts_logs.txt
java.lang.OutOfMemoryError: Java heap space
Dumping heap to java_pid1.hprof ...
Heap dump file created [271942270 bytes in 1.354 secs]
[2022-10-16T09:04:10,386][FATAL][org.logstash.Logstash ][main] uncaught error (in thread [main]>worker0)
java.lang.OutOfMemoryError: Java heap space
at java.nio.HeapCharBuffer.<init>(java/nio/HeapCharBuffer.java:64) ~[?:?]
at java.nio.CharBuffer.allocate(java/nio/CharBuffer.java:363) ~[?:?]
at java.nio.charset.CharsetDecoder.decode(java/nio/charset/CharsetDecoder.java:799) ~[?:?]
at java.nio.charset.Charset.decode(java/nio/charset/Charset.java:814) ~[?:?]
at org.jruby.RubyEncoding.decodeUTF8(org/jruby/RubyEncoding.java:308) ~[jruby.jar:?]
at org.jruby.RubyString.decodeString(org/jruby/RubyString.java:814) ~[jruby.jar:?]
at org.jruby.RubyString.toJava(org/jruby/RubyString.java:6627) ~[jruby.jar:?]
at org.jruby.RubyClass.new(org/jruby/RubyClass.java:895) ~[jruby.jar:?]
at org.jruby.RubyClass$INVOKER$i$newInstance.call(org/jruby/RubyClass$INVOKER$i$newInstance.gen) ~[jruby.jar:?]
at java.lang.invoke.LambdaForm$DMH/0x00000008012d3400.invokeVirtual(java/lang/invoke/LambdaForm$DMH) ~[?:?]
at java.lang.invoke.LambdaForm$MH/0x00000008012d4c00.invoke(java/lang/invoke/LambdaForm$MH) ~[?:?]
at java.lang.invoke.LambdaForm$MH/0x00000008012bb800.reinvoke(java/lang/invoke/LambdaForm$MH) ~[?:?]
at java.lang.invoke.LambdaForm$MH/0x00000008012bbc00.guard(java/lang/invoke/LambdaForm$MH) ~[?:?]
at java.lang.invoke.LambdaForm$MH/0x00000008012bb800.reinvoke(java/lang/invoke/LambdaForm$MH) ~[?:?]
at java.lang.invoke.LambdaForm$MH/0x00000008012bbc00.guard(java/lang/invoke/LambdaForm$MH) ~[?:?]
at java.lang.invoke.Invokers$Holder.linkToCallSite(java/lang/invoke/Invokers$Holder) ~[?:?]
at usr.share.logstash.vendor.bundle.jruby.$2_dot_6_dot_0.gems.manticore_minus_0_dot_9_dot_1_minus_java.lib.manticore.client.request_from_options(/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/manticore-0.9.1-java/lib/manticore/client.rb:536) ~[?:?]
at java.lang.invoke.DirectMethodHandle$Holder.invokeStatic(java/lang/invoke/DirectMethodHandle$Holder) ~[?:?]
at java.lang.invoke.LambdaForm$MH/0x0000000801360400.invoke(java/lang/invoke/LambdaForm$MH) ~[?:?]
at java.lang.invoke.LambdaForm$MH/0x00000008012be000.reinvoke(java/lang/invoke/LambdaForm$MH) ~[?:?]
at java.lang.invoke.LambdaForm$MH/0x00000008012be400.guard(java/lang/invoke/LambdaForm$MH) ~[?:?]
at java.lang.invoke.LambdaForm$MH/0x00000008012be000.reinvoke(java/lang/invoke/LambdaForm$MH) ~[?:?]
at java.lang.invoke.LambdaForm$MH/0x00000008012be400.guard(java/lang/invoke/LambdaForm$MH) ~[?:?]
Agree. I see the JSON streaming in is not just 0 to 4, but 0 to 99.
I cannot paste event.original and the HTTP response, as the JSON is super lengthy with 99 alerts chained together.
But the schema is as posted in the previous post above.
What heap size are you using for Logstash? You may need to increase it.
Also, as I said before, unless you share the original event you are receiving, it is impossible to help you further, as there is no sample data with which to try to replicate your issue.
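For reference, the Logstash heap is set in `config/jvm.options` (or via the `LS_JAVA_OPTS` environment variable). A sketch, with illustrative values — size this to your host's available memory:

```
# config/jvm.options
# Set initial and maximum heap to the same value to avoid resize pauses.
-Xms2g
-Xmx2g
```

The default of 1g is often too small when a single event carries an array of ~100 nested objects that then gets split.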
[2022-10-21T01:45:35.849+00:00][WARN ][plugins.licensing] License information could not be obtained from Elasticsearch due to ConnectionError: getaddrinfo ENOTFOUND elasticsearch error
Troubleshooting in progress. Trying to poll the data in batches instead of everything at once.
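One common way to batch an Elasticsearch pull is `search_after` pagination with a fixed page size. A hedged sketch — the index name, sort fields, and placeholder cursor values below are illustrative, not taken from this thread:

```
POST /my-alerts-index/_search
{
  "size": 1000,
  "sort": [
    { "@timestamp": "asc" },
    { "_id": "asc" }
  ],
  "search_after": [ "<last @timestamp from previous page>", "<last _id from previous page>" ]
}
```

Each page is bounded at 1000 hits, and the `search_after` values from the last hit of one page seed the next request, so no single response has to carry everything.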
{"@timestamp":"2022-10-21T01:41:07.129Z", "log.level":"ERROR", "message":"failed to store async-search [cmdP7NPVRFuS8Dd2jGQoqw]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[9fd38c1674f8][search][T#3]","log.logger":"org.elasticsearch.xpack.core.async.AsyncTaskIndexService","trace.id":"7a077afe67cf75ecb79542d811d080bc","elasticsearch.cluster.uuid":"OdZ2nhiUS0eRTsqm7LyMpA","elasticsearch.node.id":"lrho6hODTkK4zqCFfCK7mw","elasticsearch.node.name":"9fd38c1674f8","elasticsearch.cluster.name":"docker-cluster","error.type":"java.lang.IllegalArgumentException","error.message":"Can't store an async search response larger than [10485760] bytes. This limit can be set by changing the [search.max_async_search_response_size] setting.","error.stack_trace":"java.lang.IllegalArgumentException: Can't store an async search response larger than [10485760] bytes. This limit can be set by changing the [search.max_async_search_response_size] setting.\n\tat org.elasticsearch.xcore@8.4.1/
org.elasticsearch.xpack.core.async.AsyncTaskIndexService$ReleasableBytesStreamOutputWithLimit.ensureCapacity(AsyncTaskIndexService.java:634)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.common.io.stream.BytesStreamOutput.writeBytes(BytesStreamOutput.java:86)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.common.io.stream.StreamOutput.write(StreamOutput.java:504)\n\tat org.elasticsearch.base@8.4.1/org.elasticsearch.core.Streams$1.write(Streams.java:92)\n\tat java.base/java.util.Base64$EncOutputStream.write(Base64.java:973)\n\tat org.elasticsearch.base@8.4.1/org.elasticsearch.core.Streams$1.write(Streams.java:92)\n\tat java.base/java.util.zip.DeflaterOutputStream.deflate(DeflaterOutputStream.java:261)\n\tat java.base/java.util.zip.DeflaterOutputStream.write(DeflaterOutputStream.java:210)\n\tat java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:81)\n\tat java.base/java.io.BufferedOutputStream.write(BufferedOutputStream.java:127)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.common.io.stream.OutputStreamStreamOutput.writeBytes(OutputStreamStreamOutput.java:29)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.common.io.stream.StreamOutput.writeBytes(StreamOutput.java:121)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.common.io.stream.StreamOutput.writeString(StreamOutput.java:433)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.common.io.stream.StreamOutput.writeGenericString(StreamOutput.java:782)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.common.io.stream.StreamOutput.lambda$static$6(StreamOutput.java:649)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.common.io.stream.StreamOutput.writeGenericValue(StreamOutput.java:820)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.common.io.stream.StreamOutput.writeCollection(StreamOutput.java:1160)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.common.document.DocumentField.writeTo(DocumentField.java:118)\n\tat 
org.elasticsearch.server@8.4.1/org.elasticsearch.search.SearchHit.lambda$writeTo$1(SearchHit.java:259)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.common.io.stream.StreamOutput.writeMap(StreamOutput.java:624)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.search.SearchHit.writeTo(SearchHit.java:259)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.common.io.stream.StreamOutput.lambda$writeArray$31(StreamOutput.java:939)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.common.io.stream.StreamOutput.writeArray(StreamOutput.java:916)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.common.io.stream.StreamOutput.writeArray(StreamOutput.java:939)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.search.SearchHits.writeTo(SearchHits.java:100)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.search.internal.InternalSearchResponse.writeTo(InternalSearchResponse.java:73)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.action.search.SearchResponse.writeTo(SearchResponse.java:434)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.common.io.stream.StreamOutput.writeOptionalWriteable(StreamOutput.java:953)\n\tat org.elasticsearch.xcore@8.4.1/org.elasticsearch.xpack.core.search.action.AsyncSearchResponse.writeTo(AsyncSearchResponse.java:96)\n\tat org.elasticsearch.xcore@8.4.1/org.elasticsearch.xpack.core.async.AsyncTaskIndexService.writeResponse(AsyncTaskIndexService.java:578)\n\tat org.elasticsearch.xcore@8.4.1/org.elasticsearch.xpack.core.async.AsyncTaskIndexService.lambda$updateResponse$3(AsyncTaskIndexService.java:291)\n\tat org.elasticsearch.xcontent@8.4.1/org.elasticsearch.xcontent.XContentBuilder.lambda$directFieldAsBase64$24(XContentBuilder.java:1212)\n\tat org.elasticsearch.xcontent.impl@8.4.1/org.elasticsearch.xcontent.provider.json.JsonXContentGenerator.writeDirectField(JsonXContentGenerator.java:557)\n\tat 
org.elasticsearch.xcontent@8.4.1/org.elasticsearch.xcontent.XContentBuilder.directFieldAsBase64(XContentBuilder.java:1206)\n\tat org.elasticsearch.xcore@8.4.1/org.elasticsearch.xpack.core.async.AsyncTaskIndexService.updateResponse(AsyncTaskIndexService.java:291)\n\tat org.elasticsearch.xcore@8.4.1/org.elasticsearch.xpack.core.async.AsyncTaskIndexService.updateResponse(AsyncTaskIndexService.java:270)\n\tat org.elasticsearch.xpack.search.TransportSubmitAsyncSearchAction.onFinalResponse(TransportSubmitAsyncSearchAction.java:204)\n\tat org.elasticsearch.xpack.search.TransportSubmitAsyncSearchAction$1$1.lambda$onResponse$1(TransportSubmitAsyncSearchAction.java:106)\n\tat org.elasticsearch.xpack.search.AsyncSearchTask.executeCompletionListeners(AsyncSearchTask.java:307)\n\tat org.elasticsearch.xpack.search.AsyncSearchTask$Listener.onResponse(AsyncSearchTask.java:446)\n\tat org.elasticsearch.xpack.search.AsyncSearchTask$Listener.onResponse(AsyncSearchTask.java:367)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.action.support.ContextPreservingActionListener.onResponse(ContextPreservingActionListener.java:31)\n\tat org.elasticsearch.security@8.4.1/org.elasticsearch.xpack.security.action.filter.SecurityActionFilter.lambda$applyInternal$2(SecurityActionFilter.java:165)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.action.ActionListener$DelegatingFailureActionListener.onResponse(ActionListener.java:245)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.action.ActionListener$RunAfterActionListener.onResponse(ActionListener.java:367)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.action.search.AbstractSearchAsyncAction.sendSearchResponse(AbstractSearchAsyncAction.java:722)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.action.search.FetchLookupFieldsPhase.run(FetchLookupFieldsPhase.java:75)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.action.search.AbstractSearchAsyncAction.executePhase(AbstractSearchAsyncAction.java:469)\n\tat 
org.elasticsearch.server@8.4.1/org.elasticsearch.action.search.AbstractSearchAsyncAction.executeNextPhase(AbstractSearchAsyncAction.java:463)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.action.search.ExpandSearchPhase.onPhaseDone(ExpandSearchPhase.java:151)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.action.search.ExpandSearchPhase.run(ExpandSearchPhase.java:105)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.action.search.AbstractSearchAsyncAction.executePhase(AbstractSearchAsyncAction.java:469)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.action.search.AbstractSearchAsyncAction.executeNextPhase(AbstractSearchAsyncAction.java:463)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.action.search.FetchSearchPhase.moveToNextPhase(FetchSearchPhase.java:271)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.action.search.FetchSearchPhase.lambda$innerRun$2(FetchSearchPhase.java:108)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.action.search.FetchSearchPhase.innerRun(FetchSearchPhase.java:117)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.action.search.FetchSearchPhase$1.doRun(FetchSearchPhase.java:90)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:33)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:769)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)\n\tat java.base/java.lang.Thread.run(Thread.java:833)\n\tSuppressed: 
java.lang.IllegalArgumentException: Can't store an async search response larger than [10485760] bytes. This limit can be set by changing the [search.max_async_search_response_size] setting.\n\t\tat org.elasticsearch.xcore@8.4.1/org.elasticsearch.xpack.core.async.AsyncTaskIndexService$ReleasableBytesStreamOutputWithLimit.ensureCapacity(AsyncTaskIndexService.java:634)\n\t\tat org.elasticsearch.server@8.4.1/org.elasticsearch.common.io.stream.BytesStreamOutput.writeBytes(BytesStreamOutput.java:86)\n\t\tat org.elasticsearch.server@8.4.1/org.elasticsearch.common.io.stream.StreamOutput.write(StreamOutput.java:504)\n\t\tat org.elasticsearch.base@8.4.1/org.elasticsearch.core.Streams$1.write(Streams.java:92)\n\t\tat java.base/java.util.Base64$EncOutputStream.write(Base64.java:973)\n\t\tat org.elasticsearch.base@8.4.1/org.elasticsearch.core.Streams$1.write(Streams.java:92)\n\t\tat java.base/java.util.zip.DeflaterOutputStream.deflate(DeflaterOutputStream.java:261)\n\t\tat java.base/java.util.zip.DeflaterOutputStream.finish(DeflaterOutputStream.java:226)\n\t\tat java.base/java.util.zip.DeflaterOutputStream.close(DeflaterOutputStream.java:244)\n\t\tat org.elasticsearch.server@8.4.1/org.elasticsearch.common.compress.DeflateCompressor$2.close(DeflateCompressor.java:186)\n\t\tat java.base/java.io.FilterOutputStream.close(FilterOutputStream.java:191)\n\t\tat org.elasticsearch.server@8.4.1/org.elasticsearch.common.io.stream.OutputStreamStreamOutput.close(OutputStreamStreamOutput.java:39)\n\t\tat org.elasticsearch.xcore@8.4.1/org.elasticsearch.xpack.core.async.AsyncTaskIndexService.writeResponse(AsyncTaskIndexService.java:576)\n\t\t... 34 more\n\t\tSuppressed: java.lang.IllegalArgumentException: Can't store an async search response larger than [10485760] bytes. 
This limit can be set by changing the [search.max_async_search_response_size] setting.\n\t\t\tat org.elasticsearch.xcore@8.4.1/org.elasticsearch.xpack.core.async.AsyncTaskIndexService$ReleasableBytesStreamOutputWithLimit.ensureCapacity(AsyncTaskIndexService.java:634)\n\t\t\tat org.elasticsearch.server@8.4.1/org.elasticsearch.common.io.stream.BytesStreamOutput.writeBytes(BytesStreamOutput.java:86)\n\t\t\tat org.elasticsearch.server@8.4.1/org.elasticsearch.common.io.stream.StreamOutput.write(StreamOutput.java:504)\n\t\t\tat org.elasticsearch.base@8.4.1/org.elasticsearch.core.Streams$1.write(Streams.java:92)\n\t\t\tat java.base/java.util.Base64$EncOutputStream.write(Base64.java:973)\n\t\t\tat org.elasticsearch.base@8.4.1/org.elasticsearch.core.Streams$1.write(Streams.java:92)\n\t\t\tat java.base/java.util.zip.DeflaterOutputStream.deflate(DeflaterOutputStream.java:261)\n\t\t\tat java.base/java.util.zip.DeflaterOutputStream.write(DeflaterOutputStream.java:210)\n\t\t\tat java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:81)\n\t\t\tat java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:142)\n\t\t\tat java.base/java.io.FilterOutputStream.close(FilterOutputStream.java:182)\n\t\t\t... 36 more\n"}
ERROR: Elasticsearch exited unexpectedly
Can't store an async search response larger than [10485760] bytes. This limit can be set by changing the [search.max_async_search_response_size] setting.
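`search.max_async_search_response_size` is a dynamic cluster setting, so it can be raised via the cluster settings API. A sketch — the `20mb` value is illustrative, and note that larger stored responses mean more heap pressure, so batching the query is usually the better fix:

```
PUT _cluster/settings
{
  "persistent": {
    "search.max_async_search_response_size": "20mb"
  }
}
```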