NoShardAvailableActionException after Logstash tries to index large data

My goal is to migrate data from a PostgreSQL database to Elasticsearch.

I am using the Logstash JDBC input plugin to achieve this.

Issue: the Elasticsearch instance breaks with a larger database (~700k records).

NOTE: For a small database (say, 500 records), it works perfectly fine.

My Logstash conf file looks like this:

```
input {
    jdbc {
        jdbc_connection_string => "<db_url>"
        jdbc_user => "<user>"
        jdbc_password => "<password>"
        jdbc_validate_connection => true
        jdbc_driver_library => "/usr/share/logstash/logstash-core/lib/jars/postgresql-42.2.23.jar"
        jdbc_driver_class => "org.postgresql.Driver"
        jdbc_fetch_size => 10000
        statement => "SELECT * FROM dbo.candidate"
        lowercase_column_names => false
    }
}
output {
    elasticsearch {
        hosts => ["http://locales:9200"]
        user => "elastic"
        password => "password"
        index => "staging-logstash-candidate"
        document_id => "%{id}"
        ecs_compatibility => disabled
    }

    stdout { codec => rubydebug }
}
```
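As a sanity check, the output endpoint and index (host and index name taken from the config above) can be queried directly before and after a run, to confirm the cluster is reachable and healthy:

```
GET /_cluster/health?pretty
GET /_cat/indices/staging-logstash-candidate?v
```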

After that, the Elasticsearch instance stops and gives the following error:

> "Caused by: org.elasticsearch.action.NoShardAvailableActionException: [d4887f12caa3][172.18.0.2:9300][indices:data/read/open_reader_context]",
> "at org.elasticsearch.action.search.AbstractSearchAsyncAction.onShardFailure(AbstractSearchAsyncAction.java:500) ~[elasticsearch-7.14.0.jar:7.14.0]",
> "at org.elasticsearch.action.search.AbstractSearchAsyncAction.onShardFailure(AbstractSearchAsyncAction.java:449) [elasticsearch-7.14.0.jar:7.14.0]",
> "... 367 more"] }
> {"type": "server", "timestamp": "2021-09-27T06:06:34,206Z", "level": "WARN", "component": "r.suppressed", "cluster.name": "docker-cluster", "node.name": "d4887f12caa3", "message": "path: /.kibana_7.14.0_001/_pit, params: {index=.kibana_7.14.0_001, keep_alive=10m}", "cluster.uuid": "UqtVRRidS-aB5CtPXxlRWw", "node.id": "Op6G4UKfSeeBYRh4QFnkTQ" ,
> "stacktrace": ["org.elasticsearch.action.search.SearchPhaseExecutionException: all shards failed",
> "at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseFailure(AbstractSearchAsyncAction.java:661) [elasticsearch-7.14.0.jar:7.14.0]",
> "at org.elasticsearch.action.search.AbstractSearchAsyncAction.executeNextPhase(AbstractSearchAsyncAction.java:384) [elasticsearch-7.14.0.jar:7.14.0]",
> "at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseDone(AbstractSearchAsyncAction.java:693) [elasticsearch-7.14.0.jar:7.14.0]",
> "at org.elasticsearch.action.search.AbstractSearchAsyncAction.onShardFailure(AbstractSearchAsyncAction.java:467) [elasticsearch-7.14.0.jar:7.14.0]",
> "at org.elasticsearch.action.search.AbstractSearchAsyncAction.access$000(AbstractSearchAsyncAction.java:62) [elasticsearch-7.14.0.jar:7.14.0]",
> "at org.elasticsearch.action.search.AbstractSearchAsyncAction$1.onFailure(AbstractSearchAsyncAction.java:316) [elasticsearch-7.14.0.jar:7.14.0]",
> "at org.elasticsearch.action.ActionListenerResponseHandler.handleException(ActionListenerResponseHandler.java:48) [elasticsearch-7.14.0.jar:7.14.0]",
> "at org.elasticsearch.transport.TransportService$5.handleException(TransportService.java:743) [elasticsearch-7.14.0.jar:7.14.0]",
> "at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1288) [elasticsearch-7.14.0.jar:7.14.0]",
> "at org.elasticsearch.transport.TransportService$DirectResponseChannel.processException(TransportService.java:1397) [elasticsearch-7.14.0.jar:7.14.0]",
> "at org.elasticsearch.transport.TransportService$DirectResponseChannel.sendResponse(TransportService.java:1371) [elasticsearch-7.14.0.jar:7.14.0]",
> "at org.elasticsearch.transport.TaskTransportChannel.sendResponse(TaskTransportChannel.java:50) [elasticsearch-7.14.0.jar:7.14.0]",
> "at org.elasticsearch.transport.TransportChannel.sendErrorResponse(TransportChannel.java:45) [elasticsearch-7.14.0.jar:7.14.0]",
> "at org.elasticsearch.action.support.ChannelActionListener.onFailure(ChannelActionListener.java:40) [elasticsearch-7.14.0.jar:7.14.0]",
> "at org.elasticsearch.action.ActionListener$Delegating.onFailure(ActionListener.java:66) [elasticsearch-7.14.0.jar:7.14.0]",
> "at org.elasticsearch.search.SearchService.lambda$openReaderContext$10(SearchService.java:760) [elasticsearch-7.14.0.jar:7.14.0]",
> "at org.elasticsearch.index.shard.IndexShard.awaitShardSearchActive(IndexShard.java:3559) [elasticsearch-7.14.0.jar:7.14.0]",
> "at org.elasticsearch.search.SearchService.openReaderContext(SearchService.java:743) [elasticsearch-7.14.0.jar:7.14.0]"


Welcome!

Could you share the full Elasticsearch logs?

Also, try reducing jdbc_fetch_size to a smaller value.
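For example, the input could be adjusted like this — the values here are only a starting point, not tuned for your data, and paging splits the single large query into smaller ones:

```
jdbc {
    # ... same connection settings as before ...
    jdbc_fetch_size => 1000
    jdbc_paging_enabled => true
    jdbc_page_size => 10000
}
```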

These are additional logs beyond those in my original post. They keep repeating. Let me know if you need any more info.

Caused by: org.elasticsearch.action.search.SearchPhaseExecutionException: Search rejected due to missing shards [[.kibana_task_manager_7.14.0_001][0]]. Consider using `allow_partial_search_results` setting to bypass this error.",
"at org.elasticsearch.action.search.AbstractSearchAsyncAction.run(AbstractSearchAsyncAction.java:211) ~[elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.action.search.AbstractSearchAsyncAction.executePhase(AbstractSearchAsyncAction.java:424) [elasticsearch-7.14.0.jar:7.14.0]",
"... 229 more"] }
{"type": "server", "timestamp": "2021-09-27T06:06:31,426Z", "level": "WARN", "component": "r.suppressed", "cluster.name": "docker-cluster", "node.name": "d4887f12caa3", "message": "path: /.kibana_7.14.0_001/_pit, params: {index=.kibana_7.14.0_001, keep_alive=10m}", "cluster.uuid": "UqtVRRidS-aB5CtPXxlRWw", "node.id": "Op6G4UKfSeeBYRh4QFnkTQ" ,
"stacktrace": ["org.elasticsearch.action.search.SearchPhaseExecutionException: ",
"at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseFailure(AbstractSearchAsyncAction.java:661) [elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.action.search.AbstractSearchAsyncAction.executePhase(AbstractSearchAsyncAction.java:429) [elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.action.search.AbstractSearchAsyncAction.start(AbstractSearchAsyncAction.java:184) [elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.action.search.TransportSearchAction.executeSearch(TransportSearchAction.java:673) [elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.action.search.TransportSearchAction.executeLocalSearch(TransportSearchAction.java:494) [elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.action.search.TransportSearchAction.lambda$executeRequest$3(TransportSearchAction.java:288) [elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:134) [elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.index.query.Rewriteable.rewriteAndFetch(Rewriteable.java:103) [elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.index.query.Rewriteable.rewriteAndFetch(Rewriteable.java:76) [elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.action.search.TransportSearchAction.executeRequest(TransportSearchAction.java:329) [elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.action.search.TransportSearchAction.executeRequest(TransportSearchAction.java:228) [elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.action.search.TransportOpenPointInTimeAction.doExecute(TransportOpenPointInTimeAction.java:89) [elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.action.search.TransportOpenPointInTimeAction.doExecute(TransportOpenPointInTimeAction.java:39) [elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:173) [elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.action.support.ActionFilter$Simple.apply(ActionFilter.java:42) [elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:171) [elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.xpack.security.action.filter.SecurityActionFilter.lambda$applyInternal$3(SecurityActionFilter.java:160) [x-pack-security-7.14.0.jar:7.14.0]",
"at org.elasticsearch.action.ActionListener$DelegatingFailureActionListener.onResponse(ActionListener.java:217) [elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListenerDirectly(ListenableFuture.java:113) [elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.common.util.concurrent.ListenableFuture.done(ListenableFuture.java:100) [elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.common.util.concurrent.BaseFuture.set(BaseFuture.java:133) [elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.common.util.concurrent.ListenableFuture.onResponse(ListenableFuture.java:139) [elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.action.StepListener.innerOnResponse(StepListener.java:52) [elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.action.NotifyOnceListener.onResponse(NotifyOnceListener.java:29) [elasticsearch-7.14.0.jar:7.14.0]",
"at org.elasticsearch.xpack.security.authz.interceptor.FieldAndDocumentLevelSecurityRequestInterceptor.intercept(FieldAndDocumentLevelSecurityRequestInterceptor.java:77) [x-pack-security-7.14.0.jar:7.14.0]",
"at org.elasticsearch.xpack.security.authz.intercept

Yes, I need to see the full logs from the beginning, not only a small part.

You can upload them to gist.github.com if needed.
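The missing-shard errors on the `.kibana` indices also suggest the node itself may be under pressure (commonly disk watermarks or heap exhaustion during a large bulk load). If that is the case, these APIs should show which shards are unassigned and why:

```
GET /_cat/shards?v
GET /_cat/allocation?v
GET /_cluster/allocation/explain?pretty
```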

Here it is:

full logs