Elasticsearch upgrade from 5.1.2 to 5.6.14: rise in index operation TPS causes bulk rejections

My bulk queue size is 200.
The bulk thread pool keeps rejecting requests, as shown below, and the rejected count never goes down.
The thread pool stats and hot threads output follow:
$ curl localhost:8200/_cat/thread_pool/bulk?h=node_id,name,queue,queue_size,rejected

eYdTntK_xxxxxxdw-nmkw bulk 0 200 193083
31VHf-MxxxxxxtIagxUNCQ bulk 0 200 0
FH7Ly3Svxxxxxx4UFR-ag bulk 0 200 52
0HtssJxxxxxx4cCQBTHQ bulk 0 200 0
Tiq_ELaUxxxxxxm7IhYD_Q bulk 0 200 145
Bg4oFea0xxxxxx3cGY4sboQ bulk 0 200 2708186
6nNHxyh2xxxxxxb0zlMpZQ bulk 0 200 32139
B3AM2fExxxxxxvhuqlvzg bulk 0 200 0
y7OTpxxxxxx9QdqpMDew bulk 0 200 24
DoXAsxxxxxxQVb3IgOu6RA bulk 0 200 0
has6UOxxxxxxxHMzE7DuAJg bulk 0 200 85
u5SfGUzxxxxxxoqxKn1Q bulk 0 200 0
Ru4BcxxxxxxK6vHhPGg bulk 0 200 0
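
(For reference, the rejected column is cumulative since the node started, so it never decreases on its own.) The same cat API can also report the active and completed counters; a small sketch, assuming the 9200 port used elsewhere in this thread, and noting that in 5.x the bulk queue length is the static node setting thread_pool.bulk.queue_size, which has to go into elasticsearch.yml and be applied with a rolling restart rather than through the cluster settings API:

$ curl 'localhost:9200/_cat/thread_pool/bulk?v&h=node_name,active,queue,queue_size,rejected,completed'

# static node setting, in elasticsearch.yml on each data node (restart required):
# thread_pool.bulk.queue_size: 200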

$ curl localhost:9200/_nodes/Bg4oFea0xxxxxxxGY4sboQ/hot_threads

::: {127.0.0.1}{Bg4oFeaxxxxxxV3cGY4sboQ}{ceDy3FuxRxxxxxUWNhA}{127.0.0.x}{127.0.0.x:9300}{storetype=ssd}
Hot threads at 2019-05-24T07:17:19.700Z, interval=500ms, busiestThreads=3, ignoreIdleThreads=true:

101.1% (505.4ms out of 500ms) cpu usage by thread 'elasticsearch[xxxxxxxxx][bulk][T#27]'
10/10 snapshots sharing following 35 elements
java.lang.ThreadLocal$ThreadLocalMap.expungeStaleEntry(ThreadLocal.java:617)
java.lang.ThreadLocal$ThreadLocalMap.replaceStaleEntry(ThreadLocal.java:575)
java.lang.ThreadLocal$ThreadLocalMap.set(ThreadLocal.java:476)
java.lang.ThreadLocal$ThreadLocalMap.access$100(ThreadLocal.java:298)
java.lang.ThreadLocal.setInitialValue(ThreadLocal.java:184)
java.lang.ThreadLocal.get(ThreadLocal.java:170)
org.apache.lucene.util.CloseableThreadLocal.get(CloseableThreadLocal.java:78)
org.elasticsearch.common.lucene.uid.VersionsResolver.getLookupState(VersionsResolver.java:72)
org.elasticsearch.common.lucene.uid.VersionsResolver.loadDocIdAndVersion(VersionsResolver.java:120)
org.elasticsearch.common.lucene.uid.VersionsResolver.loadVersion(VersionsResolver.java:137)
org.elasticsearch.index.engine.InternalEngine.loadCurrentVersionFromIndex(InternalEngine.java:1377)
org.elasticsearch.index.engine.InternalEngine.resolveDocVersion(InternalEngine.java:393)
org.elasticsearch.index.engine.InternalEngine.compareOpToLuceneDocBasedOnVersions(InternalEngine.java:408)
org.elasticsearch.index.engine.InternalEngine.planIndexingAsNonPrimary(InternalEngine.java:545)
org.elasticsearch.index.engine.InternalEngine.index(InternalEngine.java:496)
org.elasticsearch.index.shard.IndexShard.index(IndexShard.java:557)
org.elasticsearch.index.shard.IndexShard.index(IndexShard.java:546)
org.elasticsearch.action.bulk.TransportShardBulkAction.executeIndexRequestOnReplica(TransportShardBulkAction.java:449)
org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnReplica(TransportShardBulkAction.java:383)
org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnReplica(TransportShardBulkAction.java:69)
org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncReplicaAction.onResponse(TransportReplicationAction.java:522)
org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncReplicaAction.onResponse(TransportReplicationAction.java:491)
org.elasticsearch.index.shard.IndexShardOperationsLock.acquire(IndexShardOperationsLock.java:151)
org.elasticsearch.index.shard.IndexShard.acquireReplicaOperationLock(IndexShard.java:1675)
org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncReplicaAction.doRun(TransportReplicationAction.java:594)
org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
org.elasticsearch.action.support.replication.TransportReplicationAction$ReplicaOperationTransportHandler.messageReceived(TransportReplicationAction.java:475)
org.elasticsearch.action.support.replication.TransportReplicationAction$ReplicaOperationTransportHandler.messageReceived(TransportReplicationAction.java:464)
org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69)
org.elasticsearch.transport.TcpTransport$RequestHandler.doRun(TcpTransport.java:1556)
org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:675)
org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
java.lang.Thread.run(Thread.java:748)

101.1% (505.4ms out of 500ms) cpu usage by thread 'elasticsearch[xxxxx.xxxx.xxxx][bulk][T#29]'
10/10 snapshots sharing following 41 elements
java.lang.ThreadLocal$ThreadLocalMap.expungeStaleEntry(ThreadLocal.java:617)
java.lang.ThreadLocal$ThreadLocalMap.getEntryAfterMiss(ThreadLocal.java:440)
java.lang.ThreadLocal$ThreadLocalMap.getEntry(ThreadLocal.java:419)
java.lang.ThreadLocal$ThreadLocalMap.access$000(ThreadLocal.java:298)
java.lang.ThreadLocal.get(ThreadLocal.java:163)
org.apache.lucene.util.CloseableThreadLocal.get(CloseableThreadLocal.java:78)
org.elasticsearch.common.lucene.uid.VersionsResolver.getLookupState(VersionsResolver.java:72)
org.elasticsearch.common.lucene.uid.VersionsResolver.loadDocIdAndVersion(VersionsResolver.java:120)
org.elasticsearch.common.lucene.uid.VersionsResolver.loadVersion(VersionsResolver.java:137)
org.elasticsearch.index.engine.InternalEngine.loadCurrentVersionFromIndex(InternalEngine.java:1377)
org.elasticsearch.index.engine.InternalEngine.resolveDocVersion(InternalEngine.java:393)
org.elasticsearch.index.engine.InternalEngine.planIndexingAsPrimary(InternalEngine.java:570)
org.elasticsearch.index.engine.InternalEngine.index(InternalEngine.java:493)
org.elasticsearch.index.shard.IndexShard.index(IndexShard.java:557)
org.elasticsearch.index.shard.IndexShard.index(IndexShard.java:546)


I see that one node's rejected count is always greater than 2706146.
Thanks

Please don't post images of text, as they are hard to read and not searchable.

Instead, paste the text and format it with the </> icon. Check the preview window.

org.elasticsearch.action.bulk.TransportShardBulkAction.executeIndexRequestOnPrimary(TransportShardBulkAction.java:493)
org.elasticsearch.action.bulk.TransportShardBulkAction.executeBulkItemRequest(TransportShardBulkAction.java:145)
org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:114)
org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:69)
org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryShardReference.perform(TransportReplicationAction.java:975)
org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryShardReference.perform(TransportReplicationAction.java:944)
org.elasticsearch.action.support.replication.ReplicationOperation.execute(ReplicationOperation.java:113)
org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.onResponse(TransportReplicationAction.java:345)
org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.onResponse(TransportReplicationAction.java:270)
org.elasticsearch.action.support.replication.TransportReplicationAction$1.onResponse(TransportReplicationAction.java:924)
org.elasticsearch.action.support.replication.TransportReplicationAction$1.onResponse(TransportReplicationAction.java:921)
org.elasticsearch.index.shard.IndexShardOperationsLock.acquire(IndexShardOperationsLock.java:151)
org.elasticsearch.index.shard.IndexShard.acquirePrimaryOperationLock(IndexShard.java:1659)
org.elasticsearch.action.support.replication.TransportReplicationAction.acquirePrimaryShardReference(TransportReplicationAction.java:933)
org.elasticsearch.action.support.replication.TransportReplicationAction.access$500(TransportReplicationAction.java:92)
org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.doRun(TransportReplicationAction.java:291)
org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:266)
org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:248)
org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69)
org.elasticsearch.transport.TransportService$7.doRun(TransportService.java:662)
org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:675)
org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
java.lang.Thread.run(Thread.java:748)

101.1% (505.3ms out of 500ms) cpu usage by thread 'elasticsearch[xxxx.xxxxxx][bulk][T#12]'
10/10 snapshots sharing following 45 elements
java.lang.ThreadLocal$ThreadLocalMap.expungeStaleEntry(ThreadLocal.java:617)
java.lang.ThreadLocal$ThreadLocalMap.replaceStaleEntry(ThreadLocal.java:575)
java.lang.ThreadLocal$ThreadLocalMap.set(ThreadLocal.java:476)
java.lang.ThreadLocal$ThreadLocalMap.access$100(ThreadLocal.java:298)
java.lang.ThreadLocal.setInitialValue(ThreadLocal.java:184)
java.lang.ThreadLocal.get(ThreadLocal.java:170)
org.apache.lucene.util.CloseableThreadLocal.get(CloseableThreadLocal.java:78)
org.apache.lucene.index.CodecReader.getNumericDocValues(CodecReader.java:150)
org.apache.lucene.index.FilterLeafReader.getNumericDocValues(FilterLeafReader.java:436)
org.elasticsearch.common.lucene.uid.PerThreadIDAndVersionLookup.<init>(PerThreadIDAndVersionLookup.java:73)
org.elasticsearch.common.lucene.uid.VersionsResolver.getLookupState(VersionsResolver.java:74)
org.elasticsearch.common.lucene.uid.VersionsResolver.loadDocIdAndVersion(VersionsResolver.java:120)
org.elasticsearch.common.lucene.uid.VersionsResolver.loadVersion(VersionsResolver.java:137)
org.elasticsearch.index.engine.InternalEngine.loadCurrentVersionFromIndex(InternalEngine.java:1377)
org.elasticsearch.index.engine.InternalEngine.resolveDocVersion(InternalEngine.java:393)
org.elasticsearch.index.engine.InternalEngine.planIndexingAsPrimary(InternalEngine.java:570)
org.elasticsearch.index.engine.InternalEngine.index(InternalEngine.java:493)
org.elasticsearch.index.shard.IndexShard.index(IndexShard.java:557)
org.elasticsearch.index.shard.IndexShard.index(IndexShard.java:546)
org.elasticsearch.action.bulk.TransportShardBulkAction.executeIndexRequestOnPrimary(TransportShardBulkAction.java:493)
org.elasticsearch.action.bulk.TransportShardBulkAction.executeBulkItemRequest(TransportShardBulkAction.java:145)
org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:114)
org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:69)
org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryShardReference.perform(TransportReplicationAction.java:975)
org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryShardReference.perform(TransportReplicationAction.java:944)
org.elasticsearch.action.support.replication.ReplicationOperation.execute(ReplicationOperation.java:113)
org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.onResponse(TransportReplicationAction.java:345)
org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.onResponse(TransportReplicationAction.java:270)
org.elasticsearch.action.support.replication.TransportReplicationAction$1.onResponse(TransportReplicationAction.java:924)
org.elasticsearch.action.support.replication.TransportReplicationAction$1.onResponse(TransportReplicationAction.java:921)
org.elasticsearch.index.shard.IndexShardOperationsLock.acquire(IndexShardOperationsLock.java:151)
org.elasticsearch.index.shard.IndexShard.acquirePrimaryOperationLock(IndexShard.java:1659)
org.elasticsearch.action.support.replication.TransportReplicationAction.acquirePrimaryShardReference(TransportReplicationAction.java:933)
org.elasticsearch.action.support.replication.TransportReplicationAction.access$500(TransportReplicationAction.java:92)
org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.doRun(TransportReplicationAction.java:291)
org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:266)
org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:248)
org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69)
org.elasticsearch.transport.TransportService$7.doRun(TransportService.java:662)
org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:675)
org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
java.lang.Thread.run(Thread.java:748)

Sorry, but the stack trace is larger than the post size limit.

Here is another hot_threads output:

$ curl localhost:9200/_nodes/Bg4oFea0QuxxxxxxboQ/hot_threads
::: {xxxxxx-es1}{Bg4oFea0xxxxxxoQ}{ceDy3xxxxxxtVsoTKUWNhA}{xxxxxx}{xxxxxxx:9300}{storetype=ssd}
Hot threads at 2019-05-24T08:30:01.431Z, interval=500ms, busiestThreads=3, ignoreIdleThreads=true:

101.0% (505.2ms out of 500ms) cpu usage by thread 'elasticsearch[xxxxxxxx-es1][management][T#4]'
4/10 snapshots sharing following 32 elements
java.security.AccessController.doPrivileged(Native Method)
java.io.FilePermission.init(FilePermission.java:203)
java.io.FilePermission.<init>(FilePermission.java:277)
java.lang.SecurityManager.checkRead(SecurityManager.java:888)
sun.nio.fs.UnixPath.checkRead(UnixPath.java:795)
sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:49)
sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:144)
sun.nio.fs.LinuxFileSystemProvider.readAttributes(LinuxFileSystemProvider.java:99)
java.nio.file.Files.readAttributes(Files.java:1737)
java.nio.file.Files.size(Files.java:2332)
org.apache.lucene.store.FSDirectory.fileLength(FSDirectory.java:243)
org.apache.lucene.store.FilterDirectory.fileLength(FilterDirectory.java:67)
org.apache.lucene.store.FilterDirectory.fileLength(FilterDirectory.java:67)
org.elasticsearch.index.store.Store$StoreStatsCache.estimateSize(Store.java:1402)
org.elasticsearch.index.store.Store$StoreStatsCache.refresh(Store.java:1391)
org.elasticsearch.index.store.Store$StoreStatsCache.refresh(Store.java:1378)
org.elasticsearch.common.util.SingleObjectCache.getOrRefresh(SingleObjectCache.java:54)
org.elasticsearch.index.store.Store.stats(Store.java:332)
org.elasticsearch.index.shard.IndexShard.storeStats(IndexShard.java:703)
org.elasticsearch.action.admin.indices.stats.CommonStats.<init>(CommonStats.java:177)
org.elasticsearch.action.admin.cluster.stats.TransportClusterStatsAction.nodeOperation(TransportClusterStatsAction.java:104)
org.elasticsearch.action.admin.cluster.stats.TransportClusterStatsAction.nodeOperation(TransportClusterStatsAction.java:53)
org.elasticsearch.action.support.nodes.TransportNodesAction.nodeOperation(TransportNodesAction.java:140)
org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:262)
org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:258)
org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69)
org.elasticsearch.transport.TcpTransport$RequestHandler.doRun(TcpTransport.java:1556)
org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:675)
org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
java.lang.Thread.run(Thread.java:748)
6/10 snapshots sharing following 18 elements
org.elasticsearch.index.store.Store$StoreStatsCache.refresh(Store.java:1391)
org.elasticsearch.index.store.Store$StoreStatsCache.refresh(Store.java:1378)
org.elasticsearch.common.util.SingleObjectCache.getOrRefresh(SingleObjectCache.java:54)
org.elasticsearch.index.store.Store.stats(Store.java:332)
org.elasticsearch.index.shard.IndexShard.storeStats(IndexShard.java:703)
org.elasticsearch.action.admin.indices.stats.CommonStats.<init>(CommonStats.java:177)
org.elasticsearch.action.admin.cluster.stats.TransportClusterStatsAction.nodeOperation(TransportClusterStatsAction.java:104)
org.elasticsearch.action.admin.cluster.stats.TransportClusterStatsAction.nodeOperation(TransportClusterStatsAction.java:53)
org.elasticsearch.action.support.nodes.TransportNodesAction.nodeOperation(TransportNodesAction.java:140)
org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:262)
org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:258)
org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69)
org.elasticsearch.transport.TcpTransport$RequestHandler.doRun(TcpTransport.java:1556)
org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:675)
org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
java.lang.Thread.run(Thread.java:748)

8.5% (42.6ms out of 500ms) cpu usage by thread 'elasticsearch[xxxxxx-xxxx-es1][bulk][T#13]'
 unique snapshot
   java.lang.ThreadLocal$ThreadLocalMap.getEntryAfterMiss(ThreadLocal.java:444)
   java.lang.ThreadLocal$ThreadLocalMap.getEntry(ThreadLocal.java:419)
   java.lang.ThreadLocal$ThreadLocalMap.access$000(ThreadLocal.java:298)
   java.lang.ThreadLocal.get(ThreadLocal.java:163)
   org.apache.lucene.util.CloseableThreadLocal.get(CloseableThreadLocal.java:78)
   org.elasticsearch.common.lucene.uid.VersionsResolver.getLookupState(VersionsResolver.java:72)
   org.elasticsearch.common.lucene.uid.VersionsResolver.loadDocIdAndVersion(VersionsResolver.java:120)
   org.elasticsearch.common.lucene.uid.VersionsResolver.loadVersion(VersionsResolver.java:137)
   org.elasticsearch.index.engine.InternalEngine.loadCurrentVersionFromIndex(InternalEngine.java:1377)
   org.elasticsearch.index.engine.InternalEngine.resolveDocVersion(InternalEngine.java:393)
   org.elasticsearch.index.engine.InternalEngine.planIndexingAsPrimary(InternalEngine.java:570)
   org.elasticsearch.index.engine.InternalEngine.index(InternalEngine.java:493)
   org.elasticsearch.index.shard.IndexShard.index(IndexShard.java:557)
   org.elasticsearch.index.shard.IndexShard.index(IndexShard.java:546)
   org.elasticsearch.action.bulk.TransportShardBulkAction.executeIndexRequestOnPrimary(TransportShardBulkAction.java:493)
   org.elasticsearch.action.bulk.TransportShardBulkAction.executeBulkItemRequest(TransportShardBulkAction.java:145)
   org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:114)
   org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:69)

It looks like the nodes are very busy processing bulk requests. How many indices/shards do you have in the cluster? How many of these are you actively indexing into? Are these evenly distributed across the nodes in the cluster? What is your use-case?
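
If it helps, the _cat APIs can answer most of that; a rough sketch, assuming the default 9200 port and no proxy in front of the cluster:

# index count, primary/replica counts, doc counts and on-disk size
$ curl 'localhost:9200/_cat/indices?v&h=index,pri,rep,docs.count,store.size'

# shard and disk distribution per data node
$ curl 'localhost:9200/_cat/allocation?v'

# per-shard detail if one index or node stands out
$ curl 'localhost:9200/_cat/shards?v'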

I would also recommend you look at the following blog posts:

The cluster details:
8 data node machines, with 2 ES instances on each
mem: 32 GB per data node instance

3 master nodes
2 client nodes

248 indices
3208 shards
8,195,271,243 docs



How many of these are you actively indexing into? Are bulk requests potentially indexing into a large number of shards?

Yes, I have 100 big indices.

A few days ago I ran a reindex, from one to the other, using the _reindex API,
but the requests were rejected. I have another cluster with the same setup that is still on ES 5.1.2, and that cluster does not hit these rejections nearly as often.
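
For what it's worth, a throttled _reindex leaves more room in the bulk queues; a minimal sketch, assuming an index-to-index reindex within one cluster, with placeholder index names (source_index, dest_index) and a placeholder requests_per_second value rather than the ones actually used:

$ curl -XPOST 'localhost:9200/_reindex?requests_per_second=500&wait_for_completion=false' \
    -H 'Content-Type: application/json' -d '
{
  "source": { "index": "source_index", "size": 500 },
  "dest":   { "index": "dest_index" }
}'

# the reindex then runs as a background task whose progress can be checked with:
$ curl 'localhost:9200/_tasks?actions=*reindex&detailed=true'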

All 248 of my indices receive indexing, but those 100 indices are much larger.

We need to upgrade our Elasticsearch cluster from 5.1.2 -> 5.6 -> 6.x,
so we use the same config for 5.1 and 5.6.

The upgraded 5.6 cluster only handles indexing; there are no search operations on it.
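
Since the cluster is indexing-only for now, one common way to take some pressure off is to lengthen the refresh interval on the heavily indexed indices while bulk loading; a minimal sketch, assuming a placeholder index name my_big_index and that near-real-time search is not needed yet:

$ curl -XPUT 'localhost:9200/my_big_index/_settings' -H 'Content-Type: application/json' -d '
{
  "index": { "refresh_interval": "30s" }
}'

# a value of -1 disables refresh entirely; remember to set it back before searching starts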
