[1154.233s][warning][gc,alloc] [npdb_dns]>worker20: Retried waiting for GCLocker too often allocating 256 words
[1154.233s][warning][gc,alloc] [npdb_network]>worker7: Retried waiting for GCLocker too often allocating 256 words
[1154.233s][warning][gc,alloc] [npdb_dn_blocks]>worker6: Retried waiting for GCLocker too often allocating 256 words
[1154.233s][warning][gc,alloc] [npdb_network]>worker22: Retried waiting for GCLocker too often allocating 256 words
If you are writing native (non-Java) code that is called via JNI, then when you are passed a Java object (typically a byte array) you can ask the JVM for the memory address of the array. You can also promise the JVM that you will be really quick and perform only a limited set of operations, and in return the JVM will make sure the Java object does not move. When using G1, the JVM does that by acquiring a lock that prevents garbage collection from running. That is the GCLocker.
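As a sketch, the native side looks something like this (the `Example`/`fill` names are made up for illustration; `GetPrimitiveArrayCritical` is the JNI call that pins the array, and with G1 it is what takes the GCLocker):

```c
#include <jni.h>
#include <string.h>

/* Hypothetical native method for a Java declaration like:
 *   class Example { static native void fill(byte[] buf); }
 */
JNIEXPORT void JNICALL
Java_Example_fill(JNIEnv *env, jclass cls, jbyteArray buf)
{
    jsize len = (*env)->GetArrayLength(env, buf);

    /* Ask for a direct pointer to the array contents. With G1 this
     * acquires the GCLocker, so no GC can run until we release it. */
    jboolean is_copy;
    jbyte *p = (*env)->GetPrimitiveArrayCritical(env, buf, &is_copy);
    if (p == NULL) {
        return;  /* out of memory; a Java exception is pending */
    }

    memset(p, 0, (size_t)len);  /* be quick: no JNI calls, no blocking here */

    /* Releasing the pointer drops the GCLocker and lets any pending
     * "GCLocker Initiated GC" actually run. */
    (*env)->ReleasePrimitiveArrayCritical(env, buf, p, 0);
}
```

The "limited set of operations" promise is real: between the Get and Release calls you must not call other JNI functions or do anything that might block, which is why the array length is fetched before entering the critical region.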
When you try to allocate an object on the heap, the JVM looks for a free area large enough to hold it. If it finds one, it uses it. If it does not, it needs to trigger garbage collection. If another thread's JNI code is holding the GCLocker, your thread instead requests a "GCLocker Initiated GC" and waits. When the GCLocker is released, a minor GC runs. At some point later your thread gets scheduled and retries the allocation. It is possible that other threads have used up the memory that the GC freed, in which case your thread pauses and requests a second "GCLocker Initiated GC". The next time it is scheduled (after that GC), if the allocation still fails it logs the warning above and throws an OutOfMemoryError.
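The shape of that retry loop can be sketched as a small, self-contained simulation (this is not HotSpot code; the `Heap` struct and the way the "GC" reclaims memory are invented purely to illustrate the try / wait-for-locker / GC / retry cycle, and the retry limit mirrors the default of 2 mentioned below):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define RETRY_ALLOCATION_COUNT 2   /* mirrors GCLockerRetryAllocationCount's default */

typedef struct {
    size_t free_words;      /* words currently free in the simulated heap      */
    size_t reclaim_words;   /* words each simulated GC manages to reclaim      */
    bool   gclocker_held;   /* is some thread inside a JNI critical section?   */
} Heap;

/* Simulated "GCLocker Initiated GC": it can only run once the critical
 * section ends, so first wait for the locker, then reclaim memory. */
static void gclocker_initiated_gc(Heap *h) {
    h->gclocker_held = false;           /* wait until the JNI code releases it */
    h->free_words += h->reclaim_words;  /* then the pending minor GC runs      */
}

/* Sketch of the allocation path: try, request a GC, get rescheduled, retry. */
static bool allocate(Heap *h, size_t words) {
    for (int attempt = 0; attempt <= RETRY_ALLOCATION_COUNT; attempt++) {
        if (h->free_words >= words) {   /* found a free area large enough */
            h->free_words -= words;
            return true;
        }
        gclocker_initiated_gc(h);       /* blocked: wait for locker, then GC */
    }
    /* "Retried waiting for GCLocker too often" -> OutOfMemoryError */
    return false;
}
```

With `reclaim_words` set to 0 the simulation reproduces the failure mode in the log above: every GC cycle's gains are (notionally) consumed by other threads, the retry budget runs out, and the allocation fails.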
Allocating a larger heap might help (if it avoids the initial allocation failure), or it might not. Allocating a smaller heap might help (if it forces GCs to happen more often), or it might not.
Switching to a different garbage collector might help. You could try Shenandoah. (I wouldn't want to go back to CMS or Parallel.) Shenandoah appears to support region pinning, so that instead of saying "Do not run GC" for the entire heap, it can say "Do not run GC" on a much smaller part of the heap that contains the object that JNI is accessing.
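Switching collectors is a command-line change (the jar name here is a placeholder; in JDK builds before Shenandoah became a production feature you also need the experimental-options unlock flag):

```
java -XX:+UnlockExperimentalVMOptions -XX:+UseShenandoahGC -jar app.jar
```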
100 seems to have been picked on the grounds that it was larger than 2 (the default). I don't think they tuned it. Depending on the version, your JVM might not support this option. Even if it does, it is possible that saying "I am OK with waiting for 100 GC cycles to run before creating this object" would have a terrible performance impact. A larger value for GCLockerRetryAllocationCount will slow the app down; a smaller value will result in more "retried too often" warnings (and the associated OutOfMemoryErrors). Only you can test your use case to see whether a larger or smaller value works well for you. Try a variety of values and see where on the speed/errors trade-off you feel comfortable.
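For what it's worth, in the HotSpot builds I have looked at this is a diagnostic flag, so setting it looks something like this (the 100 and the jar name are placeholders to experiment with):

```
java -XX:+UnlockDiagnosticVMOptions -XX:GCLockerRetryAllocationCount=100 -jar app.jar
```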