Thanks David, this sounds quite logical.
Strangely, I'm missing 40%-60% of the log entries generated by applications running in Docker on the Kubernetes platforms.
fluent-bit reports many errors like these:
[2020/12/29 02:51:54] [ warn] [engine] failed to flush chunk '1-1608891285.347503559.flb', retry in 867 seconds: task_id=796, input=tail.0 > output=es.0
[2020/12/29 02:51:55] [ warn] [engine] failed to flush chunk '1-1608894153.539621536.flb', retry in 1110 seconds: task_id=1422, input=tail.0 > output=es.0
[2020/12/29 02:51:55] [ warn] [engine] failed to flush chunk '1-1608895587.328538075.flb', retry in 830 seconds: task_id=1759, input=tail.0 > output=es.0
[2020/12/29 02:51:56] [ warn] [engine] failed to flush chunk '1-1608887565.333424683.flb', retry in 1366 seconds: task_id=35, input=tail.0 > output=es.0
[2020/12/29 02:51:56] [ warn] [engine] failed to flush chunk '1-1608891815.333089060.flb', retry in 1047 seconds: task_id=949, input=tail.0 > output=es.0
[2020/12/29 02:51:56] [ warn] [engine] failed to flush chunk '1-1608888441.890175576.flb', retry in 108 seconds: task_id=253, input=tail.0 > output=es.0
[2020/12/29 02:51:57] [ warn] [engine] failed to flush chunk '1-1608892265.334450560.flb', retry in 727 seconds: task_id=1051, input=tail.0 > output=es.0
[2020/12/29 02:51:57] [ warn] [engine] failed to flush chunk '1-1608889041.727434703.flb', retry in 532 seconds: task_id=388, input=tail.0 > output=es.0
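If I read these warnings correctly, "failed to flush chunk" means the es output could not deliver that chunk and the scheduler re-queues it with an exponentially growing backoff (which would explain the waits of 800-1400 seconds); once a chunk runs out of retries it is discarded, which would match the missing 40%-60% of records. As a first experiment I'll shorten the backoff and raise the retry limit. This is only a sketch with guessed values, and the scheduler.* keys depend on the fluent-bit version:

[SERVICE]
    # keep retries close together instead of letting the backoff grow to ~20 minutes
    scheduler.base  5
    scheduler.cap   30

[OUTPUT]
    Name         es
    Match        *
    # allow more delivery attempts before a chunk is dropped
    Retry_Limit  10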
I will look into the fluent-bit configuration, as the problem is probably there, or in the firewalls between fluent-bit and nginx.
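To see whether the rejections come from Elasticsearch itself or from the nginx/firewall in between, I'll also turn on more verbose error reporting. Again just a sketch, assuming the es output in our version supports these options:

[SERVICE]
    Log_Level    debug

[OUTPUT]
    Name         es
    Match        *
    # print the Elasticsearch API response when a flush fails
    Trace_Error  On
    # read larger HTTP error responses from Elasticsearch / nginx
    Buffer_Size  512k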
Attached is the CPU/memory monitoring of the cluster.