Only a small fraction of logs appears in Kibana

Hi there!
I have a question, please.
I'm using Filebeat, Logstash, and Kibana, all installed on K8s, running and working.
It is important for me to see the nginx-ingress logs from K8s.
I can see the logs in Kibana, but far fewer than appear when watching them live in the CLI.
For example, in the last 15 minutes Kibana shows only 4-6 "hits", though there should be hundreds or more.
Given that I do actually see and receive some of the logs, I can't understand why I can't see ALL of them; maybe there is some kind of limit on how many logs Filebeat can handle, or something similar.
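(If it does turn out to be a throughput limit on the Filebeat side, I assume these would be the filebeat.yml knobs to tune — just a sketch with guessed values, not my actual settings, since my filebeat-config ConfigMap is not shown below:)

```yaml
# Sketch only: guessed Filebeat 6.x throughput settings, not my real config.
queue.mem:
  events: 4096          # size of the internal event queue (6.x default)
output.logstash:
  hosts: ["logstash-kube:5044"]
  worker: 2             # parallel connections to Logstash
  bulk_max_size: 2048   # maximum events sent per batch (6.x default)
```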
I'm attaching the config files of my Filebeat, Logstash, and Kibana deployments; I would really appreciate any help.
Filebeat -

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    k8s-app: filebeat
    kubernetes.io/cluster-service: "true"
  name: filebeat
  namespace: kube-system
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: filebeat
      kubernetes.io/cluster-service: "true"
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: filebeat
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - args:
        - -c
        - /etc/filebeat.yml
        - -e
        env:
        - name: LOGSTASH_HOSTS
          value: logstash-kube:5044
        image: docker.elastic.co/beats/filebeat:6.4.2
        imagePullPolicy: IfNotPresent
        name: filebeat
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        securityContext:
          runAsUser: 0
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/filebeat.yml
          name: config
          readOnly: true
          subPath: filebeat.yml
        - mountPath: /usr/share/filebeat/prospectors.d
          name: prospectors
          readOnly: true
        - mountPath: /usr/share/filebeat/data
          name: data
        - mountPath: /var/lib/docker/containers
          name: varlibdockercontainers
          readOnly: true
        - mountPath: /var/log
          name: varlog
          readOnly: true
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: filebeat
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          defaultMode: 384
          name: filebeat-config
        name: config
      - hostPath:
          path: /var/log
          type: ""
        name: varlog
      - hostPath:
          path: /var/lib/docker/containers
          type: ""
        name: varlibdockercontainers
      - configMap:
          defaultMode: 384
          name: filebeat-prospectors
        name: prospectors
      - emptyDir: {}
        name: data
  templateGeneration: 2
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate

-------------
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-prospectors
  namespace: kube-system
  labels:
    k8s-app: filebeat
    kubernetes.io/cluster-service: "true"
data:
  kubernetes.yml: |-
    - type: docker
      containers.ids:
      - "*"
      processors:
        - add_kubernetes_metadata:
            in_cluster: true

-----------

My Logstash config and deployment files -

apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-kube-config
data:
  logstash.conf: |-
    input {
        beats {
            port => 5044
        }
    }
    filter {
        if [kubernetes][container][name] == "nginx-ingress" {
            json {
                source => "message"
                remove_field => "message"
            }
        }
        else if [kubernetes][container][name] == "nginx" {
            grok {
                match => {
                    "message" => "%{IP:remote_ip} - \[%{HTTPDATE:[response][time]}\] \"%{DATA:url}\" %{NUMBER:[response][code]} %{NUMBER:[response][bytes]} %{QS:user_agent}"
                }
                remove_field => "message"
            }
            geoip {
                source => "remote_ip"
                target => "[geoip]"
            }
        }

        date {
            match => ["time", "ISO8601"]
            remove_field => ["time"]
        }

        mutate {
            remove_field => ["source", "host", "[beat][name]", "[beat][version]"]
        }
    }

    output {
        elasticsearch {
            hosts => ["es-kube-01.XXXX.pro:9200", "es-kube-02.XXXX.pro:9200"]
            index => "apps-qa-%{[kubernetes][namespace]}-deployment-%{[kubernetes][container][name]}-%{[kubernetes][replicaset][name]}%{+YYYY.MM.dd}"
        }
    }
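(For reference, this is roughly how I expect the grok pattern in the "nginx" branch to split an access-log line — a Python sketch with a made-up sample line; grok's %{HTTPDATE}, %{QS}, etc. are simplified to plain regexes, and the field names mirror the grok captures:)

```python
import re

# Rough Python approximation of the grok pattern above, for testing sample
# lines locally. The sample line and date below are made up for illustration.
ACCESS_LINE = re.compile(
    r'(?P<remote_ip>\d{1,3}(?:\.\d{1,3}){3}) - '   # %{IP:remote_ip} -
    r'\[(?P<time>[^\]]+)\] '                       # [%{HTTPDATE:[response][time]}]
    r'"(?P<url>[^"]*)" '                           # "%{DATA:url}"
    r'(?P<code>\d+) (?P<bytes>\d+) '               # response code and bytes
    r'"(?P<user_agent>[^"]*)"'                     # %{QS:user_agent}
)

sample = '10.0.0.1 - [25/Oct/2018:17:29:00 +0000] "GET / HTTP/1.1" 200 612 "curl/7.61.0"'
m = ACCESS_LINE.match(sample)
print(m.group("code"), m.group("url"))  # → 200 GET / HTTP/1.1
```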

--------------------------

apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash-kube
  namespace: {{system_namespace}}
  labels:
    app: logstash
spec:
  replicas: 3
  revisionHistoryLimit: 5
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  selector:
    matchLabels:
      app: logstash
  template:
    metadata:
      labels:
        app: logstash
    spec:
      containers:
      - name: logstash-kube
        env:
          - name: LS_JAVA_OPTS
            value: -Xmx256m -Xms256m
        imagePullPolicy: Always
        image: docker.elastic.co/logstash/logstash-oss:6.4.2
        ports:
        - name: logstash-kube
          containerPort: 5044
        resources: {}
        volumeMounts:
        - mountPath: /usr/share/logstash/pipeline
          name: config
      restartPolicy: Always
      volumes:
      - name: config
        configMap:
          name: logstash-kube-config
          items:
          - key: logstash.conf
            path: logstash.conf
      dnsPolicy: ClusterFirst

And the Kibana deployment file -

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana-kube
  labels:
    k8s-app: kibana-kube
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: kibana-kube
  template:
    metadata:
      labels:
        k8s-app: kibana-kube
    spec:
      containers:
      - name: kibana-kube
        imagePullPolicy: Always
        image: registry.ng.bluemix.net/XXXXops/kibana:latest
        #image: docker.elastic.co/kibana/kibana:6.3.2
        resources:
          # need more cpu upon initialization, therefore burstable class
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        env:
          - name: ELASTICSEARCH_URL
            value: http://Kube-XXXXXXXXXXXXX.lb.bluemix.net:9200
          - name: KIBANA_INDEX
            value: .kube-kibana
        ports:
        - containerPort: 5601
          name: kibana-kube
          protocol: TCP
      imagePullSecrets:
        - name: bluemix-default-secret
      restartPolicy: Always
      dnsPolicy: ClusterFirst

Thanks in advance!

I managed to get it working better now with an LB; it now receives many more logs.
But there is a time delta when they are presented.

For example, the time now is 17:48, but the logs shown in Kibana are from 17:29... and it is catching up slowly.
Is there a way to change this?
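(To put a number on the delta I am seeing — a trivial sketch using the example times above; the date itself is made up:)

```python
from datetime import datetime, timezone

# Newest event visible in Kibana vs. the wall clock (example times from above).
newest_visible = datetime(2018, 11, 1, 17, 29, tzinfo=timezone.utc)
now = datetime(2018, 11, 1, 17, 48, tzinfo=timezone.utc)
lag_minutes = (now - newest_visible).total_seconds() / 60
print(lag_minutes)  # → 19.0
```

So the pipeline is running roughly 19 minutes behind real time.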

Changing the refresh frequency doesn't really help...

I will close this thread and ask the question above in another thread, since the main issue was resolved.