Missing some logs

Hi friends,
I have an ELK stack (version 8.14) that ships my application logs (JSON) with Filebeat to Logstash and then on to Elasticsearch. I ingest about 13 million log events every 30 minutes. The logs are shipped and shown in Kibana, but some of them are missing from Kibana Discover even though they are present in the service.log file when I search it directly. Any idea what the cause of this problem is?
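
One thing worth checking first is whether those entries are truly absent from Elasticsearch or just fall outside the Discover time filter, e.g. with a count query. This is only a sketch; the host, credentials, index pattern, and search string are placeholders:

curl -k -u elastic:changeme -H 'Content-Type: application/json' \
  'https://localhost:9200/apigw-logs-*/_count?pretty' \
  -d '{"query": {"query_string": {"query": "\"a unique value from the missing log line\""}}}'

A non-zero count while Discover shows nothing usually points to a timestamp problem rather than to lost documents.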

Do you by any chance have mapping conflicts that could prevent some data from being indexed? If you can identify entries that have not been indexed, try to index them manually and see whether you encounter any errors.
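
If you can grab one of the missing JSON lines straight from service.log, you can index it by hand and a mapping conflict will come back as an explicit error. A minimal sketch, with host, credentials, index name, and document body as placeholders:

curl -k -u elastic:changeme -H 'Content-Type: application/json' \
  -X POST 'https://localhost:9200/apigw-logs-2024.06.01/_doc?pretty' \
  -d '{"filebeatTime": "2024-06-01 19:30:00,123", "message": "..."}'

If the document is rejected over a field type conflict, the response contains a mapper_parsing_exception naming the offending field.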

Thank you for your time and reply.
I have checked: the logs do reach Elasticsearch, but with a very long delay. An event logged at 19:30 only shows up in Elasticsearch at around 3 in the morning. I have tuned some parameters so the logs ship a little faster. This is my Filebeat config; do you have any idea how I can improve it?

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /logdisk/service.log*

  json.keys_under_root: true
  json.add_error_key: true
  json.message_key: "message"

  fields:
    type: type_product
  ignore_older: 0

queue:
  mem:
    events: 10000
    flush.min_events: 300
    flush.timeout: 1s

  # Note: Filebeat runs a single queue type, so enable either queue.mem
  # or queue.disk here, not both at once.
  disk:
    path: "${path.data}/diskqueue"
    max_size: 10GB
    segment_size: 1GB
    read_ahead: 1024
    write_ahead: 4096
    retry_interval: 1s
    max_retry_interval: 30s

output.logstash:
  hosts: ["ip1:5044", "ip2:5044", "ip3:5044", "ip4:5044", "ip5:5044", "ip6:5044"]
  loadbalance: true
  worker: 8
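
One way to see where the latency accumulates is Filebeat's own HTTP stats endpoint. As a sketch, assuming you can add this to the same filebeat.yml:

http.enabled: true
http.host: localhost
http.port: 5066

and then read the counters locally:

curl -s 'http://localhost:5066/stats?pretty'

In the output, look at libbeat.pipeline.events.active; if it stays pinned near the configured queue size, the downstream Logstash instances cannot keep up and are applying backpressure, as opposed to Filebeat harvesting the files too slowly.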

What does your Logstash output configuration look like? How many indices are you indexing into? What is the size and specification of your Elasticsearch cluster?

input {
    beats {
        port => 5044
        type => "type_product"
    }
}

filter {
  json {
    source => "message"
  }
  date {
    # "yyyy" is the calendar year; "YYYY" is Joda week-year and can produce
    # wrong dates around the year boundary. Also note this writes the parsed
    # date into eventTime and leaves @timestamp untouched (for Beats events,
    # that is the time Filebeat read the line).
    match => [ "filebeatTime", "yyyy-MM-dd HH:mm:ss,SSS", "ISO8601" ]
    remove_field => [ "timestamp" ]
    target => "eventTime"
  }

  mutate {
    remove_field => ["message"]
  }

}

output {
  if [type] == "type_product" {
      elasticsearch {
        hosts => ["https://xxxx:9200", "https://xxxx:9200", "https://xxxx:9200"]
        index => "apigw-logs-%{+YYYY.MM.dd}"
        user => "xxxx"
        password => "xxxxx"
        cacert => "/etc/logstash/certs/http_ca.crt"
      }
  }
}
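
If the lag builds up inside Logstash rather than in Filebeat or Elasticsearch, the node stats API shows it. A quick sketch, assuming the default API port 9600 on each Logstash host:

curl -s 'http://localhost:9600/_node/stats/pipelines?pretty'

In the events section, the "in" counter running far ahead of "out" means the pipeline itself cannot keep up; a per-plugin duration_in_millis dominated by the elasticsearch output means the cluster is the bottleneck and is pushing back.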

I have 12 API gateway machines, each with 25 service.log files, shipping to 6 Logstash instances, and the Logstash instances send to 3 hot nodes. I also have 3 warm nodes.
I have enough resources on each machine. The problem is that some of the logs are from 10 hours earlier, yet they were shipped only seconds ago!
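
Given 12 machines with 25 files each feeding 6 Logstash instances into only 3 hot nodes, it is worth checking whether the hot tier is rejecting writes, because Filebeat's queues will then buffer events for hours before they are finally indexed. Reusing the placeholder host and credentials from the Logstash output above:

curl -k -u xxxx:xxxxx 'https://xxxx:9200/_cat/thread_pool/write?v&h=node_name,active,queue,rejected'

A steadily growing "rejected" count on the hot nodes means Elasticsearch is applying backpressure to Logstash, which in turn stalls Filebeat, and that matches the pattern of very old events arriving only now.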