Indices are not showing for Fluentd on Kubernetes

I deployed the EFK stack (Elasticsearch, Fluentd, Kibana) on Kubernetes using the Helm charts from Elastic. I am trying to load the indices in Kibana, but nothing shows up.

When I run curl localhost:9200/_cat/indices, I get:

green open .kibana_task_manager_1   S-EcONtaS52XBMhSg1gCSw 1 1  2 0 17.1kb  6.6kb
green open .apm-agent-configuration 6Mz4HGpXQt-8S02vl0SblA 1 1  0 0   566b   283b
green open .kibana_1                _dxUhiviSJ2id9Dtg41LdA 1 1 10 4 73.5kb 36.7kb
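With logstash_format true in the Fluentd config below, fluent-plugin-elasticsearch should be writing daily indices named logstash-YYYY.MM.DD by default, and none of those appear above. A narrower check (assuming the same port-forwarded Elasticsearch endpoint as the command above) is:

```shell
# List only the indices the Fluentd output would create (default "logstash" prefix).
# An empty result means no events have reached Elasticsearch yet.
curl 'localhost:9200/_cat/indices/logstash-*?v'
```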

These are logs from fluentd pod:

2020-04-10 20:04:01 +0000 [info]: parsing config file is succeeded path="/etc/fluent/fluent.conf"
2020-04-10 20:04:01 +0000 [warn]: [elasticsearch] Detected ES 7.x or above: `_doc` will be used as the document `_type`.
2020-04-10 20:04:01 +0000 [info]: using configuration file: <ROOT>
  <match fluent.**>
    @type null
  </match>
  <source>
    @type forward
    port 24224
    bind "0.0.0.0"
  </source>
  <match fluentd.**>
    @type null
  </match>
  <source>
    @type http
    port 9880
    bind "0.0.0.0"
  </source>
  <source>
    @type monitor_agent
    bind "0.0.0.0"
    port 24220
    tag "fluentd.monitor.metrics"
  </source>
  <match **>
    @id elasticsearch
    @type elasticsearch
    @log_level "info"
    include_tag_key true
    host "elasticsearch-master.default.svc.cluster.local"
    port 9200
    scheme http
    ssl_version TLSv1
    logstash_format true
    <buffer>
      @type "file"
      path "/var/log/fluentd-buffers/kubernetes.system.buffer"
      flush_mode interval
      retry_type exponential_backoff
      flush_thread_count 2
      flush_interval 5s
      retry_forever 
      retry_max_interval 30
      chunk_limit_size 2M
      queue_limit_length 8
      overflow_action block
    </buffer>
  </match>
  <system>
    root_dir "/tmp/fluentd-buffers/"
  </system>
</ROOT>
2020-04-10 20:04:01 +0000 [info]: starting fluentd-1.3.3 pid=1 ruby="2.3.3"
2020-04-10 20:04:01 +0000 [info]: spawn command to main:  cmdline=["/usr/bin/ruby2.3", "-Eascii-8bit:ascii-8bit", "/usr/local/bin/fluentd", "--under-supervisor"]
2020-04-10 20:04:02 +0000 [info]: gem 'fluent-plugin-concat' version '2.3.0'
2020-04-10 20:04:02 +0000 [info]: gem 'fluent-plugin-detect-exceptions' version '0.0.11'
2020-04-10 20:04:02 +0000 [info]: gem 'fluent-plugin-elasticsearch' version '3.0.2'
2020-04-10 20:04:02 +0000 [info]: gem 'fluent-plugin-kubernetes_metadata_filter' version '2.1.6'
2020-04-10 20:04:02 +0000 [info]: gem 'fluent-plugin-multi-format-parser' version '1.0.0'
2020-04-10 20:04:02 +0000 [info]: gem 'fluent-plugin-prometheus' version '1.3.0'
2020-04-10 20:04:02 +0000 [info]: gem 'fluent-plugin-systemd' version '1.0.1'
2020-04-10 20:04:02 +0000 [info]: gem 'fluentd' version '1.3.3'
2020-04-10 20:04:02 +0000 [info]: adding match pattern="fluent.**" type="null"
2020-04-10 20:04:02 +0000 [info]: adding match pattern="fluentd.**" type="null"
2020-04-10 20:04:02 +0000 [info]: adding match pattern="**" type="elasticsearch"
2020-04-10 20:04:02 +0000 [warn]: #0 [elasticsearch] Detected ES 7.x or above: `_doc` will be used as the document `_type`.
2020-04-10 20:04:02 +0000 [info]: adding source type="forward"
2020-04-10 20:04:02 +0000 [info]: adding source type="http"
2020-04-10 20:04:02 +0000 [info]: adding source type="monitor_agent"
2020-04-10 20:04:02 +0000 [info]: #0 starting fluentd worker pid=11 ppid=1 worker=0
2020-04-10 20:04:02 +0000 [info]: #0 listening port port=24224 bind="0.0.0.0"
2020-04-10 20:04:02 +0000 [info]: #0 fluentd worker is now running worker=0

Why isn't Fluentd creating any indices in Elasticsearch? Any help is appreciated.

This sounds like a Fluentd problem, so I'm not sure this is the best place to get help. Are there any errors reported in the Elasticsearch logs?
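One way to check, sketched below with placeholder pod names (substitute whatever the Elastic and Fluentd Helm charts created in your cluster, and note the Fluentd image may not ship curl):

```shell
# Check the Elasticsearch pod logs for indexing or connection errors.
kubectl logs elasticsearch-master-0 | grep -i -E 'error|reject'

# Push a test event through Fluentd's http input (port 9880 in the config above);
# the path after the port becomes the event's tag.
kubectl exec <fluentd-pod> -- \
  curl -s -X POST -d 'json={"message":"efk-test"}' http://localhost:9880/debug.test

# After the 5s flush_interval, see whether a logstash-* index has appeared.
curl 'localhost:9200/_cat/indices/logstash-*?v'
```

If the test event never produces an index, the problem is between Fluentd and Elasticsearch rather than in Kibana.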