FSCrawler 2.10 snapshot not compatible with Elasticsearch 9.2.4 and 9.3.0, getting an error

I am getting the error below and need help resolving it; I am not sure what it means. Elasticsearch 9.2.4 and 9.3.0 require Java 21 or above, so I installed it and set both JAVA_HOME and ES_JAVA_HOME, then tried to crawl files using FSCrawler 2.10, but I get the following error:

16:28:50,553 FATAL [f.p.e.c.f.c.FsCrawlerCli] We can not start Elasticsearch Client. Exiting.
java.lang.NullPointerException: null
at java.base/java.io.Reader.&lt;init&gt;(Reader.java:168) ~[?:?]
at java.base/java.io.InputStreamReader.&lt;init&gt;(InputStreamReader.java:88) ~[?:?]
at fr.pilato.elasticsearch.crawler.fs.client.ElasticsearchClient.loadResourceFile(ElasticsearchClient.java:541) ~[fscrawler-elasticsearch-client-2.10-SNAPSHOT.jar:?]
at fr.pilato.elasticsearch.crawler.fs.client.ElasticsearchClient.loadAndPushComponentTemplate(ElasticsearchClient.java:520) ~[fscrawler-elasticsearch-client-2.10-SNAPSHOT.jar:?]
at fr.pilato.elasticsearch.crawler.fs.client.ElasticsearchClient.createIndexAndComponentTemplates(ElasticsearchClient.java:495) ~[fscrawler-elasticsearch-client-2.10-SNAPSHOT.jar:?]
at fr.pilato.elasticsearch.crawler.fs.service.FsCrawlerDocumentServiceElasticsearchImpl.createSchema(FsCrawlerDocumentServiceElasticsearchImpl.java:71) ~[fscrawler-core-2.10-SNAPSHOT.jar:?]
at fr.pilato.elasticsearch.crawler.fs.FsCrawlerImpl.start(FsCrawlerImpl.java:129) ~[fscrawler-core-2.10-SNAPSHOT.jar:?]
at fr.pilato.elasticsearch.crawler.fs.cli.FsCrawlerCli.startEsClient(FsCrawlerCli.java:429) [fscrawler-cli-2.10-SNAPSHOT.jar:?]
at fr.pilato.elasticsearch.crawler.fs.cli.FsCrawlerCli.runner(FsCrawlerCli.java:405) [fscrawler-cli-2.10-SNAPSHOT.jar:?]
at fr.pilato.elasticsearch.crawler.fs.cli.FsCrawlerCli.main(FsCrawlerCli.java:140) [fscrawler-cli-2.10-SNAPSHOT.jar:?]
16:28:50,576 INFO [f.p.e.c.f.FsCrawlerImpl] FS crawler [job2] stopped

I need help resolving this.
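
Since the question mentions setting both JAVA_HOME and ES_JAVA_HOME, it may be worth confirming which JVM is actually picked up on the PATH before digging further. This is a hedged sketch (not from the thread); the version parsing assumes OpenJDK-style strings such as "21.0.2" or the legacy "1.8.0_392":

```shell
# Print the major version of a Java version string.
major_version() {
  case "$1" in
    1.*) echo "$1" | cut -d. -f2 ;;   # legacy scheme: 1.8.0_392 -> 8
    *)   echo "$1" | cut -d. -f1 ;;   # modern scheme: 21.0.2 -> 21
  esac
}

# Check the JVM on PATH, if any (Elasticsearch 9.x requires Java 21+).
if command -v java >/dev/null 2>&1; then
  jv=$(java -version 2>&1 | awk -F'"' '/version/ {print $2}')
  if [ "$(major_version "$jv")" -ge 21 ]; then
    echo "OK: Java $jv on PATH"
  else
    echo "Too old: Java $jv (Elasticsearch 9.x needs 21+)" >&2
  fi
fi
echo "JAVA_HOME=$JAVA_HOME"
```

If JAVA_HOME points somewhere other than the Java 21 install, the FSCrawler launcher may run under an older JVM than expected.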

Regrettably, I’m seeing no such issue with Elastic Stack v9.3.0 and FSCrawler 2.10-SNAPSHOT running containers based on the official images on Docker Hub.

Here are the details of my config:

services:

  setup:
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    user: root
    env_file:
      - ./.es_common.env
    volumes:
      - ./escerts:/usr/share/elasticsearch/config/certs

  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    env_file:
      - ./.es_common.env
      - ./.es01.env
    mem_limit: ${MEM_LIMIT}
    ports:
      - ${ES_PORT}:9200
    volumes:
      - ./escerts:/usr/share/elasticsearch/config/certs
      - ./es01-data:/usr/share/elasticsearch/data
      - ./logs/elasticsearch/es01:/usr/share/elasticsearch/logs
      - ./es-cluster-backup:/mount/backups
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test: [ "CMD-SHELL", "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'" ]
      interval: 10s
      timeout: 10s
      retries: 120

  es02:
    depends_on:
      - es01
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    env_file:
      - ./.es_common.env
      - ./.es02.env
    mem_limit: ${MEM_LIMIT}
    volumes:
      - ./escerts:/usr/share/elasticsearch/config/certs
      - ./es02-data:/usr/share/elasticsearch/data
      - ./logs/elasticsearch/es02:/usr/share/elasticsearch/logs
      - ./es-cluster-backup:/mount/backups
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test: [ "CMD-SHELL", "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'" ]
      interval: 20s
      retries: 10
      timeout: 20s

  es03:
    depends_on:
      - es02
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    env_file:
      - ./.es_common.env
      - ./.es03.env
    mem_limit: ${MEM_LIMIT}
    volumes:
      - ./escerts:/usr/share/elasticsearch/config/certs
      - ./es03-data:/usr/share/elasticsearch/data
      - ./logs/elasticsearch/es03:/usr/share/elasticsearch/logs
      - ./es-cluster-backup:/mount/backups
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test: [ "CMD-SHELL", "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'" ]
      interval: 20s
      retries: 10
      timeout: 20s

  fscrawler:
    depends_on:
      es01:
        condition: service_healthy
      es02:
        condition: service_healthy
      es03:
        condition: service_healthy
    build:
      dockerfile_inline: |
        FROM dadoonet/fscrawler:${FSCRAWLER_VERSION}
        USER root
        COPY ./http_ca.crt /usr/local/share/ca-certificates/
        RUN update-ca-certificates && \
            keytool \
              -import \
              -noprompt \
              -cacerts \
              -trustcacerts \
              -alias usaftc_es_ca_cert \
              -file /usr/local/share/ca-certificates/http_ca.crt \
              -storepass changeit
        COPY ./fscrawler_config /root/.fscrawler
        RUN sed -Ei "s/f==__FSCRAWLER_API_KEY_Jan_2026__f==/${API_KEY_FROM_ES}/g" /root/.fscrawler/resumes/_settings.yaml
    env_file:
      - ./.es_common.env
    mem_limit: ${MEM_LIMIT}
    ports:
      - ${FSCRAWLER_PORT}:8080
    volumes:
      ## Uncomment below to add 3rd party library .jar files
      # - ./external:/usr/share/fscrawler/external
      - ./resumes:/tmp/es:ro
      - ./logs/fscrawler:/usr/share/fscrawler/logs
    command: [ "resumes", "--rest" ]

  kibana:
    depends_on:
      es01:
        condition: service_healthy
      es02:
        condition: service_healthy
      es03:
        condition: service_healthy
    build:
      dockerfile_inline: |
        FROM docker.elastic.co/kibana/kibana:${STACK_VERSION}
        ADD ./kibana.yml /usr/share/kibana/config/
    env_file:
      - ./.es_common.env
      - ./.kibana.env
    environment:
      - ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
    mem_limit: ${MEM_LIMIT}
    ports:
      - ${KIBANA_PORT}:5601
    volumes:
      - ./escerts:/usr/share/kibana/config/certs
      - ./kibana-data:/usr/share/kibana/data
    healthcheck:
      test: [ "CMD-SHELL", "curl -s -I http://localhost:5601 | grep -q 'HTTP/1.1 302 Found'" ]
      interval: 20s
      retries: 10
      timeout: 20s

It requires some environment variables in the listed files (.es01.env, .es02.env, .es03.env, .es_common.env, and .kibana.env) and some supporting shell scripts that time the dependency chain (which I can also share, if interested), but this should give a good starting framework.
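
For context, the ${...} placeholders in the compose file above are interpolation variables: Compose resolves them from the shell environment or a top-level .env file next to docker-compose.yml, while the .es*.env files supply the containers' runtime environment. A hedged sketch of such a .env, covering only the names the compose file actually references (all values are placeholders, not the author's real settings):

```shell
# Interpolation variables referenced by the compose file above.
# Values are illustrative assumptions; substitute your own.
STACK_VERSION=9.3.0
FSCRAWLER_VERSION=2.10-SNAPSHOT
MEM_LIMIT=2g
ES_PORT=9200
FSCRAWLER_PORT=8080
KIBANA_PORT=5601
KIBANA_PASSWORD=changeme
API_KEY_FROM_ES=REDACTED
```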

But I get an error if I set push_templates: false, i.e.:

Can't find stored field name to check existing filenames in path [C:\Images\Data\Docs\1B5085]. Please set store: true on field [file.filename]
12:04:27,339 WARN [f.p.e.c.f.FsParserAbstract] Error while crawling C:\Caches\ThirdPartyR581\Images\Data\Docs\: Mapping is incorrect: please set stored: true on field [file.filename].

My FSCrawler is hitting this issue, so does that mean it is not processing other files? Because the document counts are not increasing:

yellow open elasticindex-r815-1 kWRrderRTSmHAYlp6U6Fzg 1 1 4 0 90.6kb 90.6kb 90.6kb
yellow open job1_folder OidAs0J2QyeQSqU552T9lg 1 1 5 0 27.6kb 27.6kb 27.6kb

Once this error starts appearing, it doesn't crawl any further documents.
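
With push_templates: false, FSCrawler no longer installs its own index templates, so you must provide a mapping yourself, and per the error above it needs file.filename to be a stored field. A minimal sketch of the required mapping fragment (the index name elasticindex-r815-1 is taken from the output above; the curl call is left commented out because it needs a running cluster and credentials):

```shell
# Mapping fragment FSCrawler expects when it manages no templates:
# file.filename as a stored keyword (per the warning above).
MAPPING='{"properties":{"file":{"properties":{"filename":{"type":"keyword","store":true}}}}}'

# Apply it when creating the index, before the first crawl. Note that an
# existing field cannot be switched to stored in place; if the index
# already holds data, reindex into a new index created with this mapping.
# curl -u elastic:$ELASTIC_PASSWORD -X PUT "https://localhost:9200/elasticindex-r815-1" \
#   -H 'Content-Type: application/json' \
#   -d "{\"mappings\": $MAPPING}"
echo "$MAPPING"
```

That would also explain the stalled counts: once the mapping check fails for a directory, the crawl of that path aborts, so no further documents from it are indexed.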

I am using a service installation on my Windows machine.