File input from a directory mounted with blobfuse

[background]
We are considering an architecture that ingests log files from an external service, stored in Azure Storage, into Logstash. Log files are written to Storage intermittently, and once stored they are never updated.
We first considered Filebeat from the Elastic Stack, but it cannot be used for the reasons described in the thread below:
A huge number of Listblobs commands are being executed from filebeat to Azure Storage - Elastic Stack / Beats - Discuss the Elastic Stack

As a second option, we are considering using blobfuse to mount Azure Storage locally and reading the files from the mounted directory with Logstash's file input plugin.
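
For reference, the mount itself was set up roughly as follows (a minimal sketch assuming blobfuse v1; the mount point, cache path, and config file path are placeholders):

# mount point and local cache directory for downloaded blobs
sudo mkdir -p /mountdir /mnt/blobfusetmp

# fuse_connection.cfg holds the storage account name, key, and container name
sudo blobfuse /mountdir --tmp-path=/mnt/blobfusetmp \
  --config-file=/etc/blobfuse/fuse_connection.cfg \
  -o attr_timeout=240 -o entry_timeout=240 -o allow_other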

[problem]
After completing the blobfuse configuration, I tested the file input with Logstash, but the files cannot be read.
I also checked with the log level set to debug: it appears that the file information in the directory cannot be obtained, and no error is logged.

[question]
Is it a product limitation of Logstash that the file input plugin cannot read files from a directory mounted on the Linux filesystem with blobfuse?

[logstash settings (partial excerpt)]

input {
  file {
    path => "/mountdir/*-ds"        # glob pattern for files on the blobfuse mount
    start_position => "beginning"   # read newly discovered files from the start
    discover_interval => 10         # how often to expand the path glob to discover new files
    stat_interval => 5              # how often to stat watched files for new content
    sincedb_path => "/dev/null"     # do not persist read positions between runs
    type => "test_type"
  }
}

I don't think this is tested or supported; the documentation mentions that the file input is not tested on those kinds of filesystems.

The file input is not thoroughly tested on remote filesystems such as NFS, Samba, s3fs-fuse, etc, however NFS is occasionally tested.

Even with NFS you may have issues reading the files; reading files from a network share with either Logstash or Filebeat can lead to unsolvable issues, and I don't think it is recommended.

But have you checked permissions? What do you see in the logs? How are you running Logstash, as a service or from the command line?
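
For example, checks along these lines would help narrow it down (the service user name and paths below are illustrative):

# verify that the user Logstash runs as can list and read the mounted directory
sudo -u logstash ls -l /mountdir

# run Logstash in the foreground with debug logging to watch the file input's discovery messages
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/blob-input.conf --log.level=debug

Keep in mind that a FUSE mount is, by default, only accessible to the user who mounted it unless it was mounted with -o allow_other, so a Logstash service user may not even be able to list the directory.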

Also, I'm not sure this would make much difference to the number of ListBlobs API requests compared with Filebeat; did you run any benchmark? From what I understand, mounting the storage with FUSE abstracts the API calls away from the application, but those calls are still made by the FUSE filesystem.

Reading logs from cloud storage, be it Azure, AWS, or GCP, can get pretty expensive, as you need to keep making API calls to see which files are present in the bucket and further API calls to download them.

@leandrojmp

Thank you for your kind reply.
It is helpful to know that reading files over the network is not recommended.

Filebeat issues the ListBlobs call as many times as there are files in the container at each polling interval.
For example, with a 30-second interval and 5,000 files in the container, 10,000 ListBlobs calls are issued every minute.
Based on the answer in a separate thread, I understand that this is how Filebeat is designed and cannot be avoided.
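
As a quick sanity check on that figure (the numbers are the ones from the example above):

# 5000 ListBlobs calls per poll, and a 30-second interval means 2 polls per minute
echo $(( 5000 * 60 / 30 ))   # prints 10000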

In contrast, blobfuse is designed to issue Azure API calls only when the Linux kernel requests access.
For example, running the ls command triggers the ListBlobs API, and running the cat command triggers the GetBlob API.
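
For example (the file name below is only an illustration):

ls /mountdir             # directory listing: blobfuse issues a ListBlobs request
cat /mountdir/app01-ds   # reading a file: blobfuse issues a GetBlob request for that blob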

I'm not sure how Logstash accesses the files at the kernel level. If it performs the equivalent of an ls command (or similar) for each file stored in the target directory, then, as you say, ListBlobs would be executed as many times as with Filebeat.
I wanted to confirm this behavior, but since the file information in the directory could not be read in the first place, I was unable to verify it.