I have installed Elasticsearch with Kibana on localhost on Linux Mint. Everything works perfectly, but I can't seem to set it up to automatically ingest files from certain folders.
I want it to automatically ingest all files in the folders (path):
Elasticsearch does not ingest files from disk on its own; it is primarily an HTTP service that accepts JSON documents via REST APIs. However, Elastic offers a variety of tools that pair with Elasticsearch to index your data:
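To make that concrete, here is a minimal sketch of what "passing a JSON document via the REST API" looks like. The host, index name, and document fields are placeholder assumptions for illustration; the actual send (commented out) assumes a cluster listening on localhost:9200.

```python
# Sketch: Elasticsearch receives data as an HTTP POST with a JSON body.
# Host, index name ("notes"), and fields are illustrative assumptions.
import json
import urllib.request

def build_index_request(host: str, index: str, doc: dict) -> urllib.request.Request:
    """Build a POST request that would index `doc` into `index`."""
    url = f"{host}/{index}/_doc"
    body = json.dumps(doc).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_index_request(
    "http://localhost:9200", "notes", {"title": "hello", "body": "indexed via REST"}
)
print(req.full_url)  # http://localhost:9200/notes/_doc

# To actually send it (requires a running cluster):
# with urllib.request.urlopen(req) as resp:
#     print(resp.read().decode())
```

Every tool below is, at bottom, automating some version of this request on your behalf.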
Filebeat - if your files are "plain text" (extensions like .txt, .csv, .log, .xml, etc.), you can use Filebeat to read them from the filesystem and index them into Elasticsearch.
Network Drives Connector - if your files are binary documents (.pdf, .doc, .docx, .ppt, .xls, etc) and you are interested in using Elastic Workplace Search, the Network Drives Connector Package may be what you're looking for. Note that this feature is in Beta.
FsCrawler - this is a community-built, open-source project that has become quite popular for indexing documents into Elasticsearch. It can also index into Workplace Search.
Google Drive Connector - I notice that your example paths say Google Drive. If what you're really wanting is to index documents from Google Drive, check out the Workplace Search Google Drive Connector.
Language clients - if none of these are quite what you're looking for, there is a wide variety of language clients to help you index data into Elasticsearch. You can write your own code to traverse and transform your files the way you want, and then ship the resulting data to Elasticsearch for storage and search.
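As a hedged sketch of that last route: the snippet below walks a folder with the Python standard library, turns each plain-text file into a JSON document, and builds an NDJSON payload for Elasticsearch's `_bulk` API. The folder path, index name, and field names are placeholder assumptions; the final POST (commented out) assumes a cluster at localhost:9200. In real use you'd likely reach for the official `elasticsearch` Python client instead.

```python
# Sketch: traverse a folder and build an Elasticsearch _bulk payload.
# Folder path, index name, and field names are illustrative assumptions.
import json
from pathlib import Path

def build_bulk_payload(folder: str, index: str) -> str:
    """Return an NDJSON body for the Elasticsearch _bulk API."""
    lines = []
    for path in sorted(Path(folder).rglob("*.txt")):
        # _bulk alternates an action line with a document line.
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps({
            "filename": path.name,
            "content": path.read_text(encoding="utf-8"),
        }))
    return "\n".join(lines) + "\n"  # _bulk bodies must end with a newline

# payload = build_bulk_payload("/path/to/folder", "my-files")
# To actually send it (requires a running cluster):
# import urllib.request
# urllib.request.urlopen(urllib.request.Request(
#     "http://localhost:9200/_bulk", data=payload.encode("utf-8"),
#     headers={"Content-Type": "application/x-ndjson"}))
```

Note that this only covers plain-text files; binary formats like .pdf or .docx would need text extraction first, which is exactly what tools like FsCrawler handle for you.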