Localhost path to folder to automatically ingest existing files


I have installed Elasticsearch with Kibana on localhost on Linux Mint. Everything works perfectly, but I can't seem to set it to automatically ingest files from certain folders.

I want it to automatically ingest all files in the folders (path):

/media/marius/2-TB-Volume/Google Drive Linux/Bug Map
/media/marius/2-TB-Volume/Google Drive Linux/Log Map

Please specify step-by-step as for a beginner (this is my first day using Elasticsearch).


  1. Which file should I edit?
  2. How do I add the path to my folders?
  3. Can Elasticsearch import the files in those folders if the path points to a second hard drive?

Thanks in advance!

Hi @marius03 ,

Elasticsearch does not ingest files from disk on its own; it is primarily an HTTP service that accepts JSON documents via REST APIs. However, Elastic offers a variety of tools that pair with Elasticsearch to index your data:

  • Filebeat - if your files are "plain text" (extensions like .txt, .csv, .log, .xml, etc.), you can use Filebeat to read them from the filesystem and index them into Elasticsearch
  • Network Drives Connector - if your files are binary documents (.pdf, .doc, .docx, .ppt, .xls, etc) and you are interested in using Elastic Workplace Search, the Network Drives Connector Package may be what you're looking for. Note that this feature is in Beta.
  • FsCrawler - this is a community-built, open-source project that has become quite popular for indexing documents into Elasticsearch. It can also index into Workplace Search.
  • Google Drive Connector - I notice that your example paths say Google Drive. If what you're really wanting is to index documents from Google Drive, check out the Workplace Search Google Drive Connector.
  • Language clients - if none of these are quite what you're looking for, there are a wide variety of language clients to help you index data into Elasticsearch. You can write your own code to traverse and transform your files however you want, and then ship the resulting data to Elasticsearch for storage and search.
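For the Filebeat route, here's a rough sketch of what the relevant part of `filebeat.yml` could look like, using the two folders from your post. This is a minimal example, not a complete config — the `id` value is arbitrary, and it assumes Elasticsearch is reachable at `http://localhost:9200` with no authentication (adjust `hosts` and add credentials as needed). As long as the user running Filebeat has read permission on the mount point, it does not matter that the paths live on a second drive.

```yaml
filebeat.inputs:
  - type: filestream
    id: bug-and-log-maps        # arbitrary identifier for this input
    paths:
      - "/media/marius/2-TB-Volume/Google Drive Linux/Bug Map/*"
      - "/media/marius/2-TB-Volume/Google Drive Linux/Log Map/*"

output.elasticsearch:
  hosts: ["http://localhost:9200"]
```

Filebeat tails each matching file line by line and ships every line as its own document, so this fits log-style text files rather than binary documents.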
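To illustrate the language-client option, here is a minimal Python sketch of the "traverse and transform" half: it walks a folder and turns each matching text file into a JSON-ready dict. The function name, index name, and field names are my own choices for the example, not anything prescribed by Elasticsearch.

```python
from pathlib import Path


def build_docs(folder: str, patterns=("*.txt", "*.log", "*.csv")) -> list[dict]:
    """Turn each file in `folder` matching one of `patterns` into an
    Elasticsearch-ready document (a plain dict)."""
    docs = []
    for pattern in patterns:
        for path in sorted(Path(folder).glob(pattern)):
            docs.append({"filename": path.name, "content": path.read_text()})
    return docs
```

You would then ship each dict with the official Python client (`pip install elasticsearch`), e.g. `es.index(index="my-files", document=doc)` for each `doc` in `build_docs("/media/marius/2-TB-Volume/Google Drive Linux/Log Map")` — again assuming Elasticsearch is running at `http://localhost:9200`.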

@marius03 would love to know which strategy you are employing to ingest your documents here!

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.