Strategy for removing invalid documents

(timpamungkas) #1

I use Elasticsearch to index a filesystem and build a search engine for all PDF documents (around 3 million of them).
My current workflow:

  1. I have a small Java program that runs every day at midnight, crawling the filesystem for all PDF files
  2. At the beginning of the run, the Java program deletes all documents under my-index
  3. For each PDF file found, I save a reference to it in Elasticsearch, under my-index. The JSON is simple, containing only path_to_file, filename, last_modified_date, size_kb
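The per-file JSON above can be sketched in plain Java. This is a minimal sketch, not the poster's actual crawler: the class and method names are illustrative, the JSON is hand-rolled for brevity (a real crawler would use a JSON library), and the Elasticsearch client call itself is omitted.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.BasicFileAttributes;

public class PdfDoc {
    // Build the simple JSON body described above for one PDF file:
    // path_to_file, filename, last_modified_date, size_kb.
    static String buildDoc(Path file) throws IOException {
        BasicFileAttributes attrs = Files.readAttributes(file, BasicFileAttributes.class);
        return "{"
                + "\"path_to_file\":\"" + file.toAbsolutePath() + "\","
                + "\"filename\":\"" + file.getFileName() + "\","
                + "\"last_modified_date\":\"" + attrs.lastModifiedTime() + "\","
                + "\"size_kb\":" + (attrs.size() / 1024)
                + "}";
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("example", ".pdf");
        Files.write(tmp, new byte[2048]); // 2 KB dummy file
        System.out.println(buildDoc(tmp));
        Files.delete(tmp);
    }
}
```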

The data keeps changing. Sometimes a PDF file is renamed or deleted.

The drawback of my approach: the crawler takes almost three hours to complete, so within that interval some PDFs cannot be found in ES. I'd like to keep the documents and delete only the PDFs that no longer exist on the filesystem (because they were renamed or deleted).

This is my strategy. Is this good practice? Please advise:

  1. Create a new index, my-another-index
  2. Create a new Java program that runs at midnight. This one does not delete data from my-another-index and keeps my-index intact
  3. Crawl the filesystem for PDF files and put references into my-another-index
  4. By the end of the crawl, my-another-index will have the updated contents
  5. Handle deleted PDFs: compare my-index with my-another-index, and remove all documents that do not exist in my-another-index
  6. Handle new PDFs: modify the original crawler so that it does not delete documents from my-index and only crawls for new files.
  7. The document id is the same in both indexes, using the hashCode of the file path.
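One caveat on step 7: Java's String.hashCode is only 32 bits, so across ~3 million paths a few collisions are statistically likely, and two different files would then overwrite each other's document. A sketch of a collision-resistant alternative using a SHA-256 hex digest (the class and method names are illustrative, not from the original post):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class DocId {
    // Derive a stable document id from a file path. The digest is
    // deterministic, so the same path always yields the same id in
    // both indexes, as step 7 requires.
    static String idFor(String path) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] digest = md.digest(path.getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder(digest.length * 2);
            for (byte b : digest) sb.append(String.format("%02x", b));
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-256 is always available
        }
    }

    public static void main(String[] args) {
        System.out.println(idFor("/archive/reports/q1.pdf"));
    }
}
```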

My questions are:

  1. Is this a correct approach?
  2. What is an efficient way to compare the two indexes, basically subtracting my-another-index from my-index? The result is the set of documents to delete from my-index.
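Once the ids from both indexes have been collected (for example by scrolling each index and fetching only _id), the subtraction itself is a plain set difference. A minimal in-memory sketch under that assumption; the scroll/fetch step is omitted, and for very large indexes the comparison would have to be done in batches:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class IndexDiff {
    // Given the document ids present in my-index and in my-another-index,
    // return the ids that exist only in my-index -- the stale documents
    // to delete.
    static Set<String> toDelete(Set<String> myIndexIds, Set<String> anotherIndexIds) {
        Set<String> stale = new HashSet<>(myIndexIds);
        stale.removeAll(anotherIndexIds);
        return stale;
    }

    public static void main(String[] args) {
        Set<String> old = new HashSet<>(Arrays.asList("a", "b", "c"));
        Set<String> fresh = new HashSet<>(Arrays.asList("b", "c", "d"));
        System.out.println(toDelete(old, fresh)); // "a" exists only in my-index
    }
}
```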


(David Pilato) #2

This looks similar to the FSCrawler project, for which I'm using dates, but that does not always work very well.

I'm now considering other implementations, such as using an rsync-like method, or using a WatchService implementation.
Or using something similar to what Filebeat is doing, and maybe rewriting some crawler agents in Golang...
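The WatchService idea mentioned above can be sketched with the JDK's java.nio.file API. This is a minimal, illustrative sketch, not FSCrawler code: a real crawler would also register subdirectories recursively, handle overflow events, and loop instead of polling once.

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.TimeUnit;

public class PdfWatcher {
    // Drain one batch of filesystem events for dir, returning descriptions
    // of the PDF changes seen, or an empty list if nothing happened in time.
    static List<String> pollOnce(WatchService watcher, Path dir, long timeoutMs)
            throws InterruptedException {
        List<String> changes = new ArrayList<>();
        WatchKey key = watcher.poll(timeoutMs, TimeUnit.MILLISECONDS);
        if (key == null) return changes; // timed out, no events
        for (WatchEvent<?> event : key.pollEvents()) {
            if (event.kind() == StandardWatchEventKinds.OVERFLOW) continue;
            Path changed = dir.resolve((Path) event.context());
            if (changed.toString().endsWith(".pdf")) {
                // Here the crawler would index, update, or delete the
                // matching Elasticsearch document instead of recording text.
                changes.add(event.kind() + ": " + changed);
            }
        }
        key.reset(); // re-arm the key for further events
        return changes;
    }

    public static void main(String[] args) throws IOException, InterruptedException {
        Path dir = Paths.get(args.length > 0 ? args[0] : ".");
        try (WatchService watcher = FileSystems.getDefault().newWatchService()) {
            dir.register(watcher,
                    StandardWatchEventKinds.ENTRY_CREATE,
                    StandardWatchEventKinds.ENTRY_DELETE,
                    StandardWatchEventKinds.ENTRY_MODIFY);
            pollOnce(watcher, dir, 1000).forEach(System.out::println);
        }
    }
}
```

The appeal over a nightly full crawl is that renames and deletes are reported as they happen, so the index never lags by hours.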

My 2 cents