Strategy for removing invalid documents

I use Elasticsearch to index a filesystem and build a search engine for all our PDF documents (around 3 million of them).
My current workflow:

  1. I created a small Java program that runs every day at midnight, crawling the filesystem for all PDF files
  2. At the beginning of each run, the Java program deletes all documents under my-index
  3. For each PDF file found, I save a reference to it in Elasticsearch, under my-index. The JSON is simple and only contains path_to_file, filename, last_modified_date, size_kb (a minimal sketch of this step follows the list)
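
For reference, the indexing step looks roughly like this. It is a minimal sketch, assuming the Elasticsearch 7.x high-level REST client; the root directory /data/pdfs and the localhost connection are placeholders:

```java
import org.apache.http.HttpHost;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Map;
import java.util.stream.Stream;

public class PdfCrawler {
    public static void main(String[] args) throws Exception {
        try (RestHighLevelClient client = new RestHighLevelClient(
                RestClient.builder(new HttpHost("localhost", 9200, "http")));
             Stream<Path> paths = Files.walk(Paths.get("/data/pdfs"))) { // placeholder root

            paths.filter(p -> p.toString().toLowerCase().endsWith(".pdf"))
                 .forEach(p -> indexPdf(client, p));
        }
    }

    static void indexPdf(RestHighLevelClient client, Path p) {
        try {
            // One document per file, with the four fields from the post.
            // last_modified_date is stored as epoch millis, which the
            // default date mapping accepts.
            client.index(new IndexRequest("my-index").source(Map.of(
                            "path_to_file", p.toString(),
                            "filename", p.getFileName().toString(),
                            "last_modified_date", Files.getLastModifiedTime(p).toMillis(),
                            "size_kb", Files.size(p) / 1024)),
                    RequestOptions.DEFAULT);
        } catch (Exception e) {
            System.err.println("Failed to index " + p + ": " + e.getMessage());
        }
    }
}
```

For 3 million files you would batch these with the Bulk API (or a BulkProcessor) rather than sending one request per file.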

The data keeps changing: sometimes a PDF file is renamed or deleted.

The drawback of my approach: the crawler takes almost three hours to complete, so within that time interval some PDFs cannot be found in ES. I'd like to keep the documents and only delete the PDFs that no longer exist on the filesystem (because they were renamed or deleted).

This is my strategy. Is this good practice? Please advise:

  1. Create a new index, my-another-index
  2. Create a new Java program that runs at midnight. This one does not start by deleting data, and it keeps my-index intact
  3. Crawl the filesystem for PDF files and put a reference to each one into my-another-index
  4. By the end of the crawl, my-another-index has the up-to-date contents
  5. Handle deleted PDFs: compare my-index with my-another-index and remove all documents that do not exist in my-another-index
  6. Handle new PDFs: modify the original crawler so it does not delete documents from my-index and only crawls for new files
  7. The document ID is the same in both indexes: the hashCode of the file path (see the sketch after this list)
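
Step 7 derives the document ID from the file path. Below is a sketch of the hashCode approach from the list plus, as a hedge, a SHA-256 variant: String.hashCode() is only 32 bits, so with ~3 million paths a collision is plausible, while a digest of the path avoids that risk:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

public class DocIds {

    // Step 7 as written: the 32-bit hashCode of the path.
    static String hashCodeId(String path) {
        return String.valueOf(path.hashCode());
    }

    // Collision-safe alternative: URL-safe Base64 of the path's SHA-256.
    static String sha256Id(String path) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(path.getBytes(StandardCharsets.UTF_8));
        return Base64.getUrlEncoder().withoutPadding().encodeToString(digest);
    }
}
```

Either way, indexing the same path into both indexes yields the same ID, which is what makes the comparison in step 5 possible.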

My questions are:

  1. Is this a correct approach?
  2. What is an efficient way to compare the two indexes, essentially computing my-index minus my-another-index? The result is the set of documents to delete from my-index (a sketch of what I mean follows)
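
For concreteness, this is the kind of subtraction I have in mind: scroll over my-index fetching only IDs, probe my-another-index with mget, and bulk-delete the IDs that are missing. A sketch, assuming the 7.x high-level REST client and an existing `client`:

```java
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.delete.DeleteRequest;
import org.elasticsearch.action.get.MultiGetItemResponse;
import org.elasticsearch.action.get.MultiGetRequest;
import org.elasticsearch.action.get.MultiGetResponse;
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.action.search.SearchScrollRequest;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.search.builder.SearchSourceBuilder;
import org.elasticsearch.search.fetch.subphase.FetchSourceContext;

public class IndexSubtractor {

    static void deleteMissing(RestHighLevelClient client) throws Exception {
        // Scroll over my-index in batches of 1000, IDs only (no _source).
        SearchRequest search = new SearchRequest("my-index")
                .source(new SearchSourceBuilder().size(1000).fetchSource(false))
                .scroll(TimeValue.timeValueMinutes(2));
        SearchResponse resp = client.search(search, RequestOptions.DEFAULT);

        while (resp.getHits().getHits().length > 0) {
            // Probe my-another-index for the same IDs, existence check only.
            MultiGetRequest mget = new MultiGetRequest();
            for (SearchHit hit : resp.getHits()) {
                mget.add(new MultiGetRequest.Item("my-another-index", hit.getId())
                        .fetchSourceContext(FetchSourceContext.DO_NOT_FETCH_SOURCE));
            }
            MultiGetResponse probes = client.mget(mget, RequestOptions.DEFAULT);

            // Anything absent from my-another-index gets deleted from my-index.
            BulkRequest bulk = new BulkRequest();
            for (MultiGetItemResponse item : probes.getResponses()) {
                if (item.getResponse() != null && !item.getResponse().isExists()) {
                    bulk.add(new DeleteRequest("my-index", item.getId()));
                }
            }
            if (bulk.numberOfActions() > 0) {
                client.bulk(bulk, RequestOptions.DEFAULT);
            }

            // Fetch the next batch.
            resp = client.scroll(new SearchScrollRequest(resp.getScrollId())
                    .scroll(TimeValue.timeValueMinutes(2)), RequestOptions.DEFAULT);
        }
    }
}
```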

Thanks

This looks similar to the FSCrawler project, where I'm using dates for this, but it does not always work very well.

I'm now considering other implementations, such as an rsync-style method (https://github.com/dadoonet/fscrawler/issues/377) or a WatchService implementation (https://github.com/dadoonet/fscrawler/issues/399).
Or doing something similar to what Filebeat does, and maybe rewriting some crawler agents in Golang...
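
The WatchService idea maps directly onto java.nio.file. A minimal sketch, watching a single directory only (WatchService is not recursive, so a real crawler would register every subdirectory; the root path and the index/delete calls are placeholders):

```java
import java.io.IOException;
import java.nio.file.FileSystems;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.WatchEvent;
import java.nio.file.WatchKey;
import java.nio.file.WatchService;

import static java.nio.file.StandardWatchEventKinds.ENTRY_CREATE;
import static java.nio.file.StandardWatchEventKinds.ENTRY_DELETE;
import static java.nio.file.StandardWatchEventKinds.ENTRY_MODIFY;
import static java.nio.file.StandardWatchEventKinds.OVERFLOW;

public class PdfWatcher {
    public static void main(String[] args) throws IOException, InterruptedException {
        Path dir = Paths.get("/data/pdfs"); // placeholder root
        WatchService watcher = FileSystems.getDefault().newWatchService();
        dir.register(watcher, ENTRY_CREATE, ENTRY_DELETE, ENTRY_MODIFY);

        while (true) {
            WatchKey key = watcher.take(); // blocks until events arrive
            for (WatchEvent<?> event : key.pollEvents()) {
                if (event.kind() == OVERFLOW) {
                    continue; // overflow events carry no path
                }
                Path changed = dir.resolve((Path) event.context());
                if (!changed.toString().toLowerCase().endsWith(".pdf")) {
                    continue;
                }
                if (event.kind() == ENTRY_DELETE) {
                    // remove the document for `changed` from the index
                } else {
                    // index or re-index the document for `changed`
                }
            }
            if (!key.reset()) {
                break; // the watched directory is no longer accessible
            }
        }
    }
}
```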

My 2 cents
