Using a dictionary in Elasticsearch tokenization to filter out tokens?

I'm talking about functionality similar to the documentation here:

https://www.elastic.co/guide/en/elasticsearch/reference/current/analysis-hunspell-tokenfilter.html

My question: is it possible to use a dictionary, whether Hunspell or a custom one, to filter out tokens that are not valid English words (similar to the nltk.is_english_word(word)-style check in the Python nltk library)? Even though the page I linked to calls it a "filter", it doesn't filter in the sense I mean: it stems tokens but leaves in words that aren't in the dictionary.

Thanks for any help.

Update for anyone who happens upon this question: Elasticsearch has a built-in option for this, the keep words token filter:

https://www.elastic.co/guide/en/elasticsearch/reference/current/analysis-keep-words-tokenfilter.html

You can take any set of words (say, from an open-source dictionary online), put them in a file with one word per line, and point the filter's keep_words_path setting at that file; only tokens that appear in the list survive analysis. Then you're cooking with gas. For example:
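
Here is a minimal sketch of index settings wired up this way. The index name, analyzer name, filter name, and word-list path are placeholders I made up; the path is relative to the Elasticsearch config directory, and keep_words_case: true lowercases the keep words so they match the lowercase filter earlier in the chain:

    PUT /english_words_only
    {
      "settings": {
        "analysis": {
          "filter": {
            "english_dictionary_keep": {
              "type": "keep",
              "keep_words_path": "analysis/english_words.txt",
              "keep_words_case": true
            }
          },
          "analyzer": {
            "english_filtered": {
              "tokenizer": "standard",
              "filter": ["lowercase", "english_dictionary_keep"]
            }
          }
        }
      }
    }

You can sanity-check it with the _analyze API. Assuming "the" and "cat" are in english_words.txt and "zzzqx" is not, only the first two come back as tokens:

    POST /english_words_only/_analyze
    {
      "analyzer": "english_filtered",
      "text": "the zzzqx cat"
    }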