N-gram Tokenizer by words

In Elasticsearch, the n-gram tokenizer splits a string letter by letter.
For example, "this is a dog" is tokenized into ['t', 'th', 'thi', 'this', ...etc].
Is there a way to build n-grams from words instead of letters?
For example, "this is a dog" would become ['this is', 'is a', 'a dog'].
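Outside Elasticsearch, the word-level bigrams I am after can be sketched in plain Python (`word_ngrams` is just a hypothetical helper to show the expected output, not an Elasticsearch API):

```python
def word_ngrams(text, n=2):
    """Return a list of n-word groups from whitespace-split text."""
    words = text.split()
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

print(word_ngrams("this is a dog"))  # ['this is', 'is a', 'a dog']
```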

Welcome!

Have a look at the shingle token filter, which produces exactly those word n-grams: https://www.elastic.co/guide/en/elasticsearch/reference/current/analysis-shingle-tokenfilter.html
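As a minimal sketch (the index name `my_index` and the filter/analyzer names are placeholders), a custom analyzer that emits only two-word shingles could look like this:

```json
PUT /my_index
{
  "settings": {
    "analysis": {
      "filter": {
        "two_word_shingles": {
          "type": "shingle",
          "min_shingle_size": 2,
          "max_shingle_size": 2,
          "output_unigrams": false
        }
      },
      "analyzer": {
        "shingle_analyzer": {
          "tokenizer": "standard",
          "filter": ["lowercase", "two_word_shingles"]
        }
      }
    }
  }
}
```

With `output_unigrams` set to `false`, analyzing "this is a dog" yields the tokens "this is", "is a", and "a dog"; leave it at the default `true` if you also want the single words.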

Thank you so much.
That's exactly what I need.