The ngram_range parameter of BERTopic outputs n-grams whose words are far apart within the documents
After setting ngram_range=(2, 2), the trained BERTopic model generates topics with 2-gram phrases such as Topic_1: {"Modem Router", "Network Setup", ...}, but the individual words of each 2-gram are not adjacent to each other within the documents; they are far apart from each other. It seems that the BERTopic model is not respecting the 2-gram constraint at all.

Is there any way to make sure that the individual words in the 2-gram phrases of each topic actually appear next to each other within the related documents? I don't want BERTopic to consider "Modem Router" a 2-gram if there is no sentence in the whole document set where "Modem" and "Router" are adjacent to each other.
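For reference, here is a minimal sketch of my setup, with the n-gram setting passed through a scikit-learn CountVectorizer (BERTopic also accepts an n_gram_range argument directly); the toy documents just stand in for my real corpus:

```python
from bertopic import BERTopic
from sklearn.feature_extraction.text import CountVectorizer

# Toy documents standing in for my real corpus
docs = [
    "I reset the modem router before starting the network setup",
    "the router firmware update failed after the network setup",
]

# With ngram_range=(2, 2), CountVectorizer on its own emits only
# pairs of adjacent tokens taken from within a single document
vectorizer = CountVectorizer(ngram_range=(2, 2))
print(vectorizer.fit(docs).get_feature_names_out())
# -> ['after the' 'before starting' 'failed after' 'firmware update'
#     'modem router' ...]

topic_model = BERTopic(vectorizer_model=vectorizer)
# topics, probs = topic_model.fit_transform(my_full_corpus)  # my real data
```

Every bigram in the vectorizer output above is two adjacent words from one document, which is why I expected the per-topic representations to come with the same guarantee.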