from nltk.tokenize import RegexpTokenizer
tokenizer = RegexpTokenizer(r'\w+')
dataset['text'] = dataset['text'].apply(lambda word_list: [tokenizer.tokenize(word) for word in word_list])
dataset['text'].head()
Running the above code raises the following error:

TypeError: expected string or bytes-like object, got 'list'
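The message means that tokenizer.tokenize() received a list instead of a string: RegexpTokenizer.tokenize() runs a regex over its argument, so whatever is passed to it must be a str. That happens here when the elements inside each row are themselves lists, i.e. the text was already split in an earlier step. Below is a minimal sketch of the two likely fixes, using a hypothetical two-row DataFrame (the sample strings and the 'tokens' column names are illustrative, not from the original data):

import pandas as pd
from nltk.tokenize import RegexpTokenizer

tokenizer = RegexpTokenizer(r'\w+')

# Hypothetical sample data: 'text' starts out as raw strings.
dataset = pd.DataFrame({'text': ["Hello, world!", "NLTK makes this easy."]})

# Case 1: each row is a plain string -- tokenize the row directly.
dataset['tokens'] = dataset['text'].apply(tokenizer.tokenize)

# Case 2: each row is already a list of words -- join it back into one
# string first, so tokenize() receives a str rather than a list.
presplit = dataset['text'].str.split()  # simulate a pre-split column
dataset['tokens_from_list'] = presplit.apply(
    lambda word_list: tokenizer.tokenize(' '.join(word_list))
)

dataset['tokens'].head()

Either way, the key point is that tokenize() must be handed one string at a time; looping over a row's list and tokenizing each element (as in the original lambda) only works when every element is itself a string, not a nested list.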