Getting an error while applying RegexpTokenizer

from nltk.tokenize import RegexpTokenizer

tokenizer = RegexpTokenizer(r'\w+')  # \w+ matches word characters; r'w+' would match only runs of the letter "w"
dataset['text'] = dataset['text'].apply(tokenizer.tokenize)
dataset['text'].head()

Running this, I get the following error: [error screenshot]

What is the solution for this?

I was expecting this output: [screenshot of the expected output]
