CountVectorizer fails with bad words


I am using a pandas DataFrame and I am trying to count the occurrences of words in a specific column that contains strings. The code runs fine until it reaches certain rows, where it fails with the following error:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-36-af8291199984> in <module>
      6 
      7 cv = CountVectorizer(stop_words=None)
----> 8 cv_fit=cv.fit_transform(texts)
      9 word_list = cv.get_feature_names();
     10 count_list = cv_fit.toarray().sum(axis=0)

~/anaconda3/envs/turiCreate/lib/python3.8/site-packages/sklearn/feature_extraction/text.py in fit_transform(self, raw_documents, y)
   1196         max_features = self.max_features
   1197 
-> 1198         vocabulary, X = self._count_vocab(raw_documents,
   1199                                           self.fixed_vocabulary_)
   1200 

~/anaconda3/envs/turiCreate/lib/python3.8/site-packages/sklearn/feature_extraction/text.py in _count_vocab(self, raw_documents, fixed_vocab)
   1127             vocabulary = dict(vocabulary)
   1128             if not vocabulary:
-> 1129                 raise ValueError("empty vocabulary; perhaps the documents only"
   1130                                  " contain stop words")
   1131 

ValueError: empty vocabulary; perhaps the documents only contain stop words

Here is a minimal example that reproduces the problem with one such string:

import pandas as pd
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

texts = [":)"]

cv = CountVectorizer(stop_words=None)
cv_fit = cv.fit_transform(texts)
word_list = cv.get_feature_names()
count_list = cv_fit.toarray().sum(axis=0)

print(word_list)
print(dict(zip(word_list, count_list)))

How can I make CountVectorizer handle input like this?

Best answer:

The issue you're running into is the default tokenization pattern, token_pattern=r'(?u)\b\w\w+\b', which only matches tokens of two or more word characters; a string like ":)" therefore produces no tokens at all, leaving the vocabulary empty. You can adapt the pattern to your task:

import pandas as pd
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

texts = ["hello :)"]

# extend the default pattern so runs of ':' and ')' also count as tokens
cv = CountVectorizer(stop_words=None, token_pattern=r'(?u)\b\w\w+\b|[:)]+')
cv_fit = cv.fit_transform(texts)
word_list = cv.get_feature_names()
count_list = cv_fit.toarray().sum(axis=0)

print(word_list)
print(dict(zip(word_list, count_list)))

which prints:

[':)', 'hello']
{':)': 1, 'hello': 1}
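A version note, since this answer predates scikit-learn 1.0: get_feature_names() was deprecated in 1.0 and removed in 1.2 in favor of get_feature_names_out(). On a recent release the same example would look like this (a minimal sketch of the renamed call, not part of the original answer):

from sklearn.feature_extraction.text import CountVectorizer

texts = ["hello :)"]

cv = CountVectorizer(stop_words=None, token_pattern=r'(?u)\b\w\w+\b|[:)]+')
cv_fit = cv.fit_transform(texts)

# get_feature_names_out() replaces get_feature_names() in scikit-learn >= 1.0
word_list = cv.get_feature_names_out()
# cast the numpy counts to plain ints for readable printing
count_list = [int(c) for c in cv_fit.toarray().sum(axis=0)]

print(dict(zip(word_list, count_list)))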

If emojis and emoticons are your main concern, a more robust solution towards your goals ("industrial-strength", as they say) could be spaCy with the spacymoji extension:

import spacy
from spacymoji import Emoji
from collections import Counter

nlp = spacy.load('en_core_web_sm')

# spaCy v2 / spacymoji API: the component instance is passed directly;
# in spaCy v3 you would register it with nlp.add_pipe("emoji", first=True)
emoji = Emoji(nlp)
nlp.add_pipe(emoji, first=True)

# note: nlp.tokenizer(...) runs only the tokenizer, which already keeps
# emoticons like ":)" as single tokens; the emoji pipe runs on full nlp(...) calls
tokens = [tok for tok in nlp.tokenizer("Hi :) ")]
counts = Counter(tokens)
print(counts)

which prints (the empty-looking key is the trailing-whitespace token):

Counter({Hi: 1, :): 1, : 1})
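If you want spaCy's emoticon-aware tokenization but still need CountVectorizer to do the counting over a whole DataFrame column, you can also wire the two together through the tokenizer parameter. This is a sketch of that combination (my own wiring, not from the answer; it assumes scikit-learn >= 1.0 for get_feature_names_out()):

import spacy
from sklearn.feature_extraction.text import CountVectorizer

nlp = spacy.load('en_core_web_sm')

def spacy_tokenize(text):
    # spaCy's tokenizer keeps common emoticons such as ":)" as single tokens;
    # drop pure-whitespace tokens so they don't enter the vocabulary
    return [tok.text for tok in nlp.tokenizer(text) if not tok.is_space]

# token_pattern=None signals that the default pattern is intentionally unused
cv = CountVectorizer(tokenizer=spacy_tokenize, token_pattern=None)
cv_fit = cv.fit_transform(["hello :)", ":)"])

print(dict(zip(cv.get_feature_names_out(),
               (int(c) for c in cv_fit.toarray().sum(axis=0)))))
# {':)': 2, 'hello': 1}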