One Hot Encoding for words from a text corpus


How can I create a one-hot encoding of words using TensorFlow, with each word represented by a sparse vector of vocabulary size whose entry at that word's index is set to 1?

Something like this:

oneHotEncoding(words = ['a','b','c','d']) -> [[1,0,0,0],[0,1,0,0],[0,0,1,0],[0,0,0,1]] ?
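For reference, the desired behaviour can be sketched in plain Python (the function name `oneHotEncoding` is taken from the example above, not from any library):

```python
def oneHotEncoding(words):
    # Build a word -> index vocabulary, then emit one vocab-sized
    # vector per word with a 1 at that word's index and 0 elsewhere.
    index = {w: i for i, w in enumerate(words)}
    return [[1 if index[w] == j else 0 for j in range(len(words))]
            for w in words]

oneHotEncoding(['a', 'b', 'c', 'd'])
# -> [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
```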


Scikit-learn's one-hot encoder takes an integer array (http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html). Building on your example, you could use a dictionary to map words to integers and go from there:

import numpy as np
from sklearn.preprocessing import OneHotEncoder

# Map each word to an integer index
wdict = {'a': 0, 'b': 1, 'c': 2, 'd': 3}
# In Python 3, dict.values() is a view, so wrap it in list() first
dictarr = np.asarray(list(wdict.values())).reshape(-1, 1)

enc = OneHotEncoder()
enc.fit(dictarr)
enc.transform([[2]]).toarray()  # one-hot vector for 'c'

which yields

array([[ 0.,  0.,  1.,  0.]])
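Since the question asks for TensorFlow specifically: the same result can be sketched with `tf.one_hot`, which turns integer indices into one-hot rows (a sketch assuming TensorFlow 2.x; the `vocab` dictionary here mirrors `wdict` above):

```python
import tensorflow as tf

words = ['a', 'b', 'c', 'd']
vocab = {w: i for i, w in enumerate(words)}  # word -> integer index
indices = [vocab[w] for w in words]

# One one-hot row per word, each of width `depth` (the vocabulary size)
onehot = tf.one_hot(indices, depth=len(vocab))
```

`onehot` is a dense `(4, 4)` float tensor whose rows match the matrix in the question.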