How to apply t-SNE on Word2Vec Model


I am working on sentiment analysis of Amazon Food Reviews, and I am trying to apply Word2Vec to the reviews and visualise the result using t-SNE.

I was easily able to visualise the bag-of-words representation of the same data using the following code:

    import numpy as np
    import pandas as pd
    import seaborn as sns
    import matplotlib.pyplot as plt
    from sklearn.manifold import TSNE

    # take the first 2000 reviews and densify the sparse bag-of-words matrix
    data_2000 = final_counts[0:2000, :]
    top_2000 = data_2000.toarray()
    labels = final['Score']
    labels_2000 = labels[0:2000]

    model = TSNE(n_components=2, random_state=0)
    tsne_data = model.fit_transform(top_2000)

    # create a new data frame which helps us in plotting the result
    tsne_data = np.vstack((tsne_data.T, labels_2000)).T
    tsne_df = pd.DataFrame(data=tsne_data, columns=("Dim_1", "Dim_2", "label"))

    # plot the result of t-SNE
    sns.FacetGrid(tsne_df, hue="label", height=6).map(plt.scatter, 'Dim_1', 'Dim_2').add_legend()
    plt.show()

However, the same code doesn't work when I feed it w2v_model, which is of type gensim.models.word2vec.Word2Vec.

I obtained the model using the following code:

    import gensim

    # size= is the gensim 3.x parameter name; in gensim 4+ it is vector_size
    w2v_model = gensim.models.Word2Vec(list_of_sent, min_count=5, size=50, workers=4)
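
For context, the reason the bag-of-words snippet fails here is that TSNE.fit_transform expects a plain 2-D numeric array, whereas w2v_model is a model object, so the learned word vectors have to be extracted first. A minimal sketch, assuming gensim 3.x (where wv.index2word and per-word indexing are available); note that the Score labels belong to whole reviews, so a word-level plot has no sentiment colouring:

    import numpy as np
    from sklearn.manifold import TSNE

    # pull the learned vectors out of the trained model (gensim 3.x API)
    words = w2v_model.wv.index2word[:2000]                 # 2000 most frequent words
    word_vectors = np.array([w2v_model.wv[w] for w in words])

    # t-SNE now receives an ordinary (n_words, 50) array
    tsne_words = TSNE(n_components=2, random_state=0).fit_transform(word_vectors)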

There are 2 solutions below.

Answer (0 votes):

This answer applies the same t-SNE plotting idea to pre-trained GloVe vectors loaded through torchtext; the same approach works for any set of words paired with their vectors, so it can be adapted to the vectors of a trained Word2Vec model.
from torchtext import vocab
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import numpy as np

glove = vocab.GloVe(name='6B', dim=100)

print(f'There are {len(glove.itos)} words in the vocabulary')

def tsne_plot(glove, n=200, n_components=2):
    """Fit a t-SNE model on the first n GloVe vectors and plot them."""
    labels = []
    tokens = []

    for word, tensor_value in zip(glove.itos[:n], glove.vectors[:n]):
        tokens.append(tensor_value.numpy())
        labels.append(word)

    tsne_model = TSNE(perplexity=40, n_components=n_components, init='pca', n_iter=2500, random_state=23)
    new_values = tsne_model.fit_transform(tokens)
    fig = plt.figure(figsize=(16, 16))
    if n_components==3:
        ax = fig.add_subplot(111, projection='3d')
        ax.scatter(new_values[:,0],new_values[:,1],new_values[:,2],c="r",marker="o")
        for i in range(len(new_values)):
            ax.text(new_values[i][0],new_values[i][1],new_values[i][2],labels[i])
    else:
        plt.scatter(new_values[:,0],new_values[:,1])
        for i in range(len(new_values)):
            plt.annotate(labels[i],
                        xy=(new_values[i][0],new_values[i][1]),
                        xytext=(5, 2),
                        textcoords='offset points',
                        ha='right',
                        va='bottom')
    return new_values,labels
new_values,labels = tsne_plot(glove,n_components=2)
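
For a 3-D view, the same function can be called with n_components=3; a small usage note (the n=150 value is just an illustrative choice):

    # exercises the 3-D branch of tsne_plot
    new_values_3d, labels_3d = tsne_plot(glove, n=150, n_components=3)
    plt.show()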
Answer (6 votes):

You need to extract all the word embeddings after the model is trained. I would recommend extracting them into a pd.DataFrame in the following way:

import pandas as pd

# gensim 3.x API; in gensim 4+ use list(w2v_model.wv.key_to_index) instead of wv.vocab
all_vocab = list(w2v_model.wv.vocab.keys())
data_dict = {word: w2v_model.wv[word] for word in all_vocab}
result = pd.DataFrame(data=data_dict).transpose()

If you then want to perform dimensionality reduction in scikit-learn, simply access the array of embeddings via result.values.
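
A short sketch of that last step (the 2000-row cap and the plain scatter are illustrative choices, not part of the answer):

    from sklearn.manifold import TSNE
    import matplotlib.pyplot as plt

    # run t-SNE on the embedding matrix; each row of `result` is one word's vector
    subset = result.values[:2000]          # cap the rows to keep the run time reasonable
    coords = TSNE(n_components=2, random_state=0).fit_transform(subset)

    plt.figure(figsize=(8, 8))
    plt.scatter(coords[:, 0], coords[:, 1], s=5)
    plt.show()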