Usage of NLTK SentiWordNet with Python


I am doing sentiment analysis on Twitter data using Python and NLTK. I need a dictionary that contains the positive and negative polarities of words. I have read a lot about SentiWordNet, but when I use it in my project the results are slow and not very accurate, so I suspect I am not using it correctly. Can anyone tell me the correct way to use it? Here are the steps I have done so far:

  1. Tokenization of the tweets
  2. POS tagging of the tokens
  3. Passing each tagged token to SentiWordNet

I am using the NLTK package for tokenization and tagging. A part of my code is below:

import nltk
from nltk.corpus import sentiwordnet as swn

pscore, nscore = 0.0, 0.0

tokens = nltk.word_tokenize(row)  # row is one line (one tweet) of the file the tweets are saved in
tagged = nltk.pos_tag(tokens)     # Penn Treebank POS tagging

for word, tag in tagged:
    # Map the Penn Treebank tag to the POS code SentiWordNet expects.
    if tag.startswith('NN'):
        pos = 'n'
    elif tag.startswith('VB'):
        pos = 'v'
    elif tag.startswith('JJ'):
        pos = 'a'
    elif tag.startswith('RB'):
        pos = 'r'
    else:
        continue
    synsets = list(swn.senti_synsets(word, pos))
    if synsets:
        pscore += synsets[0].pos_score()  # positive score of the word's first sense
        nscore += synsets[0].neg_score()  # negative score of the word's first sense
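
For reference, each lookup returns SentiSynset objects with pos_score(), neg_score(), and obj_score() methods, and repeated senti_synsets calls for the same word are a common source of slowness. Below is a minimal sketch that caches lookups; the helper name first_scores and the lru_cache wrapping are my own additions, not NLTK API:

from functools import lru_cache
from nltk.corpus import sentiwordnet as swn

@lru_cache(maxsize=None)
def first_scores(word, pos):
    # Cache each lookup so a word repeated across tweets is only scored once.
    synsets = list(swn.senti_synsets(word, pos))
    if not synsets:
        return None  # word/POS pair not in SentiWordNet
    return synsets[0].pos_score(), synsets[0].neg_score()

print(first_scores('good', 'a'))  # a (pos, neg) pair, e.g. (0.75, 0.0) for good.a.01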

At the end I will count how many tweets are positive and how many are negative. Where am I going wrong? How should I use SentiWordNet? And is there any other, similar dictionary that is easier to use?


There are 2 answers below.


Calculate the sentiment of each tweet by averaging the per-word scores:

def comment_sentiment(tagged_words):
    # tagged_words: list of (word, pos) pairs, where pos is one of
    # 'n', 'v', 'a', 'r' as expected by swn.senti_synsets.
    total_score = 0.0
    count_words_included = 0
    for word, pos in tagged_words:
        synset_forms = list(swn.senti_synsets(word, pos))
        if not synset_forms:
            continue  # word not found in SentiWordNet
        synset = synset_forms[0]  # take the most common sense
        total_score += synset.pos_score() - synset.neg_score()
        count_words_included += 1
    if count_words_included == 0:
        return 'N/A'
    elif total_score == 0:
        return 'Neu'
    elif total_score / count_words_included < 0:
        return 'Neg'
    else:
        return 'Pos'
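
A usage sketch of the function above; the Penn-to-WordNet tag converter penn_to_wn is my own hypothetical helper, not part of the answer:

from nltk import word_tokenize, pos_tag

def penn_to_wn(tag):
    # Map Penn Treebank tags to the single-letter POS codes that
    # swn.senti_synsets understands; None means "skip this word".
    for prefix, wn_pos in (('NN', 'n'), ('VB', 'v'), ('JJ', 'a'), ('RB', 'r')):
        if tag.startswith(prefix):
            return wn_pos
    return None

tweet = "I really love this phone"
pairs = [(w, penn_to_wn(t)) for w, t in pos_tag(word_tokenize(tweet))]
pairs = [(w, p) for w, p in pairs if p is not None]  # drop words with no usable POS
print(comment_sentiment(pairs))  # likely 'Pos' for this tweet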

Yes, there are other lexicons you can use. You can find a small list of lexicons here: http://sentiment.christopherpotts.net/lexicons.html#resources. Bing Liu's Opinion Lexicon seems quite easy to use.

Apart from linking to those lexicons, that website is a very nice tutorial on sentiment analysis.
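
For what it's worth, NLTK ships Bing Liu's lexicon as the opinion_lexicon corpus, available after nltk.download('opinion_lexicon'). A minimal counting sketch, assuming that download (liu_polarity is a hypothetical helper name of my own):

from nltk import word_tokenize
from nltk.corpus import opinion_lexicon

pos_words = set(opinion_lexicon.positive())
neg_words = set(opinion_lexicon.negative())

def liu_polarity(tweet):
    # Label a tweet by the sign of (positive hits - negative hits).
    tokens = [t.lower() for t in word_tokenize(tweet)]
    score = sum(t in pos_words for t in tokens) - sum(t in neg_words for t in tokens)
    return 'Pos' if score > 0 else 'Neg' if score < 0 else 'Neu'

print(liu_polarity("This phone is amazing but the battery is terrible"))  # 'Neu' (one hit each)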