Tokenize Sentences or Tweets with Emoji Skin Tone Modifiers


I want to tokenize a tweet containing multiple emoji that are not space-separated. I tried both NLTK's TweetTokenizer and spaCy, but both fail to keep emoji skin tone modifiers attached to their base emoji. This needs to be applied to a huge dataset, so performance may be an issue. Any suggestions?

You may need to use Firefox or Safari to see the exact color tone emoji because Chrome sometimes fails to render it!

# NLTK
from nltk.tokenize.casual import TweetTokenizer
sentence = "I'm the most famous emoji  but what about  and "
t = TweetTokenizer()
print(t.tokenize(sentence))

# Output
["I'm", 'the', 'most', 'famous', 'emoji', '', '', '', 'but', 'what', 'about', '', 'and', '', '', '', '', '', '']

And

# Spacy
import spacy
nlp = spacy.load("en_core_web_sm")
sentence = nlp("I'm the most famous emoji  but what about  and ")
print([token.text for token in sentence])

# Output
['I', "'m", 'the', 'most', 'famous', 'emoji', '', '', '', 'but', 'what', 'about', '', 'and', '', '', '', '', '', '']

Expected Output

["I'm", 'the', 'most', 'famous', 'emoji', '', '', '', 'but', 'what', 'about', '', 'and', '', '', '', '']
There are 2 answers below.

Answer (0 votes):

Skin tone modifiers are just a set of code points applied immediately after an emoji's base code point. The full list of skin tone modifiers is here: http://www.unicode.org/reports/tr51/#Diversity
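As a quick illustration, the five Fitzpatrick skin tone modifiers defined in TR51 occupy the contiguous range U+1F3FB through U+1F3FF, so a modified emoji is simply the base code point followed by one of them (the waving-hand example below is just an arbitrary base):

```python
# The five TR51 skin tone (Fitzpatrick) modifiers: U+1F3FB..U+1F3FF.
MODIFIERS = [chr(cp) for cp in range(0x1F3FB, 0x1F400)]

base = "\U0001F44B"  # waving hand, used here as an example base emoji
for m in MODIFIERS:
    # A skin-toned emoji is two code points: base + modifier.
    print(base + m, [hex(ord(c)) for c in base + m])
```

Each printed pair is a single visible glyph but two Unicode code points, which is why tokenizers that split per code point tear them apart.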


You can use spaCy retokenizer's merge method after finding the bounds of a token span consisting of an emoji plus its skin tone modifier.

See this answer of mine for how to merge tokens based on a regex pattern: https://stackoverflow.com/a/43390171/533399
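A minimal sketch of that idea, assuming spaCy is installed (it uses a blank English pipeline so no trained model needs to be downloaded, and the waving-hand sentence is just an example input):

```python
import re
import spacy

nlp = spacy.blank("en")

# TR51 Fitzpatrick skin tone modifiers occupy U+1F3FB..U+1F3FF.
SKIN_TONE = re.compile(r"[\U0001F3FB-\U0001F3FF]")

doc = nlp("I'm waving \U0001F44B\U0001F3FD at you")

# Merge any token immediately followed by a skin tone modifier token
# (no whitespace in between) into a single token.
with doc.retokenize() as retokenizer:
    for token in doc[:-1]:
        nxt = doc[token.i + 1]
        if SKIN_TONE.fullmatch(nxt.text) and nxt.idx == token.idx + len(token.text):
            retokenizer.merge(doc[token.i : nxt.i + 1])

print([t.text for t in doc])
```

After the merge, no token consists of a bare skin tone modifier; the base emoji and its modifier come out as one token, which matches the expected output above.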

Answer (4 votes):

You should try using spacymoji. It's an extension and pipeline component for spaCy that can optionally merge combining emoji, like skin tone modifiers, into a single token.

Based on the README you can do something like this:

import spacy
from spacymoji import Emoji

nlp = spacy.load("en_core_web_sm")  # the bare 'en' shorthand was removed in spaCy v3
emoji = Emoji(nlp, merge_spans=True)  # merge_spans=True is actually the default
nlp.add_pipe(emoji, first=True)  # spaCy v3 instead uses: nlp.add_pipe("emoji", first=True)

doc = nlp(...)

That should do it.