So I came across this simple problem of printing emojis in Python. I assume there are multiple ways to do it, but these are the three major ones that I found:-
- Using UNICODE of the emoji
- Using CLDR names of the emoji
- Using the emoji module
I wanted to make a program (using each of the three methods) where we take an input from the user asking which emoji they want to print, and then print it on the next line.
This meant: if the program was created using method 1, the user had to input a Unicode escape. If it was created using method 2, they would have to input the CLDR name. If it was created using method 3, they would have to input the name of the emoji (based on the syntax of the emoji module).
The actual problem I'm facing is in storing the input taken from the user in a variable and then trying to use that variable to generate an emoji. This is because the input gets stored as a string, so when printing it, the print command simply prints the string instead of interpreting it as a Unicode escape.
For method 1: I tried
user_emoji = input("Enter the unicode:- ")
print(r"\{}".format(user_emoji))
But this just gave me the following when I entered the Unicode escape:
\U0001f0cf
I found a solution for this when I looked it up online here, but I didn't really understand what was actually going on.
For method 2: I tried
user_emoji = input("Enter the CLDR name:- ")
print(r"\N{}".format(user_emoji))
But once again I got just normal text when I entered 'slightly smiling face':
\Nslightly smiling face
Here I thought that if there were a way to convert the CLDR name to a Unicode code point, then I could use the solution from above and brute-force my way to the result, but I couldn't find a way to do that either.
For method 3: I tried
import emoji
user_emoji = input("Enter the emoji name:- ")
user_emoji = user_emoji.replace(" ","_")
print(emoji.emojize(r':{}:'.format(user_emoji)))
This was the only method that gave me the result I wanted when I entered 'slightly smiling face' as input.
Hope someone can explain how the solution in method 1 works and what I need to do to make method 2 work.

You are fundamentally misunderstanding the difference between a string literal (which is source code that creates the string object) and a string (the actual object).
If I write

"\n"

in source code, that is a string literal which evaluates to a string with a single character: the newline character.
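You can check this in the REPL; this is a quick sketch of my own:

```python
# The string literal "\n" evaluates to a string containing
# a single character: the newline character.
s = "\n"
print(len(s))   # 1
print(ord(s))   # 10, the code point of the newline character
```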
If I write
r"\{}".format('n')

this creates a string which looks like the string literal for a newline, but it isn't source code. It is a string with two characters: the backslash character and the 'n' character.

If you want to accept the Unicode code point, that is simply a number; in your case, you seem to provide it in base 16. All the code you linked to does is convert a string which represents a base-16 number to an int object (the second argument to int is the base; it defaults to base 10), and then it uses the chr function to retrieve the Unicode character from that number.

Finally, if you want to use the CLDR name, you can use the unicodedata module (part of the standard library).
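To make these points concrete, here is a short sketch; the example inputs "1f0cf" and "slightly smiling face" are taken from the question:

```python
import unicodedata

# The raw-formatted string really is two characters, not a newline:
s = r"\{}".format('n')
print(len(s))                    # 2: a backslash followed by the letter 'n'

# Method 1: treat the input as a base-16 number and look up the character.
code_point = int("1f0cf", 16)
print(chr(code_point))           # 🃏 (PLAYING CARD BLACK JOKER)

# Method 2: look up the character by name at runtime with unicodedata,
# since \N{...} escapes only work inside string literals in source code.
name = "slightly smiling face"
print(unicodedata.lookup(name))  # 🙂
```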