What are the criteria for encoding a mirrored version of a character in Unicode? There seem to be a lot of inconsistencies among currently encoded characters. For example:
- '∼' (U+223C, Tilde operator) is mirrored with '∽' (U+223D, Reversed tilde). But the similar-looking '~' (U+007E, Tilde) is not mirrored with anything (why there are two tilde characters at all is yet another question by itself :|).
- '≃' (U+2243, Asymptotically equal to) is mirrored with '⋍' (U+22CD, Reversed tilde equals). Whereas the related '≄' (U+2244, Not asymptotically equal to) and '≈' (U+2248, Almost equal to) are marked as mirrored but have no dedicated mirror characters encoded.
- '≾' (U+227E, Precedes or equivalent to) is wrongly mirrored with '≿' (U+227F, Succeeds or equivalent to): only its upper half is mirrored there, while the lower half (the tilde) remains unmirrored.
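For what it's worth, the Bidi_Mirrored claims above can be verified with Python's standard `unicodedata` module; a minimal sketch (note that `unicodedata.mirrored()` only exposes the boolean property, not which character, if any, serves as the mirror glyph):

```python
import unicodedata

# Check the Bidi_Mirrored property for the characters discussed above.
for ch in "~\u223C\u223D\u2243\u2244\u2248\u227E\u227F":
    name = unicodedata.name(ch, "<unnamed>")
    print(f"U+{ord(ch):04X} {name}: mirrored={unicodedata.mirrored(ch)}")
```

This prints `mirrored=1` for U+2244 and U+2248 even though neither has an encoded mirror counterpart, which is exactly the inconsistency in question.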
Is there any source where I can find out why a character was encoded in the UCS standard? Many redundant characters have been encoded for legacy compatibility reasons, but there are many others I don't understand: for instance, why the mirror of '≈' (U+2248, Almost equal to) is left to the rendering system while the mirror of '≃' (U+2243, Asymptotically equal to) is explicitly encoded. I know the mirror of '<' (U+003C, Less-than sign) is also encoded, but that just happens to be another, distinct character '>' (U+003E, Greater-than sign), unlike the case above.
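The split between "mirror left to the rendering system" and "mirror explicitly encoded" is recorded in the UCD file BidiMirroring.txt (https://www.unicode.org/Public/UCD/latest/ucd/BidiMirroring.txt), which lists the Bidi_Mirroring_Glyph mappings. A minimal sketch, assuming a local copy of that file (the helper name `load_mirroring_pairs` is mine, not part of any library):

```python
def load_mirroring_pairs(path="BidiMirroring.txt"):
    # Parse lines of the form "2243; 22CD # ASYMPTOTICALLY EQUAL TO"
    # into a {source codepoint: mirror codepoint} dict. Characters that
    # are Bidi_Mirrored but have no encoded mirror (like U+2248) appear
    # only in comments, so they are skipped here.
    pairs = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.split("#", 1)[0].strip()  # drop comments
            if not line:
                continue
            src, dst = (int(field, 16) for field in line.split(";"))
            pairs[src] = dst
    return pairs

pairs = load_mirroring_pairs()
for cp in (0x2243, 0x2248, 0x227E, 0x003C):
    mirror = pairs.get(cp)
    label = f"U+{mirror:04X}" if mirror is not None else "no encoded mirror"
    print(f"U+{cp:04X} -> {label}")
```

So U+2243 maps to U+22CD and U+003C to U+003E, while U+2248 has no mapping at all; the file documents the fact, but not the rationale behind it, which is what I'm after.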