I'm converting some tests that use the memcmp function and don't get the expected output. I've been trying to figure out why the Windows and Linux output differs, and I ended up on godbolt.org. There I played around with different GCC versions and, to my surprise, there is a difference between x86-64 gcc 10.3 and x86-64 gcc 11.1. Can you help me figure out what the correct output is?
The code that is used:
#include <string.h>
#include <iostream>

int main()
{
    char16_t const * p10 = u"Same";
    char16_t const * p210 = u"NotSame";
    auto result10 = memcmp(p10, p210, sizeof(p10));
    std::cout << result10 << "\n";

    char16_t const p11[] = u"Same";
    char16_t const p211[] = u"NotSame";
    auto result11 = memcmp(&p11, &p211, sizeof(p11));
    std::cout << result11 << "\n";
}
Gcc 10.3 output
5
5
Gcc 11.1 output
5
1
VS 2019 / MSVC 14.29.30133 output
1
1
It looks like in this example MSVC always returns exactly 1. For GCC this isn't always the case, because it seems to return the difference between the first differing bytes: between 83 ('S') and 78 ('N') the difference is 5, so 5 is returned. Now my question is: is this the correct output, or should it just be 1 in this case to indicate that there is a difference and ptr1 compares greater than ptr2? I looked at some documentation, but it's a bit vague about what the return value should be.