unexpected result in mspgcc


I wrote some simple code in C, but when I port it to mspgcc it doesn't give me the right value. This is part of my code:

unsigned short int xk=3588, yk=47541, yk1, sig=10, de=1;

unsigned long int xk1;

xk1=( xk+(sig*(yk-xk)*de));

yk1=xk1 % 65535;

The result that I expect is xk1=443118 and yk1=49908, but in mspgcc it gives me xk1=yk1=49902. I don't know where the fault is; maybe in the choice of types?

Edit

This is my full code:

#include <stdio.h>
#include "uart1.h"
#include <stdlib.h> 
#include <stdint.h>
#include <string.h>
#include <math.h>

int putchar(int c)
{
   return uart1_putchar(c);
}

int main(void) 
{
   //variables
   unsigned short int xk=3588, yk=47541, yk1, sig=10, de=1;
   unsigned long xk1;
   uart1_init();
   xk1=( xk+(sig*((unsigned long)yk-xk)*de));
   yk1=xk1 % 65536;
   printf("xk1=%6lx\t\n,",xk1);
   printf("yk1=%u\t\n,",yk1);
}
There is 1 answer below.


With this compiler, int is 16 bits, which is a perfectly legitimate configuration.

You expected xk1 to be 443118. 443118 % 65536 is 49902.

Since the calculation:

unsigned short int xk=3588, yk=47541, yk1, sig=10, de=1;

unsigned long int xk1;

xk1=( xk+(sig*(yk-xk)*de));

involves only unsigned short values, these are promoted to unsigned int (16 bits here), the result is computed as an unsigned int, and only then is that value assigned to the unsigned long. By that point the excess bits have long since been lost: the calculation was done in 16-bit unsigned arithmetic.
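
On a 16-bit target, the usual fix is to widen at least one operand before the multiplication so that the whole expression is evaluated in 32-bit arithmetic. A minimal sketch (the cast placement shown is one of several that work):

unsigned short int xk=3588, yk=47541, sig=10, de=1;
unsigned long int xk1;

/* casting yk to unsigned long forces the subtraction, both multiplications
   and the addition to be carried out in (at least) 32-bit arithmetic */
xk1 = xk + (sig * ((unsigned long)yk - xk) * de);   /* 443118, as intended */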


Experiment

Conducted on a 64-bit RHEL5 (AMD x86/64) machine with GCC 4.1.2. To simulate a 16-bit integer computation, I've liberally laced (a second copy of) the expression with ((unsigned short)(...)) casts. The double multiplication only gets a single cast; the result doesn't change regardless of the order in which the two multiplies are done (doubly not since one of the multiplicands is 1). And I've included (a third copy of) the expression with an (unsigned long) cast.

Test program:

#include <stdio.h>

int main(void)
{
    unsigned short int xk=3588, yk=47541, sig=10, de=1;
    unsigned long int xk1;

    xk1 = (xk+(sig*(yk-xk)*de));
    printf("No Cast: %6lu = (%u+(%u*(%u-%u)*%u))\n", xk1, xk, sig, yk, xk, de);
    xk1 = ((unsigned short)(xk+((unsigned short)(sig*((unsigned short)(yk-xk))*de))));
    printf("US Cast: %6lu = (%u+(%u*(%u-%u)*%u))\n", xk1, xk, sig, yk, xk, de);
    xk1 = (xk+(sig*((unsigned long)yk-xk)*de));
    printf("UL Cast: %6lu = (%u+(%u*(%u-%u)*%u))\n", xk1, xk, sig, yk, xk, de);
    return 0;
}

The output is:

$ gcc -Wall -Wextra -g -O3 -std=c99 xx.c -o xx && ./xx
No Cast: 443118 = (3588+(10*(47541-3588)*1))
US Cast:  49902 = (3588+(10*(47541-3588)*1))
UL Cast: 443118 = (3588+(10*(47541-3588)*1))
$

I think the second, almost unreadable expression accurately reflects (or sufficiently accurately reflects) the way a 16-bit compiler would evaluate the expression - and that agrees with what you saw.

The result of (47541-3588) is 43953. The result of (10 * 43953) % 65536 is 46314. Add the 3588, and the result is, as it should be, 49902.
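
If you want to reproduce that 16-bit arithmetic step by step on a desktop machine, <stdint.h> offers a slightly more readable alternative to the nested casts. A small sketch (the intermediate variable names are mine, purely for illustration):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint16_t xk = 3588, yk = 47541, sig = 10, de = 1;
    uint16_t step = (uint16_t)(yk - xk);          /* 43953                     */
    uint16_t prod = (uint16_t)(sig * step * de);  /* 439530 % 65536 = 46314    */
    uint16_t sum  = (uint16_t)(xk + prod);        /* 3588 + 46314 = 49902      */
    printf("%u %u %u\n", step, prod, sum);        /* prints: 43953 46314 49902 */
    return 0;
}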

I also added an (unsigned long) cast to yk and ran the expression. For full fidelity with your machine, where unsigned long is 32 bits, perhaps I should have used unsigned int instead, but the result doesn't change. I don't know where you got your alternative value from - I'd need to see your full working program (analogous to mine) to get any ideas on that. It looks as though you had some part of the calculation 'go negative' on you, leaving you with large (positive) unsigned values, but there is no obvious reason for the computation to go negative.


Taking the code from the comment:

#include <stdio.h>
// -unused- #include "uart1.h"
// -unused- #include <stdlib.h>
// -unused- #include <stdint.h>
// -unused- #include <string.h>
// -unused- #include <math.h>

// -unused- int putchar(int c) { return uart1_putchar(c); }

int main(void)
{
    //variables
    unsigned short int xk=3588, yk=47541, yk1, sig=10, de=1;
    unsigned long xk1;
    // -not-needed-in-demo uart1_init();
    xk1=( xk+(sig*((unsigned long)yk-xk)*de));
    yk1=xk1 % 65535;
    //printf("xk1=%6lx\t\n,",xk1);
    //printf("yk1=%u\t\n,",yk1);
    printf("xk1 = %6lx = %6u\n", xk1, xk1);
    printf("yk1 = %6x = %6u\n", yk1, yk1);
}

The 65535 should be 65536: reducing a value to 16 bits means taking it modulo 2^16 = 65536 (which is also why your original % 65535 led you to expect 49908 rather than 49902). The tab at the end of the format string is unnecessary, as is the comma that then appears at the beginning of the next output line (but these are pure cosmetics).
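
Incidentally, on a target where unsigned short is 16 bits (as it is with mspgcc), the assignment to yk1 already reduces the value modulo 65536, so the explicit % is only there for emphasis. A sketch of the equivalence:

yk1 = xk1 % 65536;          /* explicit reduction modulo 2^16                   */
yk1 = (unsigned short)xk1;  /* equivalent: conversion to a 16-bit unsigned type */
                            /* also wraps modulo 2^16                           */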

More serious (but immaterial to the problem at hand because it is unused here): <stdio.h> defines a function (and usually a macro too) called putchar(). You should probably not define your own function called putchar, but if you must, you should normally undefine the macro (assuming there is one) from <stdio.h>. I admit the code compiled OK on my machine - it didn't link, but that was expected; one day, maybe, I'll track down what putchar() really is on this machine.
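
If you really do need to retarget putchar() on an embedded system, one common pattern - shown here only as a sketch, with uart1_putchar() taken on trust from the question - is to remove any macro version first:

#include <stdio.h>
#undef putchar               /* drop the macro, if <stdio.h> provides one */

int uart1_putchar(int c);    /* assumed to come from "uart1.h"            */

int putchar(int c)
{
    return uart1_putchar(c); /* route character output to the UART        */
}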

The code taken from the comment produces the correct/expected answer.

The only way I can see to produce the observed incorrect behaviour is this, where I've removed the superfluous code:

#include <stdio.h>

int main(void)
{
    unsigned short int xk=3588, yk=47541, yk1, sig=10, de=1;
    unsigned long xk1;
    xk1=( xk+(sig*((unsigned long)yk-xk)*de));
    yk1=xk1 % 65536;
    printf("xk1= %6lx = %6lu\n", xk1, xk1);
    printf("yk1= %6x = %6u\n", yk1, yk1);

    xk1=( xk+(sig*((short)yk-xk)*de));
    yk1=xk1 % 65536;
    printf("xk1= %6lx = %6lu\n", xk1, xk1);
    printf("yk1= %6x = %6u\n", yk1, yk1);
}

When run on my 64-bit machine (currently, MacOS X 10.6.7 with GCC 4.6.0), I get:

xk1=  6c2ee = 443118
yk1=   c2ee =  49902
xk1= fffffffffffcc2ee = 18446744073709339374
yk1=   c2ee =  49902

Ignoring the 8 extra F's in the hex value, I get 0xFFFCC2EE rather than the 0xFFFBC2EE you are getting. I have no explanation for that discrepancy. But you can see that if the intermediate result is treated as a signed 16-bit quantity, you end up with pretty much the result you are seeing: (short)47541 wraps to -17995 (an implementation-defined conversion), so the expression becomes 3588 + 10*(-17995 - 3588)*1 = -212242, which shows up as 0x...FCC2EE once converted to unsigned long.

Then the question is: why is there a signed operation in there? I have no good explanation. I think you may have to look at the assembler code and work out what is going on; you may even be tickling a bug in the compiler.
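
For example, something like the following (assuming the usual msp430-gcc driver name and your real source file name) dumps the generated assembly so you can see whether the subtract and multiply are done as signed or unsigned 16-bit operations:

$ msp430-gcc -S -O2 demo.c      # writes demo.s next to the source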