I wrote some simple code in C, but when I port it to mspgcc it doesn't give me the right value. This is a part of my code:
    unsigned short int xk=3588, yk=47541, yk1, sig=10, de=1;
    unsigned long int xk1;

    xk1=( xk+(sig*(yk-xk)*de));
    yk1=xk1 % 65535;
The result that I expect is xk1=443118 and yk1=49908, but in mspgcc it gives me xk1=yk1=49902. I don't know where the fault is; maybe it is in the choice of types?
Edit: this is my full code:
    #include <stdio.h>
    #include "uart1.h"
    #include <stdlib.h>
    #include <stdint.h>
    #include <string.h>
    #include <math.h>

    int putchar(int c)
    {
        return uart1_putchar(c);
    }

    int main(void)
    {
        // variables
        unsigned short int xk=3588, yk=47541, yk1, sig=10, de=1;
        unsigned long xk1;

        uart1_init();
        xk1=( xk+(sig*((unsigned long)yk-xk)*de));
        yk1=xk1 % 65536;
        printf("xk1=%6lx\t\n,",xk1);
        printf("yk1=%u\t\n,",yk1);
    }
The size of an integer must be 16 bits with this compiler, which is a perfectly legitimate system.
You expected xk1 to be 443118; 443118 % 65536 is 49902.

Since the calculation xk1 = (xk + (sig*(yk-xk)*de)); involves only unsigned short values, these are promoted to unsigned int, then the result is computed as an unsigned int, and finally that value is assigned to the unsigned long. But by then the excess bits have long since been lost: the calculation was done in 16-bit unsigned arithmetic.

Experiment
Conducted on a 64-bit RHEL5 (AMD x86/64) machine with GCC 4.1.2. To simulate a 16-bit integer computation, I've liberally laced (a second copy of) the expression with ((unsigned short)(...)) casts. The double multiplication only gets a single cast; the result doesn't change regardless of the order in which the two multiplications are done (especially since one of the multiplicands is 1). And I've included (a third copy of) the expression with an (unsigned long) cast.

Test program:
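A sketch of such a test program (xk2 and xk3 are just illustrative names here for the second and third copies of the expression):

    #include <stdio.h>

    int main(void)
    {
        unsigned short int xk = 3588, yk = 47541, sig = 10, de = 1;
        unsigned long xk1, xk2, xk3;

        /* Plain expression: with 32-bit int, no bits are lost. */
        xk1 = (xk + (sig * (yk - xk) * de));

        /* Laced with (unsigned short) casts to simulate 16-bit int arithmetic. */
        xk2 = ((unsigned short)(xk + (unsigned short)(sig * (unsigned short)(yk - xk) * de)));

        /* With yk cast to unsigned long before the subtraction. */
        xk3 = (xk + (sig * ((unsigned long)yk - xk) * de));

        printf("xk1 = %lu\n", xk1);
        printf("xk2 = %lu\n", xk2);
        printf("xk3 = %lu\n", xk3);
        return 0;
    }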
The output is:
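Assuming 32-bit int (as on that machine), the sketch above prints:

    xk1 = 443118
    xk2 = 49902
    xk3 = 443118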
I think the second, almost unreadable expression accurately reflects (or sufficiently accurately reflects) the way a 16-bit compiler would evaluate the expression, and that agrees with what you saw.
The result of (47541-3588) is 43953. The result of (10 * 43953) % 65536 is 46314. Add the 3588, and the result is, as it should be, 49902.
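Spelled out with uint16_t, so the 16-bit truncation at each step is explicit, that is roughly:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint16_t diff = 47541 - 3588;   /* 43953 */
        uint16_t prod = 10 * diff;      /* 439530 truncated to 16 bits: 46314 */
        uint16_t sum  = 3588 + prod;    /* 3588 + 46314 = 49902 */
        printf("%u %u %u\n", (unsigned)diff, (unsigned)prod, (unsigned)sum);
        return 0;
    }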
I also added an (unsigned long) cast to yk and ran the expression. Maybe, for full fidelity with your machine's 32-bit unsigned long, I should have used unsigned int, but the result doesn't change. I don't know where you got your alternative value from; I'd need to see your full working program (analogous to mine) to get any ideas on that. It looks as though you had some part of the calculation 'go negative' on you, leaving you with large (positive) unsigned values, but there is no obvious excuse for the computation to go negative.

Taking the code from the comment:
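That code is presumably essentially the program now shown in the question's edit, but computing yk1 = xk1 % 65535; a sketch of it:

    #include <stdio.h>
    #include "uart1.h"
    #include <stdlib.h>
    #include <stdint.h>
    #include <string.h>
    #include <math.h>

    int putchar(int c)
    {
        return uart1_putchar(c);
    }

    int main(void)
    {
        unsigned short int xk=3588, yk=47541, yk1, sig=10, de=1;
        unsigned long xk1;

        uart1_init();
        xk1 = (xk + (sig*((unsigned long)yk - xk)*de));
        yk1 = xk1 % 65535;
        printf("xk1=%6lx\t\n,", xk1);
        printf("yk1=%u\t\n,", yk1);
    }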
The 65535 should be 65536. The tab at the end of the line is unnecessary, as is the comma at the beginning of the next (but these are pure cosmetics).
More serious (but immaterial to the problem at hand because it is unused): <stdio.h> defines a function (and usually a macro too) called putchar(). You should probably not be defining your own function called putchar, but if you must, you should normally undefine the macro (assuming there is one) from <stdio.h>. I admit the code compiled OK on my machine; it didn't link, but that was expected. One day, maybe, I'll track down what putchar() really is on this machine.
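For example (a sketch, assuming uart1.h declares uart1_putchar()):

    #include <stdio.h>
    #include "uart1.h"   /* assumed to declare uart1_putchar() */

    #undef putchar       /* remove the <stdio.h> macro, if any, before defining the function */

    int putchar(int c)
    {
        return uart1_putchar(c);
    }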
The code taken from the comment produces the correct/expected answer.
The only way I can see to produce the observed incorrect behaviour is this, where I've removed the superfluous code:
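A sketch of the sort of thing I mean; the (short) cast here is just a stand-in for whatever makes the intermediate difference behave as a signed 16-bit quantity:

    #include <stdio.h>

    int main(void)
    {
        unsigned short int xk = 3588, yk = 47541, sig = 10, de = 1;
        unsigned long xk1;

        /* The difference is forced to a signed 16-bit value before the multiplication. */
        xk1 = (xk + (sig * (short)(yk - xk) * de));
        printf("xk1=%6lx\n", xk1);
        return 0;
    }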
When run on my 64-bit machine (currently, MacOS X 10.6.7 with GCC 4.6.0), I get:
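With the sketch above (and a 64-bit unsigned long), the output is along the lines of:

    xk1=fffffffffffcc2ee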
Ignoring the 8 extra F's in the hex value, I get 0xFFFCC2EE rather than the 0xFFFBC2EE you are getting. I have no explanation for that discrepancy. But you can see that if the intermediate result is a signed 16-bit quantity, you can end up with pretty much the result you're seeing.
Then the question is: why is there a signed operation in there? I have no good explanation. I think you may have to look at the assembler code and work out what is going on; you may even be tickling a bug in the compiler.