Originally posted by: Spencer278
No, char is like an integer in that it will always round down.
Originally posted by: fivespeed5
Originally posted by: Spencer278
No, char is like an integer in that it will always round down.
or you can think of it as truncating everything after the decimal (a floor for positive values; negative values truncate toward zero rather than flooring)
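A minimal standalone check of the truncation rule, using nothing beyond standard C (the variable names are just for the example):

#include <stdio.h>

int main(void)
{
    int a = 3.9;            /* fractional part dropped: a == 3 */
    int b = -3.9;           /* truncates toward zero: b == -3 (a floor would give -4) */
    char c = 3.9;           /* same rule when the target is a char: c == 3 */
    printf("%d %d %d\n", a, b, c);   /* prints: 3 -3 3 */
    return 0;
}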
Originally posted by: BingBongWongFooey
And remember that a char's range is only -255 to 255, and an unsigned char's range is 0 to 511.
Originally posted by: BingBongWongFooey
Hm yeah, I was one bit too big. So 0 through 255 and -127 through 127. wchar_t is part of C99 iirc.
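For reference, <limits.h> settles the exact ranges, including the remaining off-by-one on the negative end; a minimal check, assuming the usual 8-bit, 2's complement char:

#include <stdio.h>
#include <limits.h>

int main(void)
{
    printf("signed char: %d to %d\n", SCHAR_MIN, SCHAR_MAX);   /* -128 to 127, not -127 */
    printf("unsigned char: 0 to %d\n", UCHAR_MAX);             /* 0 to 255 */
    printf("plain char: %d to %d\n", CHAR_MIN, CHAR_MAX);      /* matches one of the above */
    return 0;
}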
Originally posted by: BingBongWongFooey
Hmm.. I checked and you are right, but I've been trying to visualize why. I think I get it:
0 through 127 would have the largest bit in one state
-128 through -1 would have the largest bit in the other state
but then how would -127 and -128 differ?
I really don't understand why programming languages don't provide facilities for printing data in binary, since they generally do hex and octal. :|
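Standard C's printf indeed has no binary conversion (one didn't arrive until C23's %b), but rolling your own takes only a few lines. The helper name print_bits below is made up for this sketch; printing -127 and -128 with it also shows how the two differ under 2's complement:

#include <stdio.h>

/* print the 8 bits of a byte, most significant bit first */
static void print_bits(unsigned char byte)
{
    for (int i = 7; i >= 0; i--)
        putchar(((byte >> i) & 1) ? '1' : '0');
    putchar('\n');
}

int main(void)
{
    print_bits((unsigned char)-127);  /* 10000001 */
    print_bits((unsigned char)-128);  /* 10000000 */
    return 0;
}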
Originally posted by: BingBongWongFooey
Ahh interesting. For some reason I always thought that signed integer types had the same format for the non-sign bits, and the only thing that changed to make them positive or negative was the sign bit. i.e.:
0111 1111 == 127
1111 1111 == -127
Guess not though. Interesting how they squeeze everything out of the bits that they can. Kinda reminds me of floating point numbers.
Originally posted by: glugglug
Originally posted by: BingBongWongFooey
I really don't understand why programming languages don't provide facilities for printing data in binary, since they generally do hex and octal. :|
When would you ever use binary strings?
Originally posted by: glugglug
Originally posted by: BingBongWongFooey
Ahh interesting. For some reason I always thought that signed integer types had the same format for the non-sign bits, and the only thing that changed to make them positive or negative was the sign bit. i.e.:
0111 1111 == 127
1111 1111 == -127
Guess not though. Interesting how they squeeze everything out of the bits that they can. Kinda reminds me of floating point numbers.
Actually the reason for it is to make addition simpler.
11111111 + 1 = 100000000
But if your register only has 8 bits to work with, the 1 in front gets chopped off and you have 0. So by using 2's complement, where 11111111 is -1, they keep it so that 1 + -1 = 0, and the only difference between adding signed and unsigned numbers is how the carry and overflow flags work.
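A quick way to watch this happen in C, with unsigned char standing in for an 8-bit register (the wraparound is guaranteed for unsigned types; the signed half assumes 2's complement hardware):

#include <stdio.h>

int main(void)
{
    unsigned char r = 0xFF;        /* 11111111: 255 unsigned, -1 in 2's complement */
    r = (unsigned char)(r + 1);    /* the ninth bit gets chopped off: r == 0 */
    printf("%u\n", r);             /* prints: 0 */

    signed char s = -1;            /* same 11111111 bit pattern on 2's complement machines */
    printf("%d\n", s + 1);         /* prints: 0, i.e. the same adder handles 1 + -1 = 0 */
    return 0;
}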
Originally posted by: bolido2000
To double-check: if I do
char c = 3.22222;
c is 3.
char c = 3.56;
c is 4
right?
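Per the truncation rule at the top of the thread, the second guess is off: the conversion truncates rather than rounds, so c is 3 in both cases. A quick check:

#include <stdio.h>

int main(void)
{
    char c = 3.22222;     /* truncates: c == 3 */
    printf("%d\n", c);    /* prints: 3 */
    c = 3.56;             /* also truncates: c == 3, not 4 */
    printf("%d\n", c);    /* prints: 3 */
    return 0;
}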
