Quick question about data type char in C++

wyvrn

Lifer
Feb 15, 2000
10,074
0
0
This involves type casting, I would think, since you're going from doubles to characters. Typecasting doesn't round the number; it just cuts off whatever won't fit. So both answers should be 3. But I could be wrong :p
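Quick way to check (just a sketch, using the values from the question):

#include <iostream>

int main() {
    char a = static_cast<char>(3.22222); // fractional part simply discarded
    char b = static_cast<char>(3.56);    // not rounded either
    std::cout << (int)a << " " << (int)b << "\n"; // prints: 3 3
}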
 

fs5

Lifer
Jun 10, 2000
11,774
1
0
Originally posted by: Spencer278
no char is like an integer in that it will always round down.

or you can think of it as truncating everything after the decimal point (floor, at least for positive values; see the check below)
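One caveat: the cast truncates toward zero, which only matches floor() for non-negative values. A quick check:

#include <cmath>
#include <iostream>

int main() {
    std::cout << (int)(-3.7) << "\n";      // -3: the cast truncates toward zero
    std::cout << std::floor(-3.7) << "\n"; // -4: floor always rounds down
}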
 

wyvrn

Lifer
Feb 15, 2000
10,074
0
0
Originally posted by: fivespeed5
Originally posted by: Spencer278
no char is like an integer in that it will always round down.

or you can think of it as truncating everything after the decimal (floor)

He said it better than me :)

 

AgentEL

Golden Member
Jun 25, 2001
1,327
0
0
Originally posted by: BingBongWongFooey
And remember that a char's range is only -255 to 255, and an unsigned char's range is 0 to 511.

I think it's -128 to 127 for signed and 0-255 for unsigned.
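Easy enough to confirm from <climits>, assuming the usual 8-bit char. (Worth noting that whether plain char is signed or unsigned is actually up to the compiler, so CHAR_MIN is either 0 or -128.)

#include <climits>
#include <iostream>

int main() {
    std::cout << SCHAR_MIN << " to " << SCHAR_MAX << "\n"; // -128 to 127
    std::cout << "0 to " << UCHAR_MAX << "\n";             // 0 to 255
    std::cout << CHAR_MIN << " to " << CHAR_MAX << "\n";   // plain char: implementation-defined
}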
 

DanceMan

Senior member
Jan 26, 2001
474
0
0
Originally posted by: BingBongWongFooey
And remember that a char's range is only -255 to 255, and an unsigned char's range is 0 to 511.

Ahh, no. It is in the range 0 to 255 unsigned (the signed range is something like -127 to 128 two's complement, I think; not as sure about that as I am about unsigned).


There are newer Unicode char types, but these are not part of basic ANSI C as far as I know (unless there's been a recent update)

 

AgentEL

Golden Member
Jun 25, 2001
1,327
0
0
Originally posted by: BingBongWongFooey
Hm yeah, I was one bit too big. :p So 0 through 255 and -127 through 127. wchar_t is part of C99 iirc.

-128 through 127 :D:beer:
 

Barnaby W. Füi

Elite Member
Aug 14, 2001
12,343
0
0
Hmm.. I checked and you are right, but I've been trying to visualize why. I think I get it:

0 through 127 would have the largest bit in one state
-128 through -1 would have the largest bit in the other state

but then how would -127 and -128 differ? :)

I really don't understand why programming languages don't provide facilities for printing data in binary, since they generally do hex and octal. :|
 

AgentEL

Golden Member
Jun 25, 2001
1,327
0
0
Originally posted by: BingBongWongFooey
Hmm.. I checked and you are right, but I've been trying to visualize why. I think I get it:

0 through 127 would have the largest bit in one state
-128 through -1 would have the largest bit in the other state

but then how would -127 and -128 differ? :)

I really don't understand why programming languages don't provide facilities for printing data in binary, since they generally do hex and octal. :|

hehe... I had to work it out for myself to check as well.

It all comes down to the two's complement number system.

If the number starts with zero in binary, there's no problem. You simply convert to base 10 and there you go.

e.g. 0x0A = 0000 1010 (base 2) = 10 (base 10)

However, if you have a leading 1, that means your number is negative and you have to do a conversion to account for two's complement.

0xFF = 1111 1111 (base 2) = -1 (base 10)
0x80 = 1000 0000 (base 2) = -128 (base 10)
0x81 = 1000 0001 (base 2) = -127 (base 10)
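Those three are easy to reproduce, e.g. (just a sketch; assumes a two's complement machine, which is nearly everything):

#include <iostream>

int main() {
    // reinterpret the same bit patterns as signed 8-bit values
    signed char a = (signed char)0xFF;
    signed char b = (signed char)0x80;
    signed char c = (signed char)0x81;
    std::cout << (int)a << " " << (int)b << " " << (int)c << "\n"; // -1 -128 -127
}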
 

AgentEL

Golden Member
Jun 25, 2001
1,327
0
0
Basically, yes.

Maybe it would help more if I showed how to get the number. If you have a number in two's complement and it starts with a 1, then it is negative. To get its magnitude in base 10, you invert all the bits and add 1.

Here's an example:
0xFF = 1111 1111 <- You see that it starts with a 1, so you know it is a negative number.

Negative of what number though? Flip the bits and add 1.

1111 1111
0000 0000 (All bits flipped)
0000 0001 (added 1)

So, you get a 1 (base 10). But, you know from the beginning that you were trying to find the negative number, so you add the negative sign. Therefore, you get -1.

Here's another example:

0x81
1000 0001
0111 1110 (all bits flipped)
0111 1111 (added 1)
127 (base 10)
but, it's really -127.

One more
0x80
1000 0000
0111 1111 (all bits flipped)
1000 0000 (added 1)
128 (base 10)
but, it's really -128 (since you were looking for the negative)
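And "flip the bits and add 1" is exactly what ~x + 1 computes, so you can check the 0x81 example in a couple of lines (just a sketch):

#include <iostream>

int main() {
    // mask to 8 bits since we're pretending to be a char
    unsigned x = 0x81;               // 1000 0001
    unsigned mag = (~x + 1) & 0xFF;  // 0111 1111 = 127
    std::cout << mag << "\n";        // 127, so 0x81 reads as -127
}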
 

Barnaby W. Füi

Elite Member
Aug 14, 2001
12,343
0
0
Ahh interesting. For some reason I always thought that signed integer types had the same format for the non-sign bits, and the only thing that changed to make them positive or negative was the sign bit. i.e.:

0111 1111 == 127
1111 1111 == -127

guess not though. ;) Interesting how they squeeze everything out of the bits that they can. Kinda reminds me of floating point numbers.
 

AgentEL

Golden Member
Jun 25, 2001
1,327
0
0
yeah, I can see how people might think that.

The first bit is called the sign bit, after all.
 

glugglug

Diamond Member
Jun 9, 2002
5,340
1
81
Originally posted by: BingBongWongFooey

I really don't understand why programming languages don't provide facilities for printing data in binary, since they generally do hex and octal. :|

When would you ever use binary strings? If you really need them, the function is not difficult to write (a sketch follows below).

Hex is a lot easier to read.
Really, octal isn't used much either, except for one thing: two octal digits cover six bits, so turning data into pairs of octal chars gives you an easy, cheesy base64-style encoding.
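For what it's worth, standard C++ does ship one facility for this: std::bitset prints in binary. Here's a sketch of both that and the hand-rolled version (toBinary is just a made-up name):

#include <bitset>
#include <iostream>
#include <string>

// hand-rolled version; as noted above, it's not hard to write
std::string toBinary(unsigned char v) {
    std::string s;
    for (int bit = 7; bit >= 0; --bit)
        s += ((v >> bit) & 1) ? '1' : '0';
    return s;
}

int main() {
    std::cout << toBinary(0x81) << "\n";       // 10000001
    std::cout << std::bitset<8>(0x81) << "\n"; // 10000001, straight from the standard library
}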

 

glugglug

Diamond Member
Jun 9, 2002
5,340
1
81
Originally posted by: BingBongWongFooey
Ahh interesting. For some reason I always thought that signed integer types had the same format for the non-sign bits, and the only thing that changed to make them positive or negative was the sign bit. i.e.:

0111 1111 == 127
1111 1111 == -127

guess not though. ;) Interesting how they squeeze everything out of the bits that they can. Kinda reminds me of floating point numbers.


Actually the reason for it is to make addition simpler.
11111111 + 1 = 100000000
But if your register only has 8 bits to work with, the 1 in front gets chopped off and you have 0. So by using 2's complement, they keep it so that 1 + -1 = 0. So the only difference between adding signed and unsigned numbers is how the carry and overflow flags work.
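You can watch the same single adder serve both interpretations in C++, since unsigned char arithmetic throws away the carry just like the register does (sketch):

#include <iostream>

int main() {
    unsigned char u = 0xFF;        // 255 as unsigned, -1 if read as signed
    unsigned char sum = u + 1;     // the carry out of bit 7 is discarded
    std::cout << (int)sum << "\n"; // 0: 255 + 1 wrapped around, or equally -1 + 1
}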
 

Barnaby W. Füi

Elite Member
Aug 14, 2001
12,343
0
0
Originally posted by: glugglug
Originally posted by: BingBongWongFooey

I really don't understand why programming languages don't provide facilities for printing data in binary, since they generally do hex and octal. :|

When would you ever use binary strings?

Read this thread :)
 

AgentEL

Golden Member
Jun 25, 2001
1,327
0
0
Originally posted by: glugglug
Originally posted by: BingBongWongFooey
Ahh interesting. For some reason I always thought that signed integer types had the same format for the non-sign bits, and the only thing that changed to make them positive or negative was the sign bit. i.e.:

0111 1111 == 127
1111 1111 == -127

guess not though. ;) Interesting how they squeeze everything out of the bits that they can. Kinda reminds me of floating point numbers.


Actually the reason for it is to make addition simpler.
11111111 + 1 = 100000000
But if your register only has 8 bits to work with, the 1 in front gets chopped off and you have 0. So by using 2's complement, they keep it so that 1 + -1 = 0. So the only difference between adding signed and unsigned numbers is how the carry and overflow flags work.

From my understanding, 2's complement came about to make hardware simpler. In 1's complement, you had two zeros:

0000 0000 == +0
1111 1111 == -0

In one's complement 1 + -1 = 1111 1111, which is -0 (still zero, just the other one). The real problem comes if you add 1 + -0: straight binary addition gives 0000 0001 + 1111 1111 = 0000 0000 with the carry falling off the end, i.e. +0 when the answer should be 1. Now they could put in extra hardware to patch these cases up (the end-around carry). However, if you simply move to 2's complement, the hardware is much less complex: no weird +0 or -0 checking, one single zero, and cheaper computers.

In 1's or 2's complement, you will still have overflow problems with signed and unsigned integers. It is a matter of resolution and how many different integers you can represent.

e.g.:

0111 1111 + 1 in signed two's complement wraps to 1000 0000 (-128), and 1111 1111 + 1 in unsigned wraps to 0000 0000 (0); either way the result is wrong if you're expecting straight addition.
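To make the "complex hardware" point concrete, here's a sketch that simulates an 8-bit one's complement add; onesAdd and the end-around-carry step are purely illustrative:

#include <iostream>

// hypothetical 8-bit one's complement adder; the end-around carry
// is the extra hardware the two's complement scheme avoids
unsigned onesAdd(unsigned a, unsigned b) {
    unsigned s = a + b;
    if (s > 0xFF)             // carry out of bit 7...
        s = (s & 0xFF) + 1;   // ...gets wrapped back into bit 0
    return s & 0xFF;
}

int main() {
    // 1 + -0 (and -0 is 1111 1111 in one's complement)
    std::cout << ((1 + 0xFF) & 0xFF) << "\n"; // 0: naive addition, the wrong answer
    std::cout << onesAdd(1, 0xFF) << "\n";    // 1: fixed by the end-around carry
}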
 

Ulukai

Member
Nov 29, 2003
28
0
0
Originally posted by: bolido2000
To double check. If I do
char c = 3.22222
c is 3.

char c = 3.56
c is 4

right?

Since you're converting a floating point number to an integer, some of the data will be lost, i.e. *everything* after the point; so in your second example c is 3, not 4.

BUT you can get it to behave like proper rounding by adding 0.5 to it before typecasting to char.

Taking your examples from earlier......

char c = (char)( 3.22222 + 0.5 );
is equivalent to:
char c = (char)( 3.72222 );

c is 3


char c = (char)( 3.56 + 0.5 );
is equivalent to:
char c = (char)( 4.06 );

c is 4
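One caveat on the +0.5 trick: it only rounds correctly for non-negative values. A sketch covering both signs (using std::floor, and signed char so the negative case stays portable):

#include <cmath>
#include <iostream>

int main() {
    char c = (char)(3.56 + 0.5);                   // 4, the trick from above
    // for negative inputs, adding 0.5 rounds the wrong way;
    // floor(x + 0.5) handles both signs (halves round toward +infinity)
    signed char d = (signed char)std::floor(-3.56 + 0.5); // floor(-3.06) = -4
    std::cout << (int)c << " " << (int)d << "\n";  // 4 -4
}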