- Jul 16, 2001
- 7,569
- 172
- 106
Does anyone know of a way to keep the leading 0 when a decimal number is converted to binary?
Say I'm converting 121 into binary, which would be 01111001. Instead, it hacks off the 0 and gives me 1111001, which is also correct, but I require that leading 0 so my program doesn't lose its place. Is there any way around that, or should I be using a different method?
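Since the thread doesn't name a language, here's a minimal sketch in Python of one common fix: the conversion itself can't preserve leading zeros (the number 0b01111001 and 0b1111001 are the same value), but you can zero-pad the *string* representation to a fixed width. The `width=8` default and the `to_binary` helper name are assumptions for illustration.

```python
def to_binary(n, width=8):
    """Convert a non-negative integer to a zero-padded binary string.

    format(n, "08b") renders n in base 2 and left-pads with zeros
    to 8 characters, so 121 becomes "01111001" rather than "1111001".
    """
    return format(n, f"0{width}b")

print(to_binary(121))  # -> 01111001
print(to_binary(5))    # -> 00000101
```

The key point is that the padding has to happen at the string-formatting step; as long as the value is stored as an integer, there is no "leading zero" to keep.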