Computers do all their arithmetic in binary. We generally use decimal when we write code simply because it's the easiest for us to read. There is nothing stopping you from using hex or even binary numbers when you write code.
The following lines generate the exact same code:
Code:
byte a = 10; //decimal
byte a = 0x0A; //hex
byte a = 0b00001010; //binary
You can do the same kind of thing with shorts, ints and longs too; it works for any of the integer types.
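For instance, here's a quick sketch with an int and a long (the variable names and the value 2748 are made up just to illustrate the point). Within each group, every line stores the same number; the L suffix just marks the literal as a long:
Code:
int b = 2748; //decimal
int b = 0x0ABC; //hex
int b = 0b101010111100; //binary

long c = 2748L; //decimal
long c = 0x0ABCL; //hex
long c = 0b101010111100L; //binary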
So to work with hex, you just write your numbers in hex and prefix them with "0x". Everything else stays the same.
Example:
Code:
//decimal
short a = 10;
a += 4;
is the exact same as:
Code:
//hex
short a = 0x000A;
a += 0x0004;
is the exact same as:
Code:
//binary
short a = 0b0000000000001010;
a += 0b0000000000000100;
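Keep in mind the prefix only changes how you write the literal in your source, not how the value is stored. If you want to see the result in a different base, you have to convert it yourself when you print it. A minimal sketch, assuming plain Java and the standard Integer.toHexString / Integer.toBinaryString helpers:
Code:
short a = 0x000A;
a += 0x0004;
System.out.println(a);                         // prints 14
System.out.println(Integer.toHexString(a));    // prints e
System.out.println(Integer.toBinaryString(a)); // prints 1110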