Got an obscure one today.
The code that follows illustrates the problem but is not the code where the problem was
found.
We expected the hex integer literal 0X80000000 to correspond to the int value -2^31 (-2147483648), since that is a legal int value. We tried this several ways, (2), (3), and (4), and they all resulted in an error. The cause is that the code which parses hex integer literals does not recognize 0X80000000 as a valid int value, so it types the literal as uint instead. If you uncomment (1), making the enum's underlying type uint, you will see that (2) works.
What is a little disturbing is that (5), the decimal integer literal for -2^31, is accepted. (As far as I can tell, the language special-cases the decimal literal 2147483648 when it immediately follows a unary minus, so that int.MinValue can be written; hex literals get no such treatment.)
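A minimal stand-alone check (a demo of my own, not the original failing code) shows how the compiler types the two literals:

```csharp
using System;

class LiteralTypes
{
    static void Main()
    {
        // Hex literal too large for int: the compiler falls through to uint.
        var x = 0x80000000;
        Console.WriteLine(x.GetType()); // prints System.UInt32

        // Decimal literal after unary minus is special-cased to int.MinValue.
        var y = -2147483648;
        Console.WriteLine(y.GetType()); // prints System.Int32
    }
}
```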
===Code Follows===
enum Range //(1)// : uint
{
flag1 = 0X1,
flag2 = 0X2,
flag3 = 0X4,
flag4 = 0X8,
flag5 = 0X10,
flag6 = 0X20,
flag7 = 0X40,
flag8 = 0X80,
flag9 = 0X100,
flag10 = 0X200,
flag11 = 0X400,
flag12 = 0X800,
flag13 = 0X1000,
flag14 = 0X2000,
flag15 = 0X4000,
flag16 = 0X8000,
flag17 = 0X10000,
flag18 = 0X20000,
flag19 = 0X40000,
flag20 = 0X80000,
flag21 = 0X100000,
flag22 = 0X200000,
flag23 = 0X400000,
flag24 = 0X800000,
flag25 = 0X1000000,
flag26 = 0X2000000,
flag27 = 0X4000000,
flag28 = 0X8000000,
flag29 = 0X10000000,
flag30 = 0X20000000,
flag31 = 0X40000000,
//(2)// flag32 = 0X80000000 // Cannot implicitly convert 'uint' to 'int'
//(3)// flag32 = -0X80000000 // Cannot implicitly convert 'uint' to 'int'
//(4)// flag32 = 0X80000000u // Cannot implicitly convert 'uint' to 'int'
//(5)// flag32 = -2147483648 // Works
}
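For completeness, if the goal is simply to get the high bit into an int-backed enum, an unchecked cast of the hex literal works in a constant expression. This is a sketch of my own (the enum name here is mine, not from the original code):

```csharp
using System;

enum Range32
{
    // 0x80000000 is typed uint; unchecked((int)...) reinterprets the bit
    // pattern as the int value -2147483648 (int.MinValue), which is a
    // valid constant expression for an int-backed enum member.
    flag32 = unchecked((int)0x80000000)
}

class Workaround
{
    static void Main()
    {
        Console.WriteLine((int)Range32.flag32); // prints -2147483648
    }
}
```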