Having ported code back and forth between platforms in C, I find the vague definitions of "int", "short", "char", "long", "float" and "double" very frustrating. While it is true that there were CPUs whose machine words were not 8, 16, 32 or 64 bits, those are extremely rare now. The only processors I can think of that still break this assumption (and are actually used) are Chuck Moore's FORTH CPUs, and those are not exactly mainstream.
I find myself never using int, short, or char; instead I always include stdint.h and use int32_t, int16_t and int8_t, because I need to know exactly what size those integral types are.
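For example, this is roughly what I end up writing (just a sketch; the struct and field names are made up for illustration):

    #include <stdint.h>

    /* Fixed-width fields: the layout is the same on every platform,
       which is exactly what I want when porting. */
    typedef struct {
        uint8_t  flags;      /* always 8 bits  */
        int16_t  offset;     /* always 16 bits */
        int32_t  length;     /* always 32 bits */
        int64_t  timestamp;  /* always 64 bits */
    } record_t;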
Java made this mandatory. "int" in Java is a signed 32-bit int. Period. No exceptions. "long" is a signed 64-bit int. Again, no exceptions. While it is very annoying that Java decided unsigned integers were not interesting, this conformity across platforms is quite handy. I can use an int in a for loop in Java without worrying about what might happen if it turns out to be 16 bits on some platform.
I think it would be interesting if C2 defined char, short, int and long (and float and double) to be what people usually think they are (unless you are Microsoft): 8, 16, 32 and 64 bits wide. The translation to LLVM IR looks straightforward if I understood the LLVM docs correctly (very possibly not!), and any translation to C would be a simple mapping to one of the intX_t types.
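Roughly what I have in mind for a C backend (just a sketch; the c2_* names are my own invention, not anything from the actual C2 spec):

    #include <stdint.h>

    /* Hypothetical mapping a C2-to-C backend could emit. */
    typedef int8_t   c2_char;   /* C2 "char"  -> always 8 bits  */
    typedef int16_t  c2_short;  /* C2 "short" -> always 16 bits */
    typedef int32_t  c2_int;    /* C2 "int"   -> always 32 bits */
    typedef int64_t  c2_long;   /* C2 "long"  -> always 64 bits */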
Thoughts?
Best,
Kyle