OK, most of the time it doesn't make any difference, but I've found it makes the intention of some code clearer.
I put these in my "top level" header that gets included everywhere:
typedef unsigned long  ulong;
typedef unsigned int   uint;
typedef unsigned short ushort;
typedef unsigned char  uchar;
These too; they can be redefined if the compiler you're on has different ideas about word sizes, which matters when you need to read binary files across platforms:
typedef __int64          int64;
typedef unsigned __int64 uint64;
typedef int              int32;
typedef unsigned int     uint32;
typedef short            int16;
typedef unsigned short   uint16;
typedef char             int8;
typedef unsigned char    uint8;
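For instance, here's the sort of thing these get used for. This is just a sketch; the header layout and field names are made up for illustration:

#include <stdio.h>

typedef unsigned int   uint32;  /* from the header above */
typedef unsigned short uint16;

typedef struct
{
    uint32 magic;     /* file identifier */
    uint16 version;   /* format version */
    uint16 flags;
    uint32 dataSize;  /* bytes of payload that follow */
} FileHeader;

/* Reads the same 12 bytes regardless of what the compiler thinks
   "int" is. (Endianness and struct padding still need care in
   real code.) */
int ReadHeader(FILE *fp, FileHeader *hdr)
{
    return fread(hdr, sizeof(FileHeader), 1, fp) == 1;
}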
I find I use uint a lot. There are a lot of things, like array indices or container sizes, that can never be negative (or if they are, you have a problem), so uint makes more sense.
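A sketch of what I mean (SumArray is just a made-up example):

typedef unsigned int uint;  /* from the header above */

/* Sizes and indices are uint, since a negative value here would be
   a bug anyway. */
float SumArray(const float *values, uint count)
{
    float total = 0.0f;
    for (uint i = 0; i < count; ++i)
        total += values[i];
    return total;
}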
On the other hand, I have found myself using iterators a lot more than indices recently.
One caveat: Unsigned-to-float conversion is pretty expensive on PC compared to signed-to-float, since x86 only has signed integer-to-float instructions and the compiler has to emit extra fixup code for the unsigned case. You may want to keep that in mind if you're mixing floating point and unsigned values.
I even had one case where I tried to cast the unsigned to int first, even going so far as to pass it through an inline function, and Visual Studio just said, "oh, you REALLY meant to convert from unsigned to float" and went through the slow-path conversion code.
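For what it's worth, here's roughly what that workaround looked like (UintToFloat is a made-up name). In theory the cast should get you the fast signed conversion; in practice, as I said, Visual Studio used the slow path anyway:

typedef unsigned int uint;  /* from the header above */

/* Only valid when n fits in a signed int; converting an
   out-of-range unsigned value to int is implementation-defined. */
inline float UintToFloat(uint n)
{
    return (float)(int)n;
}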