When should you use size_t vs uint32_t? I saw a method in a project that receives a parameter called length (of type uint32_t) denoting the length of byte data to process; the method calculates a CRC over the bytes it receives. The parameter's type was later refactored to size_t. Is there a technical advantage to using size_t in this case?
e.g.
- (uint16_t)calculateCRC16FromBytes:(unsigned char *)bytes length:(uint32_t)length;
- (uint16_t)calculateCRC16FromBytes:(unsigned char *)bytes length:(size_t)length;
According to the C specification
size_t ... is the unsigned integer type of the result of the sizeof
operator
So any variable that holds the result of a sizeof operation should be declared as size_t. Since the length parameter in the sample prototype could be the result of a sizeof operation, it is appropriate to declare it as a size_t.
e.g.
unsigned char array[2000] = { 1, 2, 3 /* ... */ };
uint16_t result = [self calculateCRC16FromBytes:array length:sizeof(array)];
You could argue that refactoring the length parameter was pointlessly pedantic, since you'll see no difference unless:
a) size_t is wider than 32 bits, and
b) the size of the array is more than 4 GB
Related
I have this method which extracts data from NSData at a specific pointer. The method only extracts a certain number of bytes; in this case 4 bytes, as I return a uint32.
I pass in a pointer (int start) which is used to create the location for an NSRange, the length of the range is the size of a uint32, which creates the range as 4 bytes long.
This works perfectly fine until the pointer gets to 2147483648. At that value, the range is not created with 2147483648 as the location; instead the location becomes 18446744071562067968, which is out of bounds for the data and causes an exception, halting my program and stopping it from reading the rest of the data.
I have no idea what is causing it to do this. The start value is correct when it is passed into the method, but it changes when the range is created. This does not happen for any of the previous pointer values.
Have I done something silly in my code? Or is it a different problem? Help will be appreciated.
Thank you.
- (uint32)getUINT32ValueFromData:(NSData *)rawData pointer:(int)start {
    uint32 value;
    NSRange range;
    int length = sizeof(uint32);
    NSUInteger dataLength = rawData.length;
    NSData *currentData;
    NSUInteger remainingBytes = dataLength - start;
    if (remainingBytes > length) {
        range.location = start;
        range.length = length;
        // should be 2147483648, location in range is showing 18446744071562067968 which is out of bounds...
        currentData = [rawData subdataWithRange:range];
        uint32 hostData = CFSwapInt32BigToHost(*(const uint32 *)[currentData bytes]);
        value = hostData;
        pointer = start + length;
    }
    else {
        NSLog(@"Data Length Exceeded!");
    }
    return value;
}
It seems to be a 32/64-bit and signed/unsigned mismatch issue. You're using three different types:
int is a 32-bit signed type
uint32 is a 32-bit unsigned type
NSUInteger is a 32- or 64-bit unsigned type, depending on the processor architecture
uint32 is fine for the value, but you should use NSUInteger for the offset into the NSData object. Because start is a signed int, 2147483648 has already overflowed to -2147483648 by the time it reaches the method, and converting that negative value to the 64-bit unsigned NSRange location sign-extends it, producing 18446744071562067968.
I have java code:
Long long_value = 1;
ByteBuffer.allocate( 8).putLong( long_value).array();
The result is an array of bytes: 0, 0, 0, 0, 0, 0, 0, 1 (in this order).
How do I correctly port this code to Objective-C?
Preamble
You say "in this order". When a multi-byte value is stored in memory the bytes can be stored big-endian - the most significant byte first, or little endian - the least significant byte first. For example the 2-byte hex value 0x1234 is stored as 0x12, 0x34 big-endian and 0x34, 0x12 little-endian. The endian order depends on the computer architecture in use, for example the Intel x86 is little-endian.
ByteBuffer is a Java class which stores values as bytes according to its own endian flag, independent of the underlying hardware's endian order. The default setting of ByteBuffer is big-endian. In your sample you do not set this flag, and the array you show is in big-endian order.
Apple provides the functions described in Byte Order Utilities Reference for manipulating endian order. The function you need is CFSwapInt64HostToBig which takes a 64-bit signed or unsigned int in whatever endian order the host uses and returns an integer with the bytes arranged in big-endian order - the numeric value of the result is effectively meaningless at this point, it is an ordered collection of 8 bytes.
You also say you want 8 bytes, but a long may not be 8 bytes long - that depends on the hardware architecture, language and compiler. For example for OS X they are 8 bytes when compiling for 64-bit. There are two ways to address this, you can use the sizeof standard function which returns the number of bytes in a value or a type, you pass it a variable or a type name; or you can use the pre-defined sized types when you need a particular number of bytes, for example SInt64 is the pre-defined 8-byte signed integer type.
Finally you say you want an "array", but what kind of array? Objective-C/Cocoa has two: C language value arrays, and Objective-C object arrays NSArray and its mutable sibling NSMutableArray.
C language arrays: in C, in most expressions the name of an array decays to a pointer to its first element; e.g. given the declaration float values[8], the name values converts to a float * when used in an expression. This means pointers can be used like arrays, but they are not the same thing: a pointer variable holds an address but does not allocate memory to point to, while an array declaration allocates memory for its elements.
C Array
How to get a C "array" of bytes in big-endian order:
SInt64 long_value = 0x123456789ABCDEF;              // use SInt64 rather than long
UInt64 BE_value = CFSwapInt64HostToBig(long_value); // use UInt64 as this value may not make numeric sense any more
UInt8 *BE_ptr = (UInt8 *)&BE_value;                 // UInt8 is a byte, take the address of BE_value
// BE_ptr is now a pointer to the 8 bytes of memory occupied by BE_value
// it can be treated as an array
// print it out to demo
for (int ix = 0; ix < 8; ix++)
{
    NSLog(@"%d: %02x", ix, BE_ptr[ix]);
}
This will print out 01, 23, 45 etc.
Objective-C array
You can build this on the above. You cannot store a byte directly in an Objective-C object array, you must wrap it up as an object using NSNumber, and @() is a shorthand to do that:
SInt64 long_value = CFSwapInt64HostToBig(0x123456789ABCDEF);
UInt8 *BE_ptr = (UInt8 *)&long_value;
NSMutableArray *BE_array = [NSMutableArray arrayWithCapacity:8]; // create an object array
for (int ix = 0; ix < sizeof(long_value); ix++)
{
    [BE_array addObject:@(BE_ptr[ix])]; // @(...) creates an NSNumber
}
NSLog(@"array: %@", BE_array);
This will print out the array (in decimal).
HTH
You could use a char pointer (sizeof(char) is defined to be 1, so a char is always exactly one byte).
unsigned char *p = (unsigned char *)&myLong;
char byte1 = p[0];
char byte2 = p[1];
//etc...
I'm trying to store some game related information on the uint64_t context property of GKScore - to create a better gaming experience with the new Game Center Challenges. However, I'm not getting it right. I built a struct like below:
typedef struct{
unsigned int gameMode;
unsigned int destroyed;
unsigned int duration;
} GameInfo;
I try the following:
uint64_t myContext;
GameInfo info;
info.gameMode = 2;
info.destroyed = 50;
info.duration = 100;
NSData *data = [NSData dataWithBytes:&info length:sizeof(info)];
[data getBytes:&myContext length:sizeof(myContext)];
to pack the struct to a NSData and then write the bytes to myContext.
Then, I try to recreate the information using the 64bit integer as follows:
NSData *newData = [NSData dataWithBytes:&myContext length:sizeof(myContext)];
GameInfo *result = (GameInfo*) [newData bytes];
however, when I log out the values, I see that I'm only able to capture the first two values (gameMode and destroyed). If I add more variables to the struct, I still only capture the first 2 variables.
What am I doing wrong? Is there a smarter way to do this?
You are trying to pack 96 bits of data (3 unsigned ints on iOS / ARM) into a 64 bit container. So you see the first two 32-bit values and not the third.
Maybe you could try using shorts or chars, depending on the range of values your struct will hold, and get the struct's size down to 64 bits or less. Three uint8_t fields would occupy just 3 bytes (char has an alignment of 1, so no padding is needed), but uint16_t fields give you a more useful range while still fitting all three fields into 6 bytes.
EDIT: example of possible way to change your struct, assuming you'll only store 16-bit values in each field:
typedef struct{
uint16_t gameMode;
uint16_t destroyed;
uint16_t duration;
} GameInfo;
myclass.h:
#define BUTTON_NAVI 41;
#define BUTTON_SETTINGS 42;
#define BUTTON_INFO 43;
myclass.m:
int btnNavi = BUTTON_NAVI;
int btnSettings = BUTTON_SETTINGS;
int btnArray[2] = {btnNavi, btnSettings};
NSLog(@"count = %i", sizeof(btnArray));
[self addToolbarButtons:btnArray];
-> Log: count = 8
8?! What did I do wrong?
And inside "addToolbarButtons" count is 4... :-(
EDIT:
- (void)addToolbarButtons:(int[])buttonIdArray {
NSLog(@"count = %i", sizeof(buttonIdArray));
}
-> Log: count = 4
sizeof is giving you the size in bytes, 8 bytes sounds right for 2 integers (32-bit or 4 bytes each).
If what you want is the length of the array, you can do sizeof(arr) / sizeof(arr[0]) which will give you the size of the entire array divided by the size of each element. In this case you will get 8 / 4 == 2, which is what I take it you expect.
EDIT
To answer your second question, when you pass the array to the method, you're actually passing a pointer to the array. Hence, the size of the pointer is also 32-bits or 4 bytes. If you want the function to know the length of the array, you need to also pass its length along with said pointer.
sizeof is giving the size of the array in bytes. An int is 4 bytes, so a 2-element array of ints will be 8 bytes.
However, sizeof won't do what you want in your method. When a C array is passed into a function (or method), it actually get passed as a pointer, and sizeof will return the size of a pointer. You should modify your method to take a length parameter:
- (void)addToolbarButtons:(int *)buttonIdArray length:(size_t)len
{
NSLog(@"count = %zu", len);
}
The sizeof operator gives you the size in bytes of the thing you pass to it. If an int is 4 bytes, then an array of two ints is 8 bytes.
However, sizeof is a compile-time check on the variable itself. At runtime the bounds of an array are not known, nor are they known outside the scope where the variable was defined. When you declare the argument (int[])buttonIdArray, the compiler treats it as (int *)buttonIdArray: it's just a plain int pointer. So sizeof(buttonIdArray) tells you the size of a pointer, which is 4 on a 32-bit platform.
Because the language doesn't keep track of their size for you, using C arrays is a pain. You have to pass the number of elements in the array to every function and method that acts on it.
Please tell me how to convert bytes to NSInteger/int in objective-c in iPhone programming?
What do you mean by "Bytes"?
If you want to convert a single byte representing an integer value to an int (or NSInteger) type, just use "=":
Byte b = 123;
NSInteger x;
x = b;
as Byte (the same as unsigned char, a 1-byte unsigned integer) and NSInteger (a signed integer that is 4 bytes on 32-bit platforms and 8 bytes on 64-bit platforms) are both simple integer types and can be converted automatically. You should read more about C data types and conversion rules,
for example http://www.exforsys.com/tutorials/c-language/c-programming-language-data-types.html
If you want to convert several bytes storing some value to an int, then the conversion depends on the structure of that data: how many bytes per value, and whether it is signed or unsigned.
If by byte you mean an unsigned 8-bit value, the following will do.
uint8_t foo = 3; // or unsigned char foo...
NSInteger bar = (NSInteger) foo;
or even
NSInteger bar = foo;
My guess (note this assumes a 32-bit NSInteger; on 64-bit platforms NSInteger is 8 bytes, so reading through an NSInteger * would run past the 4-byte buffer, and you should use int32_t there instead):
unsigned char data[] = { 0x00, 0x02, 0x45, 0x28 };
NSInteger intData = *((NSInteger *)data);
NSLog(@"data:%d", intData); // data:675611136
NSLog(@"data:%08x", intData); // data:28450200
So, beware of byte-order.
NSInteger x = 3;
unsigned char y = x;
int z = x + y;
Use the "=" operator.