I want to pack a MIDI message into an NSData object.
int messageType = 3; // 0-15
int channel = 5; // 0-15
int data1 = 56; // 0-127
int data2 = 78; // 0-127
int packed = data2;
packed += data1 * 127;
packed += channel * 16129; // 127^2
packed += messageType * 258064; // 127^2 * 16
NSLog(#"packed %d", packed);
NSData *packedData = [NSData dataWithBytes:&packed length:sizeof(packed)];
int recovered;
[packedData getBytes:&recovered];
NSLog(#"recovered %d", recovered);
This works wonderfully and while I'm proud of myself, I know that the conversion to bytes is not done correctly: it should be a direct conversion without a lot of addition and multiplication. How can that be done?
Edit: I'm now aware that I can just do this
char theBytes[] = {messageType, channel, data1, data2};
NSData *packedData = [NSData dataWithBytes:&theBytes length:sizeof(theBytes)];
and on the Java side
byte[] byteBuffer = new byte[4]; // Receive buffer
while (in.read(byteBuffer) != -1) {
System.out.println("data2=" + byteBuffer[3]);
}
and it will work, but I'd like the solution to get an NSData with just 3 bytes.
Personally, I would go for an NSString:
NSString *dataString = [NSString stringWithFormat:@"%i+%i+%i+%i", messageType, channel, data1, data2];
NSData *packedData = [dataString dataUsingEncoding:NSUTF8StringEncoding];
Easy to use, and easy to transfer. Unpacking is a tiny bit more complicated, but not difficult at all either.
NSScanner *scanner = [NSScanner scannerWithString:[[[NSString alloc] initWithData:packedData encoding:NSUTF8StringEncoding] autorelease]];
int messageType, channel, data1, data2;
[scanner scanInt:&messageType];
[scanner scanInt:&channel];
[scanner scanInt:&data1];
[scanner scanInt:&data2];
Here's a 3-byte solution that I put together.
char theBytes[] = {messageType * 16 + channel, data1, data2};
NSData *packedData = [NSData dataWithBytes:&theBytes length:sizeof(theBytes)];
char theBytesRecovered[3];
[packedData getBytes:theBytesRecovered];
int messageTypeAgain = (int)theBytesRecovered[0]/16;
int channelAgain = (int)theBytesRecovered[0] % 16;
int data1Again = (int)theBytesRecovered[1];
int data2Again = (int)theBytesRecovered[2];
NSLog(#"packed %d %d %d %d", messageTypeAgain, channelAgain, data1Again, data2Again);
and on the other side of the wire, this is just as easy to pick up, because each byte is a byte. I just finished trying this on the iOS side and the Java side, and there are no problems on either. There is no problem with endian-ness, because each integer fits into one single byte (or two in one byte, in one case).
You have several options.
Since it looks like you want a contiguous glob of data in the NSData representation, you'll want to create a packed struct and write it into the NSData with a predefined endianness (so both ends know how to unarchive the data glob).
/* pack this struct's members and enable -Wreorder to sanity check that the compiler does not reorder them -- I see no reason for the compiler to do this since the fields are equal size/type */
struct t_midi_message {
UInt8 message_type; /* 0-15 */
UInt8 channel; /* 0-15 */
UInt8 data1; /* 0-127 */
UInt8 data2; /* 0-127 */
};
union t_midi_message_archive {
/* members - as a union for easy endian swapping */
SInt32 glob;
t_midi_message message;
enum { ValidateSize = 1 / (4 == sizeof(t_midi_message)) }; /* compile-time size check: division by zero if the struct is not exactly 4 bytes */
/* nothing unusual here, although you may want a ctor which takes NSData as an argument */
t_midi_message_archive();
t_midi_message_archive(const t_midi_message&);
t_midi_message_archive(const t_midi_message_archive&);
t_midi_message_archive& operator=(const t_midi_message_archive&);
/* swap routines -- just pass the glob member to the system's endian routines */
void swapToNativeEndianFromTransferEndian();
void swapToTransferEndianFromNativeEndian();
};
void a(const t_midi_message_archive& msg) {
t_midi_message_archive copy(msg);
copy.swapToTransferEndianFromNativeEndian();
NSData * packedData([NSData dataWithBytes:&copy.glob length:sizeof(copy.glob)]);
assert(packedData);
t_midi_message_archive recovered;
[packedData getBytes:&recovered.glob];
recovered.swapToNativeEndianFromTransferEndian();
/* recovered may now be used safely */
}
My application uses AES 256 encryption to encrypt a string. The same code that was used before is generating a different result. This problem started when iOS 13 was released. And it happens only to applications that are shipped to the store or built with Xcode 11.
Here is the code used for the encryption:
- (NSData *)encrypt:(NSData *)plainText key:(NSString *)key iv:(NSString *)iv {
char keyPointer[kCCKeySizeAES256+2],// room for terminator (unused) ref: https://devforums.apple.com/message/876053#876053
ivPointer[kCCBlockSizeAES128+2];
BOOL patchNeeded;
bzero(keyPointer, sizeof(keyPointer)); // fill with zeroes for padding
patchNeeded= ([key length] > kCCKeySizeAES256+1);
if(patchNeeded)
{
NSLog(#"Key length is longer %lu", (unsigned long)[[self md5:key] length]);
key = [key substringToIndex:kCCKeySizeAES256]; // Ensure that the key isn't longer than what's needed (kCCKeySizeAES256)
}
//NSLog(#"md5 :%#", key);
[key getCString:keyPointer maxLength:sizeof(keyPointer) encoding:NSUTF8StringEncoding];
[iv getCString:ivPointer maxLength:sizeof(ivPointer) encoding:NSUTF8StringEncoding];
if (patchNeeded) {
keyPointer[0] = '\0'; // iOS versions prior to iOS 7 set the first char to '\0' if the key was longer than kCCKeySizeAES256
}
NSUInteger dataLength = [plainText length];
//see https://developer.apple.com/library/ios/documentation/System/Conceptual/ManPages_iPhoneOS/man3/CCryptorCreateFromData.3cc.html
// For block ciphers, the output size will always be less than or equal to the input size plus the size of one block.
size_t buffSize = dataLength + kCCBlockSizeAES128;
void *buff = malloc(buffSize);
size_t numBytesEncrypted = 0;
//refer to http://www.opensource.apple.com/source/CommonCrypto/CommonCrypto-36064/CommonCrypto/CommonCryptor.h
//for details on this function
//Stateless, one-shot encrypt or decrypt operation.
CCCryptorStatus status = CCCrypt(kCCEncrypt, /* kCCEncrypt, etc. */
kCCAlgorithmAES128, /* kCCAlgorithmAES128, etc. */
kCCOptionPKCS7Padding, /* kCCOptionPKCS7Padding, etc. */
keyPointer, kCCKeySizeAES256, /* key and its length */
ivPointer, /* initialization vector - use random IV everytime */
[plainText bytes], [plainText length], /* input */
buff, buffSize,/* data RETURNED here */
&numBytesEncrypted);
if (status == kCCSuccess) {
return [NSData dataWithBytesNoCopy:buff length:numBytesEncrypted];
}
free(buff);
return nil;
}
- (NSString *) encryptPlainTextWith:(NSString *)plainText key:(NSString *)key iv:(NSString *)iv {
return [[[[CryptLib alloc] init] encrypt:[plainText dataUsingEncoding:NSUTF8StringEncoding] key:[[CryptLib alloc] sha256:key length:32] iv:iv] base64EncodedStringWithOptions:0];
}
/**
* This function computes the SHA256 hash of input string
* @param key input text whose SHA256 hash has to be computed
* @param length length of the text to be returned
* @return returns SHA256 hash of input text
*/
- (NSString*) sha256:(NSString *)key length:(NSInteger) length{
const char *s=[key cStringUsingEncoding:NSASCIIStringEncoding];
NSData *keyData=[NSData dataWithBytes:s length:strlen(s)];
uint8_t digest[CC_SHA256_DIGEST_LENGTH]={0};
CC_SHA256(keyData.bytes, (CC_LONG)keyData.length, digest);
NSData *out=[NSData dataWithBytes:digest length:CC_SHA256_DIGEST_LENGTH];
NSString *hash=[out description];
hash = [hash stringByReplacingOccurrencesOfString:@" " withString:@""];
hash = [hash stringByReplacingOccurrencesOfString:@"<" withString:@""];
hash = [hash stringByReplacingOccurrencesOfString:@">" withString:@""];
if(length > [hash length])
{
return hash;
}
else
{
return [hash substringToIndex:length];
}
}
I would like to know if something has changed in the way this code path works. The method called to do the encryption is encryptPlainTextWith:key:iv:. Thanks in advance.
Inside:
- (NSString*) sha256:(NSString *)key length:(NSInteger) length
I replaced
NSString *hash=[out description];
with
NSString *hash=[out debugDescription];
and everything got back to normal. Cheers, happy coding.
Alternative solution, as per @Rob Napier:
Create a separate function for converting NSData to hex.
#pragma mark - String Conversion
-(NSString*)hex:(NSData*)data{
NSMutableData *result = [NSMutableData dataWithLength:2*data.length];
unsigned const char* src = data.bytes;
unsigned char* dst = result.mutableBytes;
unsigned char t0, t1;
for (int i = 0; i < data.length; i ++ ) {
t0 = src[i] >> 4;   // high nibble
t1 = src[i] & 0x0F; // low nibble
dst[i*2] = 48 + t0 + (t0 / 10) * 39;   // '0'-'9' for 0-9, then +39 jumps to 'a'-'f'
dst[i*2+1] = 48 + t1 + (t1 / 10) * 39;
}
return [[NSString alloc] initWithData:result encoding:NSASCIIStringEncoding];
}
After that Inside:
- (NSString*) sha256:(NSString *)key length:(NSInteger) length
I replaced
NSString *hash=[out description];
with
NSString *hash = [self hex:out];
I suspect that your key is longer than 32 UTF-8 bytes. In that case, this code is incorrect. Your patchNeeded conditional is basically creating a garbage key. The contents of the buffer aren't promised if that function returns false, but you're relying on them.
There is no secure way to truncate a key you were given, so I'm not really certain what behavior you want here. It depends on what kinds of strings you're passing.
This code is also incorrect if iv is shorter than 16 UTF-8 bytes. You'll wind up including random values from the stack. That part can be fixed with:
bzero(ivPointer, sizeof(ivPointer));
But if your previous version relied on random values, this will still be different.
Assuming you need to match the old behavior, the best way to debug this is to run your previous version in a debugger and see what keyPointer and ivPointer wind up being.
(Note that this approach to creating a key is very insecure. It's drastically shrinking the AES keyspace. How much depends on what kind of strings you're passing, but it's dramatic. You also should never reuse the same key+iv combination in two messages when using CBC, which this looks like it probably does. If possible, I recommend moving to a correct AES implementation. You can look at RNCryptor for one example of how to do that, or use RNCryptor directly if you prefer.)
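If you can change the scheme, one piece of a correct implementation is deriving the key properly instead of truncating or zero-padding the string. Here is a hedged sketch using CommonCrypto's PBKDF2 (CCKeyDerivationPBKDF, declared in <CommonCrypto/CommonKeyDerivation.h>); the passphrase, salt handling, and iteration count below are placeholder choices, not anything from the original code.
#import <CommonCrypto/CommonKeyDerivation.h>
#import <Security/SecRandom.h>
#include <string.h> // strlen

NSString *password = @"user supplied passphrase";         // placeholder
uint8_t salt[16];
SecRandomCopyBytes(kSecRandomDefault, sizeof(salt), salt); // store the salt alongside the ciphertext
uint8_t derivedKey[kCCKeySizeAES256];
const char *pw = [password UTF8String];
int status = CCKeyDerivationPBKDF(kCCPBKDF2,
                                  pw, strlen(pw),
                                  salt, sizeof(salt),
                                  kCCPRFHmacAlgSHA256,
                                  10000,                   // iteration count, tune as needed
                                  derivedKey, sizeof(derivedKey));
if (status == kCCSuccess) {
    // use derivedKey as the AES-256 key instead of the truncated string bytes
}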
I have openssl server and Objective-C client. I send message like this
uint32_t testD = 161;
err = SSL_write(ssl_, &testD, sizeof(uint32_t));
and read it by NSInputStream like
case NSStreamEventHasBytesAvailable:
{
uint8_t buffer[4];
int len;
while ([inStream hasBytesAvailable])
{
len = [inStream read:buffer maxLength:sizeof(buffer)];
if (len > 0)
{
NSString *output = [[NSString alloc] initWithBytes:buffer length:len encoding:NSASCIIStringEncoding];
NSData *theData = [[NSData alloc] initWithBytes:buffer length:len];
if (nil != output)
{
char buff;
[theData getBytes:&buff length:1];
uint32_t temp = (uint32_t)buffer;
}
...
So in the output I have "¡" (the character with code 161), in buff I have '\xa1', and in temp a very big number, but I actually need 161 in temp.
I read that '\xa1' is also 161, but I can't cast this to uint32_t.
What is the problem?
ANSWER:
The problem was in casting. This works fine for me:
unsigned char buff;
int temp = buff;
or
char buff;
int b = (unsigned char) buff;
No encoding is used by SSL_write(), and \xa1 == 161 is a mathematical identity, not the result of any encoding process. As you're successfully recovering \xa1, clearly no decoding is used by NSInputStream either.
It seems to me that you're casting the address of the buffer rather than its contents, which is why you get a high value that varies with compilation.
In addition you are possibly over-running the data by reading whatever is available and then only consuming four bytes of it: less in fact because you're incorrectly testing len >= 1 rather than len >= 4.
You should:
Use a buffer of exactly four bytes. No need to allocate it dynamically: you can declare it as a local array.
Read until you have read four bytes. This requires a loop (see the sketch below).
Change the casting syntax (don't ask me how, I'm no Objective-C expert, but the code that recovers buff looks like a good start), so that you get the content of the buffer instead of the address.
After that you may then have endian issues.
Nothing to do with encoding.
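A minimal sketch of those three points, assuming the stream variable is named inStream as in the question; byte order is left to the next answer:
#include <string.h> // memcpy

uint8_t header[4];
NSUInteger got = 0;
while (got < sizeof(header)) {
    NSInteger len = [inStream read:header + got maxLength:sizeof(header) - got];
    if (len <= 0) break;                 // stream ended or errored before 4 bytes arrived
    got += (NSUInteger)len;
}
if (got == sizeof(header)) {
    uint32_t temp;
    memcpy(&temp, header, sizeof(temp)); // copy the bytes; don't cast the buffer's address
    NSLog(@"received %u", temp);         // swap here if the sender used a different byte order
}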
What encoding is used in SSL_write and NSInputStream?
There is no encoding. It's bytes in and bytes out.
I think you are looking for network byte order/endianness.
Network byte order is big endian. So your code would become:
uint32_t testD = 161;
uint32_t be = htonl(testD);
err = SSL_write(ssl_, &be, sizeof(be));
Here's the description of htonl from the htonl(3) man pages:
The htonl() function converts the unsigned integer hostlong from host byte order to network byte order.
To convert back, you would use ntohl.
I'm not sure if Cocoa/CocoaTouch offers a replacement for htonl and ntohl. So you might have to use them in your iPhone projects, too. See, for example, Using ntohl and htonl problems on iPhone.
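For what it's worth, both the BSD functions and a CoreFoundation equivalent are available on iOS, so either of these works on the reading side (a hedged sketch, assuming buffer holds the four raw bytes from the question's read loop and the server sent htonl(161)):
#include <arpa/inet.h>                  // ntohl
#include <CoreFoundation/CFByteOrder.h> // CFSwapInt32BigToHost
#include <string.h>                     // memcpy

uint32_t be;
memcpy(&be, buffer, sizeof(be));            // the 4 raw bytes read from the stream
uint32_t viaBSD = ntohl(be);                // BSD-style conversion
uint32_t viaCF  = CFSwapInt32BigToHost(be); // CoreFoundation equivalent
// viaBSD == viaCF == 161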
We can get a single byte value like this:
unsigned char buff;
int temp = buff;
Or
char buff;
int b = (unsigned char) buff;
I am trying to convert an NSString with hex values into a float value.
NSString *hexString = @"3f9d70a4";
The float value should be 1.230.
Some ways I have tried to solve this are:
1. NSScanner
-(unsigned int)strfloatvalue:(NSString *)str
{
float outVal;
NSString *newStr = [NSString stringWithFormat:@"0x%@",str];
NSScanner* scanner = [NSScanner scannerWithString:newStr];
NSLog(@"string %@",newStr);
bool test = [scanner scanHexFloat:&outVal];
NSLog(@"scanner result %d = %a (or %f)",test,outVal,outVal);
return outVal;
}
results:
string 0x3f9d70a4
scanner result 1 = 0x1.fceb86p+29 (or 1067282624.000000)
2. Casting pointers
NSNumber * xPtr = [NSNumber numberWithFloat:[(NSNumber *)@"3f9d70a4" floatValue]];
result: 3.000000
What you have is not a "hexadecimal float", as is produced by the %a string format and scanned by scanHexFloat:, but the hexadecimal representation of a 32-bit floating-point value - i.e. the actual bits.
To convert this back to a float in C requires messing with the type system - to give you access to the bytes that make up a floating-point value. You can do this with a union:
typedef union { float f; uint32_t i; } FloatInt;
This type is similar to a struct but the fields are overlaid on top of each other. You should understand that doing this kind of manipulation requires you understand the storage formats, are aware of endian order, etc. Do not do this lightly.
Now you have the above type you can scan a hexadecimal integer and interpret the resultant bytes as a floating-point number:
FloatInt fl;
NSScanner *scanner = [NSScanner scannerWithString:@"3f9d70a4"];
if([scanner scanHexInt:&fl.i]) // scan into the i field
{
NSLog(@"%x -> %f", fl.i, fl.f); // display the f field, interpreting the bytes of i as a float
}
else
{
// parse error
}
This works, but again consider carefully what you are doing.
HTH
I think a better solution is a workaround like this:
-(float) getFloat:(NSInteger*)pIndex
{
NSInteger index = *pIndex;
NSData* data = [self subDataFromIndex:&index withLength:4];
*pIndex = index;
uint32_t hostData = CFSwapInt32BigToHost(*(const uint32_t *)[data bytes]);
return *(float *)(&hostData);
}
Here the receiver is an NSData that represents the number in hex (binary) form, and the input parameter is a pointer to the current index into that NSData.
So basically you are trying to turn an NSString into a C float; there's an old-fashioned way to do that!
NSString* hexString = @"3f9d70a4";
const char* cHexString = [hexString UTF8String];
long l = strtol(cHexString, NULL, 16);
float f = *((float *) &l);
// f = 1.23
for more detail please see this answer
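If you go this route, a variant that avoids the pointer cast (which breaks strict aliasing) and the 8-byte long on 64-bit devices might look like this sketch:
#include <stdlib.h> // strtoul
#include <string.h> // memcpy

NSString *hexString = @"3f9d70a4";
uint32_t bits = (uint32_t)strtoul([hexString UTF8String], NULL, 16);
float f;
memcpy(&f, &bits, sizeof(f)); // reinterpret the 32 bits as an IEEE 754 float
// f ≈ 1.23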
I have a function to convert an integer into a byte array (for iPhone). To make it dynamic, I allocate the array using malloc, but I think this will leak memory. What's the best way to manage this memory?
+ (unsigned char *) intToByteArray:(int)num{
unsigned char * arr = (unsigned char *)
malloc(sizeof(num) * sizeof(unsigned char));
for (int i = sizeof(num) - 1 ; i >= 0; i --) {
arr[i] = num & 0xFF;
num = num >> 8;
}
return arr;
}
When calling,
int x = 500;
unsigned char * bytes = [Util intToByteArray:x];
I want to avoid the call free(bytes), since the calling function does not explicitly know that the memory was allocated and never freed.
A few things:
The char type (and signed char and unsigned char) all have a size of 1 by definition, so sizeof(unsigned char) is unnecessary.
It looks like you just want to get the byte representation of an int object, if this is the case, it is not necessary to allocate more space for it, simply take the address of the int and cast it to a pointer to unsigned char *. If the byte order is wrong you can use the NSSwapInt function to swap the order of the bytes in the int and then take the address and cast to unsigned char *. For example:
int someInt = 0x12345678;
unsigned char *bytes = (unsigned char *) &someInt;
This cast is legal and reading from bytes is legal up until sizeof(int) bytes are read. This is accessing the “object representation”.
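If the receiver expects a particular byte order, here is a hedged sketch of the swap mentioned above, using Foundation's NSSwapHostIntToBig (a relative of NSSwapInt that only swaps on little-endian hosts); big-endian output is an assumption about your wire format:
#import <Foundation/Foundation.h> // NSByteOrder.h comes with Foundation

int someInt = 0x12345678;
unsigned int big = NSSwapHostIntToBig((unsigned int)someInt);
unsigned char *bytes = (unsigned char *)&big;
// bytes[0..3] are 0x12 0x34 0x56 0x78 on any host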
If you insist on using malloc, then you simply need to pass the buffer to free when you are done, as in:
free(bytes);
The name of your method does not imply the correct ownership of the returned buffer. If your method returns something that the caller is responsible for freeing, it is conventional to name the method using new, copy, or sometimes create. A more suitable name would be copyBytesFromInt: or something similar. Otherwise you could have the method accept a pre-allocated buffer and call the method getBytes:fromInt:, for example:
+ (void) getBytes:(unsigned char *) bytes fromInt:(int) num
{
for (int i = sizeof(num) - 1 ; i >= 0; i --) {
bytes[i] = num & 0xFF;
num = num >> 8;
}
}
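A possible call site for that variant, using the Util class name from the question and a caller-owned stack buffer, so there is nothing to free:
unsigned char buffer[sizeof(int)];
[Util getBytes:buffer fromInt:500];
// buffer now holds the bytes of 500, most significant byte first; no malloc involved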
You could wrap your bytes into a NSData instance:
NSData *data = [NSData dataWithBytesNoCopy:bytes length:sizeof(num) freeWhenDone:YES];
Make sure your method follows the usual object ownership rules.
Just call free(bytes); when you are done with the bytes (either at the end of the method or in the dealloc of the class).
Since you want to avoid the free call, you could wrap your byte array in an NSData object:
NSData *d = [NSData dataWithBytesNoCopy:bytes length:sizeof(int) freeWhenDone:YES];
The conventional way of handling this is for the caller to pass in an allocated byte buffer. That way the caller is responsible for freeing it. Something like:
int x = 500;
char *buffer = malloc(sizeof(int)); // room for the bytes of one int
[Util int:x toByteArray:buffer];
…
free(buffer);
I would also consider creating an NSData to hold the bytes, this would take care of memory management for you, while still allowing you to alter the byte buffer:
+ (NSData *) intToByteArray:(int)num {
unsigned char * arr = (unsigned char *)
malloc(sizeof(num) * sizeof(unsigned char));
for (int i = sizeof(num) - 1 ; i >= 0; i --) {
arr[i] = num & 0xFF;
num = num >> 8;
}
return [NSData dataWithBytesNoCopy:arr length:sizeof(num) freeWhenDone:YES];
}
I have the following code that reads in from a socket:
UInt8 buffer[102400];
UInt8 *buffer_p = buffer;
int bytesRead;
bytesRead = CFReadStreamRead(stream, buffer, 102400);
The message I am expecting begins with a short (2 bytes), a short (2 bytes), and an integer (4 bytes).
I am not sure how to convert them to the corresponding types.
I tried the following:
uint16_t zero16 = NTOHS(buffer_p);
buffer_p += sizeof(uint16_t);
uint16_t msg_id16 = NTOHS(buffer_p);
buffer_p += sizeof(uint16_t);
uint32_t length32 = NTOHL(buffer_p);
buffer_p += sizeof(uint32_t);
or
NSMutableData *data = [NSMutableData dataWithBytes:buffer length:bytesRead];
NSRange firstshort = {0,2};
NSRange secondshort = {2,2};
NSRange intrange = {4,4};
short zero;
[data getBytes:&zero range:firstshort];
short msgid;
[data getBytes:&msgid range:secondshort];
int length;
[data getBytes:&length range:intrange];
But neither is working. Thanks in advance.
You may want to look at OSByteOrder.h. This defines a bunch of macros that can be used to read various integer types or to do byte-swapping. Specifically, you could do something like
uint16_t zero16 = OSReadBigInt16(buffer_p, 0);
uint16_t msg_id16 = OSReadBigInt16(buffer_p, 2);
uint32_t length32 = OSReadBigInt32(buffer_p, 4);
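Tying that back to the question's buffer, a hedged sketch (OSReadBigInt16/32 are declared in <libkern/OSByteOrder.h>; big-endian wire order is an assumption here):
#include <libkern/OSByteOrder.h>

UInt8 buffer[102400];
CFIndex bytesRead = CFReadStreamRead(stream, buffer, sizeof(buffer));
if (bytesRead >= 8) { // at least the 2 + 2 + 4 byte header
    uint16_t zero16   = OSReadBigInt16(buffer, 0);
    uint16_t msg_id16 = OSReadBigInt16(buffer, 2);
    uint32_t length32 = OSReadBigInt32(buffer, 4);
    NSLog(@"%u %u %u", (unsigned int)zero16, (unsigned int)msg_id16, (unsigned int)length32);
}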