How would this Objective-C code be written in Swift 3?

I have this piece of Objective-C code and I'm breaking my head over how to write it in Swift instead:
NSData *data = [NSData data];
char vals[value.length];
[value getBytes:vals length:value.length];
Point3D aPoint;
aPoint.x = ((float)((int16_t)((vals[0] & 0xff) | (((int16_t)vals[1] << 8) & 0xff00)))/ (float) 32768) * 255 * 1;
aPoint.y = ((float)((int16_t)((vals[2] & 0xff) | (((int16_t)vals[3] << 8) & 0xff00)))/ (float) 32768) * 255 * 1;
aPoint.z = ((float)((int16_t)((vals[4] & 0xff) | (((int16_t)vals[5] << 8) & 0xff00)))/ (float) 32768) * 255 * 1;
I've tried using this:
let data: NSData = NSData(data: characteristic.value!)
var vals = [CChar16](repeating: CChar16(), count: data.length)
but that only resulted in:
Cannot subscript a value of type 'inout [CChar16]' (aka 'inout Array')
Any help would be greatly appreciated.
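For what it's worth, here is a minimal Swift 3-style sketch of the same little-endian Int16 reconstruction. The Point3D definition and the assumption that the characteristic's value holds at least six bytes are mine, not from the original post:
import Foundation

struct Point3D {
    var x: Float = 0, y: Float = 0, z: Float = 0
}

// Rebuilds a signed 16-bit value from two little-endian bytes, mirroring
// the (vals[0] & 0xff) | (vals[1] << 8) expression in the Objective-C code.
func point3D(from data: Data) -> Point3D {
    func int16(at offset: Int) -> Int16 {
        let lo = UInt16(data[offset])
        let hi = UInt16(data[offset + 1]) << 8
        return Int16(bitPattern: hi | lo)
    }
    var p = Point3D()
    p.x = Float(int16(at: 0)) / 32768 * 255
    p.y = Float(int16(at: 2)) / 32768 * 255
    p.z = Float(int16(at: 4)) / 32768 * 255
    return p
}

// Hypothetical usage: let aPoint = point3D(from: characteristic.value!)
Working with Data's UInt8 elements and Int16(bitPattern:) avoids the CChar16 buffer and the subscripting error entirely.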

Related

How to add padding to NSData so that it will be a multiple of 8 bytes?

In my code I construct a hex NSString first and then use the utility function below to convert it to NSData for transmission.
For example:
+ (NSData *)convertHexString:(NSString *)hexString {
NSString *command = [hexString stringByReplacingOccurrencesOfString:@" " withString:@""];
NSMutableData *commandToSend = [[NSMutableData alloc] init];
unsigned char whole_byte;
char byte_chars[3] = { '\0', '\0', '\0' };
int i;
for (i = 0; i < [command length] / 2; i++) {
byte_chars[0] = [command characterAtIndex:i * 2];
byte_chars[1] = [command characterAtIndex:i * 2 + 1];
whole_byte = strtol(byte_chars, NULL, 16);
[commandToSend appendBytes:&whole_byte length:1];
}
return commandToSend;
}
Now there is a requirement that specifies "NSData must be a minimum of 8 bytes and be a multiple of 8 bytes. NULL padding can be used to make the data a multiple length of 8 bytes." I am not sure how I can make this happen.
NSString* hexString = #"FF88";//this is two bytes right now.
//how do I add NULL padding so that it becomes 8 bytes?
Thank you!
The code below will align on 16 bytes. You could easily change it to 8 bytes as also indicated in the comments, but, depending on what you are implementing, 16 bytes might be better nowadays.
<whatever> * p;
<whatever> * p16;
// Unaligned pointer
p = malloc ( n * sizeof ( <whatever> ) + 15 ); // use 7 for 8 byte alignment
if ( p )
{
memset ( p, 0, n * sizeof ( <whatever> ) + 15 ); // 7 for 8 bytes
// 16 byte aligned pointer
p16 = ( <whatever> * ) ( ( ( uintptr_t ) p + 15 ) & ~ ( uintptr_t ) 0x0F ); // 0x07 for 8 bytes
// now p16 is aligned - use as is
}
// else allocation failed handle error
Change <whatever> to taste.
PS : This is more a general pointer alignment solution, not for NSString. So you'd use it if you convert it to a char * somewhere.
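As for the padding requirement itself (as opposed to pointer alignment), a rough Swift sketch that zero-pads a payload to a non-zero multiple of 8 bytes could look like the following; the function name is mine and the choice of 8 comes straight from the quoted requirement:
import Foundation

// Appends NULL (0x00) bytes until the data is at least 8 bytes long
// and a multiple of 8 bytes.
func paddedToMultipleOf8(_ data: Data) -> Data {
    var padded = data
    let remainder = padded.count % 8
    if padded.isEmpty || remainder != 0 {
        let padding = padded.isEmpty ? 8 : 8 - remainder
        padded.append(contentsOf: [UInt8](repeating: 0x00, count: padding))
    }
    return padded
}

// Example: a 2-byte payload such as 0xFF 0x88 comes back as 8 bytes,
// the last 6 of which are 0x00.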

iOS: CRC in Objective-C

I am new to iOS and I need to create a data packet using a CRC algorithm for the commands below:
int comm[6];
comm[0]=0x01;
comm[1]=6;
comm[2]=0x70;
comm[3]=0x00;
comm[4]=0xFFFF;
comm[5]=0xFFFF;
I have Java code that does the same thing in the Android version:
byte[] getCRC(byte[] bytes)
{
byte[] result = new byte[2];
try
{
short crc = (short) 0xFFFF;
for (int j = 0; j < bytes.length; j++)
{
byte c = bytes[j];
for (int i = 7; i >= 0; i--)
{
boolean c15 = ((crc >> 15 & 1) == 1);
boolean bit = ((c >> (7 - i) & 1) == 1);
crc <<= 1;
if (c15 ^ bit)
{
crc ^= 0x1021; // 0001 0000 0010 0001 (0, 5, 12)
}
}
}
int crc2 = crc - 0xffff0000;
result[0] = (byte) (crc2 % 256);
result[1] = (byte) (crc2 / 256);
return result;
}
catch(Exception ex)
{
result = null;
return result;
}
}
Input for getCRC() method: The data packet for which CRC is to be calculated.
Output of getCRC() method: CRC for the packet.
I need to do the same thing in Objective-C. Please help, and point me to any sample code if available.
Objective-C also incorporates C, so the contents of your method will look almost the same as in Java. All that is needed is to pass your data into and out of the method, in this example using NSData:
- (NSData *)bytesCRCResult:(NSData *)dataBytes
{
unsigned char *result = (unsigned char *)malloc(2);
unsigned char *bytes = (unsigned char *)[dataBytes bytes]; // returns readonly pointer to the byte stream
uint16_t crc = (short) 0xFFFF;
for (int j = 0; j < dataBytes.length; j++)
{
unsigned char c = bytes[j];
for (int i = 7; i >= 0; i--)
{
bool c15 = ((crc >> 15 & 1) == 1);
bool bit = ((c >> (7 - i) & 1) == 1);
crc <<= 1;
if (c15 ^ bit)
{
crc ^= 0x1021; // 0001 0000 0010 0001 (0, 5, 12)
}
}
}
uint16_t crc2 = crc - 0xffff0000;
result[0] = (unsigned char) (crc2 % 256);
result[1] = (unsigned char) (crc2 / 256);
NSData *resultsToData = [NSData dataWithBytes:result length:2];
free(result);
return resultsToData;
}
NSData can be read as raw bytes using the [NSData bytes] method call, and has a range of useful properties and methods.
For the boolean value, you have a few options:
"bool" seems to be the ISO C/C++ standard type
"Boolean" is defined as "typedef unsigned char"
"boolean_t" is defined as "typedef unsigned int" or "typedef int", depending on 64-bit compilation apparently
"BOOL", the Objective-C bool, which is defined as "typedef signed char", according to http://nshipster.com/bool/ and might therefore not behave as expected.
"uint8_t" can be substituted for "unsigned char", for clarity.
Please note: The above code compiles without warning or complaint, but wasn't tested with actual data.
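If a Swift version is ever needed, a rough sketch of the same bit-by-bit CRC (polynomial 0x1021, initial value 0xFFFF, low byte returned first) might look like this; like the Objective-C version above, it has not been verified against real device data:
// Bit-by-bit CRC-16 over the message bytes, feeding the least significant
// bit of each byte into the register first, as the Java/Objective-C loops do.
func crc16(_ bytes: [UInt8]) -> [UInt8] {
    var crc: UInt16 = 0xFFFF
    for byte in bytes {
        for shift in 0..<8 {
            let c15 = (crc >> 15) & 1 == 1
            let bit = (byte >> shift) & 1 == 1
            crc <<= 1
            if c15 != bit {
                crc ^= 0x1021
            }
        }
    }
    // Low byte first, then high byte, matching result[0] and result[1] above.
    return [UInt8(crc & 0xFF), UInt8(crc >> 8)]
}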

What's wrong with my CRC-12 implementation?

This is what I have so far, but it doesn't seem to match http://zorc.breitbandkatze.de/crc.html all the time.
short crcTable[256];
for (int i = 0; i < 256; i++) {
int crc = (i << 4);
for (int j = 0; j < 8; j++) {
crc = (crc << 1) ^ ((crc & 0x800) ? 0x80F : 0);
}
crcTable[i] = crc & 0xFFF;
}
NSString *theString = #"blah";
unsigned char *string = (unsigned char *)[theString UTF8String];
int length = [theString length];
unsigned short crc = 0;
for (int i = 0; i < length; i++) {
crc = crcTable[(crc ^ string[i]) & 255] ^ (crc >> 8);
}
NSLog(#"%X", crc);
One of our implementations is incorrect, I'm assuming it's mine. But I have no idea what's wrong, or really how to go about working out what's wrong. Any help'd be much appreciated.
Alec
1. Replace
crc = crcTable[(crc ^ string[i]) & 255] ^ (crc >> 8);
with
crc = crcTable[(crc >> 4) ^ string[i]] ^ (crc << 8);
2. Mirror the 8 bits of each of the message's bytes before using them to calculate the CRC value.
3. Finally, mirror the 12 bits of the final CRC.
As an alternative to the last mod you could also just do a crc & 0xfff and tell the breitbandkatze to 'reverse data bytes'.
You will want to double-check, but it appears you are building your table with MSB-first (non-reflected) code and calculating your CRC with LSB-first (reflected) code.
Try replacing this:
crc = crcTable[(crc ^ string[i]) & 255] ^ (crc >> 8);
with this:
crc = crc ^ (string[i] << 4);
crc = (crcTable[(crc >> 4) & 0xFF] ^ (crc << 4)) & 0xFFF;
-Jesse
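For comparison, here is a self-contained sketch of a plain MSB-first, non-reflected, table-driven CRC-12 (polynomial 0x80F, initial value 0) in Swift. Whether this matches the breitbandkatze output still depends on the reflection and initial-value options selected there, so treat the parameters as assumptions:
// 256-entry table for an MSB-first CRC-12 with polynomial 0x80F.
let crc12Table: [UInt16] = (0..<256).map { i -> UInt16 in
    var crc = UInt16(i) << 4
    for _ in 0..<8 {
        crc = (crc & 0x800) != 0 ? ((crc << 1) ^ 0x80F) : (crc << 1)
    }
    return crc & 0xFFF
}

// Processes one byte at a time through the top of the 12-bit register.
func crc12(_ bytes: [UInt8]) -> UInt16 {
    var crc: UInt16 = 0
    for b in bytes {
        let index = Int(((crc >> 4) ^ UInt16(b)) & 0xFF)
        crc = ((crc << 8) ^ crc12Table[index]) & 0xFFF
    }
    return crc
}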

Reading PVRTC image color information for each pixel

How do I read the image color information for each pixel of PVRTC image?
Here is my code for extracting the integer arrays:
NSData *data = [[NSData alloc] initWithContentsOfFile:path];
NSMutableArray *_imageData = [[NSMutableArray alloc] initWithCapacity:10];
BOOL success = FALSE;
PVRTexHeader *header = NULL;
uint32_t flags, pvrTag;
uint32_t dataLength = 0, dataOffset = 0, dataSize = 0;
uint32_t blockSize = 0, widthBlocks = 0, heightBlocks = 0;
uint32_t width = 0, height = 0, bpp = 4;
uint8_t *bytes = NULL;
uint32_t formatFlags;
header = (PVRTexHeader *)[data bytes];
pvrTag = CFSwapInt32LittleToHost(header->pvrTag);
if (gPVRTexIdentifier[0] != ((pvrTag >> 0) & 0xff) ||
gPVRTexIdentifier[1] != ((pvrTag >> 8) & 0xff) ||
gPVRTexIdentifier[2] != ((pvrTag >> 16) & 0xff) ||
gPVRTexIdentifier[3] != ((pvrTag >> 24) & 0xff))
{
return FALSE;
}
flags = CFSwapInt32LittleToHost(header->flags);
formatFlags = flags & PVR_TEXTURE_FLAG_TYPE_MASK;
if (formatFlags == kPVRTextureFlagTypePVRTC_4 || formatFlags == kPVRTextureFlagTypePVRTC_2)
{
[_imageData removeAllObjects];
if (formatFlags == kPVRTextureFlagTypePVRTC_4)
_internalFormat = GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG;
else if (formatFlags == kPVRTextureFlagTypePVRTC_2)
_internalFormat = GL_COMPRESSED_RGBA_PVRTC_2BPPV1_IMG;
_width = width = CFSwapInt32LittleToHost(header->width);
_height = height = CFSwapInt32LittleToHost(header->height);
if (CFSwapInt32LittleToHost(header->bitmaskAlpha))
_hasAlpha = TRUE;
else
_hasAlpha = FALSE;
dataLength = CFSwapInt32LittleToHost(header->dataLength);
bytes = ((uint8_t *)[data bytes]) + sizeof(PVRTexHeader);
// Calculate the data size for each texture level and respect the minimum number of blocks
while (dataOffset < dataLength)
{
if (formatFlags == kPVRTextureFlagTypePVRTC_4)
{
blockSize = 4 * 4; // Pixel by pixel block size for 4bpp
widthBlocks = width / 4;
heightBlocks = height / 4;
bpp = 4;
}
else
{
blockSize = 8 * 4; // Pixel by pixel block size for 2bpp
widthBlocks = width / 8;
heightBlocks = height / 4;
bpp = 2;
}
// Clamp to minimum number of blocks
if (widthBlocks < 2)
widthBlocks = 2;
if (heightBlocks < 2)
heightBlocks = 2;
dataSize = widthBlocks * heightBlocks * ((blockSize * bpp) / 8);
[_imageData addObject:[NSData dataWithBytes:bytes+dataOffset length:dataSize]];
for (int i=0; i < mipmapCount; i++)
{
NSLog(#"width:%d, height:%d",width,height);
data = [[NSData alloc] initWithData:[_imageData objectAtIndex:i]];
NSLog(#"data length:%d",[data length]);
//extracted 20 sample data, but all u could see are large integer number
for(int i = 0; i < 20; i++){
NSLog(#"data[%d]:%d",i,data[i]);
}
PVRTC is a 4x4 (or 8x4) texel, block-based compression system that takes surrounding blocks into account: it stores two low-frequency images, which are combined with higher-frequency modulation data to produce the actual texel output. A better explanation is available here:
http://web.onetel.net.uk/~simonnihal/assorted3d/fenney03texcomp.pdf
So the values you're extracting are actually parts of the encoded blocks and these need to be decoded correctly in order to get sensible values.
There are two ways to get to the colour information: decode/decompress the PVR texture information using a software decompressor or render the texture using a POWERVR graphics core and then read the result back. I'll only discuss the first option here.
It's rather tricky to assemble a decompressor from only the information there, but fortunately there's C++ decompression source code in the POWERVR SDK which you can get here - download one of the iPhone SDKs for instance:
http://www.imgtec.com/powervr/insider/powervr-sdk.asp
It's in the Tools/PVRTDecompress.cpp file.
Hope that helps.
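As an aside, here is a rough Swift sketch of the per-level size bookkeeping the while loop above performs. The mip-level halving and the offset advance are assumptions based on the usual PVR loading code (the snippet in the question is truncated before that point), so treat this as an illustration of the block math rather than a drop-in replacement:
// Computes the byte size of each mip level in the PVRTC payload,
// using the same block-size and minimum-block clamping rules as above.
func pvrtcLevelSizes(width: Int, height: Int, dataLength: Int, is4bpp: Bool) -> [Int] {
    var sizes: [Int] = []
    var w = width, h = height, offset = 0
    while offset < dataLength {
        let blockSize = is4bpp ? 4 * 4 : 8 * 4              // texels per block
        let widthBlocks = max(w / (is4bpp ? 4 : 8), 2)      // clamp to a minimum of 2 blocks
        let heightBlocks = max(h / 4, 2)
        let bpp = is4bpp ? 4 : 2
        let size = widthBlocks * heightBlocks * ((blockSize * bpp) / 8)
        sizes.append(size)
        offset += size
        w = max(w / 2, 1)                                   // next mip level
        h = max(h / 2, 1)
    }
    return sizes
}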

How to convert a unichar value to an NSString in Objective-C?

I've got an international character stored in a unichar variable. This character does not come from a file or URL. The variable itself only stores an unsigned short (0xce91), which is the UTF-8 encoding of the Greek capital letter Alpha ('Α'). I'm trying to put that character into an NSString variable but I fail miserably.
I've tried 2 different ways both of which unsuccessful:
unichar greekAlpha = 0xce91; //could have written greekAlpha = 'Α' instead.
NSString *theString = [NSString stringWithFormat:#"Greek Alpha: %C", greekAlpha];
No good. I get some weird Chinese characters. As a side note, this works perfectly with English characters.
Then I also tried this:
NSString *byteString = [[NSString alloc] initWithBytes:&greekAlpha
length:sizeof(unichar)
encoding:NSUTF8StringEncoding];
But this doesn't work either.
I'm obviously doing something terribly wrong, but I don't know what.
Can someone help me please ?
Thanks!
unichar greekAlpha = 0x0391;
NSString* s = [NSString stringWithCharacters:&greekAlpha length:1];
And now you can incorporate that NSString into another in any way you like. Do note, however, that it is now legal to type a Greek alpha directly into an NSString literal.
Since 0xce91 is UTF-8 encoded and %C expects a UTF-16 code unit, a simple solution like the one above won't work. For stringWithFormat:@"%C" to work you need to pass 0x391, which is the UTF-16 code unit.
In order to create a string from the UTF-8 encoded unichar you need to first split it into its octets and then use initWithBytes:length:encoding:.
unichar utf8char = 0xce91;
char chars[2];
int len = 1;
if (utf8char > 127) {
chars[0] = (utf8char >> 8) & 0xFF;
chars[1] = utf8char & 0xFF;
len = 2;
} else {
chars[0] = utf8char;
}
NSString *string = [[NSString alloc] initWithBytes:chars
length:len
encoding:NSUTF8StringEncoding];
The above answer is great but doesn't account for characters whose UTF-8 encoding is longer than two bytes, e.g. the ellipsis symbol (0xE2, 0x80, 0xA6). Here's a tweak to the code:
if (utf8char > 65535) {
chars[0] = (utf8char >> 16) & 255;
chars[1] = (utf8char >> 8) & 255;
chars[2] = utf8char & 255;
chars[3] = 0x00;
} else if (utf8char > 127) {
chars[0] = (utf8char >> 8) & 255;
chars[1] = utf8char & 255;
chars[2] = 0x00;
} else {
chars[0] = utf8char;
chars[1] = 0x00;
}
NSString *string = [[[NSString alloc] initWithUTF8String:chars] autorelease];
Note the different string initialisation method which doesn't require a length parameter.
Here is an algorithm for UTF-8-encoding a single code point:
if (utf8char < 0x80) {
// 1 byte: 0xxxxxxx
chars[0] = utf8char & 0x7F;
chars[1] = 0x00;
chars[2] = 0x00;
chars[3] = 0x00;
}
else if (utf8char < 0x0800) {
// 2 bytes: 110xxxxx 10xxxxxx
chars[0] = ((utf8char >> 6) & 0x1F) | 0xC0;
chars[1] = (utf8char & 0x3F) | 0x80;
chars[2] = 0x00;
chars[3] = 0x00;
}
else if (utf8char < 0x010000) {
// 3 bytes: 1110xxxx 10xxxxxx 10xxxxxx
chars[0] = ((utf8char >> 12) & 0x0F) | 0xE0;
chars[1] = ((utf8char >> 6) & 0x3F) | 0x80;
chars[2] = (utf8char & 0x3F) | 0x80;
chars[3] = 0x00;
}
else if (utf8char < 0x110000) {
// 4 bytes: 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx
chars[0] = ((utf8char >> 18) & 0x07) | 0xF0;
chars[1] = ((utf8char >> 12) & 0x3F) | 0x80;
chars[2] = ((utf8char >> 6) & 0x3F) | 0x80;
chars[3] = (utf8char & 0x3F) | 0x80;
}
Writing greekAlpha = 'Α' (as the comment in the question suggests) is the moral equivalent of unichar foo = 'abc';.
The problem is that 'Α' doesn't map to a single byte in the "execution character set" (I'm assuming UTF-8) which is "implementation-defined" in C99 §6.4.4.4 10:
The value of an integer character constant containing more than one character (e.g., 'ab'), or containing a character or escape sequence that does not map to a single-byte execution character, is implementation-defined.
One way is to make 'ab' equal to 'a'<<8|'b'. Some Mac/iOS system headers rely on this for things like OSType/FourCharCode/FourCC; the only example in iOS that comes to mind is CoreVideo pixel formats. This is, however, unportable.
If you really want a unichar literal, you can try L'Α' (technically it's a wchar_t literal, but on OS X and iOS, wchar_t is typically UTF-32, so it'll work for things inside the BMP). However, it's far simpler to just use @"Α" (which works as long as you set the source character encoding correctly) or @"\u0391" (which has worked since at least the iOS 3 SDK).
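For completeness, here are a few possible Swift equivalents of the approaches above (these are mine, not from the original answers); they mirror stringWithCharacters:length: and initWithBytes:length:encoding: respectively:
import Foundation

// From the UTF-16 code unit, like stringWithCharacters:length:.
let fromUTF16 = String(utf16CodeUnits: [0x0391], count: 1)         // "Α"

// From the UTF-8 bytes, like initWithBytes:length:encoding:.
let fromUTF8 = String(bytes: [0xCE, 0x91], encoding: .utf8)        // Optional("Α")

// Or simply a literal / escape, as the last answer suggests.
let fromLiteral = "\u{0391}"                                       // "Α"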