Converting int to bytes and switching endianness efficiently - Objective-C

I have to do some int -> byte conversion and switch to big-endian for some MIDI data I'm writing. Right now, I'm doing it like this:
int tempo = 500000;
char* a = (char*)&tempo;
//reverse it
inverse(a, 3);
[myMutableData appendBytes:a length:3];
and the inverse function:
void inverse(char inver_a[], int j)
{
    int i, temp;
    j--;
    for (i = 0; i < (j / 2); i++)
    {
        temp = inver_a[i];
        inver_a[i] = inver_a[j];
        inver_a[j] = temp;
        j--;
    }
}
It works, but it's not very clean, and I don't like having to specify 3 both times (even though I have the luxury of knowing how many bytes it will end up being).
Is there a more convenient way I should be approaching this?

Use the Core Foundation byte swapping functions.
int32_t unswapped = 0x12345678;
int32_t swapped = CFSwapInt32HostToBig(unswapped);
char* a = (char*) &swapped;
[myMutableData appendBytes:a length:sizeof(int32_t)];
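If you only need the three tempo bytes from the question (the value fits in 24 bits), one sketch is to swap the full 32-bit word and skip its most significant byte:
int32_t tempo = 500000;
int32_t swapped = CFSwapInt32HostToBig(tempo);
// big-endian order puts the most significant byte first, so skip it
[myMutableData appendBytes:((char *)&swapped) + 1 length:3];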

This should do the trick:
/*
 Quick swap of endianness.
 */
#include <stdio.h>

int main(void)
{
    unsigned int number = 0x04030201;
    char *p1, *p2;
    int i;

    p1 = (char *)&number;
    p2 = p1 + 3;
    for (i = 0; i < 2; i++) {
        /* XOR-swap the two bytes, then move the pointers inward */
        *p1 ^= *p2;
        *p2 ^= *p1;
        *p1 ^= *p2;
        p1++;
        p2--;
    }
    printf("0x%08x\n", number); /* prints 0x01020304 */
    return 0;
}
You can pack it into a function in whatever way you want to use it. The bitwise swap should compile into some pretty neat assembly :)
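For example, a minimal sketch of wrapping it as a function (the name swap32 is just illustrative):
unsigned int swap32(unsigned int number)
{
    char *p1 = (char *)&number;
    char *p2 = p1 + 3;
    for (int i = 0; i < 2; i++) {
        *p1 ^= *p2;
        *p2 ^= *p1;
        *p1 ^= *p2;
        p1++;
        p2--;
    }
    return number;
}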
Hope it helps :)

int tempo = 500000;
//reverse it
inverse(&tempo);
[myMutableData appendBytes:(char *)&tempo length:sizeof(tempo)];
and the inverse function:
void inverse(int *value)
{
    char *inver_a = (char *)value;
    int j = sizeof(*value) - 1; // index of the last byte
    int i, temp;
    for (i = 0; i < j; i++, j--)
    {
        temp = inver_a[i];
        inver_a[i] = inver_a[j];
        inver_a[j] = temp;
    }
}

Related

iOS: CRC in Objective-C

I am new to iOS. I need to create a data packet using a CRC algorithm for the commands below:
int comm[6];
comm[0]=0x01;
comm[1]=6;
comm[2]=0x70;
comm[3]=0x00;
comm[4]=0xFFFF;
comm[5]=0xFFFF;
I have Java code that does the same thing, developed for Android:
byte[] getCRC(byte[] bytes)
{
    byte[] result = new byte[2];
    try
    {
        short crc = (short) 0xFFFF;
        for (int j = 0; j < bytes.length; j++)
        {
            byte c = bytes[j];
            for (int i = 7; i >= 0; i--)
            {
                boolean c15 = ((crc >> 15 & 1) == 1);
                boolean bit = ((c >> (7 - i) & 1) == 1);
                crc <<= 1;
                if (c15 ^ bit)
                {
                    crc ^= 0x1021; // 0001 0000 0010 0001 (0, 5, 12)
                }
            }
        }
        int crc2 = crc - 0xffff0000;
        result[0] = (byte) (crc2 % 256);
        result[1] = (byte) (crc2 / 256);
        return result;
    }
    catch(Exception ex)
    {
        result = null;
        return result;
    }
}
Input for getCRC() method: The data packet for which CRC is to be calculated.
Output of getCRC() method: CRC for the packet.
I need to do the same thing in Objective-C. Please help, and share any sample code if available.
Objective-C also incorporates C, so the contents of your method will look almost the same as in Java. All that is needed is to pass your data into and out of the method, in this example using NSData:
- (NSData *)bytesCRCResult:(NSData *)dataBytes
{
    unsigned char *result = (unsigned char *)malloc(2);
    unsigned char *bytes = (unsigned char *)[dataBytes bytes]; // returns a read-only pointer to the byte stream
    uint16_t crc = (short) 0xFFFF;
    for (int j = 0; j < dataBytes.length; j++)
    {
        unsigned char c = bytes[j];
        for (int i = 7; i >= 0; i--)
        {
            bool c15 = ((crc >> 15 & 1) == 1);
            bool bit = ((c >> (7 - i) & 1) == 1);
            crc <<= 1;
            if (c15 ^ bit)
            {
                crc ^= 0x1021; // 0001 0000 0010 0001 (0, 5, 12)
            }
        }
    }
    uint16_t crc2 = crc - 0xffff0000;
    result[0] = (unsigned char) (crc2 % 256);
    result[1] = (unsigned char) (crc2 / 256);
    NSData *resultsToData = [NSData dataWithBytes:result length:2];
    free(result);
    return resultsToData;
}
NSData can be read as raw bytes using the [NSData bytes] method call, and has a range of useful properties and methods.
For the boolean value, you have a few options:
"bool" seems to be the ISO C/C++ standard type
"Boolean" is defined as "typedef unsigned char"
"boolean_t" is defined as "typedef unsigned int" or "typedef int", depending on 64-bit compilation apparently
"BOOL", the Objective-C bool, which is defined as "typedef signed char", according to http://nshipster.com/bool/ and might therefore not behave as expected.
"uint8_t" can be substituted for "unsigned char", for clarity.
Please note: The above code compiles without warning or complaint, but wasn't tested with actual data.
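For reference, a minimal call-site sketch, assuming the method above lives in the same class and that each command fits in a single unsigned char (the packet contents here are illustrative, not the question's exact commands):
unsigned char packet[] = { 0x01, 0x06, 0x70, 0x00, 0xFF, 0xFF };
NSData *packetData = [NSData dataWithBytes:packet length:sizeof(packet)];
NSData *crcData = [self bytesCRCResult:packetData];
NSLog(@"CRC: %@", crcData);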

Calling functions from the main method in Objective-C

I need to write a function that is called from the main method with an integer array as a parameter. Please give me an example.
In the example below, the parameter is of int type.
Note: please tell me whether this is the correct way to do this or not...
#import <Foundation/Foundation.h>

void displayit (int);

int main (int argc, const char * argv[])
{
    @autoreleasepool {
        int i;
        for (i = 0; i < 5; i++)
        {
            displayit( i );
        }
    }
    return 0;
}

void displayit (int i)
{
    int y = 0;
    y += i;
    NSLog (@"y + i = %i", y);
}
Thanks in advance....
I tried this out, please check:
#import <Foundation/Foundation.h>

void displayit (int array[], int len);

int main (int argc, const char * argv[])
{
    @autoreleasepool {
        int array[] = {1, 2, 3};
        displayit( array, 3 );
    }
    return 0;
}

void displayit (int array[], int len)
{
    for (int i = 0; i < len; i++) {
        NSLog(@"display %d : %d", i, array[i]);
    }
}
The output is:
2014-10-30 14:09:32.017 OSTEST[32541:77397] display 0 : 1
2014-10-30 14:09:32.018 OSTEST[32541:77397] display 1 : 2
2014-10-30 14:09:32.018 OSTEST[32541:77397] display 2 : 3
Program ended with exit code: 0
I used another parameter, len, to avoid reading beyond the array bounds.
If the array is a global, static, or automatic variable (int array[10];), then sizeof(array)/sizeof(array[0]) works, as in the sketch below (quoted from another question).
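A sketch of that at the call site (it only works where the true array, not a pointer it has decayed to, is in scope):
int array[] = {1, 2, 3};
int len = (int)(sizeof(array) / sizeof(array[0])); // 3 elements here
displayit(array, len);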

How to return a generated array from a C function to Objective-C

I need to generate a random string of 10 bytes in a C function and call the function from Objective-C. So, I'm creating a pointer to uint8_t and passing it to the C function. The function generates random bytes and assigns them to *randomString. However, after returning from the function to Objective-C, the randomValue pointer still points to NULL.
Here's my random function in C:
void randomString(uint8_t *randomString)
{
    randomString = malloc(10);
    char randomByte;
    char i;
    for (i = 0; i < 10; i++) {
        srand((unsigned)time(NULL));
        randomByte = (rand() % 255) + 1;
        *randomString = randomByte;
        randomString++;
    }
}
Here's objective-c part:
uint8_t *randomValue = NULL;
randomString(randomValue); // randomValue still points to 0x000000
NSString *randomString = [[NSString alloc] initWithBytes:randomValue length:10 encoding:NSASCIIStringEncoding];
NSLog(@"Random string: %@", randomString);
A more natural approach, mirroring malloc() itself, would be:
uint8_t * randomString()
{
    uint8_t *randomString = malloc(10);
    srand((unsigned)time(NULL));
    for (unsigned i = 0; i < 10; i++)
        randomString[i] = (rand() % 254) + 1;
    return randomString;
}
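A usage sketch for this version; since the buffer comes from malloc(), the caller frees it once the NSString has been created:
uint8_t *randomValue = randomString();
NSString *string = [[NSString alloc] initWithBytes:randomValue length:10 encoding:NSASCIIStringEncoding];
free(randomValue);
NSLog(@"Random string: %@", string);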
Pointers are passed by value, so randomValue will remain NULL after the call of randomString. You need to pass a pointer to a pointer in order to make it work:
void randomString(uint8_t **randomString) {
    *randomString = malloc(10);
    // ... the rest of your code goes here, with an extra level of indirection
}
uint8_t *randomValue = NULL;
randomString(&randomValue);
In other words, the function should take a uint8_t ** rather than a uint8_t *.
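Filling in the elided body with the question's own loop (and seeding rand() once, before the loop, rather than on every iteration), a sketch of the complete function:
void randomString(uint8_t **randomString)
{
    *randomString = malloc(10);
    srand((unsigned)time(NULL)); // seed once, not inside the loop
    for (int i = 0; i < 10; i++) {
        (*randomString)[i] = (rand() % 255) + 1; // random byte in the range 1..255
    }
}
The caller then frees the buffer when it is no longer needed.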

De-interleave and interleave buffer with vDSP_ctoz() and vDSP_ztoz()?

How do I de-interleave the float *newAudio into float *channel1 and float* channel2 and interleave it back into newAudio?
Novocaine *audioManager = [Novocaine audioManager];

__block float *channel1;
__block float *channel2;

[audioManager setInputBlock:^(float *newAudio, UInt32 numSamples, UInt32 numChannels) {
    // Audio comes in interleaved, so,
    // if numChannels = 2, newAudio[0] is channel 1, newAudio[1] is channel 2, newAudio[2] is channel 1, etc.
    // Deinterleave with vDSP_ctoz()/vDSP_ztoz(); and fill channel1 and channel2
    // ... processing on channel1 & channel2
    // Interleave channel1 and channel2 with vDSP_ctoz()/vDSP_ztoz(); to newAudio
}];
What would these two lines of code look like? I don't understand the syntax of ctoz/ztoz.
What I do in Novocaine's accessory classes, like the Ringbuffer, for de-interleaving:
float zero = 0.0;
vDSP_vsadd(data, numChannels, &zero, leftSampleData, 1, numFrames);
vDSP_vsadd(data+1, numChannels, &zero, rightSampleData, 1, numFrames);
for interleaving:
float zero = 0.0;
vDSP_vsadd(leftSampleData, 1, &zero, data, numChannels, numFrames);
vDSP_vsadd(rightSampleData, 1, &zero, data+1, numChannels, numFrames);
The more general way to do things is to have an array of arrays, like:
int maxNumChannels = 2;
int maxNumFrames = 1024;

float **arrays = (float **)calloc(maxNumChannels, sizeof(float *));
for (int i = 0; i < maxNumChannels; ++i) {
    arrays[i] = (float *)calloc(maxNumFrames, sizeof(float));
}

[[Novocaine audioManager] setInputBlock:^(float *data, UInt32 numFrames, UInt32 numChannels) {
    float zero = 0.0;
    for (int iChannel = 0; iChannel < numChannels; ++iChannel) {
        // offset by iChannel so each channel is pulled from its own interleaved slot
        vDSP_vsadd(data + iChannel, numChannels, &zero, arrays[iChannel], 1, numFrames);
    }
}];
which is what I use internally a lot in the RingBuffer accessory classes for Novocaine. I timed the speed of vDSP_vsadd versus memcpy, and (very, very surprisingly), there's no speed difference.
Of course, you can always just use a ring buffer and save yourself the hassle:
#import "RingBuffer.h"

int maxNumFrames = 4096;
int maxNumChannels = 2;
RingBuffer *ringBuffer = new RingBuffer(maxNumFrames, maxNumChannels);

[[Novocaine audioManager] setInputBlock:^(float *data, UInt32 numFrames, UInt32 numChannels) {
    ringBuffer->AddNewInterleavedFloatData(data, numFrames, numChannels);
}];
[[Novocaine audioManager] setOutputBlock:^(float *data, UInt32 numFrames, UInt32 numChannels) {
    ringBuffer->FetchInterleavedData(data, numFrames, numChannels);
}];
Hope that helps.
Here is an example:
#include <Accelerate/Accelerate.h>

int main(int argc, const char * argv[])
{
    // Bogus interleaved stereo data
    float stereoInput [1024];
    for(int i = 0; i < 1024; ++i)
        stereoInput[i] = (float)i;

    // Buffers to hold the deinterleaved data
    float leftSampleData [1024 / 2];
    float rightSampleData [1024 / 2];

    DSPSplitComplex output = {
        .realp = leftSampleData,
        .imagp = rightSampleData
    };

    // Split the data. The left (even) samples will end up in leftSampleData, and the right (odd) will end up in rightSampleData
    vDSP_ctoz((const DSPComplex *)stereoInput, 2, &output, 1, 1024 / 2);

    // Print the result for verification
    for(int i = 0; i < 512; ++i)
        printf("%d: %f + %f\n", i, leftSampleData[i], rightSampleData[i]);

    return 0;
}
sbooth answers how to de-interleave using vDSP_ctoz. Here's the complementary operation, namely interleaving using vDSP_ztoc.
#include <stdio.h>
#include <Accelerate/Accelerate.h>

int main(int argc, const char * argv[])
{
    const int NUM_FRAMES = 16;
    const int NUM_CHANNELS = 2;

    // Buffers for left/right channels
    float xL[NUM_FRAMES];
    float xR[NUM_FRAMES];

    // Initialize with some identifiable data
    for (int i = 0; i < NUM_FRAMES; i++)
    {
        xL[i] = 2*i;   // Even
        xR[i] = 2*i+1; // Odd
    }

    // Buffer for interleaved data
    float stereo[NUM_CHANNELS*NUM_FRAMES];
    vDSP_vclr(stereo, 1, NUM_CHANNELS*NUM_FRAMES);

    // Interleave - take separate left & right buffers, and combine into
    // single buffer alternating left/right/left/right, etc.
    DSPSplitComplex x = {xL, xR};
    vDSP_ztoc(&x, 1, (DSPComplex*)stereo, 2, NUM_FRAMES);

    // Print the result for verification. Should give output like
    // i: L, R
    // 0: 0.00, 1.00
    // 1: 2.00, 3.00
    // etc...
    printf(" i: L, R\n");
    for (int i = 0; i < NUM_FRAMES; i++)
    {
        printf("%2d: %5.2f, %5.2f\n", i, stereo[2*i], stereo[2*i+1]);
    }
    return 0;
}

How do I convert a Hexa-Tri-Decimal number into an int in Objective-C?

The Hexa-Tri-Decimal number uses 0-9 and A-Z. I know I can convert from hex with an NSScanner, but I'm not sure how to go about converting Hexa-Tri-Decimal.
For example, for an NSString containing "0XPM" the int value should be 43690, and "1BLC" would be 61680.
Objective-C is built on top of C, and luckily enough you can use the functions there to accomplish the conversion. What you're looking for is strtol or one of its sibling functions. If I recall correctly, strtol handles up to base 36 (the hexa-tri-decimal you refer to).
http://www.cplusplus.com/reference/clibrary/cstdlib/strtol/
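For the values from the question, a minimal sketch (strtol treats A-Z, case-insensitively, as the digits 10-35 when the base is 36):
long first = strtol([@"0XPM" UTF8String], NULL, 36);  // 43690
long second = strtol([@"1BLC" UTF8String], NULL, 36); // 61680
NSLog(@"%ld %ld", first, second);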
I can only think to do this using C strings, as they offer easier access to individual characters.
This seemed like an interesting problem to solve, so I had a go at writing it:
int parseBase36Number(NSString *input)
{
    const char *inputCString = [[input lowercaseString] UTF8String];
    size_t inputLength = [input length];
    int orderOfMagnitudeMultiplier = 1;
    int result = 0;

    // iterate backward through the number
    for (int i = (int)inputLength - 1; i >= 0; i--)
    {
        char inputChar = inputCString[i];
        int charNumericValue;
        if (isdigit(inputChar))
        {
            charNumericValue = inputChar - '0';
        }
        else if (islower(inputChar))
        {
            charNumericValue = inputChar - 'a' + 10;
        }
        else
        {
            // unhandled character - report an error here; treat as 0 for now
            charNumericValue = 0;
        }
        result += charNumericValue * orderOfMagnitudeMultiplier;
        orderOfMagnitudeMultiplier *= 36;
    }

    return result;
}
NOTE: I've not tested this at all, so take care and let me know how it goes!