De-interleave and interleave buffer with vDSP_ctoz() and vDSP_ztoz()? - objective-c

How do I de-interleave float *newAudio into float *channel1 and float *channel2, and then interleave them back into newAudio?
Novocaine *audioManager = [Novocaine audioManager];
__block float *channel1;
__block float *channel2;
[audioManager setInputBlock:^(float *newAudio, UInt32 numSamples, UInt32 numChannels) {
// Audio comes in interleaved, so,
// if numChannels = 2, newAudio[0] is channel 1, newAudio[1] is channel 2, newAudio[2] is channel 1, etc.
// Deinterleave with vDSP_ctoz()/vDSP_ztoz(); and fill channel1 and channel2
// ... processing on channel1 & channel2
// Interleave channel1 and channel2 with vDSP_ctoz()/vDSP_ztoz(); to newAudio
}];
What would these two lines of code look like? I don't understand the syntax of ctoz/ztoz.

Here's what I do in Novocaine's accessory classes, like the RingBuffer, for de-interleaving:
float zero = 0.0;
vDSP_vsadd(data, numChannels, &zero, leftSampleData, 1, numFrames);
vDSP_vsadd(data+1, numChannels, &zero, rightSampleData, 1, numFrames);
And for interleaving:
float zero = 0.0;
vDSP_vsadd(leftSampleData, 1, &zero, data, numChannels, numFrames);
vDSP_vsadd(rightSampleData, 1, &zero, data+1, numChannels, numFrames);
The more general way to do things is to have an array of arrays, like
int maxNumChannels = 2;
int maxNumFrames = 1024;
float **arrays = (float **)calloc(maxNumChannels, sizeof(float *));
for (int i=0; i < maxNumChannels; ++i) {
arrays[i] = (float *)calloc(maxNumFrames, sizeof(float));
}
[[Novocaine audioManager] setInputBlock:^(float *data, UInt32 numFrames, UInt32 numChannels) {
float zero = 0.0;
for (int iChannel = 0; iChannel < numChannels; ++iChannel) {
vDSP_vsadd(data + iChannel, numChannels, &zero, arrays[iChannel], 1, numFrames);
}
}];
which is what I use internally a lot in the RingBuffer accessory classes for Novocaine. I timed the speed of vDSP_vsadd versus memcpy, and (very, very surprisingly), there's no speed difference.
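A sketch of the output-side counterpart (not taken from the RingBuffer code, just the same pattern run in reverse, assuming the arrays buffers allocated above):
[[Novocaine audioManager] setOutputBlock:^(float *data, UInt32 numFrames, UInt32 numChannels) {
    float zero = 0.0;
    for (int iChannel = 0; iChannel < numChannels; ++iChannel) {
        // Copy each channel's contiguous samples back into the interleaved buffer,
        // writing every numChannels-th slot starting at this channel's offset.
        vDSP_vsadd(arrays[iChannel], 1, &zero, data + iChannel, numChannels, numFrames);
    }
}];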
Of course, you can always just use a ring buffer and save yourself the hassle:
#import "RingBuffer.h"
int maxNumFrames = 4096;
int maxNumChannels = 2;
RingBuffer *ringBuffer = new RingBuffer(maxNumFrames, maxNumChannels);
[[Novocaine audioManager] setInputBlock:^(float *data, UInt32 numFrames, UInt32 numChannels) {
ringBuffer->AddNewInterleavedFloatData(data, numFrames, numChannels);
}];
[[Novocaine audioManager] setOutputBlock:^(float *data, UInt32 numFrames, UInt32 numChannels) {
ringBuffer->FetchInterleavedData(data, numFrames, numChannels);
}];
Hope that helps.

Here is an example:
#include <stdio.h>
#include <Accelerate/Accelerate.h>
int main(int argc, const char * argv[])
{
// Bogus interleaved stereo data
float stereoInput [1024];
for(int i = 0; i < 1024; ++i)
stereoInput[i] = (float)i;
// Buffers to hold the deinterleaved data
float leftSampleData [1024 / 2];
float rightSampleData [1024 / 2];
DSPSplitComplex output = {
.realp = leftSampleData,
.imagp = rightSampleData
};
// Split the data. The left (even) samples will end up in leftSampleData, and the right (odd) will end up in rightSampleData
vDSP_ctoz((const DSPComplex *)stereoInput, 2, &output, 1, 1024 / 2);
// Print the result for verification
for(int i = 0; i < 512; ++i)
printf("%d: %f + %f\n", i, leftSampleData[i], rightSampleData[i]);
return 0;
}

sbooth's answer shows how to de-interleave using vDSP_ctoz. Here's the complementary operation, namely interleaving using vDSP_ztoc.
#include <stdio.h>
#include <Accelerate/Accelerate.h>
int main(int argc, const char * argv[])
{
const int NUM_FRAMES = 16;
const int NUM_CHANNELS = 2;
// Buffers for left/right channels
float xL[NUM_FRAMES];
float xR[NUM_FRAMES];
// Initialize with some identifiable data
for (int i = 0; i < NUM_FRAMES; i++)
{
xL[i] = 2*i; // Even
xR[i] = 2*i+1; // Odd
}
// Buffer for interleaved data
float stereo[NUM_CHANNELS*NUM_FRAMES];
vDSP_vclr(stereo, 1, NUM_CHANNELS*NUM_FRAMES);
// Interleave - take separate left & right buffers, and combine into
// single buffer alternating left/right/left/right, etc.
DSPSplitComplex x = {xL, xR};
vDSP_ztoc(&x, 1, (DSPComplex*)stereo, 2, NUM_FRAMES);
// Print the result for verification. Should give output like
// i: L, R
// 0: 0.00, 1.00
// 1: 2.00, 3.00
// etc...
printf(" i: L, R\n");
for (int i = 0; i < NUM_FRAMES; i++)
{
printf("%2d: %5.2f, %5.2f\n", i, stereo[2*i], stereo[2*i+1]);
}
return 0;
}
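Putting the two answers together, the original input block might look something like the sketch below. It assumes stereo input, that numSamples counts frames per channel (as in Novocaine's other blocks), and that channel1/channel2 were each allocated with at least numSamples floats; adjust the counts if numSamples is the total interleaved length.
[audioManager setInputBlock:^(float *newAudio, UInt32 numSamples, UInt32 numChannels) {
    // Wrap the two channel buffers as the "real" and "imaginary" parts of a split complex
    DSPSplitComplex split = { .realp = channel1, .imagp = channel2 };
    // De-interleave: even samples -> channel1, odd samples -> channel2
    vDSP_ctoz((const DSPComplex *)newAudio, 2, &split, 1, numSamples);
    // ... process channel1 and channel2 ...
    // Interleave back into newAudio
    vDSP_ztoc(&split, 1, (DSPComplex *)newAudio, 2, numSamples);
}];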

Related

Convolution matrix sharpen filter

I'm trying to implement a sharpen convolution matrix filter for an image. For this I create a 3x3 matrix. Maybe I did something wrong with the formula? I also tried other sharpen matrices, but it didn't help. The color value could be larger than 255 or smaller than zero, so I decided to clamp it to the range (0, 255). Is that a correct solution?
static const int filterSmallMatrixSize = 3;
static const int sharpMatrix[3][3] = {{-1, -1, -1},{-1, 9, -1},{-1, -1, -1}};
Some defines:
#define Mask8(x) ( (x) & 0xFF )
#define R(x) ( Mask8(x) )
#define G(x) ( Mask8(x >> 8 ) )
#define B(x) ( Mask8(x >> 16) )
#define A(x) ( Mask8(x >> 24) )
#define RGBAMake(r, g, b, a) ( Mask8(r) | Mask8(g) << 8 | Mask8(b) << 16 | Mask8(a) << 24 )
and the algorithm:
- (UIImage *)processSharpFilterUsingPixels:(UIImage *)inputImage
{
UInt32 *inputPixels;
CGImageRef inputCGImage = [inputImage CGImage];
NSUInteger inputWidth = CGImageGetWidth(inputCGImage);
NSUInteger inputHeight = CGImageGetHeight(inputCGImage);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
NSUInteger bytesPerPixel = 4;
NSUInteger bitsPerComponent = 8;
NSUInteger inputBytesPerRow = bytesPerPixel * inputWidth;
inputPixels = (UInt32 *)calloc(inputHeight * inputWidth, sizeof(UInt32));
CGContextRef context = CGBitmapContextCreate(inputPixels, inputWidth, inputHeight,
bitsPerComponent, inputBytesPerRow, colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGContextDrawImage(context, CGRectMake(0, 0, inputWidth, inputHeight), inputCGImage);
for (NSUInteger j = 1; j < inputHeight - 1; j++)
{
for (NSUInteger i = 1; i < inputWidth - 1; i++)
{
Float32 newRedColor = 0;
Float32 newGreenColor = 0;
Float32 newBlueColor = 0;
Float32 newA = 0;
for (int filterMatrixI = 0 ; filterMatrixI < filterSmallMatrixSize ; filterMatrixI ++)
{
for (int filterMatrixJ = 0; filterMatrixJ < filterSmallMatrixSize; filterMatrixJ ++)
{
UInt32 * currentPixel = inputPixels + ((j + filterMatrixJ - 1) * inputWidth) + i + filterMatrixI - 1;
int color = *currentPixel;
newRedColor += (R(color) * sharpMatrix[filterMatrixI][filterMatrixJ]);
newGreenColor += (G(color) * sharpMatrix[filterMatrixI][filterMatrixJ]);
newBlueColor += (B(color)* sharpMatrix[filterMatrixI][filterMatrixJ]);
newA += (A(color) * sharpMatrix[filterMatrixI][filterMatrixJ]);
}
}
int r = MAX( MIN((int)newRedColor,255), 0);
int g = MAX( MIN((int)newGreenColor,255), 0);
int b = MAX( MIN((int)newBlueColor,255), 0);
int a = MAX( MIN((int)newA,255), 0);
UInt32 *currentMainImagePixel = inputPixels + (j * inputWidth) + i;
*currentMainImagePixel = RGBAMake(r,g,b,a);
}
}
CGImageRef newCGImage = CGBitmapContextCreateImage(context);
UIImage * processedImage = [UIImage imageWithCGImage:newCGImage];
CGColorSpaceRelease(colorSpace);
CGContextRelease(context);
CGImageRelease(newCGImage);
free(inputPixels);
return processedImage;
}
As a result I get this:
Consider these as pixels in the middle of the image:
|_|_|_|_|
|_|_|_|_|
|_|_|_|_|
|_|_|_|_|
Since you are updating the image in place, this is how it looks somewhere in the middle of the sharpen pass:
|u|u|u|u|
|u|u|u|u|
|u|c|_|_|
|_|_|_|_|
Here u stands for an updated pixel and c for the current one. Its new color depends on the colors of the surrounding pixels, half of which come from the already-sharpened image and half from the original. To fix this we need a copy of the original image's pixels:
...
CGContextDrawImage(context, CGRectMake(0, 0, inputWidth, inputHeight), inputCGImage);
UInt32 *origPixels = calloc(inputHeight * inputWidth, sizeof(UInt32));
memcpy(origPixels, inputPixels, inputHeight * inputWidth * sizeof(UInt32));
for (NSUInteger j = 1; j < inputHeight - 1; j++) {
...
Now we only need to change one line so the kernel reads its pixels from the original image:
//changed inputPixels -> origPixels
UInt32 * currentPixel = origPixels + ((j + filterMatrixJ - 1) * inputWidth) + i + filterMatrixI - 1;
Here are some examples of how it works compared to the unfixed filter (the link is Dropbox, sorry about that). I've tried different matrices, and for me the best was somewhere around:
const float sharpMatrix[3][3] = {{-0.3, -0.3, -0.3},{-0.3, 3.4, -0.3},{-0.3, -0.3, -0.3}};
Also, I should note that this way of keeping the original image is not optimal. My fix basically doubles the amount of memory consumed (and remember to free(origPixels) when you're done). It could be done by holding only two rows of pixels, and I'm sure there are even better ways.
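For what it's worth, here is a rough sketch of that two-row idea (my own assumption of how it could look, not tested): keep copies of the previous and current original rows, so the 3x3 kernel always reads unmodified pixels even though inputPixels is overwritten in place. It reuses the macros, sharpMatrix, inputPixels, inputWidth and inputHeight from the code above.
UInt32 *prevRow = (UInt32 *)calloc(inputWidth, sizeof(UInt32));
UInt32 *currRow = (UInt32 *)calloc(inputWidth, sizeof(UInt32));
memcpy(prevRow, inputPixels, inputWidth * sizeof(UInt32)); // original row 0
for (NSUInteger j = 1; j < inputHeight - 1; j++)
{
    // Snapshot the original row j before it gets overwritten below
    memcpy(currRow, inputPixels + j * inputWidth, inputWidth * sizeof(UInt32));
    // Row j-1 comes from prevRow, row j from currRow, row j+1 is still untouched in inputPixels
    UInt32 *rows[3] = { prevRow, currRow, inputPixels + (j + 1) * inputWidth };
    for (NSUInteger i = 1; i < inputWidth - 1; i++)
    {
        Float32 newRed = 0, newGreen = 0, newBlue = 0, newAlpha = 0;
        for (int mj = 0; mj < 3; mj++)
        {
            for (int mi = 0; mi < 3; mi++)
            {
                UInt32 color = rows[mj][i + mi - 1];
                newRed   += R(color) * sharpMatrix[mi][mj];
                newGreen += G(color) * sharpMatrix[mi][mj];
                newBlue  += B(color) * sharpMatrix[mi][mj];
                newAlpha += A(color) * sharpMatrix[mi][mj];
            }
        }
        inputPixels[j * inputWidth + i] = RGBAMake(MAX(MIN((int)newRed, 255), 0),
                                                   MAX(MIN((int)newGreen, 255), 0),
                                                   MAX(MIN((int)newBlue, 255), 0),
                                                   MAX(MIN((int)newAlpha, 255), 0));
    }
    UInt32 *tmp = prevRow; prevRow = currRow; currRow = tmp; // slide the window down one row
}
free(prevRow);
free(currRow);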

iOS:CRC in obj c

I am new to iOS. I need to create a data packet using a CRC algorithm for the commands below:
int comm[6];
comm[0]=0x01;
comm[1]=6;
comm[2]=0x70;
comm[3]=0x00;
comm[4]=0xFFFF;
comm[5]=0xFFFF;
I have Java code that does the same thing, developed for Android:
byte[] getCRC(byte[] bytes)
{
byte[] result = new byte[2];
try
{
short crc = (short) 0xFFFF;
for (int j = 0; j < bytes.length; j++)
{
byte c = bytes[j];
for (int i = 7; i >= 0; i--)
{
boolean c15 = ((crc >> 15 & 1) == 1);
boolean bit = ((c >> (7 - i) & 1) == 1);
crc <<= 1;
if (c15 ^ bit)
{
crc ^= 0x1021; // 0001 0000 0010 0001 (0, 5, 12)
}
}
}
int crc2 = crc - 0xffff0000;
result[0] = (byte) (crc2 % 256);
result[1] = (byte) (crc2 / 256);
return result;
}
catch(Exception ex)
{
result = null;
return result;
}
}
Input for getCRC() method: The data packet for which CRC is to be calculated.
Output of getCRC() method: CRC for the packet.
I need to do the same thing in Objective-C. Please help; sample code would also be appreciated.
Objective-C also incorporates C, so the contents of your method will look almost the same as in Java. All that is needed is to pass your data into and out of the method, in this example using NSData:
- (NSData *)bytesCRCResult:(NSData *)dataBytes
{
unsigned char *result = (unsigned char *)malloc(2);
unsigned char *bytes = (unsigned char *)[dataBytes bytes]; // returns readonly pointer to the byte stream
uint16_t crc = (short) 0xFFFF;
for (int j = 0; j < dataBytes.length; j++)
{
unsigned char c = bytes[j];
for (int i = 7; i >= 0; i--)
{
bool c15 = ((crc >> 15 & 1) == 1);
bool bit = ((c >> (7 - i) & 1) == 1);
crc <<= 1;
if (c15 ^ bit)
{
crc ^= 0x1021; // 0001 0000 0010 0001 (0, 5, 12)
}
}
}
uint16_t crc2 = crc - 0xffff0000;
result[0] = (unsigned char) (crc2 % 256);
result[1] = (unsigned char) (crc2 / 256);
NSData *resultsToData = [NSData dataWithBytes:result length:2];
free(result);
return resultsToData;
}
NSData can be read as raw bytes using the [NSData bytes] method call, and has a range of useful properties and methods.
For the boolean value, you have a few options:
"bool" seems to be the ISO C/C++ standard type
"Boolean" is defined as "typedef unsigned char"
"boolean_t" is defined as "typedef unsigned int" or "typedef int", depending on 64-bit compilation apparently
"BOOL", the Objective-C bool, which is defined as "typedef signed char", according to http://nshipster.com/bool/ and might therefore not behave as expected.
"uint8_t" can be substituted for "unsigned char", for clarity.
Please note: The above code compiles without warning or complaint, but wasn't tested with actual data.
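For completeness, a hypothetical call site for the packet in the question might look like this sketch. The byte layout is my assumption, since the question stores 16-bit 0xFFFF values in ints; adjust it to whatever your protocol actually expects.
uint8_t packet[] = { 0x01, 0x06, 0x70, 0x00, 0xFF, 0xFF, 0xFF, 0xFF }; // assumed layout
NSData *packetData = [NSData dataWithBytes:packet length:sizeof(packet)];
NSData *crc = [self bytesCRCResult:packetData];
NSLog(@"CRC: %@", crc); // two bytes, low byte first, as in the Java version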

From char* array to two-dimensional array and back: the algorithm goes wrong

I think my algorithm has flawed logic somewhere. Calling the two functions should return the same image, however it doesn't! Can anyone see where my logic goes wrong?
These functions are used on PNG images. I have found that they store colors as follows: ALPHA, RED, GREEN, BLUE, repeating for the whole image. "pixels" is just a long array of those values (like a list).
My intent is to do a lowpass filter on the image, and the logic is a lot easier if you instead use a two-dimensional array / matrix of the image.
// loading pixels
UIImage *image = imageView.image;
CGImageRef imageRef = image.CGImage;
NSData *data = (NSData *)CGDataProviderCopyData(CGImageGetDataProvider(imageRef));
char *pixels = (char *)[data bytes];
// editing image
char** matrix = [self mallocMatrix:pixels withWidth:CGImageGetWidth(imageRef) andHeight:CGImageGetHeight(imageRef)];
char* newPixels = [self mallocMatrixToList:matrix withWidth:CGImageGetWidth(imageRef) andHeight:CGImageGetHeight(imageRef)];
pixels = newPixels;
and the functions looks like this:
- (char**)mallocMatrix:(char*)pixels withWidth:(int)width andHeight:(int)height {
char** matrix = malloc(sizeof(char*)*height);
int c = 0;
for (int h=0; h < height; h++) {
matrix[h] = malloc(sizeof(char)*width*4);
for (int w=0; w < (width*4); w++) {
matrix[h][w] = pixels[c];
c++;
}
}
return matrix;
}
- (char*)mallocMatrixToList:(char**)matrix withWidth:(int)width andHeight:(int)height {
char* pixels = malloc(sizeof(char)*height*width*4);
int c = 0;
for (int h=0; h < height; h++) {
for (int w=0; w < (width*4); w++) {
pixels[c] = matrix[h][w];
c++;
}
}
return pixels;
}
Edit: Fixed the malloc as posters pointed out. Simplified the algorithm a bit.
I have not tested your code but it appears you are allocating the incorrect size for your matrix and low pass filter as well as not moving to the next pixel correctly.
- (char**) mallocMatrix:(char*)pixels withWidth:(int)width andHeight:(int)height {
//When using Objective-C do not cast malloc (only do so with Objective-C++)
char** matrix = malloc(sizeof(char*)*height);
for (int h=0; h < height; h++) {
//Each row needs to malloc the sizeof(char) not char *
matrix[h] = malloc(sizeof(char)*width*4);
for (int w=0; w < width; w++) {
// Each pixel has ARGB
for (int i=0; i < 4; i++) {
matrix[h][w*4 + i] = pixels[(h*width + w)*4 + i];
}
}
}
return matrix;
}
- (char*) mallocLowPassFilter:(char**)matrix withWidth:(int)width andHeight:(int)height
{
//Same as before only malloc sizeof(char)
char* pixels = malloc(sizeof(char)*height*width*4);
for (int h=0; h < height; h++) {
for (int w=0; w < width; w++) {
// Each pixel has ARGB
for (int i=0; i < 4; i++) {
// TODO: Lowpass here
pixels[(h*width + w)*4 + i] = matrix[h][w*4 + i];
}
}
}
return pixels;
}
Note: this code, as you know, is limited to ARGB images. If you would like to support more image formats, there are additional functions available to get more information about your image, such as CGImageGetColorSpace to find the pixel format (ARGB, RGBA, RGB, etc.) and CGImageGetBytesPerRow to get the number of bytes per row (so you wouldn't have to multiply width by channels per pixel).
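As a small illustration of that last point, the row copies could be driven by CGImageGetBytesPerRow instead of assuming 4 bytes per pixel. This is just a sketch under that assumption, not a drop-in replacement for the code above:
size_t bytesPerRow = CGImageGetBytesPerRow(imageRef);
char **matrix = malloc(sizeof(char *) * height);
for (int h = 0; h < height; h++) {
    matrix[h] = malloc(bytesPerRow);
    // Copy one full row, whatever the pixel format and row padding happen to be
    memcpy(matrix[h], pixels + h * bytesPerRow, bytesPerRow);
}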

Converting int to bytes and switching endian efficiently

I have to do some int -> byte conversion and switch to big endian for some MIDI data I'm writing. Right now, I'm doing it like:
int tempo = 500000;
char* a = (char*)&tempo;
//reverse it
inverse(a, 3);
[myMutableData appendBytes:a length:3];
and the inverse function:
void inverse(char inver_a[],int j)
{
int i,temp;
j--;
for(i=0;i<(j/2);i++)
{
temp=inver_a[i];
inver_a[i]=inver_a[j];
inver_a[j]=temp;
j--;
}
}
It works, but it's not real clean, and I don't like that I'm having to specify 3 both times (since I have the luxury of knowing how many bytes it will end up).
Is there a more convenient way I should be approaching this?
Use the Core Foundation byte swapping functions.
int32_t unswapped = 0x12345678;
int32_t swapped = CFSwapInt32HostToBig(unswapped);
char* a = (char*) &swapped;
[myMutableData appendBytes:a length:sizeof(int32_t)];
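If you still only want the low three bytes appended (as in the original MIDI tempo case), one way, assuming the value fits in 24 bits, is to skip the leading byte of the big-endian representation. This is my own addition, not part of the answer above:
int32_t swapped = CFSwapInt32HostToBig(tempo);
char *bytes = (char *)&swapped;
[myMutableData appendBytes:(bytes + 1) length:3]; // drop the most significant (zero) byte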
This should do the trick:
/*
Quick swap of Endian.
*/
#include <stdio.h>
int main(){
unsigned int number = 0x04030201;
char *p1, *p2;
int i;
p1 = (char *) &number;
p2 = (p1 + 3);
for (i=0; i<2; i++){
*p1 ^= *p2;
*p2 ^= *p1;
*p1 ^= *p2;
p1++;  /* step the two pointers toward each other */
p2--;
}
return 0;
}
You can pack it into a function in whatever way you want to use it. The bitwise swap should compile into some pretty neat assembly :)
Hope it helps :)
int tempo = 500000;
//reverse it
inverse(&tempo);
[myMutableData appendBytes:(char *)&tempo length:sizeof(tempo)];
and the inverse function:
void inverse(int *value)
{
char *inver_a = (char *)value;
int i = 0;
int j = sizeof(*value) - 1; // index of the last byte
char temp;
while (i < j)
{
temp = inver_a[i];
inver_a[i] = inver_a[j];
inver_a[j] = temp;
i++;
j--;
}
}

Reading PVRTC image color information for each pixel

How do I read the image color information for each pixel of a PVRTC image?
Here is my code for extracting the integer arrays:
NSData *data = [[NSData alloc] initWithContentsOfFile:path];
NSMutableArray *_imageData = [[NSMutableArray alloc] initWithCapacity:10];
BOOL success = FALSE;
PVRTexHeader *header = NULL;
uint32_t flags, pvrTag;
uint32_t dataLength = 0, dataOffset = 0, dataSize = 0;
uint32_t blockSize = 0, widthBlocks = 0, heightBlocks = 0;
uint32_t width = 0, height = 0, bpp = 4;
uint8_t *bytes = NULL;
uint32_t formatFlags;
header = (PVRTexHeader *)[data bytes];
pvrTag = CFSwapInt32LittleToHost(header->pvrTag);
if (gPVRTexIdentifier[0] != ((pvrTag >> 0) & 0xff) ||
gPVRTexIdentifier[1] != ((pvrTag >> 8) & 0xff) ||
gPVRTexIdentifier[2] != ((pvrTag >> 16) & 0xff) ||
gPVRTexIdentifier[3] != ((pvrTag >> 24) & 0xff))
{
return FALSE;
}
flags = CFSwapInt32LittleToHost(header->flags);
formatFlags = flags & PVR_TEXTURE_FLAG_TYPE_MASK;
if (formatFlags == kPVRTextureFlagTypePVRTC_4 || formatFlags == kPVRTextureFlagTypePVRTC_2)
{
[_imageData removeAllObjects];
if (formatFlags == kPVRTextureFlagTypePVRTC_4)
_internalFormat = GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG;
else if (formatFlags == kPVRTextureFlagTypePVRTC_2)
_internalFormat = GL_COMPRESSED_RGBA_PVRTC_2BPPV1_IMG;
_width = width = CFSwapInt32LittleToHost(header->width);
_height = height = CFSwapInt32LittleToHost(header->height);
if (CFSwapInt32LittleToHost(header->bitmaskAlpha))
_hasAlpha = TRUE;
else
_hasAlpha = FALSE;
dataLength = CFSwapInt32LittleToHost(header->dataLength);
bytes = ((uint8_t *)[data bytes]) + sizeof(PVRTexHeader);
// Calculate the data size for each texture level and respect the minimum number of blocks
while (dataOffset < dataLength)
{
if (formatFlags == kPVRTextureFlagTypePVRTC_4)
{
blockSize = 4 * 4; // Pixel by pixel block size for 4bpp
widthBlocks = width / 4;
heightBlocks = height / 4;
bpp = 4;
}
else
{
blockSize = 8 * 4; // Pixel by pixel block size for 2bpp
widthBlocks = width / 8;
heightBlocks = height / 4;
bpp = 2;
}
// Clamp to minimum number of blocks
if (widthBlocks < 2)
widthBlocks = 2;
if (heightBlocks < 2)
heightBlocks = 2;
dataSize = widthBlocks * heightBlocks * ((blockSize * bpp) / 8);
[_imageData addObject:[NSData dataWithBytes:bytes+dataOffset length:dataSize]];
for (int i=0; i < mipmapCount; i++)
{
NSLog(#"width:%d, height:%d",width,height);
data = [[NSData alloc] initWithData:[_imageData objectAtIndex:i]];
NSLog(#"data length:%d",[data length]);
// extracted 20 sample values, but all you can see are large integer numbers
for(int i = 0; i < 20; i++){
NSLog(@"data[%d]:%d", i, ((uint8_t *)[data bytes])[i]);
}
PVRTC is a 4x4 (or 8x4) texel, block-based compression scheme that takes surrounding blocks into account: it stores two low-frequency images, which are combined with higher-frequency modulation data to produce the actual texel output. A better explanation is available here:
http://web.onetel.net.uk/~simonnihal/assorted3d/fenney03texcomp.pdf
So the values you're extracting are actually parts of the encoded blocks and these need to be decoded correctly in order to get sensible values.
There are two ways to get to the colour information: decode/decompress the PVR texture information using a software decompressor or render the texture using a POWERVR graphics core and then read the result back. I'll only discuss the first option here.
It's rather tricky to assemble a decompressor from only the information there, but fortunately there's C++ decompression source code in the POWERVR SDK which you can get here - download one of the iPhone SDKs for instance:
http://www.imgtec.com/powervr/insider/powervr-sdk.asp
It's in the Tools/PVRTDecompress.cpp file.
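If you go the software route, calling into that file looks roughly like the sketch below. Treat the function name, signature, and output format as assumptions and check PVRTDecompress.h in your SDK version; as far as I recall the output is 8-bit-per-component RGBA.
// Assumed signature -- verify against Tools/PVRTDecompress.h in your SDK:
// int PVRTDecompressPVRTC(const void *pCompressedData, int Do2bitMode,
//                         int XDim, int YDim, unsigned char *pResultImage);
uint8_t *rgba = (uint8_t *)malloc(width * height * 4);
int do2bit = (formatFlags == kPVRTextureFlagTypePVRTC_2) ? 1 : 0;
PVRTDecompressPVRTC(bytes, do2bit, width, height, rgba);
// rgba should now hold one RGBA quad per pixel, so the colour of pixel (x, y)
// starts at rgba[(y * width + x) * 4]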
Hope that helps.