From char* array to two-dimensional array and back, algorithm goes wrong - Objective-C

I think my algorithm has flawed logic somewhere. Calling the two functions should return the same image, but it doesn't! Can anyone see where my logic goes wrong?
These functions are used on PNG images. I have found that they store colors as follows: ALPHA, RED, GREEN, BLUE, repeating for the whole image. "pixels" is just a long array of those values (like a list).
My intent is to apply a low-pass filter to the image, which is much easier to reason about if you instead use a two-dimensional array / matrix of the image.
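For reference, this is how I understand a single pixel at (x, y) maps into that flat list (just a sketch that assumes rows are packed with no padding; pixelAt is only an illustrative helper, not part of my code):
static inline char *pixelAt(char *pixels, int width, int x, int y) {
    // 4 bytes per pixel, stored as A, R, G, B
    return pixels + (y * width + x) * 4;
}
// e.g. the red component of the pixel at (10, 20):
// char red = pixelAt(pixels, width, 10, 20)[1];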
// loading pixels
UIImage *image = imageView.image;
CGImageRef imageRef = image.CGImage;
NSData *data = (NSData *)CGDataProviderCopyData(CGImageGetDataProvider(imageRef));
char *pixels = (char *)[data bytes];
// editing image
char** matrix = [self mallocMatrix:pixels withWidth:CGImageGetWidth(imageRef) andHeight:CGImageGetHeight(imageRef)];
char* newPixels = [self mallocMatrixToList:matrix withWidth:CGImageGetWidth(imageRef) andHeight:CGImageGetHeight(imageRef)];
pixels = newPixels;
and the functions look like this:
- (char**)mallocMatrix:(char*)pixels withWidth:(int)width andHeight:(int)height {
char** matrix = malloc(sizeof(char*)*height);
int c = 0;
for (int h=0; h < height; h++) {
matrix[h] = malloc(sizeof(char)*width*4);
for (int w=0; w < (width*4); w++) {
matrix[h][w] = pixels[c];
c++;
}
}
return matrix;
}
- (char*)mallocMatrixToList:(char**)matrix withWidth:(int)width andHeight:(int)height {
char* pixels = malloc(sizeof(char)*height*width*4);
int c = 0;
for (int h=0; h < height; h++) {
for (int w=0; w < (width*4); w++) {
pixels[c] = matrix[h][w];
c++;
}
}
return pixels;
}
Edit: Fixed the malloc as posters pointed out. Simplified the algorithm a bit.

I have not tested your code, but it appears you are allocating the incorrect size for your matrix and low-pass filter, as well as not moving to the next pixel correctly.
- (char**) mallocMatrix:(char*)pixels withWidth:(int)width andHeight:(int)height {
    //When using Objective-C do not cast malloc (only do so with Objective-C++)
    char** matrix = malloc(sizeof(char*)*height);
    for (int h=0; h < height; h++) {
        //Each row needs to malloc the sizeof(char), not sizeof(char *)
        matrix[h] = malloc(sizeof(char)*width*4);
        for (int w=0; w < width; w++) {
            // Each pixel is ARGB
            for (int i=0; i < 4; i++) {
                // Step by 4 bytes per pixel, both within the row and in the flat source array
                matrix[h][w*4 + i] = pixels[(h*width + w)*4 + i];
            }
        }
    }
    return matrix;
}
- (char*) mallocLowPassFilter:(char**)matrix withWidth:(int)width andHeight:(int)height
{
    //Same as before, only malloc sizeof(char)
    char* pixels = malloc(sizeof(char)*height*width*4);
    for (int h=0; h < height; h++) {
        for (int w=0; w < width; w++) {
            // Each pixel is ARGB
            for (int i=0; i < 4; i++) {
                // TODO: Low-pass filter here
                pixels[(h*width + w)*4 + i] = matrix[h][w*4 + i];
            }
        }
    }
    return pixels;
}
Note: This code, as you know, is limited to ARGB images. If you would like to support more image formats, there are additional functions available to get more information about your image, such as CGImageGetAlphaInfo and CGImageGetBitmapInfo to determine the component order (ARGB, RGBA, etc.), CGImageGetColorSpace to find the color model (RGB, grayscale, etc.), and CGImageGetBytesPerRow to get the number of bytes per row (so you wouldn't have to multiply width by channels per pixel).
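A minimal sketch of querying that information before assuming a layout (just illustrative; these are standard CGImage getters):
CGImageRef imageRef = image.CGImage;
size_t width = CGImageGetWidth(imageRef);
size_t height = CGImageGetHeight(imageRef);
size_t bytesPerRow = CGImageGetBytesPerRow(imageRef);        // may be larger than width * 4 due to row padding
size_t bitsPerPixel = CGImageGetBitsPerPixel(imageRef);      // 32 for four 8-bit components
CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(imageRef);  // e.g. kCGImageAlphaPremultipliedFirst means alpha comes first
CGColorSpaceRef colorSpace = CGImageGetColorSpace(imageRef); // RGB, grayscale, etc.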


CGContextFillRects: No matching function for call

I'm trying to optimize the performance in one of my components. The component needs to draw some (10 to 200) rectangles in its drawRect: method, which is triggered about 20 times per second.
Everything works when I use the CGContextFillRect method on each CGRect separately. I want to test if grouping the drawing into one single call with CGContextFillRects on an array of CGRects would increase performance.
The method CGContextFillRects gives me a compiler error No matching function for call to 'CGContextFillRects'.
This code is inside a .mm file. Should I import something before the CGContextFillRects method can be used?
This is what I'm trying to do:
- (void) drawRect:(CGRect)rect{
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSaveGState(context);
CGContextSetFillColorWithColor(context, self.fillColor.CGColor);
//check if some objects are present
if (self.leftDrawBuffer && self.rightDrawBuffer){
UInt32 xPosForRect = self.leftPadding;
NSMutableArray *rectsToFill = [[NSMutableArray alloc] init];
for (int drawBufferLRIndex = 0; drawBufferLRIndex < 2; drawBufferLRIndex++){
Float32 *drawBuffer_ptr = self.leftDrawBuffer;
if (drawBufferLRIndex > 0){
drawBuffer_ptr = self.rightDrawBuffer;
}
for (int i=0; i< kAmountOfBarsPerChannel; i=i+1){
Float32 amp = drawBuffer_ptr[i];
Float32 blockNumber = 1.0f;
UInt32 yPosForRect = self.bounds.size.height - self.heightPerBlock;
while (blockNumber <= self.blocksPerLine && blockNumber / self.blocksPerLine < amp){
CGRect rect= CGRectMake(xPosForRect, yPosForRect, self.widthPerBlock, self.heightPerBlock);
[rectsToFill addObject:[NSValue valueWithCGRect:rect]];
//Using the method below works and gives me the expected result
//CGContextFillRect(context, rect);
blockNumber++;
yPosForRect -= self.heightPerBlock + self.vPaddingPerBlock;
}
xPosForRect += self.widthPerBlock + self.hPaddingPerBlock;
}
}
//This is the added code where i try to use CGContextFillRects
//1 -> transform to a c array of CGRects
const CGRect *cRects[rectsToFill.count];
for (int i = 0; i < rectsToFill.count; ++i) {
CGRect rect = [[rectsToFill objectAtIndex:i] CGRectValue];
cRects[i] = &rect;
}
size_t size = rectsToFill.count;
//2 -> trigger the method to fill all rects at once
//this method gives me the compiler error 'No matching function for call to 'CGContextFillRects''
CGContextFillRects(context, cRects, size);
}
CGContextRestoreGState(context);
}
The problem is how you convert the rects to a C array. You make pointers to rects that are temporarily stored on the stack. There are two problems with this. First, the rects are gone after each loop iteration, so you can't do that. Second, you should pass a pointer to an array of CGRects, not an array of pointers to CGRect.
This will likely solve it:
CGRect cRects[rectsToFill.count]; // Replace your lines from this
for (int i = 0; i < rectsToFill.count; ++i) {
CGRect rect = [[rectsToFill objectAtIndex:i] CGRectValue];
cRects[i] = rect;
}
size_t size = rectsToFill.count;
CGContextFillRects(context, cRects, size); // To this
Please note the re-declaration of the cRects array and the change in the assignment.
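If you build the array only to pass it to CGContextFillRects, you could also skip the NSMutableArray/NSValue round trip and collect the rects straight into a C array (a sketch; the capacity below is an assumed upper bound based on the loops in the question):
size_t maxRects = (size_t)(2 * kAmountOfBarsPerChannel * self.blocksPerLine); // assumed worst case: 2 channels, every block filled
CGRect *rects = malloc(sizeof(CGRect) * maxRects);
size_t rectCount = 0;
// ... inside the while loop, instead of boxing into NSValue:
// rects[rectCount++] = CGRectMake(xPosForRect, yPosForRect, self.widthPerBlock, self.heightPerBlock);
CGContextFillRects(context, rects, rectCount);
free(rects);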

Having problems seeing polygons in my cocos2d code, using cocos2d and Box2D. The polygons are only visible in debug mode

So I need help figuring out what code I am missing here. I have checked all over the place, but I need specifics on whether it's the formulas used or a typo that I haven't noticed yet.
Here is the polygon class. I am trying to create random polygons with 8 vertices and then fill them with a plain color. I want them to keep generating at random positions but stay fixed once created; in effect the polygons are my terrain. To revise: the polygons are there and my character interacts with them, but I cannot see them, and yes, they are on the same layer. They also don't keep generating at the bottom, which I am guessing means I just need to delete the old ones once they go off the screen so a new polygon gets created.
-(void) genBody:(b2World *)world pos:(CGPoint *)pos {
//Here we generate a somewhat random convex polygon by sampling
//the "eccentric anomaly" of an ellipse with randomly generated
//x and y scaling constants (a,b). The algorithm is limited by
//the parameter max_verts, and has a number of tunable minimal
//and scaling values.
// I need to change this to randomly choosing the number of vertices between 3-8,
// then choosing random offsets from equally distributed t values.
// This will eliminate the while loop.
screen_pos = ccp(pos->x, pos->y);
float cur_t;
float new_t;
float delta_t;
float min_delta_t = 0.5;
float t_scale = 1.5;
b2Vec2 *verts= new b2Vec2[m_maxVerts]; // this should be replaced by a private verts ... maybe ... hmm that will consume more ram though
float t_vec[m_maxVerts];
// Generate random vertices
int vec_len;
while (true) {
cur_t = 0.0;
for (vec_len=0; vec_len<m_maxVerts; vec_len++) {
//delta_t = t_scale*(float)rand()/(float)RAND_MAX; // wish they just had a randf method :/
delta_t = t_scale*floorf((double)arc4random()/ARC4RANDOM_MAX);
#ifdef POLY_DEBUG
CCLOG(@"delta_t %0.2f", delta_t);
#endif
if (delta_t < min_delta_t) {
delta_t = min_delta_t;
}
new_t = cur_t + delta_t;
if (new_t > 2*PI) {
break;
}
t_vec[vec_len] = new_t;
cur_t = new_t;
}
// We need at least three points for a triangle
if ( vec_len > 3 ) {
break;
}
}
At least where the body is being generated.
then...
float num_verts = vec_len;
b2BodyDef BodyDef;
BodyDef.type = b2_staticBody;
BodyDef.position.Set(pos->x/PTM_RATIO, pos->y/PTM_RATIO);
BodyDef.userData = self; // hope this is correct
m_polyBody = world->CreateBody(&BodyDef);
b2PolygonShape polyShape;
int32 polyVert = num_verts;
polyShape.Set(verts, polyVert);
b2FixtureDef FixtureDef;
FixtureDef.shape = &polyShape;
FixtureDef.userData = self; // hope this is correct
FixtureDef.density = 1.6f;
FixtureDef.friction = 0.4f;
FixtureDef.restitution = 0.5f;
m_polyBody->CreateFixture(&FixtureDef);
for (int i=0; i < num_verts; i++) {
// Convert from b2Vec2 to CCPoint and from physics units to pixels
m_verts[i] = ccp(verts[i].x*PTM_RATIO, verts[i].y*PTM_RATIO);
}
m_numVerts = num_verts;
delete[] verts; // allocated with new[], so use delete[]
}
-(void) setColor:(ccColor4F) color {
m_color = color;
}
-(BOOL) dirty {
return true;
}
-(void) draw {
//[super draw];
ccDrawPoly(m_verts, m_numVerts, YES);
CCLOG(@"Drawing?");
}
-(CGAffineTransform) nodeToParentTransform {
b2Vec2 pos = m_polyBody->GetPosition();
float x = pos.x * PTM_RATIO;
float y = pos.y * PTM_RATIO;
/*if ( ignoreAnchorPointForPosition_ ) {
x += anchorPointInPixels_.x;
y += anchorPointInPixels_.y;
}*/
// Make matrix
float radians = m_polyBody->GetAngle();
float c = cosf(radians);
float s = sinf(radians);
if( ! CGPointEqualToPoint(anchorPointInPixels_, CGPointZero) ){
x += c*-anchorPointInPixels_.x + -s*-anchorPointInPixels_.y;
y += s*-anchorPointInPixels_.x + c*-anchorPointInPixels_.y;
}
// Rot, Translate Matrix
transform_ = CGAffineTransformMake( c, s,
-s, c,
x, y );
return transform_;
}
there is some stuff in between but its less important. I can post it if asked.
Then the update function, which is based in my game scene class.
-(void)updateObstacles
{
//CCLOG(@"updating obstacles");
int xpos;
int ypos;
CGPoint pos;
for (int i=0; i<MAX_OBSTACLES; i++ ) {
// If there is no obstacle generate a new one
if ( obstacles[i] == NULL ) {
polyObstacleSprite *sprite = [[polyObstacleSprite alloc] init];
ypos = int(_winSize.width/2*(double)arc4random()/ARC4RANDOM_MAX) - _winSize.width/2;
xpos = int(_winSize.height/2*(double)arc4random()/ARC4RANDOM_MAX) - _winSize.height/2;
//CCLOG(@"generating obstacle at %d,%d", xpos, ypos);
pos = ccp(xpos, ypos);
[sprite genBody:_world pos:&pos];
[self addChild:sprite z:1];
obstacles[i] = sprite;
}
//CCLOG(@"position: %d, %d", obstacles[i]->screen, obstacles[i]->position.y); FINISH
}
}
Sorry if it's sort of a mess, I set this up quickly, but what I want to do is have randomly generated polygons appear at the bottom of my iPhone screen as my character moves down with gravity. I have everything else working except the polygons.
Thanks in advance for spending the time to read this.

Convert GLfloat [] array to GLfloat * array

Is there a quicker way to convert the following data into a C-style pointer array?
GLfloat verticalLines [] = {
0.59, 0.66, 0.0,
0.59, -0.14, 0.0
};
My current approach is to manually iterate over the data using the method below:
-(GLfloat *)updateLineVertices{
int totalVertices = 6;
GLfloat *lineVertices = (GLfloat *)malloc(sizeof(GLfloat) * (totalVertices));
for (int i = 0; i<totalVertices; i++) {
lineVertices[i] = verticalLines[i];
}
return lineVertices;
}
Some additional info.
Ultimately I will need the data in a format which can be easily manipulated, for example:
-(void)scaleLineAnimation{
GLfloat *lineVertices = [self updateLineVertices];
for (int i = 0; i<totalVertices; i+=3) {
lineVertices[i+1] += 0.5; //scale y axis
}
}
It depends on whether verticalLines is going to stick around or not. If it's defined as above and it's not going to change, you can forgo the whole malloc and just point lineVertices at it:
lineVertices = &verticalLines[0];
If verticalLines is going to change, you probably want your own copy, so you've got no choice but to copy the actual data from one part of memory to another, as you are doing. That being said, this might be a bit more elegant:
for (int i = 0; i<totalVertices; i++){
lineVertices[i] = verticalLines[i];
}
or the preferred method is probably to use memcpy(); here is some working code:
//MemcpyTest.c
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
int main(){
float a[] = {1.0,2.0,3.0}; //Original Array
int size = sizeof(float)*3;
float *b = (float*)malloc(size); //Allocate New Array
memcpy(b, a, size); //Copy Data
for(int i = 0; i<3; i++){
printf("%f\n", b[i]);
}
free(b);
}
What's wrong with just using verticalLines directly? Anything of type T ident[n] can be implicitly converted to T* ident as if you had written &ident[0]. And if you really want to be explicit, you can just write &ident[0] directly.
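For instance, both of these refer to the same storage and involve no copying (a quick sketch using the arrays above):
GLfloat *p1 = verticalLines;       // the array decays to a pointer to its first element
GLfloat *p2 = &verticalLines[0];   // equivalent, just more explicit
// scale the y components in place, no malloc needed:
// for (int i = 0; i < 6; i += 3) { verticalLines[i + 1] += 0.5f; }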

Objective-C: traverse pixels in an image vertically

I'm a little confused at the moment, first-time poster here on Stack Overflow. I'm brand new to Objective-C but have learned a lot from my coworkers. What I'm trying to do is traverse a bmContext vertically, shifting horizontally by 1 pixel after every vertical loop. Here's some code:
NSUInteger width = image.size.width;
NSUInteger height = image.size.height;
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = width * bytesPerPixel;
NSUInteger bytesPerColumn = height * bytesPerPixel;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef bmContext = CGBitmapContextCreate(NULL, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedFirst);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(bmContext, (CGRect){.origin.x = 0.0f, .origin.y = 0.0f, .size.width = width, .size.height = height}, image.CGImage);
UInt8* data = (UInt8*)CGBitmapContextGetData(bmContext);
const size_t bitmapByteCount = bytesPerRow * height;
struct Color {
UInt8 r;
UInt8 g;
UInt8 b;
};
for (size_t i = 0; i < bytesPerRow; i += 4) //shift 1 pixel
{
for (size_t j = 0; j < bitmapByteCount; j += bytesPerRow) //check every pixel in column
{
struct Color thisColor = {data[j + i + 1], data[j + i + 2], data[j + i + 3]};
}
}
In Java it looks something like this, but I have no interest in the Java version; it's just to emphasize my true question. I only care about the Objective-C code.
for (int x = 0; x < image.getWidth(); x++)
{
for (int y = 0; y < image.getHeight(); y++)
{
int rgb = image.getRGB(x, y);
//do something with pixel
}
}
Am I really shifting one unit horizontally and then checking all vertical pixels and then shifting again horizontally? I thought I was, but my results seem to be a little off. In Java and C# achieving this task was rather simple; if anyone knows a simpler way to do this in Objective-C, please let me know. Thanks in advance!
The way you are getting at the pixels seems to be off.
If I'm understanding correctly, you just want to iterate through every pixel in the image, column by column. Right?
This should work:
for (size_t i = 0; i < CGBitmapContextGetWidth(bmContext); i++)
{
    for (size_t j = 0; j < CGBitmapContextGetHeight(bmContext); j++)
    {
        // 4 bytes per pixel (ARGB), so scale the pixel index up to a byte offset
        size_t offset = (j * CGBitmapContextGetWidth(bmContext) + i) * 4;
        struct Color thisColor = {data[offset + 1], data[offset + 2], data[offset + 3]};
    }
}
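One caveat worth checking (an assumption in the loop above): the context's row stride can be larger than width * 4 because of padding, so it is safer to index by bytes per row. A sketch of the same traversal using that stride:
size_t bytesPerRow = CGBitmapContextGetBytesPerRow(bmContext);       // may exceed width * 4
for (size_t i = 0; i < CGBitmapContextGetWidth(bmContext); i++)      // column
{
    for (size_t j = 0; j < CGBitmapContextGetHeight(bmContext); j++) // row
    {
        size_t offset = j * bytesPerRow + i * 4;                     // 4 bytes per ARGB pixel
        struct Color thisColor = {data[offset + 1], data[offset + 2], data[offset + 3]};
    }
}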

De-interleave and interleave buffer with vDSP_ctoz() and vDSP_ztoz()?

How do I de-interleave the float *newAudio into float *channel1 and float* channel2 and interleave it back into newAudio?
Novocaine *audioManager = [Novocaine audioManager];
__block float *channel1;
__block float *channel2;
[audioManager setInputBlock:^(float *newAudio, UInt32 numSamples, UInt32 numChannels) {
// Audio comes in interleaved, so,
// if numChannels = 2, newAudio[0] is channel 1, newAudio[1] is channel 2, newAudio[2] is channel 1, etc.
// Deinterleave with vDSP_ctoz()/vDSP_ztoz(); and fill channel1 and channel2
// ... processing on channel1 & channel2
// Interleave channel1 and channel2 with vDSP_ctoz()/vDSP_ztoz(); to newAudio
}];
What would these two lines of code look like? I don't understand the syntax of ctoz/ztoz.
Here's what I do in Novocaine's accessory classes, like the RingBuffer, for de-interleaving:
float zero = 0.0;
vDSP_vsadd(data, numChannels, &zero, leftSampleData, 1, numFrames);
vDSP_vsadd(data+1, numChannels, &zero, rightSampleData, 1, numFrames);
for interleaving:
float zero = 0.0;
vDSP_vsadd(leftSampleData, 1, &zero, data, numChannels, numFrames);
vDSP_vsadd(rightSampleData, 1, &zero, data+1, numChannels, numFrames);
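To make the stride arguments concrete, here is a small self-contained sketch of that round trip (the frame count and test data are just assumptions for illustration):
#include <stdio.h>
#include <Accelerate/Accelerate.h>
int main(int argc, const char * argv[])
{
    const int numFrames = 8;
    const int numChannels = 2;
    // Interleaved input: L0, R0, L1, R1, ...
    float data[numFrames * numChannels];
    for (int i = 0; i < numFrames * numChannels; i++)
        data[i] = (float)i;
    float leftSampleData[numFrames];
    float rightSampleData[numFrames];
    float zero = 0.0;
    // De-interleave: adding zero with an input stride of numChannels is just a strided copy
    vDSP_vsadd(data, numChannels, &zero, leftSampleData, 1, numFrames);
    vDSP_vsadd(data + 1, numChannels, &zero, rightSampleData, 1, numFrames);
    // ... process leftSampleData / rightSampleData here ...
    // Interleave back into the original buffer
    vDSP_vsadd(leftSampleData, 1, &zero, data, numChannels, numFrames);
    vDSP_vsadd(rightSampleData, 1, &zero, data + 1, numChannels, numFrames);
    for (int i = 0; i < numFrames; i++)
        printf("%d: %f %f\n", i, data[2*i], data[2*i+1]);
    return 0;
}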
The more general way to do things is to have an array of arrays, like
int maxNumChannels = 2;
int maxNumFrames = 1024;
float **arrays = (float **)calloc(maxNumChannels, sizeof(float *));
for (int i=0; i < maxNumChannels; ++i) {
arrays[i] = (float *)calloc(maxNumFrames, sizeof(float));
}
[[Novocaine audioManager] setInputBlock:^(float *data, UInt32 numFrames, UInt32 numChannels) {
float zero = 0.0;
for (int iChannel = 0; iChannel < numChannels; ++iChannel) {
vDSP_vsadd(data, numChannels, &zero, arrays[iChannel], 1, numFrames);
}
}];
which is what I use internally a lot in the RingBuffer accessory classes for Novocaine. I timed the speed of vDSP_vsadd versus memcpy, and (very, very surprisingly), there's no speed difference.
Of course, you can always just use a ring buffer, and save yourself the hassle
#import "RingBuffer.h"
int maxNumFrames = 4096;
int maxNumChannels = 2;
RingBuffer *ringBuffer = new RingBuffer(maxNumFrames, maxNumChannels);
[[Novocaine audioManager] setInputBlock:^(float *data, UInt32 numFrames, UInt32 numChannels) {
ringBuffer->AddNewInterleavedFloatData(data, numFrames, numChannels);
}];
[[Novocaine audioManager] setOutputBlock:^(float *data, UInt32 numFrames, UInt32 numChannels) {
ringBuffer->FetchInterleavedData(data, numFrames, numChannels);
}];
Hope that helps.
Here is an example:
#include <stdio.h>
#include <Accelerate/Accelerate.h>
int main(int argc, const char * argv[])
{
// Bogus interleaved stereo data
float stereoInput [1024];
for(int i = 0; i < 1024; ++i)
stereoInput[i] = (float)i;
// Buffers to hold the deinterleaved data
float leftSampleData [1024 / 2];
float rightSampleData [1024 / 2];
DSPSplitComplex output = {
.realp = leftSampleData,
.imagp = rightSampleData
};
// Split the data. The left (even) samples will end up in leftSampleData, and the right (odd) will end up in rightSampleData
vDSP_ctoz((const DSPComplex *)stereoInput, 2, &output, 1, 1024 / 2);
// Print the result for verification
for(int i = 0; i < 512; ++i)
printf("%d: %f + %f\n", i, leftSampleData[i], rightSampleData[i]);
return 0;
}
sbooth answers how to de-interleave using vDSP_ctoz. Here's the complementary operation, namely interleaving using vDSP_ztoc.
#include <stdio.h>
#include <Accelerate/Accelerate.h>
int main(int argc, const char * argv[])
{
const int NUM_FRAMES = 16;
const int NUM_CHANNELS = 2;
// Buffers for left/right channels
float xL[NUM_FRAMES];
float xR[NUM_FRAMES];
// Initialize with some identifiable data
for (int i = 0; i < NUM_FRAMES; i++)
{
xL[i] = 2*i; // Even
xR[i] = 2*i+1; // Odd
}
// Buffer for interleaved data
float stereo[NUM_CHANNELS*NUM_FRAMES];
vDSP_vclr(stereo, 1, NUM_CHANNELS*NUM_FRAMES);
// Interleave - take separate left & right buffers, and combine into
// single buffer alternating left/right/left/right, etc.
DSPSplitComplex x = {xL, xR};
vDSP_ztoc(&x, 1, (DSPComplex*)stereo, 2, NUM_FRAMES);
// Print the result for verification. Should give output like
// i: L, R
// 0: 0.00, 1.00
// 1: 2.00, 3.00
// etc...
printf(" i: L, R\n");
for (int i = 0; i < NUM_FRAMES; i++)
{
printf("%2d: %5.2f, %5.2f\n", i, stereo[2*i], stereo[2*i+1]);
}
return 0;
}