Display YUV in OpenGL - objective-c

I am having trouble displaying a raw YUV file that is in NV12 format.
I can display a selected frame, but it is still mostly black and white, with certain shades of pink and green.
Here is what my output looks like:
Anyway, here is how my program works. (This is done in Cocoa/Objective-C, but I need your expert advice on the program's algorithm, not on syntax.)
Prior to program execution, the YUV file is stored in a binary file named "test.yuv". The file is in NV12 format, meaning the Y plane is stored first and then the interleaved UV plane follows. My file extraction has no problems; I have tested it a lot.
During initialization, I create a lookup table that converts each byte (8 bits, a char) into its respective Y, U, or V float value.
For the Y plane, this is my code:
-(void)createLookupTableY // creates a lookup table for converting a single byte into a float between 0 and 1
{
    NSLog(@"YUVFrame: createLookupTableY");
    lookupTableY = new float[256];
    for (int i = 0; i < 256; i++)
    {
        lookupTableY[i] = (float)i / 255;
        //NSLog(@"lookupTableY[%d]: %f", i, lookupTableY[i]); // prints out the value of each float
    }
}
The U plane lookup table:
-(void)createLookupTableU // creates a lookup table for converting a single byte into a float between -0.436 and 0.436
{
    NSLog(@"YUVFrame: createLookupTableU");
    lookupTableU = new float[256];
    for (int i = 0; i < 256; i++)
    {
        lookupTableU[i] = -0.436 + (float)i / 255 * (0.436 * 2);
        NSLog(@"lookupTableU[%d]: %f", i, lookupTableU[i]); // prints out the value of each float
    }
}
And the V lookup table:
-(void)createLookupTableV // creates a lookup table for converting a single byte into a float between -0.615 and 0.615
{
    NSLog(@"YUVFrame: createLookupTableV");
    lookupTableV = new float[256];
    for (int i = 0; i < 256; i++)
    {
        lookupTableV[i] = -0.615 + (float)i / 255 * (0.615 * 2);
        NSLog(@"lookupTableV[%d]: %f", i, lookupTableV[i]); // prints out the value of each float
    }
}
After this point, I extract the Y and UV planes and store them in two buffers, yBuffer and uvBuffer.
Then I attempt to convert the YUV data and store it in an RGB buffer array called "frameImage":
-(void)sortAndConvert // sort the extracted frame data into an array of floats
{
    NSLog(@"YUVFrame: sortAndConvert");
    int frameImageCounter = 0;
    int pixelCounter = 0;
    for (int y = 0; y < YUV_HEIGHT; y++) // traverse the frame's height
    {
        for (int x = 0; x < YUV_WIDTH; x++) // traverse the frame's width
        {
            float Y = lookupTableY[yBuffer[y * YUV_WIDTH + x]];
            float U = lookupTableU[uvBuffer[((y / 2) * (YUV_WIDTH / 2) + (x / 2)) * 2]];
            float V = lookupTableV[uvBuffer[((y / 2) * (YUV_WIDTH / 2) + (x / 2)) * 2 + 1]];
            float RFormula = Y + 1.13983f * V;
            float GFormula = Y - 0.39465f * U - 0.58060f * V;
            float BFormula = Y + 2.03211f * U;
            frameImage[frameImageCounter++] = [self clampValue:RFormula];
            frameImage[frameImageCounter++] = [self clampValue:GFormula];
            frameImage[frameImageCounter++] = [self clampValue:BFormula];
        }
    }
}
Then I try to draw the image in OpenGL:
-(void)drawFrame:(int)x
{
    GLuint texture;
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, YUV_WIDTH, YUV_HEIGHT, 0, GL_RGB, GL_FLOAT, frameImage);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, texture);
    glRotatef(180.0f, 1.0f, 0.0f, 0.0f);
    glBegin(GL_QUADS);
    glTexCoord2d(0.0, 0.0); glVertex2d(-1.0, -1.0);
    glTexCoord2d(1.0, 0.0); glVertex2d(+1.0, -1.0);
    glTexCoord2d(1.0, 1.0); glVertex2d(+1.0, +1.0);
    glTexCoord2d(0.0, 1.0); glVertex2d(-1.0, +1.0);
    glEnd();
    glFlush();
}
So basically that is my program in a nutshell. Essentially, I read the binary YUV file and store all the data in a char array buffer. I then translate those values into their respective YUV float values.
This is where I think the error might be: in the Y lookup table I normalize the Y plane to [0, 1], in the U lookup table I normalize the values to [-0.436, 0.436], and in the V lookup table I normalize them to [-0.615, 0.615]. I did this because those are the YUV value ranges according to Wikipedia.
And the YUV-to-RGB formula is the same formula as on Wikipedia. I have also tried various other formulas, and this is the only one that gives a rough outline of the frame. Would anyone venture a guess as to why my program is not correctly displaying the YUV frame data? I think it is something to do with my normalization technique, but it seems alright to me.
I have done a lot of testing, and I am 100% certain that the error is caused by my lookup tables. I don't think the formulas I use to build them are correct.
A note to everyone who is reading this: for the longest time, my frame was not displaying properly because I was not able to extract the frame data correctly.
When I first started to program this, I was under the impression that in a clip of, say, 30 frames, all 30 Y planes are written to the data file first, followed by the UV planes.
What I found out through trial and error was that for a YUV data file, specifically NV12, the data is stored frame by frame:
Y(1) UV(1) Y(2) UV(2) ... ...
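To make the layout concrete, here is a minimal sketch (my own addition, assuming 8-bit samples and the frame dimensions used above) of where frame i's planes start in such a file:
// Hypothetical offset math for frame i of an NV12 file:
// each frame is one full-resolution Y plane followed by a
// half-height plane of interleaved U and V bytes.
size_t frameSize = YUV_WIDTH * YUV_HEIGHT * 3 / 2;           // bytes per frame
size_t yOffset   = (size_t)i * frameSize;                    // start of Y(i)
size_t uvOffset  = yOffset + (size_t)YUV_WIDTH * YUV_HEIGHT; // start of UV(i)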
@nschmidt
I changed my code to what you suggested:
float U = lookupTableU [uvBuffer [ (y * (YUV_WIDTH / 4) + (x/4)) * 2 ] ];
float V = lookupTableV [uvBuffer [ (y * (YUV_WIDTH / 4) + (x/4)) * 2 + 1] ];
and this is the result that I get.
Here is the output from the console. I am printing out the Y, U, V values and the RGB values that are being translated and stored in the frameImage array:
YUV:[0.658824,-0.022227,-0.045824] RGBFinal:[0.606593,0.694201,0.613655]
YUV:[0.643137,-0.022227,-0.045824] RGBFinal:[0.590906,0.678514,0.597969]
YUV:[0.607843,-0.022227,-0.045824] RGBFinal:[0.555612,0.643220,0.562675]
YUV:[0.592157,-0.022227,-0.045824] RGBFinal:[0.539926,0.627534,0.546988]
YUV:[0.643137,0.025647,0.151941] RGBFinal:[0.816324,0.544799,0.695255]
YUV:[0.662745,0.025647,0.151941] RGBFinal:[0.835932,0.564406,0.714863]
YUV:[0.690196,0.025647,0.151941] RGBFinal:[0.863383,0.591857,0.742314]
Update July 13, 2009
The problem was finally solved thanks to the recommendation from nschmidt. It turns out that my YUV file was actually in YUV 4:1:1 format. I was originally told that it was in YUV NV12 format. Anyway, I would like to share my results with you.
Here is the output,
and my code for decoding is as follows:
float Y = (float) yBuffer[y * YUV_WIDTH + x];
float U = (float) uvBuffer[(y / 2) * (YUV_WIDTH / 2) + (x / 2)];
float V = (float) uvBuffer[(y / 2) * (YUV_WIDTH / 2) + (x / 2) + UOffset];
float RFormula = (1.164 * (Y - 16)) + (1.596 * (V - 128));
float GFormula = (1.164 * (Y - 16)) - (0.813 * (V - 128)) - (0.391 * (U - 128));
float BFormula = (1.164 * (Y - 16)) + (2.018 * (U - 128));
frameImage[frameImageCounter] = (unsigned char)(int)[self clampValue:RFormula];
frameImageCounter++;
frameImage[frameImageCounter] = (unsigned char)(int)[self clampValue:GFormula];
frameImageCounter++;
frameImage[frameImageCounter] = (unsigned char)(int)[self clampValue:BFormula];
frameImageCounter++;
GLuint texture;
glGenTextures(1, &texture);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, texture);
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP_SGIS, GL_TRUE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, YUV_WIDTH, YUV_HEIGHT, 0, GL_RGB, GL_UNSIGNED_BYTE, frameImage);
//glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE_SGIS);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE_SGIS);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glRotatef(180.0f, 1.0f, 0.0f, 0.0f);
glBegin(GL_QUADS);
glTexCoord2d(0.0, 0.0); glVertex2d(-1.0, -1.0);
glTexCoord2d(1.0, 0.0); glVertex2d(+1.0, -1.0);
glTexCoord2d(1.0, 1.0); glVertex2d(+1.0, +1.0);
glTexCoord2d(0.0, 1.0); glVertex2d(-1.0, +1.0);
glEnd();
NSLog(@"YUVFrameView: drawRect complete");
glFlush();
Essentially, I used NSData for the raw file extraction and stored the bytes in a char array buffer. For the YUV-to-RGB conversion, I used the formula above; afterwards, I clamped the values to [0, 255]. Then I used a 2D texture in OpenGL for display.
So if you are converting YUV to RGB in the future, use the formula above. If you are using the YUV-to-RGB conversion formula from the earlier post, then you need to upload the texture with GL_FLOAT, since those RGB values are clamped to [0, 1].
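The clampValue: helper is never shown above; a minimal sketch of what it might look like for the byte-range formula (my own hypothetical version, not the poster's code) is:
// Hypothetical clamp helper: pins a converted component to [0, 255].
- (float)clampValue:(float)value
{
    if (value < 0.0f) return 0.0f;
    if (value > 255.0f) return 255.0f;
    return value;
}
For the earlier float formula, the same shape of helper would clamp to [0, 1] instead.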

Next try :)
I think your uv buffer is not interleaved. It looks like the U values come first, followed by the array of V values. Changing the lines to
unsigned int voffset = YUV_HEIGHT * YUV_WIDTH / 2;
float U = lookupTableU [uvBuffer [ y * (YUV_WIDTH / 2) + x/2] ];
float V = lookupTableV [uvBuffer [ voffset + y * (YUV_WIDTH / 2) + x/2] ];
might indicate if this is really the case.

I think you're addressing the U and V values incorrectly. Rather than:
float U = lookupTableU [uvBuffer [ ((y / 2) * (x / 2) + (x/2)) * 2 ] ];
float V = lookupTableV [uvBuffer [ ((y / 2) * (x / 2) + (x/2)) * 2 + 1] ];
It should be something along the lines of
float U = lookupTableU [uvBuffer [ ((y / 2) * (YUV_WIDTH / 2) + (x/2)) * 2 ] ];
float V = lookupTableV [uvBuffer [ ((y / 2) * (YUV_WIDTH / 2) + (x/2)) * 2 + 1] ];

The picture looks like you have a 4:1:1 format. You should change your lines to
float U = lookupTableU [uvBuffer [ (y * (YUV_WIDTH / 4) + (x/4)) * 2 ] ];
float V = lookupTableV [uvBuffer [ (y * (YUV_WIDTH / 4) + (x/4)) * 2 + 1] ];
Maybe you can post the result so we can see what else is wrong. I always find this hard to reason about in the abstract; it's much easier to approach it iteratively.

Related

What is the most optimized way of creating a ray tracer?

Currently, I am working with a ray tracer that takes an iterative approach towards developing the scenes. My goal is to turn it into a recursive ray tracer.
At the moment, the ray tracer performs the following operation to create the bitmap the scene is stored in:
int WIDTH = 640;
int HEIGHT = 640;
BMP Image(WIDTH, HEIGHT); // create new bitmap
// Slightly shoot rays left or right of camera direction
double xAMT, yAMT;
Color blue(0.1, 0.61, 0.76, 0);
for (int x = 0; x < WIDTH; x++) {
    for (int y = 0; y < HEIGHT; y++) {
        if (WIDTH > HEIGHT) {
            xAMT = ((x + 0.5) / WIDTH) * aspectRatio - (((WIDTH - HEIGHT) / (double)HEIGHT) / 2);
            yAMT = ((HEIGHT - y) + 0.5) / HEIGHT;
        }
        else if (HEIGHT > WIDTH) {
            xAMT = (x + 0.5) / WIDTH;
            yAMT = (((HEIGHT - y) + 0.5) / HEIGHT) / aspectRatio - (((HEIGHT - WIDTH) / (double)WIDTH) / 2);
        }
        else {
            xAMT = (x + 0.5) / WIDTH;
            yAMT = ((HEIGHT - y) + 0.5) / HEIGHT;
        }
        ..... // calculate intersections, shading, reflectiveness.... etc
        Image.setPixel(x, y, blue); // this is here just as an example
    }
}
Is there another approach to calculating the reflective and refractive child rays outside the double for-loop?
Are the for-loops necessary? // yes because of the bitmap?
What approaches can be taken to minimize/optimize an iterative ray tracer?
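For what it's worth, the usual way to make such a tracer recursive is to keep the double for-loop for the primary rays and move reflection/refraction into a function that calls itself with a depth limit. A rough sketch (Ray, Hit, sceneIntersect, shade, reflectRay, blend, and MAX_DEPTH are hypothetical placeholders standing in for whatever the tracer defines, not names from the code above):
// Sketch only: one call per pixel replaces the per-pixel body of the loop.
Color trace(Ray ray, int depth)
{
    Hit hit;
    if (depth > MAX_DEPTH || !sceneIntersect(ray, &hit))
        return backgroundColor;            // recursion bottoms out here
    Color c = shade(hit);                  // local illumination at the hit point
    if (hit.reflectivity > 0.0) {
        Ray child = reflectRay(ray, hit);  // spawn the reflected child ray
        c = blend(c, trace(child, depth + 1), hit.reflectivity);
    }
    return c;                              // refraction would follow the same pattern
}
The for-loops stay (one primary ray per bitmap pixel is unavoidable); the recursion only replaces iterative bookkeeping of child rays inside the loop body.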

Collision between a circle and a rectangle

I have a problem with collision detection between a circle and a rectangle. I have tried to solve the problem with the Pythagorean theorem, but none of the checks works: the rectangle collides with the rectangular bounding box of the circle instead.
if (CGRectIntersectsRect(player.frame, visibleEnemy.frame)) {
    if ([visibleEnemy spriteTyp] == jumper || [visibleEnemy spriteTyp] == wobble) {
        if ((visibleEnemy.center.x - player.frame.origin.x) * (visibleEnemy.center.x - player.frame.origin.x) +
            (visibleEnemy.center.y - player.frame.origin.y) * (visibleEnemy.center.y - player.frame.origin.y) <=
            (visibleEnemy.bounds.size.width/2 * visibleEnemy.bounds.size.width/2)) {
            NSLog(@"Check 1");
            normalAction = NO;
        }
        if ((visibleEnemy.center.x - (player.frame.origin.x + player.bounds.size.width)) *
            (visibleEnemy.center.x - (player.frame.origin.x + player.bounds.size.width)) +
            (visibleEnemy.center.y - player.frame.origin.y) * (visibleEnemy.center.y - player.frame.origin.y) <=
            (visibleEnemy.bounds.size.width/2 * visibleEnemy.bounds.size.width/2)) {
            NSLog(@"Check 2");
            normalAction = NO;
        }
        else {
            NSLog(@"Check 3");
            normalAction = NO;
        }
    }
}
Here is how I did it in one of my small game projects. It gave me the best results and it's simple. My code detects whether there is a collision between a circle and a line, so you can easily adapt it to circle-rectangle collision detection by checking all 4 edges of the rectangle (see also the sketch after this answer).
Let's say the ball has radius ballRadius and location (xBall, yBall). The line is defined by two points, (xStart, yStart) and (xEnd, yEnd).
Implementation of a simple collision detection:
float ballRadius = ...;
float x1 = xStart - xBall;
float y1 = yStart - yBall;
float x2 = xEnd - xBall;
float y2 = yEnd - yBall;
float dx = x2 - x1;
float dy = y2 - y1;
float dr = sqrtf(powf(dx, 2) + powf(dy, 2));
float D = x1*y2 - x2*y1;
float delta = powf(ballRadius*0.9,2)*powf(dr,2) - powf(D,2);
if (delta >= 0)
{
// Collision detected
}
If delta is greater than zero, there are two intersections between the ball (circle) and the line. If delta is equal to zero, there is exactly one intersection: a perfect collision.
I hope it will help you.
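For completeness, a common and even simpler circle-rectangle test (my own sketch, not part of the answer above) clamps the circle's centre to the rectangle and compares the squared distance with the squared radius:
#import <CoreGraphics/CoreGraphics.h>

// Standard circle-vs-rectangle overlap test via clamping.
static BOOL circleIntersectsRect(CGPoint center, CGFloat radius, CGRect rect)
{
    CGFloat nearestX = MAX(CGRectGetMinX(rect), MIN(center.x, CGRectGetMaxX(rect)));
    CGFloat nearestY = MAX(CGRectGetMinY(rect), MIN(center.y, CGRectGetMaxY(rect)));
    CGFloat dx = center.x - nearestX; // vector from the closest point on the rect
    CGFloat dy = center.y - nearestY; // to the circle's centre
    return (dx * dx + dy * dy) <= radius * radius;
}
For the question's code, center would be visibleEnemy.center, radius half of visibleEnemy.bounds.size.width, and rect player.frame.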

UIColor CMYK and Lab Values

Simple question, more than likely complex answer:
How can I get CMYK and Lab values from a UIColor object (of which I know the RGB values if it helps)?
I have found the following snippet for getting CMYK values, but I can't get any accurate values out of it; despite it being everywhere, I've heard it's not a great snippet.
CGFloat rgbComponents[4];
[color getRed:&rgbComponents[0] green:&rgbComponents[1] blue:&rgbComponents[2] alpha:&rgbComponents[3]];
CGFloat k = MIN(1-rgbComponents[0], MIN(1-rgbComponents[1], 1-rgbComponents[2]));
CGFloat c = (1-rgbComponents[0]-k)/(1-k);
CGFloat m = (1-rgbComponents[1]-k)/(1-k);
CGFloat y = (1-rgbComponents[2]-k)/(1-k);
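One concrete problem with that snippet (my observation, not from the original post): for pure black, k is 1 and every division is by zero, producing NaN. A guarded variant might be:
CGFloat k = MIN(1 - rgbComponents[0], MIN(1 - rgbComponents[1], 1 - rgbComponents[2]));
CGFloat c = 0, m = 0, y = 0;
if (k < 1.0) { // avoid dividing by zero for pure black
    c = (1 - rgbComponents[0] - k) / (1 - k);
    m = (1 - rgbComponents[1] - k) / (1 - k);
    y = (1 - rgbComponents[2] - k) / (1 - k);
}
Even then it is only a naive formula; a profile-based conversion like the one below is what actually gives accurate values.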
For ICC-based color conversion, you can use the Little Color Management System. (I just added all the .c and .h files from the download archive to an iOS Xcode project; it compiled and ran the following code without problems.)
Remark: RGB and CMYK are device-dependent color spaces; Lab is a device-independent color space. Therefore, to convert from RGB to Lab, you have to choose a device-independent (or "calibrated") RGB color space for the conversion, for example sRGB.
Little CMS comes with built-in profiles for sRGB and Lab color spaces. A conversion from sRGB to Lab looks like this:
Create a color transformation:
cmsHPROFILE rgbProfile = cmsCreate_sRGBProfile();
cmsHPROFILE labProfile = cmsCreateLab4Profile(NULL);
cmsHTRANSFORM xform = cmsCreateTransform(rgbProfile, TYPE_RGB_FLT,
                                         labProfile, TYPE_Lab_FLT,
                                         INTENT_PERCEPTUAL, 0);
cmsCloseProfile(labProfile);
cmsCloseProfile(rgbProfile);
Convert colors:
float rgbValues[3];
// fill rgbValues array with input values ...
float labValues[3];
cmsDoTransform(xform, rgbValues, labValues, 1);
// labValues array contains output values.
Dispose of color transformation:
cmsDeleteTransform(xform);
Of course, the transformation would be created only once and used for all color conversions.
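In practice that could be as simple as caching the transform in a static (a sketch, assuming all conversions happen on one thread):
static cmsHTRANSFORM xform = NULL;
if (xform == NULL) { // build once, reuse for every conversion
    cmsHPROFILE rgbProfile = cmsCreate_sRGBProfile();
    cmsHPROFILE labProfile = cmsCreateLab4Profile(NULL);
    xform = cmsCreateTransform(rgbProfile, TYPE_RGB_FLT,
                               labProfile, TYPE_Lab_FLT,
                               INTENT_PERCEPTUAL, 0);
    cmsCloseProfile(labProfile); // profiles can be closed once the transform exists
    cmsCloseProfile(rgbProfile);
}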
For RGB to CMYK conversion you can also use Little CMS, but you have to provide an ICC-Profile, e.g. one from the free Adobe download page ICC profile downloads for Mac OS.
Code example for RGB to CMYK conversion:
float rgb[3]; // fill with input values (range 0.0 .. 1.0)
float cmyk[4]; // output values (range 0.0 .. 100.0)
cmsHPROFILE rgbProfile = cmsCreate_sRGBProfile();
// The CMYK profile is a resource in the application bundle:
NSString *cmykProfilePath = [[NSBundle mainBundle] pathForResource:@"YourCMYKProfile.icc" ofType:nil];
cmsHPROFILE cmykProfile = cmsOpenProfileFromFile([cmykProfilePath fileSystemRepresentation], "r");
cmsHTRANSFORM xform = cmsCreateTransform(rgbProfile, TYPE_RGB_FLT,
                                         cmykProfile, TYPE_CMYK_FLT,
                                         INTENT_PERCEPTUAL, 0);
cmsCloseProfile(cmykProfile);
cmsCloseProfile(rgbProfile);
cmsDoTransform(xform, rgb, cmyk, 1);
cmsDeleteTransform(xform);
To get the Lab values, you need to convert the RGB values into XYZ values, which you can then convert into Lab values.
- (NSMutableArray *)convertRGBtoLABwithColor:(UIColor *)color
{
    // make variables to get rgb values
    CGFloat red3;
    CGFloat green3;
    CGFloat blue3;
    // get rgb of color
    [color getRed:&red3 green:&green3 blue:&blue3 alpha:nil];
    float red2 = (float)red3 * 255;
    float blue2 = (float)blue3 * 255;
    float green2 = (float)green3 * 255;
    // first convert RGB to XYZ
    // same values, from 0 to 1
    red2 = red2 / 255;
    green2 = green2 / 255;
    blue2 = blue2 / 255;
    // adjusting values (inverse sRGB companding)
    if (red2 > 0.04045)
    {
        red2 = (red2 + 0.055) / 1.055;
        red2 = pow(red2, 2.4);
    } else {
        red2 = red2 / 12.92;
    }
    if (green2 > 0.04045)
    {
        green2 = (green2 + 0.055) / 1.055;
        green2 = pow(green2, 2.4);
    } else {
        green2 = green2 / 12.92;
    }
    if (blue2 > 0.04045)
    {
        blue2 = (blue2 + 0.055) / 1.055;
        blue2 = pow(blue2, 2.4);
    } else {
        blue2 = blue2 / 12.92;
    }
    red2 *= 100;
    green2 *= 100;
    blue2 *= 100;
    // make x, y and z variables
    float x;
    float y;
    float z;
    // applying the matrix to finally have XYZ
    x = (red2 * 0.4124) + (green2 * 0.3576) + (blue2 * 0.1805);
    y = (red2 * 0.2126) + (green2 * 0.7152) + (blue2 * 0.0722);
    z = (red2 * 0.0193) + (green2 * 0.1192) + (blue2 * 0.9505);
    // then convert XYZ to LAB (normalize by the D65 white point)
    x = x / 95.047;
    y = y / 100;
    z = z / 108.883;
    // adjusting the values (16.0/116.0 must be a float division;
    // the integer expression 16/116 evaluates to 0)
    if (x > 0.008856)
    {
        x = powf(x, 1.0 / 3.0);
    } else {
        x = (7.787 * x) + (16.0 / 116.0);
    }
    if (y > 0.008856)
    {
        y = powf(y, 1.0 / 3.0);
    } else {
        y = (7.787 * y) + (16.0 / 116.0);
    }
    if (z > 0.008856)
    {
        z = powf(z, 1.0 / 3.0);
    } else {
        z = (7.787 * z) + (16.0 / 116.0);
    }
    // make L, A and B variables
    float l;
    float a;
    float b;
    // finally have your l, a, b variables!!!!
    l = (116 * y) - 16;
    a = 500 * (x - y);
    b = 200 * (y - z);
    NSNumber *lNumber = [NSNumber numberWithFloat:l];
    NSNumber *aNumber = [NSNumber numberWithFloat:a];
    NSNumber *bNumber = [NSNumber numberWithFloat:b];
    // add them to an array to return.
    NSMutableArray *labArray = [[NSMutableArray alloc] init];
    [labArray addObject:lNumber];
    [labArray addObject:aNumber];
    [labArray addObject:bNumber];
    return labArray;
}

2nd order IIR filter, coefficients for a butterworth bandpass (EQ)?

Important update: I already figured out the answers and put them in this simple open-source library: http://bartolsthoorn.github.com/NVDSP/ Check it out; it will probably save you quite some time if you're having trouble with audio filters on iOS!
I have created a (realtime) audio buffer (float *data) that holds a few sin(theta) waves with different frequencies.
The code below shows how I create my buffer. I've tried to apply a bandpass filter, but it just turns the signals into noise/blips:
// Multiple signal generator
__block float *phases = nil;
[audioManager setOutputBlock:^(float *data, UInt32 numFrames, UInt32 numChannels)
{
    float samplingRate = audioManager.samplingRate;
    NSUInteger activeSignalCount = [tones count];
    // Initialize phases (the array holds 10 slots, one per possible tone)
    if (phases == nil) {
        phases = new float[10];
        for (int z = 0; z < 10; z++) {
            phases[z] = 0.0;
        }
    }
    // Multiple signals
    NSEnumerator *enumerator = [tones objectEnumerator];
    id frequency;
    UInt32 c = 0;
    while ((frequency = [enumerator nextObject]))
    {
        for (int i = 0; i < numFrames; ++i)
        {
            for (int iChannel = 0; iChannel < numChannels; ++iChannel)
            {
                float theta = phases[c] * M_PI * 2;
                if (c == 0) {
                    data[i*numChannels + iChannel] = sin(theta);
                } else {
                    data[i*numChannels + iChannel] = data[i*numChannels + iChannel] + sin(theta);
                }
            }
            phases[c] += 1.0 / (samplingRate / [frequency floatValue]);
            if (phases[c] > 1.0) phases[c] = -1;
        }
        c++;
    }
    // Normalize data with active signal count
    float signalMulti = 1.0 / (float(activeSignalCount) * sqrt(2.0));
    vDSP_vsmul(data, 1, &signalMulti, data, 1, numFrames*numChannels);
    // Apply master volume
    float volume = masterVolumeSlider.value;
    vDSP_vsmul(data, 1, &volume, data, 1, numFrames*numChannels);
    if (fxSwitch.isOn) {
        // H(s) = (s/Q) / (s^2 + s/Q + 1)
        // http://www.musicdsp.org/files/Audio-EQ-Cookbook.txt
        // BW 2.0 Q 0.667
        // http://www.rane.com/note170.html
        // The order of the coefficients is: B1, B2, A1, A2, B0.
        float Fs = samplingRate;
        float omega = 2*M_PI*Fs; // w0 = 2*pi*f0/Fs
        float Q = 0.50f;
        float alpha = sin(omega)/(2*Q); // sin(w0)/(2*Q)
        // Through H
        for (int i = 0; i < numFrames; ++i)
        {
            for (int iChannel = 0; iChannel < numChannels; ++iChannel)
            {
                data[i*numChannels + iChannel] = (data[i*numChannels + iChannel]/Q) / (pow(data[i*numChannels + iChannel], 2) + data[i*numChannels + iChannel]/Q + 1);
            }
        }
        float b0 = alpha;
        float b1 = 0;
        float b2 = -alpha;
        float a0 = 1 + alpha;
        float a1 = -2*cos(omega);
        float a2 = 1 - alpha;
        float *coefficients = (float *) calloc(5, sizeof(float));
        coefficients[0] = b1;
        coefficients[1] = b2;
        coefficients[2] = a1;
        coefficients[3] = a2;
        coefficients[4] = b0;
        vDSP_deq22(data, 2, coefficients, data, 2, numFrames);
        free(coefficients);
    }
    // Measure dB
    [self measureDB:data:numFrames:numChannels];
}];
My aim is to make a 10-band EQ for this buffer using vDSP_deq22. The syntax of the method is:
vDSP_deq22(<float *vDSP_A>, <vDSP_Stride vDSP_I>, <float *vDSP_B>, <float *vDSP_C>, <vDSP_Stride vDSP_K>, <vDSP_Length __vDSP_N>)
See: http://developer.apple.com/library/mac/#documentation/Accelerate/Reference/vDSPRef/Reference/reference.html#//apple_ref/doc/c_ref/vDSP_deq22
Arguments:
float *vDSP_A is the input data
float *vDSP_B are 5 filter coefficients
float *vDSP_C is the output data
I have to make 10 filters (10 calls to vDSP_deq22). Then I set the gain for every band and combine them back together. But what coefficients do I feed each filter? I know vDSP_deq22 is a 2nd-order (Butterworth) IIR filter, but how do I turn it into a bandpass?
Now I have three questions:
a) Do I have to de-interleave and interleave the audio buffer? I know setting the stride to 2 filters just one channel, but how do I filter the other? A stride of 1 would process both channels as one.
b) Do I have to transform/process the buffer before it enters the vDSP_deq22 method? If so, do I also have to transform it back to normal?
c) What coefficient values should I feed each of the 10 vDSP_deq22 calls?
I've been trying for days now but I haven't been able to figure this one out. Please help me out!
Your omega value needs to be normalised, i.e. expressed as a fraction of Fs. It looks like you left out f0 when you calculated omega, which will make alpha wrong too:
float omega = 2*M_PI*Fs; // w0 = 2*pi*f0/Fs
should probably be:
float omega = 2*M_PI*f0/Fs; // w0 = 2*pi*f0/Fs
where f0 is the centre frequency in Hz.
For your 10 band equaliser you'll need to pick 10 values of f0, spaced logarithmically, e.g. 25 Hz, 50 Hz, 100 Hz, 200 Hz, 400 Hz, 800 Hz, 1.6 kHz, 3.2 kHz, 6.4 kHz, 12.8 kHz.
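Putting that together for question c), here is a sketch of the per-band coefficient setup, using the cookbook's constant-skirt-gain bandpass. I am assuming vDSP_deq22 expects the order {b0, b1, b2, a1, a2}, each divided by a0; that differs from the order quoted in the question's comment, so verify it against the vDSP reference before relying on it:
// Sketch: cookbook bandpass coefficients for one EQ band.
// coeffs must point to 5 floats; order assumed {b0, b1, b2, a1, a2}, all / a0.
void bandpassCoefficients(float f0, float Q, float Fs, float *coeffs)
{
    float omega = 2.0f * (float)M_PI * f0 / Fs;  // normalised centre frequency
    float alpha = sinf(omega) / (2.0f * Q);
    float a0    = 1.0f + alpha;
    coeffs[0] =  alpha / a0;                     // b0
    coeffs[1] =  0.0f;                           // b1
    coeffs[2] = -alpha / a0;                     // b2
    coeffs[3] = (-2.0f * cosf(omega)) / a0;      // a1
    coeffs[4] = (1.0f - alpha) / a0;             // a2
}
Call this once per band with the centre frequencies above, keeping one coefficient array (and one vDSP_deq22 call) per band. Regarding question a): with an interleaved stereo buffer, a stride of 2 only touches one channel, so the other channel needs a second pass starting one sample in, or you de-interleave first.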

Map GPS Coordinates to an Image and draw some GPS Points on it

I have some problems figuring out where my error is. This is what I have:
An image and the corresponding GPS coordinates of its top-left and bottom-right corners, e.g.:
topLeft.longitude = 8.235128;
topLeft.latitude = 49.632383;
bottomRight.longitude = 8.240547;
bottomRight.latitude = 49.629808;
Now I have a point that lies on that map:
p.longitude = 8.238567;
p.latitude = 49.630664;
I draw my image fullscreen in landscape (1024×748).
Now I want to calculate the exact pixel position (x, y) of my point.
To do that, I am trying to use the great-circle distance approach from here: Link.
CGFloat DegreesToRadians(CGFloat degrees)
{
    return degrees * M_PI / 180;
};

- (float)calculateDistanceP1:(CLLocationCoordinate2D)p1 andP2:(CLLocationCoordinate2D)p2 {
    double circumference = 40000.0; // Earth's circumference in km at the equator
    double distance = 0.0;
    double latitude1Rad = DegreesToRadians(p1.latitude);
    double longitude1Rad = DegreesToRadians(p1.longitude);
    double latitude2Rad = DegreesToRadians(p2.latitude);
    double longitude2Rad = DegreesToRadians(p2.longitude);
    double longitudeDiff = fabs(longitude1Rad - longitude2Rad);
    if (longitudeDiff > M_PI)
    {
        longitudeDiff = 2.0 * M_PI - longitudeDiff;
    }
    double angleCalculation =
        acos(sin(latitude2Rad) * sin(latitude1Rad) + cos(latitude2Rad) * cos(latitude1Rad) * cos(longitudeDiff));
    distance = circumference * angleCalculation / (2.0 * M_PI);
    NSLog(@"%f", distance);
    return distance;
}
Here is my code for getting the pixel position:
- (CGPoint)calculatePoint:(CLLocationCoordinate2D)p {
    float x_coord;
    float y_coord;
    CLLocationCoordinate2D x1;
    CLLocationCoordinate2D x2;
    x1.longitude = p.longitude;
    x1.latitude = topLeft.latitude;
    x2.longitude = p.longitude;
    x2.latitude = bottomRight.latitude;
    CLLocationCoordinate2D y1;
    CLLocationCoordinate2D y2;
    y1.longitude = topLeft.longitude;
    y1.latitude = p.latitude;
    y2.longitude = bottomRight.longitude;
    y2.latitude = p.latitude;
    float distanceX = [self calculateDistanceP1:x1 andP2:x2];
    float distanceY = [self calculateDistanceP1:y1 andP2:y2];
    float distancePX = [self calculateDistanceP1:x1 andP2:p];
    float distancePY = [self calculateDistanceP1:y1 andP2:p];
    x_coord = fabs(distancePX * (1024 / distanceX)) - 1;
    y_coord = fabs(distancePY * (748 / distanceY)) - 1;
    return CGPointMake(x_coord, y_coord);
}
x1 and x2 are the points at the longitude of p, with the latitudes of topLeft and bottomRight.
y1 and y2 are the points at the latitude of p, with the longitudes of topLeft and bottomRight.
So I get the distance between left and right at the longitude of p, and the distance between top and bottom at the latitude of p (needed to calculate the pixel position).
Then I calculate the distance between x1 and p (my distance between x_0 and x_p), and after that the distance between y1 and p (the distance between y_0 and y_p).
Last but not least, the pixel position is calculated and returned.
The result is that my point is at the red position and NOT at the blue position:
Maybe you can find my mistake, or have suggestions for improving the accuracy.
Maybe I didn't understand your question, but shouldn't you be using the Converting Map Coordinates methods of MKMapView?
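(If the image were actually displayed in an MKMapView, that would be a one-liner along these lines:)
// Only applies when an MKMapView is showing the map:
CGPoint pixel = [mapView convertCoordinate:p toPointToView:mapView];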
See this image.
I used your coordinates and simply did the following:
x_coord = 1024 * (p.longitude - topLeft.longitude)/(bottomRight.longitude - topLeft.longitude);
y_coord = 748 - (748 * (p.latitude - bottomRight.latitude)/(topLeft.latitude - bottomRight.latitude));
The red dot marks this point. For such small distances you don't really need to use great circles, and the rounding errors they introduce will make things much more inaccurate.
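Wrapped up as a helper (my sketch, hard-coding the 1024×748 view size from the question):
- (CGPoint)pixelForCoordinate:(CLLocationCoordinate2D)p
{
    // Plain linear interpolation between the known corner coordinates.
    CGFloat x = 1024 * (p.longitude - topLeft.longitude) /
                       (bottomRight.longitude - topLeft.longitude);
    CGFloat y = 748 - 748 * (p.latitude - bottomRight.latitude) /
                            (topLeft.latitude - bottomRight.latitude);
    return CGPointMake(x, y);
}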