Drawing a Sphere and Calculating Surface Normals - opengl-es-2.0

I'm trying to draw a sphere and calculate its surface normals. I've been staring at this for hours, but I'm getting nowhere. Here is a screenshot of the mess that this draws:
- (id) init
{
    if (self = [super init]) {
        glGenVertexArraysOES(1, &_vertexArray);
        glBindVertexArrayOES(_vertexArray);

        glGenBuffers(1, &_vertexBuffer);
        glBindBuffer(GL_ARRAY_BUFFER, _vertexBuffer);

        GLfloat rad_th, rad_ph;
        GLint th, ph;
        GLint i = 0;
        GLKMatrix3 this_triangle;
        GLKVector3 column0, column1, column2, this_normal;

        for (ph=-90; ph<=90; ph++) {
            for (th=0; th<=360; th+=10) {
                if (i<3) printf("i: %d th: %f ph: %f\n", i, (float)th, (float)ph);

                rad_th = GLKMathDegreesToRadians( (float) th );
                rad_ph = GLKMathDegreesToRadians( (float) ph);
                _vertices[i][0][0] = sinf(rad_th)*cosf(rad_ph);
                _vertices[i][0][1] = sinf(rad_ph);
                _vertices[i][0][2] = cos(rad_th)*cos(rad_ph);

                rad_th = GLKMathDegreesToRadians( (float) (th) );
                rad_ph = GLKMathDegreesToRadians( (float) (ph+1) );
                _vertices[i+1][0][0] = sinf(rad_th)*cosf(rad_ph);
                _vertices[i+1][0][1] = sinf(rad_ph);
                _vertices[i+1][0][2] = cos(rad_th)*cos(rad_ph);

                i+=2;
            }
        }

        // calculate and store the surface normal for every triangle
        i=2;
        for (ph=-90; ph<=90; ph++) {
            for (th=2; th<=360; th++) {
                // note that the first two vertices are irrelevant since it isn't
                // until the third vertex that a triangle is defined.
                column0 = GLKVector3Make(_vertices[i-2][0][0], _vertices[i-2][0][1], _vertices[i-2][0][2]);
                column1 = GLKVector3Make(_vertices[i-1][0][0], _vertices[i-1][0][1], _vertices[i-1][0][2]);
                column2 = GLKVector3Make(_vertices[i-0][0][0], _vertices[i-0][0][1], _vertices[i-0][0][2]);

                this_triangle = GLKMatrix3MakeWithColumns(column0, column1, column2);
                this_normal = [self calculateTriangleSurfaceNormal : this_triangle];

                _vertices[i][1][0] = this_normal.x;
                _vertices[i][1][1] = this_normal.y;
                _vertices[i][1][2] = this_normal.z;

                i++;
            }
            i+=2;
        }

        glBufferData(GL_ARRAY_BUFFER, sizeof(_vertices), _vertices, GL_STATIC_DRAW);

        glEnableVertexAttribArray(GLKVertexAttribPosition);
        glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, sizeof(GLfloat)*6, NULL);
        glEnableVertexAttribArray(GLKVertexAttribNormal);
        glVertexAttribPointer(GLKVertexAttribNormal, 3, GL_FLOAT, GL_FALSE, sizeof(GLfloat)*6, (GLubyte*)(sizeof(GLfloat)*3));

        glBindVertexArrayOES(0);
    }
    return self;
}
- (void) render
{
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 65522);
}
Here is my surface normal calculation. I've used this elsewhere, so I believe that it works, if given the correct vertices, of course.
- (GLKVector3) calculateTriangleSurfaceNormal : (GLKMatrix3) triangle_vertices
{
    GLKVector3 surfaceNormal;

    GLKVector3 col0 = GLKMatrix3GetColumn(triangle_vertices, 0);
    GLKVector3 col1 = GLKMatrix3GetColumn(triangle_vertices, 1);
    GLKVector3 col2 = GLKMatrix3GetColumn(triangle_vertices, 2);

    GLKVector3 vec1 = GLKVector3Subtract(col1, col0);
    GLKVector3 vec2 = GLKVector3Subtract(col2, col0);

    surfaceNormal.x = vec1.y * vec2.z - vec2.y * vec1.z;
    surfaceNormal.y = vec1.z * vec2.x - vec2.z * vec1.x;
    surfaceNormal.z = vec1.x * vec2.y - vec2.x * vec1.y;

    return GLKVector3Normalize(surfaceNormal);
}
In my .h file, I define the _vertices array like this (laugh if you will...):
// 360 + 2 = 362 vertices per triangle strip
// 90 strips per hemisphere (one hemisphere has 91)
// 2 hemispheres
// 362 * 90.5 * 2 = 65522
GLfloat _vertices[65522][2][3]; // 2 sets per vertex (position, normal), 3 floats in each set

It appears you are calculating normals for the triangles in your triangle strip, but assigning those normals to vertices that are shared by multiple triangles. If you used plain triangles instead of a triangle strip, and gave all three vertices of each triangle that triangle's normal, each triangle would have an appropriate normal. That value would really only be correct for the center of the triangle, though. You would be better off using vertex normals, which, as Christian mentioned, are equal to the vertex positions in this case. These can then be interpolated across the triangles.
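For a unit sphere centered at the origin, that means the normal of each vertex is just the vertex position normalized. A minimal sketch of that in C, written against the _vertices layout from the question (positions in slot 0, normals in slot 1); vertexCount stands for however many vertices you actually generated:
// Vertex normals for a unit sphere centered at the origin: the normal is the
// normalized vertex position, so no per-triangle calculation is needed.
for (GLint v = 0; v < vertexCount; v++) {
    GLKVector3 p = GLKVector3Make(_vertices[v][0][0], _vertices[v][0][1], _vertices[v][0][2]);
    GLKVector3 n = GLKVector3Normalize(p);   // for radius 1 this is effectively p itself
    _vertices[v][1][0] = n.x;
    _vertices[v][1][1] = n.y;
    _vertices[v][1][2] = n.z;
}
With GL_TRIANGLE_STRIP these per-vertex normals are then interpolated across each triangle by the rasterizer, which avoids the shared-vertex problem entirely.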

Related

How to do batching without UBOs?

I'm trying to implement batching for a WebGL renderer which is struggling with lots of small objects due to too many draw calls. What I thought is I'd batch them all by the kind of shader they use, then draw a few at a time, uploading material parameters and the model matrix for each object once in uniforms.
My problem is that the uniform size limits for non-UBO uniforms are extremely low, as in 256 floats low at a minimum. If my material uses, say, 8 floats, and if you factor in the model matrix, I barely have enough uniforms to draw 10 models in a single batch, which isn't really going to be enough.
Is there any hope to make this work without UBOs? Are textures an option? How are people doing batching without WebGL2 UBOs?
More details: I have no skinning or complex animations. I just have some shaders (diffuse, cook-torrance, whatever), and each model has different material settings for each shader, e.g. color, roughness, index of refraction, which can be changed dynamically by the user. So it isn't realistic to bake them into the vertex array, because we have some high-poly data, users can switch shaders, and not all shaders have the same number of parameters; there are material maps as well, obviously. The geometry itself is static and just has a linear transform on each model. For the most part all meshes are different, so geometry instancing won't help a whole lot, but I can look at that later.
Thanks
I don't know that this is actually faster than lots of draw calls, but here is a way of drawing 4 models with a single draw call.
It works by adding an id per model. So, for every vertex in model #0 put a 0, for every vertex in model #1 put a 1, etc.
The model id is then used to index per-model data stored in a texture. The easiest scheme is to have the model id choose a row of the texture, so that all the data for that model can be pulled out of that row.
For WebGL1
attribute float modelId;
...
#define TEXTURE_WIDTH ??
#define COLOR_OFFSET ((0.0 + 0.5) / TEXTURE_WIDTH)
#define MATERIAL_OFFSET ((1.0 + 0.5) / TEXTURE_WIDTH)
float modelOffset = (modelId + .5) / textureHeight;
vec4 color = texture2D(perModelData, vec2(COLOR_OFFSET, modelOffset));
vec4 roughnessIndexOfRefraction = texture2D(perModelData,
                                            vec2(MATERIAL_OFFSET, modelOffset));
etc.
As long as you are not drawing more than gl.getParameter(gl.MAX_TEXTURE_SIZE) models it will work. If you have more than that, either use more draw calls or change the texture coordinate calculation so there is more than one model per row.
In WebGL2 you'd change the code to use texelFetch and an integer vertex attribute:
in int modelId;
...
#define COLOR_OFFSET 0
#define MATERIAL_OFFSET 1
vec4 color = texelFetch(perModelData, ivec2(COLOR_OFFSET, modelId), 0);
vec4 roughnessIndexOfRefraction = texelFetch(perModelData,
                                             ivec2(MATERIAL_OFFSET, modelId), 0);
Here is an example of 4 models drawn with 1 draw call. For each model, the model matrix and color are stored in the texture.
const m4 = twgl.m4;
const v3 = twgl.v3;
const gl = document.querySelector('canvas').getContext('webgl');
const ext = gl.getExtension('OES_texture_float');
if (!ext) {
  alert('need OES_texture_float');
}
const COMMON_STUFF = `
#define TEXTURE_WIDTH 5.0
#define MATRIX_ROW_0_OFFSET ((0. + 0.5) / TEXTURE_WIDTH)
#define MATRIX_ROW_1_OFFSET ((1. + 0.5) / TEXTURE_WIDTH)
#define MATRIX_ROW_2_OFFSET ((2. + 0.5) / TEXTURE_WIDTH)
#define MATRIX_ROW_3_OFFSET ((3. + 0.5) / TEXTURE_WIDTH)
#define COLOR_OFFSET ((4. + 0.5) / TEXTURE_WIDTH)
`;
const vs = `
attribute vec4 position;
attribute vec3 normal;
attribute float modelId;
uniform float textureHeight;
uniform sampler2D perModelDataTexture;
uniform mat4 projection;
uniform mat4 view;
varying vec3 v_normal;
varying float v_modelId;
${COMMON_STUFF}
void main() {
  v_modelId = modelId;  // pass to fragment shader

  float modelOffset = (modelId + 0.5) / textureHeight;

  // note: in WebGL2 better to use texelFetch
  mat4 model = mat4(
    texture2D(perModelDataTexture, vec2(MATRIX_ROW_0_OFFSET, modelOffset)),
    texture2D(perModelDataTexture, vec2(MATRIX_ROW_1_OFFSET, modelOffset)),
    texture2D(perModelDataTexture, vec2(MATRIX_ROW_2_OFFSET, modelOffset)),
    texture2D(perModelDataTexture, vec2(MATRIX_ROW_3_OFFSET, modelOffset)));

  gl_Position = projection * view * model * position;
  v_normal = mat3(view) * mat3(model) * normal;
}
`;
const fs = `
precision highp float;
varying vec3 v_normal;
varying float v_modelId;
uniform float textureHeight;
uniform sampler2D perModelDataTexture;
uniform vec3 lightDirection;
${COMMON_STUFF}
void main() {
  float modelOffset = (v_modelId + 0.5) / textureHeight;
  vec4 color = texture2D(perModelDataTexture, vec2(COLOR_OFFSET, modelOffset));

  float l = dot(lightDirection, normalize(v_normal)) * .5 + .5;

  gl_FragColor = vec4(color.rgb * l, color.a);
}
`;
// compile shader, link, look up locations
const programInfo = twgl.createProgramInfo(gl, [vs, fs]);
// make some vertex data
const modelVerts = [
twgl.primitives.createSphereVertices(1, 6, 4),
twgl.primitives.createCubeVertices(1, 1, 1),
twgl.primitives.createCylinderVertices(1, 1, 10, 1),
twgl.primitives.createTorusVertices(1, .2, 16, 8),
];
// merge all the vertices into one
const arrays = twgl.primitives.concatVertices(modelVerts);
// fill an array so each vertex of each model has a modelId
const modelIds = new Uint16Array(arrays.position.length / 3);
let offset = 0;
modelVerts.forEach((verts, modelId) => {
  const end = offset + verts.position.length / 3;
  while (offset < end) {
    modelIds[offset++] = modelId;
  }
});
arrays.modelId = { numComponents: 1, data: modelIds };
// calls gl.createBuffer, gl.bindBuffer, gl.bufferData
const bufferInfo = twgl.createBufferInfoFromArrays(gl, arrays);
const numModels = modelVerts.length;
const tex = gl.createTexture();
const textureWidth = 5; // 4x4 matrix, 4x1 color
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, textureWidth, numModels, 0, gl.RGBA, gl.FLOAT, null);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
// this data is for the texture, one row per model
// the first 4 pixels are the model matrix, the 5th pixel is the color
const perModelData = new Float32Array(textureWidth * numModels * 4);
const stride = textureWidth * 4;
const modelOffset = 0;
const colorOffset = 16;
// set the colors at init time
for (let modelId = 0; modelId < numModels; ++modelId) {
  perModelData.set([r(), r(), r(), 1], modelId * stride + colorOffset);
}

function r() {
  return Math.random();
}
function render(time) {
  time *= 0.001;  // seconds

  twgl.resizeCanvasToDisplaySize(gl.canvas);
  gl.viewport(0, 0, gl.canvas.width, gl.canvas.height);

  gl.enable(gl.DEPTH_TEST);
  gl.enable(gl.CULL_FACE);

  const fov = Math.PI * 0.25;
  const aspect = gl.canvas.clientWidth / gl.canvas.clientHeight;
  const near = 0.1;
  const far = 20;
  const projection = m4.perspective(fov, aspect, near, far);
  const eye = [0, 0, 10];
  const target = [0, 0, 0];
  const up = [0, 1, 0];
  const camera = m4.lookAt(eye, target, up);
  const view = m4.inverse(camera);

  // set the matrix for each model in the texture data
  const mat = m4.identity();
  for (let modelId = 0; modelId < numModels; ++modelId) {
    const t = time * (modelId + 1) * 0.3;
    m4.identity(mat);
    m4.rotateX(mat, t, mat);
    m4.rotateY(mat, t, mat);
    m4.translate(mat, [0, 0, Math.sin(t * 1.1) * 4], mat);
    m4.rotateZ(mat, t, mat);
    perModelData.set(mat, modelId * stride + modelOffset);
  }

  // upload the texture data
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texSubImage2D(gl.TEXTURE_2D, 0, 0, 0, textureWidth, numModels,
                   gl.RGBA, gl.FLOAT, perModelData);

  gl.useProgram(programInfo.program);

  // calls gl.bindBuffer, gl.enableVertexAttribArray, gl.vertexAttribPointer
  twgl.setBuffersAndAttributes(gl, programInfo, bufferInfo);

  // calls gl.activeTexture, gl.bindTexture, gl.uniformXXX
  twgl.setUniforms(programInfo, {
    lightDirection: v3.normalize([1, 2, 3]),
    perModelDataTexture: tex,
    textureHeight: numModels,
    projection,
    view,
  });

  // calls gl.drawArrays or gl.drawElements
  twgl.drawBufferInfo(gl, bufferInfo);

  requestAnimationFrame(render);
}
requestAnimationFrame(render);
body { margin: 0; }
canvas { width: 100vw; height: 100vh; display: block; }
<script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>
<canvas></canvas>
Here's 2000 models in one draw call
https://jsfiddle.net/greggman/g2tcadho/

Is there any PDF command, that scales rectangle coordinates?

I have an application that extracts text and rectangles from PDF files for further analysis. I use iTextSharp for the extraction, and everything worked smoothly until I stumbled upon a document which has some strange table cell rectangles. The values in the drawing commands that I retrieve seem 10 times larger than the actual dimensions of those rectangles.
Just an example:
2577 831.676 385.996 3.99609 re
At the same time, when viewing the document, all rectangles seem to fit correctly within the bounds of the document pages. My guess is that there must be some scaling command telling the viewer that these values should be scaled down. Is that assumption right, or how is it otherwise possible that such large rectangles are rendered so that they stay inside the bounds of a page?
The PDF document is behind this link: https://www.dropbox.com/s/gyvon0dwk6a9cj0/prEVS_ISO_11620_KOM_et.pdf?dl=0
The code that handles extraction of the dimensions from a PRStream is as follows:
private static List<PdfRect> GetRectsAndLinesFromStream(PRStream stream)
{
    var streamBytes = PdfReader.GetStreamBytes(stream);
    var tokenizer = new PRTokeniser(new RandomAccessFileOrArray(streamBytes));

    List<string> newBuf = new List<string>();
    List<PdfRect> rects = new List<PdfRect>();
    List<string> allTokens = new List<string>();

    float[,] ctm = null;
    List<float[,]> ctms = new List<float[,]>();

    //if current ctm has not yet been added to list
    bool pendingCtm = false;

    //format definition for string -> float conversion
    var format = new System.Globalization.NumberFormatInfo();
    format.NegativeSign = "-";

    while (tokenizer.NextToken())
    {
        //Add them to our master buffer
        newBuf.Add(tokenizer.StringValue);

        if (tokenizer.TokenType == PRTokeniser.TokType.OTHER && newBuf[newBuf.Count - 1] == "re")
        {
            float startPointX = (float)double.Parse(newBuf[newBuf.Count - 5], format);
            float startPointY = (float)double.Parse(newBuf[newBuf.Count - 4], format);
            float width = (float)double.Parse(newBuf[newBuf.Count - 3], format);
            float height = (float)double.Parse(newBuf[newBuf.Count - 2], format);
            float endPointX = startPointX + width;
            float endPointY = startPointY + height;

            //if transformation is defined, correct coordinates
            if (ctm != null)
            {
                //extract parameters
                float a = ctm[0, 0];
                float b = ctm[0, 1];
                float c = ctm[1, 0];
                float d = ctm[1, 1];
                float e = ctm[2, 0];
                float f = ctm[2, 1];

                //reverse transformation to get x and y from x' and y'
                startPointX = (startPointX - startPointY * c - e) / a;
                startPointY = (startPointY - startPointX * b - f) / d;
                endPointX = (endPointX - endPointY * c - e) / a;
                endPointY = (endPointY - endPointX * b - f) / d;
            }

            rects.Add(new PdfRect(startPointX, startPointY, endPointX, endPointY));
        }
        //store current ctm
        else if (tokenizer.TokenType == PRTokeniser.TokType.OTHER && newBuf[newBuf.Count - 1] == "q")
        {
            if (ctm != null)
            {
                ctms.Add(ctm);
                pendingCtm = false;
            }
        }
        //fetch last ctm and remove it from list
        else if (tokenizer.TokenType == PRTokeniser.TokType.OTHER && newBuf[newBuf.Count - 1] == "Q")
        {
            if (ctms.Count > 0)
            {
                ctm = ctms[ctms.Count - 1];
                ctms.RemoveAt(ctms.Count - 1);
            }
        }
        else if (tokenizer.TokenType == PRTokeniser.TokType.OTHER && newBuf[newBuf.Count - 1] == "cm")
        {
            // x' = x*a + y*c + e ; y' = x*b + y*d + f
            float a = (float)double.Parse(newBuf[newBuf.Count - 7], format);
            float b = (float)double.Parse(newBuf[newBuf.Count - 6], format);
            float c = (float)double.Parse(newBuf[newBuf.Count - 5], format);
            float d = (float)double.Parse(newBuf[newBuf.Count - 4], format);
            float e = (float)double.Parse(newBuf[newBuf.Count - 3], format);
            float f = (float)double.Parse(newBuf[newBuf.Count - 2], format);

            float[,] tempCtm = ctm;
            ctm = new float[3, 3] {
                { a, b, 0 },
                { c, d, 0 },
                { e, f, 1 }
            };

            //multiply matrices to form 1 transformation matrix
            if (pendingCtm && tempCtm != null)
            {
                float[,] resultantCtm;
                if (!TryMultiplyMatrix(tempCtm, ctm, out resultantCtm))
                {
                    throw new InvalidOperationException("Invalid transform matrix");
                }
                ctm = resultantCtm;
            }

            //current CTM has not yet been saved to stack
            pendingCtm = true;
        }
    }

    return rects;
}
The command you are looking for is cm. Did you read The ABC of PDF with iText? The book isn't finished yet, but you can already download the first five chapters.
This is a screen shot of the table that shows the cm operator:
This is an example of 5 shapes that are created in the exact same way, using identical syntax:
They are added at different positions, and even with a different size and shape, because of changes in the graphics state: the coordinate system was changed, and the shapes are rendered in that altered coordinate system.
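To make the effect of cm concrete, here is a minimal C sketch (my own illustration, not code from the question or from iText) that applies a CTM of the form set by "a b c d e f cm" to the corners of an "x y w h re" rectangle. It assumes a pure scale/translate CTM, so transforming two opposite corners is enough; with "0.1 0 0 0.1 0 0 cm" in effect, the seemingly oversized values from the question land back on the page (2577 becomes 257.7, 831.676 becomes 83.1676, and so on).
/* CTM layout used by the cm operator:
 *   [ a b 0 ]
 *   [ c d 0 ]     x' = a*x + c*y + e,   y' = b*x + d*y + f
 *   [ e f 1 ]
 */
typedef struct { float a, b, c, d, e, f; } Ctm;

static void transformPoint(const Ctm *m, float x, float y, float *tx, float *ty)
{
    *tx = m->a * x + m->c * y + m->e;
    *ty = m->b * x + m->d * y + m->f;
}

/* Map the two opposite corners of an "x y w h re" rectangle into device space.
 * corners receives x0, y0, x1, y1. */
void rectToDeviceSpace(const Ctm *m, float x, float y, float w, float h, float corners[4])
{
    transformPoint(m, x,     y,     &corners[0], &corners[1]);
    transformPoint(m, x + w, y + h, &corners[2], &corners[3]);
}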

Collision between a circle and a rectangle

I have a problem with collision detection between a circle and a rectangle. I have tried to solve the problem with the Pythagorean theorem, but none of the checks works: the collision is effectively detected against the rectangular bounding box of the circle rather than the circle itself.
if (CGRectIntersectsRect(player.frame, visibleEnemy.frame)) {
    if (([visibleEnemy spriteTyp] == jumper || [visibleEnemy spriteTyp] == wobble )) {
        if ((visibleEnemy.center.x - player.frame.origin.x) * (visibleEnemy.center.x - player.frame.origin.x) +
            (visibleEnemy.center.y - player.frame.origin.y) * (visibleEnemy.center.y - player.frame.origin.y) <=
            (visibleEnemy.bounds.size.width/2 * visibleEnemy.bounds.size.width/2)) {
            NSLog(@"Check 1");
            normalAction = NO;
        }
        if ((visibleEnemy.center.x - (player.frame.origin.x + player.bounds.size.width)) *
            (visibleEnemy.center.x - (player.frame.origin.x + player.bounds.size.width)) +
            (visibleEnemy.center.y - player.frame.origin.y) * (visibleEnemy.center.y - player.frame.origin.y) <=
            (visibleEnemy.bounds.size.width/2 * visibleEnemy.bounds.size.width/2)) {
            NSLog(@"Check 2");
            normalAction = NO;
        }
        else {
            NSLog(@"Check 3");
            normalAction = NO;
        }
    }
}
Here is how I did it in one of my small gaming projects. It gave me the best results and it's simple. My code detects whether there is a collision between a circle and a line, so you can easily adapt it to circle-rectangle collision detection by checking all 4 edges of the rectangle.
Let's say that the ball has a radius ballRadius and location (xBall, yBall). The line is defined by two points, (xStart, yStart) and (xEnd, yEnd).
Implementation of the simple collision detection:
float ballRadius = ...;

float x1 = xStart - xBall;
float y1 = yStart - yBall;
float x2 = xEnd - xBall;
float y2 = yEnd - yBall;

float dx = x2 - x1;
float dy = y2 - y1;
float dr = sqrtf(powf(dx, 2) + powf(dy, 2));
float D = x1*y2 - x2*y1;

float delta = powf(ballRadius*0.9, 2)*powf(dr, 2) - powf(D, 2);

if (delta >= 0)
{
    // Collision detected
}
If delta is greater than zero there are two intersections between the ball (circle) and the line. If delta is equal to zero there is exactly one intersection – a perfect (tangent) collision.
I hope it will help you.
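If the rectangle is axis-aligned, another common approach (different from the line test above, but often simpler to adapt) is to clamp the circle's center onto the rectangle and compare the distance to that nearest point against the radius. A minimal C sketch, with hypothetical xRect/yRect/rectWidth/rectHeight parameters standing in for whatever your rectangle exposes:
#include <math.h>
#include <stdbool.h>

// Circle vs. axis-aligned rectangle: find the point of the rectangle closest
// to the circle's center and check whether it lies within the radius.
bool circleIntersectsRect(float xBall, float yBall, float ballRadius,
                          float xRect, float yRect,        // rectangle origin
                          float rectWidth, float rectHeight)
{
    // Clamp the circle center onto the rectangle.
    float nearestX = fmaxf(xRect, fminf(xBall, xRect + rectWidth));
    float nearestY = fmaxf(yRect, fminf(yBall, yRect + rectHeight));

    // Compare squared distance against squared radius (no sqrt needed).
    float dx = xBall - nearestX;
    float dy = yBall - nearestY;
    return (dx * dx + dy * dy) <= ballRadius * ballRadius;
}
This also handles the case where the circle's center is inside the rectangle, since the clamped point is then the center itself and the distance is zero.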

2nd order IIR filter, coefficients for a butterworth bandpass (EQ)?

Important update: I already figured out the answers and put them in this simple open-source library: http://bartolsthoorn.github.com/NVDSP/ Check it out, it will probably save you quite some time if you're having trouble with audio filters on iOS!
^
I have created a (realtime) audio buffer (float *data) that holds a few sin(theta) waves with different frequencies.
The code below shows how I create my buffer, and I've tried to apply a bandpass filter to it, but it just turns the signals into noise/blips:
// Multiple signal generator
__block float *phases = nil;

[audioManager setOutputBlock:^(float *data, UInt32 numFrames, UInt32 numChannels)
{
    float samplingRate = audioManager.samplingRate;
    NSUInteger activeSignalCount = [tones count];

    // Initialize phases
    if (phases == nil) {
        phases = new float[10];
        for(int z = 0; z <= 10; z++) {
            phases[z] = 0.0;
        }
    }

    // Multiple signals
    NSEnumerator * enumerator = [tones objectEnumerator];
    id frequency;
    UInt32 c = 0;

    while(frequency = [enumerator nextObject])
    {
        for (int i=0; i < numFrames; ++i)
        {
            for (int iChannel = 0; iChannel < numChannels; ++iChannel)
            {
                float theta = phases[c] * M_PI * 2;
                if (c == 0) {
                    data[i*numChannels + iChannel] = sin(theta);
                } else {
                    data[i*numChannels + iChannel] = data[i*numChannels + iChannel] + sin(theta);
                }
            }
            phases[c] += 1.0 / (samplingRate / [frequency floatValue]);
            if (phases[c] > 1.0) phases[c] = -1;
        }
        c++;
    }

    // Normalize data with active signal count
    float signalMulti = 1.0 / (float(activeSignalCount) * (sqrt(2.0)));
    vDSP_vsmul(data, 1, &signalMulti, data, 1, numFrames*numChannels);

    // Apply master volume
    float volume = masterVolumeSlider.value;
    vDSP_vsmul(data, 1, &volume, data, 1, numFrames*numChannels);

    if (fxSwitch.isOn) {
        // H(s) = (s/Q) / (s^2 + s/Q + 1)
        // http://www.musicdsp.org/files/Audio-EQ-Cookbook.txt
        // BW 2.0 Q 0.667
        // http://www.rane.com/note170.html
        // The order of the coefficients are, B1, B2, A1, A2, B0.

        float Fs = samplingRate;
        float omega = 2*M_PI*Fs; // w0 = 2*pi*f0/Fs
        float Q = 0.50f;
        float alpha = sin(omega)/(2*Q); // sin(w0)/(2*Q)

        // Through H
        for (int i=0; i < numFrames; ++i)
        {
            for (int iChannel = 0; iChannel < numChannels; ++iChannel)
            {
                data[i*numChannels + iChannel] = (data[i*numChannels + iChannel]/Q) / (pow(data[i*numChannels + iChannel],2) + data[i*numChannels + iChannel]/Q + 1);
            }
        }

        float b0 = alpha;
        float b1 = 0;
        float b2 = -alpha;
        float a0 = 1 + alpha;
        float a1 = -2*cos(omega);
        float a2 = 1 - alpha;

        float *coefficients = (float *) calloc(5, sizeof(float));
        coefficients[0] = b1;
        coefficients[1] = b2;
        coefficients[2] = a1;
        coefficients[3] = a2;
        coefficients[3] = b0;

        vDSP_deq22(data, 2, coefficients, data, 2, numFrames);

        free(coefficients);
    }

    // Measure dB
    [self measureDB:data:numFrames:numChannels];
}];
My aim is to make a 10-band EQ for this buffer using vDSP_deq22. The syntax of the method is:
vDSP_deq22(<float *vDSP_A>, <vDSP_Stride vDSP_I>, <float *vDSP_B>, <float *vDSP_C>, <vDSP_Stride vDSP_K>, <vDSP_Length __vDSP_N>)
See: http://developer.apple.com/library/mac/#documentation/Accelerate/Reference/vDSPRef/Reference/reference.html#//apple_ref/doc/c_ref/vDSP_deq22
Arguments:
float *vDSP_A is the input data
float *vDSP_B are 5 filter coefficients
float *vDSP_C is the output data
I have to make 10 filters (10 calls to vDSP_deq22). Then I set the gain for every band and combine them back together. But what coefficients do I feed each filter? I know vDSP_deq22 is a 2nd-order (Butterworth) IIR filter, but how do I turn this into a bandpass?
Now I have three questions:
a) Do I have to de-interleave and interleave the audio buffer? I know setting the stride to 2 filters just one channel, but how do I filter the other? A stride of 1 will process both channels as one.
b) Do I have to transform/process the buffer before it enters the vDSP_deq22 method? If so, do I also have to transform it back to normal?
c) What coefficient values should I feed each of the 10 vDSP_deq22 calls?
I've been trying for days now but I haven't been able to figure this one out, please help me out!
Your omega value needs to be normalised, i.e. expressed as a fraction of Fs. It looks like you left out f0 when you calculated omega, which will make alpha wrong too:
float omega = 2*M_PI*Fs; // w0 = 2*pi*f0/Fs
should probably be:
float omega = 2*M_PI*f0/Fs; // w0 = 2*pi*f0/Fs
where f0 is the centre frequency in Hz.
For your 10 band equaliser you'll need to pick 10 values of f0, spaced logarithmically, e.g. 25 Hz, 50 Hz, 100 Hz, 200 Hz, 400 Hz, 800 Hz, 1.6 kHz, 3.2 kHz, 6.4 kHz, 12.8 kHz.
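Putting those pieces together, here is a minimal C sketch of one way to build the per-band coefficients, using the band-pass (constant 0 dB peak gain) formulas from the Audio-EQ-Cookbook already linked in the question. As I read Apple's vDSP_deq22 documentation, the five coefficients are expected as { b0, b1, b2, a1, a2 } with everything divided by a0, which differs from the ordering guessed in the question; treat that ordering as an assumption to verify against the docs.
#include <math.h>

// Sketch: RBJ band-pass biquad (constant 0 dB peak gain), packed for vDSP_deq22.
// f0 = centre frequency in Hz, Fs = sample rate in Hz, Q = bandwidth control.
void bandpassCoefficients(float f0, float Fs, float Q, float coefficients[5])
{
    float omega = 2.0f * M_PI * f0 / Fs;   // normalised centre frequency, radians/sample
    float alpha = sinf(omega) / (2.0f * Q);

    float b0 =  alpha;
    float b1 =  0.0f;
    float b2 = -alpha;
    float a0 =  1.0f + alpha;
    float a1 = -2.0f * cosf(omega);
    float a2 =  1.0f - alpha;

    // Assumed vDSP_deq22 ordering: feed-forward first, then feedback, all divided by a0.
    coefficients[0] = b0 / a0;
    coefficients[1] = b1 / a0;
    coefficients[2] = b2 / a0;
    coefficients[3] = a1 / a0;
    coefficients[4] = a2 / a0;
}
Calling this once per centre frequency in the list above would give the ten coefficient sets, one per vDSP_deq22 pass.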

Error with GL_TRIANGLE_STRIP and vertex array

I have this method that prepares the coordinates in the posCoords array. It works properly about 30% of the time; the other 70% of the time the first few triangles in the grid are messed up.
The entire grid is drawn using GL_TRIANGLE_STRIP.
I'm pulling my hair out trying to figure out what's wrong. Any ideas?
if(!ES2) {
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
}

int cols = floor(SCREEN_WIDTH/blockSize);
int rows = floor(SCREEN_HEIGHT/blockSize);
int cells = cols*rows;
NSLog(@"Cells: %i", cells);

coordCount = /*Points per coordinate*/2 * /*Coordinates per cell*/ 2 * cells + /* additional coord per row */2*2*rows;
NSLog(@"Coord count: %i", coordCount);

if(texCoords) free(texCoords);
if(posCoords) free(posCoords);
if(dposCoords) free(dposCoords);

texCoords = malloc(sizeof(GLfloat)*coordCount);
posCoords = malloc(sizeof(GLfloat)*coordCount);
dposCoords = malloc(sizeof(GLfloat)*coordCount);

int index = 0;
float lowY, hiY = 0;
int x, y = 0;
BOOL drawLeftToRight = YES;

for(y=0; y<SCREEN_HEIGHT; y+=blockSize) {
    lowY = y;
    hiY = y + blockSize;

    // Draw a single row
    for(x=0; x<=SCREEN_WIDTH; x+=blockSize) {
        CGFloat px, py, px2, py2 = 0;

        // Top point of triangle
        if(drawLeftToRight) {
            px = x;
            py = lowY;

            // Bottom point of triangle
            px2 = x;
            py2 = hiY;
        }
        else {
            px = SCREEN_WIDTH-x;
            py = lowY;

            // Bottom point of triangle
            px2 = SCREEN_WIDTH-x;
            py2 = hiY;
        }

        // Top point of triangle
        posCoords[index] = px;
        posCoords[index+1] = py;

        // Bottom point of triangle
        posCoords[index+2] = px2;
        posCoords[index+3] = py2;

        texCoords[index] = px/SCREEN_WIDTH;
        texCoords[index+1] = py/SCREEN_HEIGHT;
        texCoords[index+2] = px2/SCREEN_WIDTH;
        texCoords[index+3] = py2/SCREEN_HEIGHT;

        index += 4;
    }
    drawLeftToRight = !drawLeftToRight;
}
With a triangle strip, the last vertex you add replaces the oldest vertex used, so you're using bad vertices along the edge. It's easier to explain with your drawing.
Triangle 1 uses vertices 1, 2, 3 - valid triangle
Triangle 2 uses vertices 2, 3, 4 - valid triangle
Triangle 3 uses vertices 4, 5, 6 - valid triangle
Triangle 4 uses vertices 5, 6, 7 - straight line, nothing will be drawn
Triangle 5 uses vertices 6, 7, 8 - valid
etc.
If you want your strips to work, you'll need to pad your strips with degenerate triangles or break your strips up.
I tend to draw left to right and at the end of the row add a degenerate triangle, then left to right again.
e.g. [1, 2, 3, 4, 5, 6; 6, 10; 10, 11, 8, 9, 6, 7]
The middle part is the degenerate padding (i.e. triangles of zero area).
Also, if I had to take a guess at why you are seeing various kinds of corruption, I'd check to make sure that your vertices and indices are exactly what you expect them to be - normally you see that kind of corruption when you don't specify indices correctly.
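To make the padding idea concrete, here is a minimal C-style sketch (my own illustration, using a hypothetical emitVertex helper rather than the posCoords/texCoords arrays above): after finishing a row, repeat the last vertex of that row and the first vertex of the next row, which produces zero-area triangles that the GPU simply skips.
// Hypothetical helper that appends one vertex (x, y) to the strip.
void emitVertex(float x, float y);

// Build one left-to-right strip per row and stitch rows together with
// degenerate (zero-area) triangles instead of zig-zagging back.
void buildGrid(int cols, int rows, float blockSize)
{
    for (int row = 0; row < rows; row++) {
        float lowY = row * blockSize;
        float hiY  = lowY + blockSize;

        for (int col = 0; col <= cols; col++) {
            float x = col * blockSize;
            emitVertex(x, lowY);   // top point of the column
            emitVertex(x, hiY);    // bottom point of the column
        }

        if (row + 1 < rows) {
            // Degenerate stitch: repeat the last vertex of this row and the
            // first vertex of the next row; the resulting triangles have zero
            // area and are not rasterized.
            emitVertex(cols * blockSize, hiY);
            emitVertex(0.0f, hiY);
        }
    }
}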
Found the issue: the texture buffer was overflowing into the vertex buffer. It was random because some background tasks were shuffling memory around on a timer (sometimes).