How to do batching without UBOs? - optimization

I'm trying to implement batching for a WebGL renderer that is struggling with lots of small objects due to too many draw calls. My plan was to batch objects by the shader they use, then draw several at a time, uploading the material parameters and the model matrix for each object once in uniforms.
My problem is that the uniform size limits for non-UBO uniforms are extremely low, as in 256 floats low at a minimum. If my material uses, say, 8 floats, and you factor in the model matrix, I barely have enough uniforms to draw 10 models in a single batch, which isn't really going to be enough.
Is there any hope to make this work without UBOs? Are textures an option? How are people doing batching without WebGL2 UBOs?
More details: I have no skinning or complex animations. I just have some shaders (diffuse, Cook-Torrance, whatever), and each model has different material settings for each shader, e.g. color, roughness, index of refraction, which can be changed dynamically by the user. Baking them into the vertex array isn't realistic because we have some high-poly data, users can switch shaders, and not all shaders have the same number of parameters; there are material maps as well, obviously. The geometry itself is static and just has a linear transform on each model. For the most part all meshes are different, so geometry instancing won't help a whole lot, but I can look at that later.
Thanks

I don't know that this is actually faster than lots of draw calls, but here is an approach that draws 4 models with a single draw call.
It works by adding an id per model: for every vertex of model #0 put a 0, for every vertex of model #1 put a 1, etc.
The model id is then used to index data in a texture. The easiest scheme is for the model id to choose the row of a texture; all the data for that model can then be pulled out of that row.
For WebGL1
attribute float modelId;
...
#define TEXTURE_WIDTH ??
#define COLOR_OFFSET ((0.0 + 0.5) / TEXTURE_WIDTH)
#define MATERIAL_OFFSET ((1.0 + 0.5) / TEXTURE_WIDTH)
float modelOffset = (modelId + .5) / textureHeight;
vec4 color = texture2D(perModelData, vec2(COLOR_OFFSET, modelOffset));
vec4 roughnessIndexOfRefraction = texture2D(perModelData,
vec2(MATERIAL_OFFSET, modelOffset));
etc..
As long as you are not drawing more than gl.getParameter(gl.MAX_TEXTURE_SIZE) models this will work. If you have more models than that, either use more draw calls or change the texture coordinate calculation so there's more than one model per row.
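To sketch what "more than one model per row" means, here is a JS helper (names like `modelsPerRow` and `slotsPerModel` are illustrative, not from the code below): each model gets a run of `slotsPerModel` pixels, and the id is split into a column and a row.

```javascript
// Hypothetical helper: map a modelId and a data slot (0 = color, 1 = material,
// etc.) to texel-center UV coordinates when several models share one row.
function modelSlotUV(modelId, slot, modelsPerRow, slotsPerModel, texWidth, texHeight) {
  const col = (modelId % modelsPerRow) * slotsPerModel + slot;
  const row = Math.floor(modelId / modelsPerRow);
  // +0.5 samples the center of the texel, matching the shader #defines above
  return [(col + 0.5) / texWidth, (row + 0.5) / texHeight];
}
```

The same arithmetic ports directly to GLSL with `mod` and `floor`, or to plain integer math with texelFetch in WebGL2.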
In WebGL2 you'd change the code to use texelFetch (which takes integer texel coordinates plus an explicit lod) and an integer attribute
in uint modelId;
...
#define COLOR_OFFSET 0
#define MATERIAL_OFFSET 1
vec4 color = texelFetch(perModelData, ivec2(COLOR_OFFSET, modelId), 0);
vec4 roughnessIndexOfRefraction = texelFetch(perModelData,
ivec2(MATERIAL_OFFSET, modelId), 0);
Here's an example of 4 models drawn with 1 draw call. For each model, the model matrix and color are stored in the texture.
const m4 = twgl.m4;
const v3 = twgl.v3;
const gl = document.querySelector('canvas').getContext('webgl');
const ext = gl.getExtension('OES_texture_float');
if (!ext) {
alert('need OES_texture_float');
}
const COMMON_STUFF = `
#define TEXTURE_WIDTH 5.0
#define MATRIX_ROW_0_OFFSET ((0. + 0.5) / TEXTURE_WIDTH)
#define MATRIX_ROW_1_OFFSET ((1. + 0.5) / TEXTURE_WIDTH)
#define MATRIX_ROW_2_OFFSET ((2. + 0.5) / TEXTURE_WIDTH)
#define MATRIX_ROW_3_OFFSET ((3. + 0.5) / TEXTURE_WIDTH)
#define COLOR_OFFSET ((4. + 0.5) / TEXTURE_WIDTH)
`;
const vs = `
attribute vec4 position;
attribute vec3 normal;
attribute float modelId;
uniform float textureHeight;
uniform sampler2D perModelDataTexture;
uniform mat4 projection;
uniform mat4 view;
varying vec3 v_normal;
varying float v_modelId;
${COMMON_STUFF}
void main() {
v_modelId = modelId; // pass to fragment shader
float modelOffset = (modelId + 0.5) / textureHeight;
// note: in WebGL2 better to use texelFetch
mat4 model = mat4(
texture2D(perModelDataTexture, vec2(MATRIX_ROW_0_OFFSET, modelOffset)),
texture2D(perModelDataTexture, vec2(MATRIX_ROW_1_OFFSET, modelOffset)),
texture2D(perModelDataTexture, vec2(MATRIX_ROW_2_OFFSET, modelOffset)),
texture2D(perModelDataTexture, vec2(MATRIX_ROW_3_OFFSET, modelOffset)));
gl_Position = projection * view * model * position;
v_normal = mat3(view) * mat3(model) * normal;
}
`;
const fs = `
precision highp float;
varying vec3 v_normal;
varying float v_modelId;
uniform float textureHeight;
uniform sampler2D perModelDataTexture;
uniform vec3 lightDirection;
${COMMON_STUFF}
void main() {
float modelOffset = (v_modelId + 0.5) / textureHeight;
vec4 color = texture2D(perModelDataTexture, vec2(COLOR_OFFSET, modelOffset));
float l = dot(lightDirection, normalize(v_normal)) * .5 + .5;
gl_FragColor = vec4(color.rgb * l, color.a);
}
`;
// compile shader, link, look up locations
const programInfo = twgl.createProgramInfo(gl, [vs, fs]);
// make some vertex data
const modelVerts = [
twgl.primitives.createSphereVertices(1, 6, 4),
twgl.primitives.createCubeVertices(1, 1, 1),
twgl.primitives.createCylinderVertices(1, 1, 10, 1),
twgl.primitives.createTorusVertices(1, .2, 16, 8),
];
// merge all the vertices into one
const arrays = twgl.primitives.concatVertices(modelVerts);
// fill an array so each vertex of each model has a modelId
const modelIds = new Uint16Array(arrays.position.length / 3);
let offset = 0;
modelVerts.forEach((verts, modelId) => {
const end = offset + verts.position.length / 3;
while(offset < end) {
modelIds[offset++] = modelId;
}
});
arrays.modelId = { numComponents: 1, data: modelIds };
// calls gl.createBuffer, gl.bindBuffer, gl.bufferData
const bufferInfo = twgl.createBufferInfoFromArrays(gl, arrays);
const numModels = modelVerts.length;
const tex = gl.createTexture();
const textureWidth = 5; // 4 pixels for the 4x4 matrix + 1 pixel for the color
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, textureWidth, numModels, 0, gl.RGBA, gl.FLOAT, null);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
// this data is for the texture, one row per model
// the first 4 pixels are the model matrix, the 5th pixel is the color
const perModelData = new Float32Array(textureWidth * numModels * 4);
const stride = textureWidth * 4;
const modelOffset = 0;
const colorOffset = 16;
// set the colors at init time
for (let modelId = 0; modelId < numModels; ++modelId) {
perModelData.set([r(), r(), r(), 1], modelId * stride + colorOffset);
}
function r() {
return Math.random();
}
function render(time) {
time *= 0.001; // seconds
twgl.resizeCanvasToDisplaySize(gl.canvas);
gl.viewport(0, 0, gl.canvas.width, gl.canvas.height);
gl.enable(gl.DEPTH_TEST);
gl.enable(gl.CULL_FACE);
const fov = Math.PI * 0.25;
const aspect = gl.canvas.clientWidth / gl.canvas.clientHeight;
const near = 0.1;
const far = 20;
const projection = m4.perspective(fov, aspect, near, far);
const eye = [0, 0, 10];
const target = [0, 0, 0];
const up = [0, 1, 0];
const camera = m4.lookAt(eye, target, up);
const view = m4.inverse(camera);
// set the matrix for each model in the texture data
const mat = m4.identity();
for (let modelId = 0; modelId < numModels; ++modelId) {
const t = time * (modelId + 1) * 0.3;
m4.identity(mat);
m4.rotateX(mat, t, mat);
m4.rotateY(mat, t, mat);
m4.translate(mat, [0, 0, Math.sin(t * 1.1) * 4], mat);
m4.rotateZ(mat, t, mat);
perModelData.set(mat, modelId * stride + modelOffset);
}
// upload the texture data
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texSubImage2D(gl.TEXTURE_2D, 0, 0, 0, textureWidth, numModels,
gl.RGBA, gl.FLOAT, perModelData);
gl.useProgram(programInfo.program);
// calls gl.bindBuffer, gl.enableVertexAttribArray, gl.vertexAttribPointer
twgl.setBuffersAndAttributes(gl, programInfo, bufferInfo);
// calls gl.activeTexture, gl.bindTexture, gl.uniformXXX
twgl.setUniforms(programInfo, {
lightDirection: v3.normalize([1, 2, 3]),
perModelDataTexture: tex,
textureHeight: numModels,
projection,
view,
});
// calls gl.drawArrays or gl.drawElements
twgl.drawBufferInfo(gl, bufferInfo);
requestAnimationFrame(render);
}
requestAnimationFrame(render);
body { margin: 0; }
canvas { width: 100vw; height: 100vh; display: block; }
<script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>
<canvas></canvas>
Here's 2000 models in one draw call
https://jsfiddle.net/greggman/g2tcadho/

Related

Vulkan line drawing not showing up

So I would like to render the bounding box of a selected object. I have a buffer that stores the 8 corner points and another buffer that stores the indices for line-strip drawing of those points to make a box. I captured a frame with RenderDoc and I can see the yellow bounding box.
So it looks like the values are correct and the box is being drawn, but I do not see it in the final render to the screen. Anyone have an idea what I might be missing here?
VkDescriptorSetLayoutBinding cameraBind = vkinit::descriptorset_layout_binding(VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, VK_SHADER_STAGE_VERTEX_BIT, 0);
VkDescriptorSetLayoutCreateInfo setinfo = {};
setinfo.bindingCount = 1;
setinfo.flags = 0;
setinfo.pNext = nullptr;
setinfo.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO;
setinfo.pBindings = &cameraBind;
vkCreateDescriptorSetLayout(_device, &setinfo, nullptr, &_globalSetLayout);
//setup push constant
VkPushConstantRange push_constant;
push_constant.offset = 0;
push_constant.size = sizeof(glm::mat4);
push_constant.stageFlags = VK_SHADER_STAGE_VERTEX_BIT;
VkPipelineLayoutCreateInfo mesh_pipeline_layout_info = vkinit::pipeline_layout_create_info();
mesh_pipeline_layout_info.setLayoutCount = 1;
mesh_pipeline_layout_info.pSetLayouts = &_globalSetLayout;
mesh_pipeline_layout_info.pPushConstantRanges = &push_constant;
mesh_pipeline_layout_info.pushConstantRangeCount = 1;
vkCreatePipelineLayout(_device, &mesh_pipeline_layout_info, nullptr, &_highlightPipelineLayout);
PipelineBuilder highlightPipelineBuilder;
VertexInputDescription description;
VkVertexInputBindingDescription mainBinding = {};
mainBinding.binding = 0;
mainBinding.stride = sizeof(glm::vec3);
mainBinding.inputRate = VK_VERTEX_INPUT_RATE_VERTEX;
description.bindings.push_back(mainBinding);
//Position will be stored at Location 0
VkVertexInputAttributeDescription positionAttribute = {};
positionAttribute.binding = 0;
positionAttribute.location = 0;
positionAttribute.format = VK_FORMAT_R32G32B32_SFLOAT;
positionAttribute.offset = 0;
description.attributes.push_back(positionAttribute);
highlightPipelineBuilder._pipelineLayout = _highlightPipelineLayout;
highlightPipelineBuilder.vertexDescription = description;
highlightPipelineBuilder._inputAssembly = vkinit::input_assembly_create_info(VK_PRIMITIVE_TOPOLOGY_LINE_STRIP);
highlightPipelineBuilder._rasterizer = vkinit::rasterization_state_create_info(VK_POLYGON_MODE_LINE);
highlightPipelineBuilder._depthStencil = vkinit::depth_stencil_create_info(true, true, VK_COMPARE_OP_ALWAYS);
ShaderEffect* lineEffect = new ShaderEffect();
lineEffect->add_stage(_shaderCache.get_shader(shader_path("wk_highlight.vert.spv")), VK_SHADER_STAGE_VERTEX_BIT);
lineEffect->add_stage(_shaderCache.get_shader(shader_path("wk_highlight.frag.spv")), VK_SHADER_STAGE_FRAGMENT_BIT);
lineEffect->reflect_layout(_device, nullptr, 0);
highlightPipelineBuilder.setShaders(lineEffect);
_highlightPipeline = highlightPipelineBuilder.build_pipeline(_device, _renderPass);
and here is the drawing part
vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, _highlightPipeline);
uint32_t camera_data_offset = _dynamicData.push(_camera.matrices);
VkDescriptorBufferInfo camInfo = _dynamicData.source.get_info();
camInfo.range = sizeof(GPUCameraData);
camInfo.offset = camera_data_offset;
VkDescriptorSet cameraSet;
vkutil::DescriptorBuilder::begin(_descriptorLayoutCache, _dynamicDescriptorAllocator)
.bind_buffer(0, &camInfo, VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, VK_SHADER_STAGE_VERTEX_BIT)
.build(cameraSet);
vkCmdPushConstants(cmd, _highlightPipelineLayout, VK_SHADER_STAGE_VERTEX_BIT, 0, sizeof(glm::mat4), &_selectedMatrix);
VkDeviceSize offset = 0;
vkCmdBindVertexBuffers(cmd, 0, 1, &_highlightVertexBuffer._buffer, &offset);
vkCmdBindIndexBuffer(cmd, _highlightIndexBuffer._buffer, 0, VK_INDEX_TYPE_UINT16);
vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, _highlightPipelineLayout, 0, 1, &cameraSet, 0, nullptr);
vkCmdDrawIndexed(cmd, 16, 1, 0, 0, 0);
And the shaders
#version 450
layout (location = 0) in vec3 vPosition;
layout(set = 0, binding = 0) uniform CameraBuffer{
mat4 view;
mat4 proj;
mat4 viewproj;
} cameraData;
layout(push_constant) uniform PushConstants {
mat4 matrix;
} pushConstants;
void main()
{
mat4 modelMatrix = pushConstants.matrix;
mat4 transformMatrix = (cameraData.viewproj * modelMatrix);
gl_Position = transformMatrix * vec4(vPosition, 1.0f);
}
fragment shader
//glsl version 4.5
#version 450
//output write
layout (location = 0) out vec4 outFragColor;
void main()
{
outFragColor = vec4(1.0, 1.0, 0.0, 1.0);
}

Vulkan compute shader. Smooth uv coordinates

I have this shader:
#version 450
layout(binding = 0) buffer b0 {
vec2 src[ ];
};
layout(binding = 1) buffer b1 {
vec2 dst[ ];
};
layout(binding = 2) buffer b2 {
int index[ ];
};
layout (local_size_x = 1, local_size_y = 1, local_size_z = 1) in;
void main()
{
int ind =int(gl_GlobalInvocationID.x);
vec2 norm;
norm=src[index[ind*3+2]]-src[index[ind*3]]+src[index[ind*3+1]]-src[index[ind*3]];
norm/=2.0;
dst[index[ind*3]] +=norm;
norm=src[index[ind*3+0]]-src[index[ind*3+1]]+src[index[ind*3+2]]-src[index[ind*3+1]];
norm/=2.0;
dst[index[ind*3+1]] +=norm;
norm=src[index[ind*3+1]]-src[index[ind*3+2]]+src[index[ind*3+0]]-src[index[ind*3+2]];
norm/=2.0;
dst[index[ind*3+2]] +=norm;
}
Because the dst buffer is not "atomic", the summation is incorrect.
Is there any way to solve this problem? My guess is no, but maybe I missed something.
For each vertex in a polygon I am calculating a vector from the vertex to the center of the polygon. Different polygons share the same vertices.
dst is the vertex buffer that receives the sum of those vertex-to-polygon-center shifts.
Each time I run it I get different results.
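For reference, here is a serial sketch (plain JS, using the same arithmetic as the shader, including the divide-by-2) of the accumulation the shader attempts. Run sequentially the result is deterministic, which is exactly what the concurrent `+=` on `dst` is not:

```javascript
// Serial version of the per-triangle accumulation: for each corner v of each
// triangle, add ((q - v) + (p - v)) / 2 into dst[v], where p and q are the
// other two corners. src is an array of [x, y] pairs; index lists triangles.
function accumulateShifts(src, index) {
  const dst = src.map(() => [0, 0]);
  for (let t = 0; t < index.length; t += 3) {
    const tri = [index[t], index[t + 1], index[t + 2]];
    for (let k = 0; k < 3; ++k) {
      const v = tri[k], p = tri[(k + 1) % 3], q = tri[(k + 2) % 3];
      dst[v][0] += (src[q][0] - src[v][0] + src[p][0] - src[v][0]) / 2;
      dst[v][1] += (src[q][1] - src[v][1] + src[p][1] - src[v][1]) / 2;
    }
  }
  return dst;
}
```

On the GPU the usual fixes are to invert the loop (one invocation per vertex gathers contributions from all of its triangles, so each invocation writes only its own `dst` entry) or to store the coordinates as fixed-point integers and accumulate with atomicAdd.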

WebGL-moving the object in the line of sight

I defined my model-view matrix with a lookAt function that takes the eye of the camera, the position of the object I'm representing, and the "up" vector of the camera. How can I move the object along the camera's line of sight? Any tips? If I define the vector that points from the eye of the camera to the object's position (i.e. the line of sight), how can I use it to move the object along this direction?
This is my lookAt function
function lookAt( eye, at, up )
{
if ( !Array.isArray(eye) || eye.length != 3) {
throw "lookAt(): first parameter [eye] must be a vec3";
}
if ( !Array.isArray(at) || at.length != 3) {
throw "lookAt(): second parameter [at] must be a vec3";
}
if ( !Array.isArray(up) || up.length != 3) {
throw "lookAt(): third parameter [up] must be a vec3";
}
if ( equal(eye, at) ) {
return mat4();
}
var v = normalize( subtract(at, eye) ); // view direction vector
var n = normalize( cross(v, up) ); // perpendicular vector
var u = normalize( cross(n, v) ); // "new" up vector
v = negate( v );
var result = mat4(
vec4( n, -dot(n, eye) ),
vec4( u, -dot(u, eye) ),
vec4( v, -dot(v, eye) ),
vec4()
);
return result;
}
Honestly I don't understand your lookAt function. It's not setting a translation like most lookAt functions.
Here's a different lookAt function that generates a camera matrix, a matrix that positions the camera in the world. That's in contrast to a lookAt function that generates a view matrix, a matrix that moves everything in the world in front of the camera.
function lookAt(eye, target, up) {
const zAxis = v3.normalize(v3.subtract(eye, target));
const xAxis = v3.normalize(v3.cross(up, zAxis));
const yAxis = v3.normalize(v3.cross(zAxis, xAxis));
return [
...xAxis, 0,
...yAxis, 0,
...zAxis, 0,
...eye, 1,
];
}
Here's an article with some detail about a lookAt matrix.
I find camera matrices more useful than view matrices because a camera matrix (or a lookAt matrix) can be used to make heads look at other things: gun turrets look at targets, eyes look at points of interest, whereas a view matrix can pretty much only be used for one thing. You can get one from the other by taking the inverse. But since a scene with turrets, eyes, and character heads tracking things might need 50+ lookAt matrices, it seems far more useful to generate that kind of matrix and take 1 inverse for the view matrix than to generate 50+ view matrices and have to invert all but 1 of them.
You can move any object relative to the way the camera is facing by taking an axis of the camera matrix and multiplying by some scalar. The xAxis will move left and right perpendicular to the camera, the yAxis up and down perpendicular to the camera, and the zAxis forward/backward in the direction the camera is facing.
The axes of the camera matrix are
+----+----+----+----+
| xx | xy | xz | | xaxis
+----+----+----+----+
| yx | yy | yz | | yaxis
+----+----+----+----+
| zx | zy | zz | | zaxis
+----+----+----+----+
| tx | ty | tz | | translation
+----+----+----+----+
In other words
const camera = lookAt(eye, target, up);
const xaxis = camera.slice(0, 3);
const yaxis = camera.slice(4, 7);
const zaxis = camera.slice(8, 11);
Now you can translate forward or back with
matrix = mult(matrix, translate(zaxis)); // moves 1 unit away from the camera
Multiply zaxis by the amount you want to move
moveVec = [zaxis[0] * moveAmount, zaxis[1] * moveAmount, zaxis[2] * moveAmount];
matrix = mult(matrix, translate(moveVec)); // moves moveAmount units away from the camera
Or if you have your translation stored elsewhere just add the zaxis in
// assuming tx, ty, and tz are our translation
tx += zaxis[0] * moveAmount;
ty += zaxis[1] * moveAmount;
tz += zaxis[2] * moveAmount;
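That last variant can be sketched as a tiny JS helper (the function name and the column-major 16-element layout are assumptions matching the lookAt above):

```javascript
// Illustrative helper: nudge a position along the camera's z axis.
// `camera` is a 16-element matrix laid out as above (x/y/z axis rows, then
// translation); a positive moveAmount moves away from where the camera faces.
function moveAlongViewAxis(position, camera, moveAmount) {
  const zaxis = camera.slice(8, 11);
  return [
    position[0] + zaxis[0] * moveAmount,
    position[1] + zaxis[1] * moveAmount,
    position[2] + zaxis[2] * moveAmount,
  ];
}
```

Swapping `slice(8, 11)` for `slice(0, 3)` or `slice(4, 7)` moves sideways or up/down relative to the camera instead.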
const vs = `
uniform mat4 u_worldViewProjection;
attribute vec4 position;
attribute vec2 texcoord;
varying vec4 v_position;
varying vec2 v_texcoord;
void main() {
v_texcoord = texcoord;
gl_Position = u_worldViewProjection * position;
}
`;
const fs = `
precision mediump float;
varying vec2 v_texcoord;
uniform sampler2D u_texture;
void main() {
gl_FragColor = texture2D(u_texture, v_texcoord);
}
`;
"use strict";
const m4 = twgl.m4;
const v3 = twgl.v3;
const gl = document.getElementById("c").getContext("webgl");
// compiles shaders, links program, looks up locations
const programInfo = twgl.createProgramInfo(gl, [vs, fs]);
// calls gl.createBuffer, gl.bindBuffer, gl.bufferData for positions, texcoords
const bufferInfo = twgl.primitives.createCubeBufferInfo(gl);
// calls gl.createTexture, gl.bindTexture, gl.texImage2D, gl.texParameteri
const tex = twgl.createTexture(gl, {
min: gl.NEAREST,
mag: gl.NEAREST,
src: [
255, 64, 64, 255,
64, 192, 64, 255,
64, 64, 255, 255,
255, 224, 64, 255,
],
});
const settings = {
xoff: 0,
yoff: 0,
zoff: 0,
};
function render(time) {
time *= 0.001;
twgl.resizeCanvasToDisplaySize(gl.canvas);
gl.viewport(0, 0, gl.canvas.width, gl.canvas.height);
gl.enable(gl.DEPTH_TEST);
gl.enable(gl.CULL_FACE);
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
const fov = 45 * Math.PI / 180;
const aspect = gl.canvas.clientWidth / gl.canvas.clientHeight;
const zNear = 0.01;
const zFar = 100;
const projection = m4.perspective(fov, aspect, zNear, zFar);
const eye = [3, 4, -6];
const target = [0, 0, 0];
const up = [0, 1, 0];
const camera = m4.lookAt(eye, target, up);
const view = m4.inverse(camera);
const viewProjection = m4.multiply(projection, view);
gl.useProgram(programInfo.program);
// calls gl.bindBuffer, gl.enableVertexAttribArray, gl.vertexAttribPointer
twgl.setBuffersAndAttributes(gl, programInfo, bufferInfo);
const t = time * .1;
for (let z = -1; z <= 1; ++z) {
for (let x = -1; x <= 1; ++x) {
const world = m4.identity();
m4.translate(world, v3.mulScalar(camera.slice(0, 3), settings.xoff), world);
m4.translate(world, v3.mulScalar(camera.slice(4, 7), settings.yoff), world);
m4.translate(world, v3.mulScalar(camera.slice(8, 11), settings.zoff), world);
m4.translate(world, [x * 1.4, 0, z * 1.4], world);
m4.rotateY(world, t + z + x, world);
// calls gl.uniformXXX
twgl.setUniforms(programInfo, {
u_texture: tex,
u_worldViewProjection: m4.multiply(viewProjection, world),
});
// calls gl.drawArrays or gl.drawElements
twgl.drawBufferInfo(gl, bufferInfo);
}
}
requestAnimationFrame(render);
}
requestAnimationFrame(render);
setupSlider("#xSlider", "#xoff", "xoff");
setupSlider("#ySlider", "#yoff", "yoff");
setupSlider("#zSlider", "#zoff", "zoff");
function setupSlider(sliderId, labelId, property) {
const slider = document.querySelector(sliderId);
const label = document.querySelector(labelId);
function updateLabel() {
label.textContent = settings[property].toFixed(2);
}
slider.addEventListener('input', e => {
settings[property] = (parseInt(slider.value) / 100 * 2 - 1) * 5;
updateLabel();
});
updateLabel();
slider.value = (settings[property] / 5 * .5 + .5) * 100;
}
body { margin: 0; }
canvas { display: block; width: 100vw; height: 100vh; }
#ui {
position: absolute;
left: 10px;
top: 10px;
z-index: 2;
background: rgba(255, 255, 255, 0.9);
padding: .5em;
}
<script src="https://twgljs.org/dist/3.x/twgl-full.min.js"></script>
<canvas id="c"></canvas>
<div id="ui">
<div><input id="xSlider" type="range" min="0" max="100"/><label>xoff: <span id="xoff"></span></label></div>
<div><input id="ySlider" type="range" min="0" max="100"/><label>yoff: <span id="yoff"></span></label></div>
<div><input id="zSlider" type="range" min="0" max="100"/><label>zoff: <span id="zoff"></span></label></div>
</div>

Drawing a Sphere and Calculating Surface Normals

I'm trying to draw a sphere and calculate its surface normals. I've been staring at this for hours, but I'm getting nowhere. Here is a screenshot of the mess that this draws:
- (id) init
{
if (self = [super init]) {
glGenVertexArraysOES(1, &_vertexArray);
glBindVertexArrayOES(_vertexArray);
glGenBuffers(1, &_vertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, _vertexBuffer);
GLfloat rad_th, rad_ph;
GLint th, ph;
GLint i = 0;
GLKMatrix3 this_triangle;
GLKVector3 column0, column1, column2, this_normal;
for (ph=-90; ph<=90; ph++) {
for (th=0; th<=360; th+=10) {
if (i<3) printf("i: %d th: %f ph: %f\n", i, (float)th, (float)ph);
rad_th = GLKMathDegreesToRadians( (float) th );
rad_ph = GLKMathDegreesToRadians( (float) ph);
_vertices[i][0][0] = sinf(rad_th)*cosf(rad_ph);
_vertices[i][0][1] = sinf(rad_ph);
_vertices[i][0][2] = cosf(rad_th)*cosf(rad_ph);
rad_th = GLKMathDegreesToRadians( (float) (th) );
rad_ph = GLKMathDegreesToRadians( (float) (ph+1) );
_vertices[i+1][0][0] = sinf(rad_th)*cosf(rad_ph);
_vertices[i+1][0][1] = sinf(rad_ph);
_vertices[i+1][0][2] = cosf(rad_th)*cosf(rad_ph);
i+=2;
}
}
// calculate and store the surface normal for every triangle
i=2;
for (ph=-90; ph<=90; ph++) {
for (th=2; th<=360; th++) {
// note that the first two vertices are irrelevant since it isn't until the third vertex that a triangle is defined.
column0 = GLKVector3Make(_vertices[i-2][0][0], _vertices[i-2][0][1], _vertices[i-2][0][2]);
column1 = GLKVector3Make(_vertices[i-1][0][0], _vertices[i-1][0][1], _vertices[i-1][0][2]);
column2 = GLKVector3Make(_vertices[i-0][0][0], _vertices[i-0][0][1], _vertices[i-0][0][2]);
this_triangle = GLKMatrix3MakeWithColumns(column0, column1, column2);
this_normal = [self calculateTriangleSurfaceNormal : this_triangle];
_vertices[i][1][0] = this_normal.x;
_vertices[i][1][1] = this_normal.y;
_vertices[i][1][2] = this_normal.z;
i++;
}
i+=2;
}
glBufferData(GL_ARRAY_BUFFER, sizeof(_vertices), _vertices, GL_STATIC_DRAW);
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, sizeof(GLfloat)*6, NULL);
glEnableVertexAttribArray(GLKVertexAttribNormal);
glVertexAttribPointer(GLKVertexAttribNormal, 3, GL_FLOAT, GL_FALSE, sizeof(GLfloat)*6, (GLubyte*)(sizeof(GLfloat)*3));
glBindVertexArrayOES(0);
}
return self;
}
- (void) render
{
glDrawArrays(GL_TRIANGLE_STRIP, 0, 65522);
}
Here is my surface normal calculation. I've used this elsewhere, so I believe that it works, if given the correct vertices, of course.
- (GLKVector3) calculateTriangleSurfaceNormal : (GLKMatrix3) triangle_vertices
{
GLKVector3 surfaceNormal;
GLKVector3 col0 = GLKMatrix3GetColumn(triangle_vertices, 0);
GLKVector3 col1 = GLKMatrix3GetColumn(triangle_vertices, 1);
GLKVector3 col2 = GLKMatrix3GetColumn(triangle_vertices, 2);
GLKVector3 vec1 = GLKVector3Subtract(col1, col0);
GLKVector3 vec2 = GLKVector3Subtract(col2, col0);
surfaceNormal.x = vec1.y * vec2.z - vec2.y * vec1.z;
surfaceNormal.y = vec1.z * vec2.x - vec2.z * vec1.x;
surfaceNormal.z = vec1.x * vec2.y - vec2.x * vec1.y;
return GLKVector3Normalize(surfaceNormal);
}
In my .h file, I define the _vertices array like this (laugh if you will...):
// 360 + 2 = 362 vertices per triangle strip
// 90 strips per hemisphere (one hemisphere has 91)
// 2 hemispheres
// 362 * 90.5 * 2 = 65522
GLfloat _vertices[65522][2][3]; //2 sets (vertex, normal) and 3 vertices in each set
It appears you are calculating normals for the triangles in your triangle strip, but assigning these normals to vertices which are shared by multiple triangles. If you used plain triangles instead of a triangle strip, and gave all three vertices of each triangle that triangle's normal, each triangle would have an appropriate face normal. That value is really only correct at the center of the triangle, though. You would be better off using vertex normals, which, as Christian mentioned, are equal to the (normalized) vertex positions in this case; these can be interpolated across the triangles.
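To make the per-triangle approach concrete: a face normal is just the normalized cross product of two edges, computed once per triangle and written to all three of that triangle's vertices (sketched in JS for brevity; the document's calculateTriangleSurfaceNormal does the same thing with GLKit):

```javascript
// Face normal of triangle (a, b, c): normalize(cross(b - a, c - a)).
// Each of a, b, c is an [x, y, z] array.
function faceNormal(a, b, c) {
  const u = [b[0] - a[0], b[1] - a[1], b[2] - a[2]];
  const v = [c[0] - a[0], c[1] - a[1], c[2] - a[2]];
  const n = [
    u[1] * v[2] - u[2] * v[1],
    u[2] * v[0] - u[0] * v[2],
    u[0] * v[1] - u[1] * v[0],
  ];
  const len = Math.hypot(n[0], n[1], n[2]);
  return n.map(x => x / len);
}
```

For a unit sphere centered at the origin the smooth vertex normal is simply the vertex position itself, so no cross products are needed at all.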

2nd order IIR filter, coefficients for a butterworth bandpass (EQ)?

Important update: I already figured out the answers and put them in this simple open-source library: http://bartolsthoorn.github.com/NVDSP/ Check it out; it will probably save you quite some time if you're having trouble with audio filters on iOS!
^
I have created a (realtime) audio buffer (float *data) that holds a few sin(theta) waves with different frequencies.
The code below shows how I created my buffer, and I've tried to do a bandpass filter but it just turns the signals to noise/blips:
// Multiple signal generator
__block float *phases = nil;
[audioManager setOutputBlock:^(float *data, UInt32 numFrames, UInt32 numChannels)
{
float samplingRate = audioManager.samplingRate;
NSUInteger activeSignalCount = [tones count];
// Initialize phases
if (phases == nil) {
phases = new float[10];
for(int z = 0; z < 10; z++) {
phases[z] = 0.0;
}
}
// Multiple signals
NSEnumerator * enumerator = [tones objectEnumerator];
id frequency;
UInt32 c = 0;
while(frequency = [enumerator nextObject])
{
for (int i=0; i < numFrames; ++i)
{
for (int iChannel = 0; iChannel < numChannels; ++iChannel)
{
float theta = phases[c] * M_PI * 2;
if (c == 0) {
data[i*numChannels + iChannel] = sin(theta);
} else {
data[i*numChannels + iChannel] = data[i*numChannels + iChannel] + sin(theta);
}
}
phases[c] += 1.0 / (samplingRate / [frequency floatValue]);
if (phases[c] > 1.0) phases[c] = -1;
}
c++;
}
// Normalize data with active signal count
float signalMulti = 1.0 / (float(activeSignalCount) * (sqrt(2.0)));
vDSP_vsmul(data, 1, &signalMulti, data, 1, numFrames*numChannels);
// Apply master volume
float volume = masterVolumeSlider.value;
vDSP_vsmul(data, 1, &volume, data, 1, numFrames*numChannels);
if (fxSwitch.isOn) {
// H(s) = (s/Q) / (s^2 + s/Q + 1)
// http://www.musicdsp.org/files/Audio-EQ-Cookbook.txt
// BW 2.0 Q 0.667
// http://www.rane.com/note170.html
//The order of the coefficients are, B1, B2, A1, A2, B0.
float Fs = samplingRate;
float omega = 2*M_PI*Fs; // w0 = 2*pi*f0/Fs
float Q = 0.50f;
float alpha = sin(omega)/(2*Q); // sin(w0)/(2*Q)
// Through H
for (int i=0; i < numFrames; ++i)
{
for (int iChannel = 0; iChannel < numChannels; ++iChannel)
{
data[i*numChannels + iChannel] = (data[i*numChannels + iChannel]/Q) / (pow(data[i*numChannels + iChannel],2) + data[i*numChannels + iChannel]/Q + 1);
}
}
float b0 = alpha;
float b1 = 0;
float b2 = -alpha;
float a0 = 1 + alpha;
float a1 = -2*cos(omega);
float a2 = 1 - alpha;
float *coefficients = (float *) calloc(5, sizeof(float));
coefficients[0] = b1;
coefficients[1] = b2;
coefficients[2] = a1;
coefficients[3] = a2;
coefficients[4] = b0;
vDSP_deq22(data, 2, coefficients, data, 2, numFrames);
free(coefficients);
}
// Measure dB
[self measureDB:data:numFrames:numChannels];
}];
My aim is to make a 10-band EQ for this buffer, using vDSP_deq22, the syntax of the method is:
vDSP_deq22(<float *vDSP_A>, <vDSP_Stride vDSP_I>, <float *vDSP_B>, <float *vDSP_C>, <vDSP_Stride vDSP_K>, <vDSP_Length __vDSP_N>)
See: http://developer.apple.com/library/mac/#documentation/Accelerate/Reference/vDSPRef/Reference/reference.html#//apple_ref/doc/c_ref/vDSP_deq22
Arguments:
float *vDSP_A is the input data
float *vDSP_B are 5 filter coefficients
float *vDSP_C is the output data
I have to make 10 filters (10 times vDSP_deq22). Then I set the gain for every band and combine them back together. But what coefficients do I feed every filter? I know vDSP_deq22 is a 2nd order (butterworth) IIR filter, but how do I turn this into a bandpass?
Now I have three questions:
a) Do I have to de-interleave and re-interleave the audio buffer? I know setting the stride to 2 filters just one channel, but how do I filter the other? A stride of 1 will process both channels as one.
b) Do I have to transform/process the buffer before it enters the vDSP_deq22 method? If so, do I also have to transform it back to normal?
c) What values of the coefficients should I set to the 10 vDSP_deq22s?
I've been trying for days now but I haven't been able to figure this one out, please help me out!
Your omega value needs to be normalised, i.e. expressed as a fraction of Fs. It looks like you left out the f0 when you calculated omega, which will make alpha wrong too:
float omega = 2*M_PI*Fs; // w0 = 2*pi*f0/Fs
should probably be:
float omega = 2*M_PI*f0/Fs; // w0 = 2*pi*f0/Fs
where f0 is the centre frequency in Hz.
For your 10 band equaliser you'll need to pick 10 values of f0, spaced logarithmically, e.g. 25 Hz, 50 Hz, 100 Hz, 200 Hz, 400 Hz, 800 Hz, 1.6 kHz, 3.2 kHz, 6.4 kHz, 12.8 kHz.
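Putting the corrected omega together with the cookbook's constant-0dB-peak-gain bandpass, the coefficients for one band might be computed like this (a sketch in JS for clarity; the [b0, b1, b2, a1, a2] ordering and the divide-by-a0 normalization are what vDSP_deq22's difference equation appears to expect, but double-check against Apple's current docs before relying on it):

```javascript
// RBJ Audio-EQ-Cookbook bandpass (constant 0 dB peak gain), normalized by a0
// because vDSP_deq22 has no a0 slot in its 5-coefficient array.
function bandpassCoefficients(f0, Fs, Q) {
  const w0 = 2 * Math.PI * f0 / Fs;       // normalized center frequency
  const alpha = Math.sin(w0) / (2 * Q);
  const a0 = 1 + alpha;
  return [
    alpha / a0,                // b0
    0,                         // b1
    -alpha / a0,               // b2
    (-2 * Math.cos(w0)) / a0,  // a1
    (1 - alpha) / a0,          // a2
  ];
}
```

Call it once per band with the ten logarithmically spaced f0 values above, run vDSP_deq22 per band, then scale each band's output by its gain and sum.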