I started developing my own path tracing program with OpenCL on the GPU (for speed). I easily created a path tracer with a basic material using standard properties (metallic, specular, refraction chance, emission, etc.).
What I am interested in now is the option to create more complex materials, something like Blender's shader graph where you can, for example, use a Fresnel function.
I imagine this as a function per material (stop me if this is a horrible approach) that returns basic information about the point where the ray landed (base color, metallic, ...). It is some sort of sampler for the object's material.
This would be easy to do with C++ on the CPU. But OpenCL doesn't support virtual functions, and creating a big switch over every material's function is, to my knowledge, really bad for GPU performance.
Does anyone have some experience with material systems, or just a small tip to get started? I would be grateful for any answer.
This is my C++ CPU approach:
struct MaterialProp{
    float3 baseColor;
    float metallic;
    float specular;
    // ETC.
};
struct Material {
    virtual MaterialProp sampleMaterial(Ray ray, ... ){
        MaterialProp prop;
        prop.baseColor = float3(0.0f, 0.0f, 0.0f);
        prop.metallic = 0.0f;
        prop.specular = 0.0f;
        ...
        return prop;
    }
};
// Brand new material
struct NewMaterial : public Material{
    Texture tex;
    virtual MaterialProp sampleMaterial(Ray ray, ... ){
        MaterialProp prop;
        prop.baseColor = sampleTex(tex);
        prop.metallic = fresnel;
        prop.specular = 0.0f;
        ...
        return prop;
    }
};
Color trace(Ray ray){
    Object* hitObj = intersectScene(ray);
    if(hitObj){
        // hitObj is a pointer, and the material is accessed through a pointer
        // so the virtual call dispatches correctly
        MaterialProp prop = hitObj->material->sampleMaterial(ray);
        // Do other stuff with the properties;
        // return color;
    }else{
        return Color(0.0f, 0.0f, 0.0f);
    }
}
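For comparison, the switch-based dispatch I'd like to avoid on the GPU side would look roughly like this (just a sketch reusing MaterialProp from above; the enum values and fields are made up):

// The per-material switch I want to avoid in the kernel.
enum MaterialType { MAT_DIFFUSE, MAT_METAL, MAT_TEXTURED };

struct MaterialData {
    MaterialType type;
    float3 baseColor;
    float metallic;
    int textureIndex;
};

MaterialProp sampleMaterialSwitch(MaterialData mat /*, hit info ... */)
{
    MaterialProp prop;
    switch (mat.type) {
    case MAT_DIFFUSE:
        prop.baseColor = mat.baseColor;
        prop.metallic = 0.0f;
        break;
    case MAT_METAL:
        prop.baseColor = mat.baseColor;
        prop.metallic = mat.metallic;
        break;
    case MAT_TEXTURED:
        // prop.baseColor = sampleTex(mat.textureIndex, ...);
        break;
    }
    return prop;
}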
When you create a BLAS (bottom level acceleration structures) you specify any number of vertex/index buffers to be part of the structure. How does that end up interacting with the shader and get specified in the descriptor set? How should I link these structures with materials?
How is texture mapping usually done with raytracing? I saw some sort of "materials table" in Q2RTX but the documentation is non-existent and the code is sparsely commented.
A common approach is to use a material buffer in combination with a texture array that is addressed in the shaders where you require the texture data. You then pass the material id, e.g. per-vertex or per-primitive, and use that to dynamically fetch the material, and with it the texture index. Due to the requirements for Vulkan ray tracing, you can simplify this by using the VK_EXT_descriptor_indexing extension (spec), which makes it possible to create a large descriptor set containing all textures required to render your scene.
The relevant shader parts:
// Enable required extension
...
#extension GL_EXT_nonuniform_qualifier : enable
// Material definition
struct Material {
    int albedoTextureIndex;
    int normalTextureIndex;
    ...
};
// Bindings
layout(binding = 6, set = 0) readonly buffer Materials { Material materials[]; };
layout(binding = 7, set = 0) uniform sampler2D[] textures;
...
// Usage
void main()
{
    Primitive primitive = unpackTriangle(gl_PrimitiveID, ...);
    Material material = materials[primitive.materialId];
    vec4 color = texture(textures[nonuniformEXT(material.albedoTextureIndex)], uv);
    ...
}
In your application you then create a buffer that stores the materials generated on the host, and bind it to the binding point of the shader.
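As a rough host-side sketch of that (plain Vulkan C API; buffer creation and host-visible memory allocation are assumed to have happened elsewhere, and binding 6 matches the shader above):

#include <vulkan/vulkan.h>
#include <cstdint>
#include <cstring>
#include <vector>

// Host-side mirror of the GLSL Material struct; keep the memory layout in
// sync with the buffer block's std430 rules.
struct Material {
    int32_t albedoTextureIndex;
    int32_t normalTextureIndex;
};

// Copies the material array into an already-created, host-visible storage
// buffer and points binding 6 of the descriptor set at it.
void uploadMaterials(VkDevice device, VkDeviceMemory materialMemory,
                     VkBuffer materialBuffer, VkDescriptorSet descriptorSet,
                     const std::vector<Material>& materials)
{
    VkDeviceSize size = sizeof(Material) * materials.size();

    void* mapped = nullptr;
    vkMapMemory(device, materialMemory, 0, size, 0, &mapped);
    std::memcpy(mapped, materials.data(), size);
    vkUnmapMemory(device, materialMemory);

    VkDescriptorBufferInfo bufferInfo{ materialBuffer, 0, VK_WHOLE_SIZE };

    VkWriteDescriptorSet write{};
    write.sType = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET;
    write.dstSet = descriptorSet;
    write.dstBinding = 6; // "Materials" binding in the shader
    write.descriptorCount = 1;
    write.descriptorType = VK_DESCRIPTOR_TYPE_STORAGE_BUFFER;
    write.pBufferInfo = &bufferInfo;
    vkUpdateDescriptorSets(device, 1, &write, 0, nullptr);
}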
For the textures, you pass them as an array of textures. An array texture would be an option too, but isn't as flexible due to its same-size-per-slice limitation. Note that the texture array in the example above is declared without a fixed size, which is made possible by VK_EXT_descriptor_indexing and is only allowed for the final binding in a descriptor set. This adds some flexibility to your setup.
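The extra wrinkle is the descriptor set layout: the unsized sampler2D[] binding must be the last one in the set and needs the descriptor-indexing binding flags. A sketch (maxTextures is an assumed upper bound):

#include <vulkan/vulkan.h>

// Set layout where binding 7 is a variable-count array of combined image
// samplers, as allowed by VK_EXT_descriptor_indexing.
VkDescriptorSetLayout createMaterialSetLayout(VkDevice device, uint32_t maxTextures)
{
    VkDescriptorSetLayoutBinding bindings[2] = {};
    bindings[0].binding = 6; // material buffer
    bindings[0].descriptorType = VK_DESCRIPTOR_TYPE_STORAGE_BUFFER;
    bindings[0].descriptorCount = 1;
    bindings[0].stageFlags = VK_SHADER_STAGE_ALL; // or just the ray tracing stages you use

    bindings[1].binding = 7; // texture array, must be the last binding in the set
    bindings[1].descriptorType = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER;
    bindings[1].descriptorCount = maxTextures; // upper bound only
    bindings[1].stageFlags = VK_SHADER_STAGE_ALL;

    VkDescriptorBindingFlagsEXT flags[2] = {
        0,
        VK_DESCRIPTOR_BINDING_VARIABLE_DESCRIPTOR_COUNT_BIT_EXT |
            VK_DESCRIPTOR_BINDING_PARTIALLY_BOUND_BIT_EXT
    };
    VkDescriptorSetLayoutBindingFlagsCreateInfoEXT flagsInfo{};
    flagsInfo.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_BINDING_FLAGS_CREATE_INFO_EXT;
    flagsInfo.bindingCount = 2;
    flagsInfo.pBindingFlags = flags;

    VkDescriptorSetLayoutCreateInfo layoutInfo{};
    layoutInfo.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO;
    layoutInfo.pNext = &flagsInfo;
    layoutInfo.bindingCount = 2;
    layoutInfo.pBindings = bindings;

    VkDescriptorSetLayout layout = VK_NULL_HANDLE;
    vkCreateDescriptorSetLayout(device, &layoutInfo, nullptr, &layout);
    return layout;
}

When allocating the set, the actual number of textures for that binding is then supplied through VkDescriptorSetVariableDescriptorCountAllocateInfoEXT.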
As for passing the material index that you fetch the data with: the easiest approach is to pass that information along with your vertex data, which you'll have to access/unpack in your shaders anyway:
struct Vertex {
    vec4 pos;
    vec4 normal;
    vec2 uv;
    vec4 color;
    int32_t materialIndex;
};
I've been building a program with Processing 3 the last several days (first time going back to Processing since Intro to Computer Science in 2009) and kept having this issue:
public class PolarMap {
    ...
    PVector[][] mapping = new PVector[width][height];
    PVector[][] cartesian = new PVector[width][height];
    PVector cart = new PVector();
    PVector polar = new PVector();

    /**
     * Maps every pixel on the cartesian plane to a polar coordinate
     * relative to some origin point.
     */
    public void Map(float originX, float originY){
        for (int x=0; x < width; x++){
            for (int y=0; y < height; y++){
                ...
                cart.add(x, y);
                polar.add(r, theta);
                mapping[x][y] = polar; ***
                cartesian[x][y] = cart;
            }
        }
    }
    ...
}
On the line with the ***, I would always get an ArrayIndexOutOfBounds exception. I searched SO, Reddit, and Processing's own documentation to figure out why. If you're not familiar with Processing, width and height are built-in variables equal to how many pixels across and high your canvas is, as declared in the setup() method (800x800 in my case). For some reason, both arrays were not being initialized to this value; instead, they were initializing to the default value of those variables: 100.
So, even though it made no sense, it was one of those times, and I tried declaring new variables:
int high = height;
int wide = width;
and initialized the array with those variables. And wouldn't you know it, that solved the problem. I now have two 800x800 arrays.
So here's my question: WHY were the built-in variables not working as expected when used to initialize the arrays, but did exactly what they were supposed to when assigned to a defined variable?
Think about when the width and height variables get their values. Consider this sample sketch:
int value = width;
void setup(){
size(500, 200);
println(value);
}
If you run this program, you'll see that it prints 100, even though the window is 500 pixels wide. This is because the int value = width; line is happening before the width is set!
For this to work how you'd expect, you have to set the value variable after the size() function is called. So you could do this:
int value;
void setup(){
size(500, 200);
value = width;
println(value);
}
Move any initializations to inside the setup() function, after the size() function is called, and you'll be fine.
I'm trying to follow an example from "3D Engine Design for Virtual Globes" by Cozzi and Ring.
I'm trying to use their vertex shader (11.2.2, p. 319), as it seems to provide exactly the starting point for what I need to accomplish (rendering terrain from dense, array-based terrain data):
in vec2 position;
uniform mat4 og_modelViewPerspectiveMatrix;
uniform sampler2DRect u_heightMap;
void main()
{
gl_Position = og_modelViewPerspectiveMatrix * vec4(position, texture(u_heightMap, position).r, 1.0);
}
The problem is that I'm not clear on how to set up the necessary data in the Objective-C client code. The build output shows:
TerrainShaderTest[10429:607] Shader compile log:
ERROR: 0:31: Invalid qualifiers 'in' in global variable context
ERROR: 0:33: 'sampler2DRect' : declaration must include a precision qualifier for type
ERROR: 0:37: Use of undeclared identifier 'position'
ERROR: 0:37: Use of undeclared identifier 'u_heightMap'
ERROR: 0:37: Use of undeclared identifier 'position'
2015-01-08 10:33:30.532 TerrainShaderTest[10429:607] Failed to compile vertex shader
2015-01-08 10:33:30.545 TerrainShaderTest[10429:607] GL ERROR: 0x0500
If I use a different vertex shader instead (below), I get a basic but working result, using the same set up for position on the client side (but obviously not heightmap/texture.)
// WORKS - very simple case
attribute vec4 position;
varying lowp vec4 colorVarying;
uniform mat4 modelViewProjectionMatrix;
void main()
{
colorVarying = (position + vec4(0.5,0.5,0.0,0));
gl_Position = modelViewProjectionMatrix * position;
}
Code snippets from the client (Objective-C):
...
glUniformMatrix4fv(uniforms[UNIFORM_MODELVIEWPROJECTION_MATRIX], 1, 0, _modelViewProjectionMatrix.m);
...
#pragma mark - OpenGL ES 2 shader compilation
- (BOOL)loadShaders
{
GLuint vertShader, fragShader;
NSString *vertShaderPathname, *fragShaderPathname;
// Create shader program.
_program = glCreateProgram();
// Create and compile vertex shader.
vertShaderPathname = [[NSBundle mainBundle] pathForResource:@"Shader" ofType:@"vsh"];
if (![self compileShader:&vertShader type:GL_VERTEX_SHADER file:vertShaderPathname]) {
NSLog(#"Failed to compile vertex shader");
return NO;
}
// Create and compile fragment shader.
fragShaderPathname = [[NSBundle mainBundle] pathForResource:@"Shader" ofType:@"fsh"];
if (![self compileShader:&fragShader type:GL_FRAGMENT_SHADER file:fragShaderPathname]) {
NSLog(#"Failed to compile fragment shader");
return NO;
}
// Attach vertex shader to program.
glAttachShader(_program, vertShader);
// Attach fragment shader to program.
glAttachShader(_program, fragShader);
// Bind attribute locations.
// This needs to be done prior to linking.
glBindAttribLocation(_program, GLKVertexAttribPosition, "position");
// Link program.
if (![self linkProgram:_program]) {
NSLog(#"Failed to link program: %d", _program);
if (vertShader) {
glDeleteShader(vertShader);
vertShader = 0;
}
if (fragShader) {
glDeleteShader(fragShader);
fragShader = 0;
}
if (_program) {
glDeleteProgram(_program);
_program = 0;
}
return NO;
}
// Get uniform locations.
uniforms[UNIFORM_MODELVIEWPROJECTION_MATRIX] = glGetUniformLocation(_program, "modelViewProjectionMatrix");
uniforms[UNIFORM_TEXTURE] = glGetUniformLocation(_program, "u_heightMap");
glProgramUniform1fvEXT (_program, uniforms[UNIFORM_TEXTURE], nTerrainElements, _pTerrainScaled);
// Release vertex and fragment shaders.
if (vertShader) {
glDetachShader(_program, vertShader);
glDeleteShader(vertShader);
}
if (fragShader) {
glDetachShader(_program, fragShader);
glDeleteShader(fragShader);
}
return YES;
}
Any help on setting up the client side data?
On iOS, you will be using OpenGL ES. The most commonly used version is ES 2.0. Devices released starting in 2013 also support ES 3.0.
Your shader code is not compatible with ES 2.0:
in vec2 position;
uniform mat4 og_modelViewPerspectiveMatrix;
uniform sampler2DRect u_heightMap;
void main()
{
gl_Position = og_modelViewPerspectiveMatrix *
vec4(position, texture(u_heightMap, position).r, 1.0);
}
To make this work with ES 2.0:
ES 2.0 still uses attribute for vertex attributes, instead of in like current versions of full OpenGL, and ES 3.0+.
No version of ES supports RECT textures, so the type sampler2DRect is invalid. Use regular 2D textures, with the corresponding sampler2D in the shader code, instead.
ES 2.0 uses texture2D() as the name of the texture sampling function, instead of texture() in newer versions.
The shader should then look like this:
attribute vec2 position;
uniform mat4 og_modelViewPerspectiveMatrix;
uniform sampler2D u_heightMap;
void main()
{
gl_Position = og_modelViewPerspectiveMatrix *
vec4(position, texture2D(u_heightMap, position).r, 1.0);
}
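On the client side, the height data then has to live in a regular 2D texture, and the sampler uniform has to be pointed at its texture unit. A rough sketch (the terrainWidth/terrainHeight names and the single-channel GL_LUMINANCE/GL_FLOAT upload are assumptions; float textures in ES 2.0 additionally require the OES_texture_float extension):

// Upload the terrain heights as a single-channel 2D texture.
GLuint heightTex;
glGenTextures(1, &heightTex);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, heightTex);

// Non-power-of-two textures in ES 2.0 need CLAMP_TO_EDGE and no mipmaps.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

// One float height value per texel (requires OES_texture_float).
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, terrainWidth, terrainHeight,
             0, GL_LUMINANCE, GL_FLOAT, _pTerrainScaled);

// Bind the sampler uniform to texture unit 0.
glUseProgram(_program);
glUniform1i(uniforms[UNIFORM_TEXTURE], 0);

Also note that the shader above uses position directly as the texture coordinate, so the vertex positions have to lie in the [0, 1] range for the lookup to make sense (or you pass a separate UV attribute instead).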
I'm trying to create an OpenGL context with a depth buffer using Core OpenGL. I then wish to display the OpenGL content via a CAOpenGLLayer. From what I've read, it seems I should be able to create the desired context with the following approach.
I declare these instance variables in the interface:
@interface TorusCAOpenGLLayer : CAOpenGLLayer
{
    // omitted code
    CGLPixelFormatObj pix;
    GLint pixn;
    CGLContextObj ctx;
}
Then in the implementation I override copyCGLContextForPixelFormat, which I believe should create the required context
- (CGLContextObj)copyCGLContextForPixelFormat:(CGLPixelFormatObj)pixelFormat
{
    CGLPixelFormatAttribute attrs[] =
    {
        kCGLPFAColorSize, (CGLPixelFormatAttribute)24,
        kCGLPFAAlphaSize, (CGLPixelFormatAttribute)8,
        kCGLPFADepthSize, (CGLPixelFormatAttribute)24,
        (CGLPixelFormatAttribute)0
    };
    NSLog(@"Pixel format error:%d", CGLChoosePixelFormat(attrs, &pix, &pixn)); // returns 0
    NSLog(@"Context error: %d", CGLCreateContext(pix, NULL, &ctx)); // returns 0
    NSLog(@"The context:%p", ctx); // returns same memory address as similar NSLog call in function below
    return ctx;
}
Finally I override drawInCGLContext to display the content.
- (void)drawInCGLContext:(CGLContextObj)glContext pixelFormat:(CGLPixelFormatObj)pixelFormat forLayerTime:(CFTimeInterval)timeInterval displayTime:(const CVTimeStamp *)timeStamp
{
    // Set the current context to the one given to us.
    CGLSetCurrentContext(glContext);
    int depth;
    NSLog(@"The context again:%p", glContext); // returns the same memory address as the NSLog in the previous function
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.5, 0.5, 1.0, 1.0, -1.0, 1.0);
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glEnable(GL_DEPTH_TEST);
    glGetIntegerv(GL_DEPTH_BITS, &depth);
    NSLog(@"%i bits depth", depth); // returns 0
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // drawing code here
    // Call super to finalize the drawing. By default all it does is call glFlush().
    [super drawInCGLContext:glContext pixelFormat:pixelFormat forLayerTime:timeInterval displayTime:timeStamp];
}
The program compiles fine and displays the content, but without the depth testing. Is there something extra I have to do to get this to work? Or is my entire approach wrong?
Looks like I was overriding the wrong method. To obtain the required depth buffer, one should override copyCGLPixelFormatForDisplayMask like so:
- (CGLPixelFormatObj)copyCGLPixelFormatForDisplayMask:(uint32_t)mask {
    CGLPixelFormatAttribute attributes[] =
    {
        kCGLPFADepthSize, 24,
        0
    };
    CGLPixelFormatObj pixelFormatObj = NULL;
    GLint numPixelFormats = 0;
    CGLChoosePixelFormat(attributes, &pixelFormatObj, &numPixelFormats);
    if (pixelFormatObj == NULL)
        NSLog(@"Error: Could not choose pixel format!");
    return pixelFormatObj;
}
Based on the code here.
I'm using Cocos2d iPhone with Box2D to create a basic physics engine.
Occasionally the user is required to drag around a small Box2D object.
Creation of touchjoints on small objects is a bit hit and miss, with the game engine seeing it as a tap on blank space as often as actually creating the appropriate touchjoint. In practice this means the user is constantly mashing their fingers against the screen in vain attempts to move a stubborn object. I want the game to select small objects easily without this 'hit and miss' effect.
I could create the small objects with larger sensors around them, but this is not ideal because objects above a certain size (around 40px diameter) don't need this extra layer of complexity; and the small objects are simply the big objects scaled down to size.
What are some strategies I could use to allow the user experience to be better when moving small objects?
Here's the AABB code in ccTouchBegan:
b2Vec2 locationWorld = b2Vec2(touchLocation.x/PTM_RATIO, touchLocation.y/PTM_RATIO);
b2AABB aabb;
b2Vec2 delta = b2Vec2(1.0/PTM_RATIO, 1.0/PTM_RATIO);
//Changing the 1.0 here to a larger value doesn't make any noticeable difference.
aabb.lowerBound = locationWorld - delta;
aabb.upperBound = locationWorld + delta;
SimpleQueryCallback callback(locationWorld);
world->QueryAABB(&callback, aabb);
if(callback.fixtureFound){
//dragging code, updating sprite location etc.
}
SimpleQueryCallback code:
class SimpleQueryCallback : public b2QueryCallback
{
public:
    b2Vec2 pointToTest;
    b2Fixture* fixtureFound;

    SimpleQueryCallback(const b2Vec2& point) {
        pointToTest = point;
        fixtureFound = NULL;
    }

    bool ReportFixture(b2Fixture* fixture) {
        b2Body* body = fixture->GetBody();
        if (body->GetType() == b2_dynamicBody) {
            if (fixture->TestPoint(pointToTest)) {
                fixtureFound = fixture;
                return false;
            }
        }
        return true;
    }
};
What about a minimum collision box for touches? Objects with less than 40px diameter use the 40px diameter, all larger objects use their actual diameter.
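A sketch of that idea, reusing the callback above (MinSizeQueryCallback is a made-up name, and 40 px is just the suggested minimum diameter):

// Accept a fixture if the touch is inside the shape OR within a minimum
// "touch radius" of its body, so tiny objects still get a ~40 px target.
class MinSizeQueryCallback : public b2QueryCallback
{
public:
    b2Vec2 pointToTest;
    float minRadius;              // e.g. 20.0f / PTM_RATIO (half of 40 px)
    b2Fixture* fixtureFound;

    MinSizeQueryCallback(const b2Vec2& point, float minRadiusWorld)
        : pointToTest(point), minRadius(minRadiusWorld), fixtureFound(NULL) {}

    bool ReportFixture(b2Fixture* fixture) {
        b2Body* body = fixture->GetBody();
        if (body->GetType() != b2_dynamicBody)
            return true;          // ignore non-dynamic bodies, keep looking

        if (fixture->TestPoint(pointToTest) ||
            (body->GetPosition() - pointToTest).Length() < minRadius) {
            fixtureFound = fixture;
            return false;         // found one, stop the query
        }
        return true;
    }
};

The query AABB in ccTouchBegan would then also need a delta of at least that minimum radius (e.g. 20.0/PTM_RATIO) so the small fixtures are actually reported.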
What I ended up doing - thanks to iforce2d, was change ReportFixture in SimpileQueryCallback to:
bool ReportFixture(b2Fixture* fixture) {
    b2Body* body = fixture->GetBody();
    if (body->GetType() == b2_dynamicBody) {
        //if (fixture->TestPoint(pointToTest)) {
            fixtureFound = fixture;
            return true;
        //}
    }
    return true;
}
And increase the delta to 10.0/PTM_RATIO.