I am rewriting my application using modern OpenGL (3.3+) in JOGL.
I am using all the conventional matrices, that is objectToWorld, worldToCamera and cameraToClip (or model, view and projection).
I created a class for handling all the mouse movements, as McKesson does in his "Learning Modern 3D Graphics Programming", with a method to offset the camera target position:
private void offsetTargetPosition(MouseEvent mouseEvent){
    Mat4 currMat = calcMatrix();
    Quat orientation = currMat.toQuaternion();
    Quat invOrientation = orientation.conjugate();

    Vec2 current = new Vec2(mouseEvent.getX(), mouseEvent.getY());
    Vec2 diff = current.minus(startDragMouseLoc);

    // Rotate the screen-space delta into world space with the inverse camera orientation
    Vec3 worldOffset = invOrientation.mult(new Vec3(-diff.x*10, diff.y*10, 0.0f));

    currView.setTargetPos(currView.getTargetPos().plus(worldOffset));
    startDragMouseLoc = current;
}
calcMatrix() returns the camera matrix; the rest should be clear.
What I want is to move my object along with the mouse. Right now the mouse movement and the object translation don't correspond (they are not linear), I guess because I am dealing with different spaces.
I learned that if I want to apply a transformation T in space O, but expressed in space C, I should do the following, with p as the vertex:
C * (C * T * C^-1) * O * p
Should I do something similar?
I solved it with a damn simple proportion...
float x = (float) (10000 * 2 * EC_Main.viewer.getAspect() * diff.x / EC_Main.viewer.getWidth());
float y = (float) (10000 * 2 * diff.y / EC_Main.viewer.getHeight());
Vec3 worldOffset = invOrientation.mult(new Vec3(-x, y, 0.0f));
Taking into account my projection matrix:
Mat4 orthographicMatrix = Jglm.orthographic(-10000.0f * (float) EC_Main.viewer.getAspect(), 10000.0f * (float) EC_Main.viewer.getAspect(),
-10000.0f, 10000.0f, -10000.0f, 10000.0f);
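For reference, the magic factor in that proportion is just the width of the orthographic view volume divided by the viewport width, i.e. world units per pixel. A minimal sketch of the general mapping (written as a standalone C++ helper with hypothetical parameter names, not part of the JOGL code):

// Sketch: convert a mouse delta in pixels to a world-space offset under an
// orthographic projection with bounds [left, right] x [bottom, top].
// For the matrix above, (right - left) / viewportWidth == 2 * 10000 * aspect / width,
// which is exactly the factor used in the proportion.
struct WorldDelta { float x, y; };

WorldDelta pixelDeltaToWorld(float dxPixels, float dyPixels,
                             float left, float right, float bottom, float top,
                             float viewportWidth, float viewportHeight)
{
    float worldPerPixelX = (right - left) / viewportWidth;
    float worldPerPixelY = (top - bottom) / viewportHeight;
    return { dxPixels * worldPerPixelX, dyPixels * worldPerPixelY };
}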
Creating View, Projection Matrix
void CameraSystem::CreateMatrix()
{
    camera->aspect = viewport->Width / viewport->Height;
    projection_matrix = XMMatrixPerspectiveFovLH(camera->fov, camera->aspect, camera->near_z, camera->far_z);

    XMVECTOR s, o, q, t;
    XMFLOAT3 position(camera->position.m128_f32[0], camera->position.m128_f32[1], camera->position.m128_f32[2]);
    s = XMVectorReplicate(1.0f);
    o = XMVectorSet(0.0f, 0.0f, 0.0f, 1.0f);
    q = XMQuaternionRotationRollPitchYaw(camera->pitch, camera->yaw, camera->roll);
    t = XMLoadFloat3(&position);

    world_matrix = XMMatrixAffineTransformation(s, o, q, t);
    view_matrix = XMMatrixInverse(0, world_matrix);

    camera->look = XMVector3Normalize(XMMatrixTranspose(view_matrix).r[2]);
    camera->right = XMVector3Normalize(XMMatrixTranspose(view_matrix).r[0]);
    camera->up = XMVector3Normalize(XMMatrixTranspose(view_matrix).r[1]);
    camera->position = world_matrix.r[3];

    cb_viewproj.data.view_matrix = XMMatrixTranspose(view_matrix);
    cb_viewproj.data.projection_matrix = XMMatrixTranspose(projection_matrix);
}
This code creates the projection matrix from the aspect ratio, fov, near and far values, and builds the world and view matrices from the camera transform. These work perfectly for rendering, but apparently not for creating a ray.
Creating Mouse Ray
MouseRay CameraSystem::CreateMouseRay()
{
    MouseRay mouse_ray;
    POINT cursor_pos;
    GetCursorPos(&cursor_pos);
    ScreenToClient(ENGINE->GetWindowHandle(), &cursor_pos);

    // Convert the mouse position to a direction in world space
    float mouse_x = static_cast<float>(cursor_pos.x);
    float mouse_y = static_cast<float>(cursor_pos.y);
    float ndc_x = 2.0f * mouse_x / (float)ENGINE->GetWindowSize().x - 1.0f;
    float ndc_y = (2.0f * mouse_y / (float)ENGINE->GetWindowSize().y - 1.0f) * -1.0f;
    float ndc_z = 1.0f;
    ndc.x = ndc_x;
    ndc.y = ndc_y;

    XMMATRIX inv_view = XMMatrixInverse(nullptr, view_matrix);
    XMMATRIX inv_world = XMMatrixInverse(nullptr, XMMatrixIdentity());
    XMVECTOR ray_dir;
    XMVECTOR ray_origin;

    ndc_x /= projection_matrix.r[0].m128_f32[0];
    ndc_y /= projection_matrix.r[1].m128_f32[1];

    ray_dir.m128_f32[0] = (ndc_x * inv_view.r[0].m128_f32[0]) + (ndc_y * inv_view.r[1].m128_f32[0]) + (ndc_z * inv_view.r[2].m128_f32[0]) + inv_view.r[0].m128_f32[3];
    ray_dir.m128_f32[1] = (ndc_x * inv_view.r[0].m128_f32[1]) + (ndc_y * inv_view.r[1].m128_f32[1]) + (ndc_z * inv_view.r[2].m128_f32[1]) + inv_view.r[1].m128_f32[3];
    ray_dir.m128_f32[2] = (ndc_x * inv_view.r[0].m128_f32[2]) + (ndc_y * inv_view.r[1].m128_f32[2]) + (ndc_z * inv_view.r[2].m128_f32[2]) + inv_view.r[2].m128_f32[3];

    ray_origin = XMVector3TransformCoord(camera->position, inv_world);
    ray_dir = XMVector3TransformNormal(ray_dir, inv_world);
    ray_dir = XMVector3Normalize(ray_dir);

    XMtoRP(ray_origin, mouse_ray.start_point);
    XMtoRP(ray_dir * 1000.f, mouse_ray.end_point);

    return mouse_ray;
}
I get the cursor position and convert it to NDC; that part seems correct. I guess the NDC/projection calculation has some problem, but I couldn't find any other reference code.
Then I apply the inverse view matrix to get the ray direction, and I guess this code also has a problem.
As you can see, the mouse does not seem to generate a ray in the exact direction of the mouse pointer. The frustum feels narrower than it really is, and the further away from the map, the worse it gets.
It seems as if the camera position were fixed, but the camera position is being updated from the world matrix, as shown in the properties window.
That the view and projection matrices are generated correctly can be inferred from the fact that everything except the raycasting renders correctly.
I assume that the ray direction vector is miscalculated.
I don't know what other calculation this code would need.
Raycast test using the reactphysics3d library.
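For comparison, a conventional way to build the pick ray with DirectXMath (a sketch, not the engine's actual API; MouseRayFromCursor and its parameters are hypothetical names, and it assumes the same view_matrix, projection_matrix and window size used for rendering) is to unproject the cursor at the near and far planes and take the difference as the direction:

#include <DirectXMath.h>
using namespace DirectX;

// Sketch: unproject the cursor at the near plane (z = 0) and far plane (z = 1)
// and use the difference as the ray direction. 'width'/'height' are the client
// area size in pixels; 'view' and 'proj' are the matrices used for rendering.
void MouseRayFromCursor(float mouse_x, float mouse_y, float width, float height,
                        FXMMATRIX view, CXMMATRIX proj,
                        XMVECTOR& ray_origin, XMVECTOR& ray_dir)
{
    XMMATRIX world = XMMatrixIdentity(); // picking in world space

    XMVECTOR near_pt = XMVector3Unproject(
        XMVectorSet(mouse_x, mouse_y, 0.0f, 1.0f),
        0.0f, 0.0f, width, height, 0.0f, 1.0f,
        proj, view, world);
    XMVECTOR far_pt = XMVector3Unproject(
        XMVectorSet(mouse_x, mouse_y, 1.0f, 1.0f),
        0.0f, 0.0f, width, height, 0.0f, 1.0f,
        proj, view, world);

    ray_origin = near_pt;
    ray_dir = XMVector3Normalize(XMVectorSubtract(far_pt, near_pt));
}

XMVector3Unproject performs the NDC conversion and the inverse projection/view transforms internally, which avoids indexing matrix rows and columns by hand.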
I have a three.js scene made with Rogue Engine, which I'm using to make a VR experience.
In it I'm using a fairly complex shader. It takes the world-space locations of two locators and transitions between the normal shader and a flat color; the transition uses noise for the effect (see the video below; it shows the effect of the first locator, but the second one is similar, going bottom to top).
The locator positions are passed as Vector3 uniforms, and the shader itself is injected into a MeshStandardMaterial using onBeforeCompile.
The performance is already bad and really tanks when I'm using textures. I'm using three texture sets for the scene (diffuse, roughness, metalness, emission and AO), so each map is sampled three times and then masked using vertex colors (not present in the code below).
varying vec3 W_Pos; //world position vector
varying vec3 F_Nrml; //normal vector
varying vec3 camDir; // cam facing
varying vec3 vertexColor;
uniform vec3 astral_locator; // First locator
uniform vec3 astral_spread; // the locator's scale is passed here and scaled up for the transition
uniform vec3 starScatter_starScale_nScale; // three float parameters passed as a vector for easier control in Rogue Engine
uniform vec3 breakPoints;
uniform vec3 c1;
uniform vec3 c2;
uniform vec3 c3;
uniform vec3 noise_locator; //Second locator
uniform vec3 nStretch_nScale_emSharp;// same as above, three floats passed as a vector
uniform vec3 emCol;
vec4 mod289(vec4 x){return x - floor(x * (1.0 / 289.0)) * 289.0;}
vec4 perm(vec4 x){return mod289(((x * 34.0) + 1.0) * x);}
vec3 rand2(vec3 p) {
    return fract(sin(vec3(
        dot(p, vec3(127.1, 310.7, 143.54)),
        dot(p, vec3(269.5, 183.3, 217.42)),
        dot(p, vec3(2459.5, 133.3, 17.42)))) * 43758.5453);
}
float mapping(float number, float inMin, float inMax, float outMin, float outMax) {
    return (number - inMin) * (outMax - outMin) / (inMax - inMin) + outMin;
}
vec4 vertexMask(vec4 map1, vec4 map2, vec4 map3, vec3 vertMask) {
    vec4 me1 = mix(vec4(0.0), map1, vertMask.r);
    vec4 me2 = mix(me1, map2, vertMask.g);
    vec4 me3 = mix(me2, map3, vertMask.b);
    return me3;
}
//Noises
float noise(vec3 p){
    vec3 a = floor(p);
    vec3 d = p - a;
    d = d * d * (3.0 - 2.0 * d);
    vec4 b = a.xxyy + vec4(0.0, 1.0, 0.0, 1.0);
    vec4 k1 = perm(b.xyxy);
    vec4 k2 = perm(k1.xyxy + b.zzww);
    vec4 c = k2 + a.zzzz;
    vec4 k3 = perm(c);
    vec4 k4 = perm(c + 1.0);
    vec4 o1 = fract(k3 * (1.0 / 41.0));
    vec4 o2 = fract(k4 * (1.0 / 41.0));
    vec4 o3 = o2 * d.z + o1 * (1.0 - d.z);
    vec2 o4 = o3.yw * d.x + o3.xz * (1.0 - d.x);
    return o4.y * d.y + o4.x * (1.0 - d.y);
}
float facing(){
    vec3 nrml = F_Nrml;
    vec3 cam = camDir;
    vec3 normal = normalize(nrml.xyz);
    vec3 eye = normalize(-cam);
    float rim = smoothstep(-0.75, 1.0, 1.0 - dot(normal, eye));
    return clamp(rim, 0.0, 1.0);
}
// Function for the second locator
vec2 noiseMove(vec3 loc, vec3 noiseDat){
    float noise_stretch = noiseDat.x;
    float noise_scale = noiseDat.y;
    float emission_sharp = noiseDat.z;
    float noise_move = -loc.y;

    float gen_Pattern;
    float gen_Pattern_invert;
    float emi_sharp_fac;
    float transparency;
    float emission;

    gen_Pattern = ((W_Pos.y + noise_move) * noise_stretch) + noise(W_Pos.xyz * noise_scale);
    gen_Pattern_invert = 1.0 - gen_Pattern;
    emi_sharp_fac = clamp(emission_sharp * 1000.0, 1.0, 1000.0) * gen_Pattern;
    emission = emission_sharp * gen_Pattern;
    emission = 1.0 - emission;
    emission = emission * emi_sharp_fac;
    emission = clamp(emission, 0.0, 1.0);
    transparency = clamp(gen_Pattern_invert, 0.0, 1.0);
    return vec2(emission, transparency);
}
// Function for the first locator
vec4 astral(vec3 loc, vec3 spr, vec3 cee1, vec3 cee2, vec3 cee3, vec3 breakks, vec3 star){ // star is WIP
    float f = facing();
    float re1 = mapping(f, breakks.x, 1.0, 0.0, 1.0);
    float re2 = mapping(f, breakks.y, 1.0, 0.0, 1.0);
    float re3 = mapping(f, breakks.z, 1.0, 0.0, 1.0);
    vec3 me1 = mix(vec3(0.0, 0.0, 0.0), cee1, re1);
    vec3 me2 = mix(me1, cee2, re2);
    vec3 me3 = mix(me2, cee3, re3);
    float dist = distance(W_Pos.xyz + (noise(W_Pos.xyz * star.z) - 0.5), loc);
    float val = step(dist, spr.x);
    return vec4(me3, val);
}
void main(){
    vec4 ast = astral(astral_locator, astral_spread, c1, c2, c3, breakPoints, starScatter_starScale_nScale);
    vec2 noice = noiseMove(noise_locator, nStretch_nScale_emSharp);

    // Take the output light from the three.js shader and mix it with the custom shader
    vec3 outp = mix(mix(outgoingLight, ast.xyz, ast.w), emCol, noice.x);

    float t = noice.y;
    #ifdef NONSCIFI
        t = 1.0 - noice.y;
    #endif
    t *= diffuseColor.a;
    gl_FragColor = vec4(outp * t, t);
}
Is there a way to optimize this better? A couple of things I can think of: storing the noise (for example in a texture) and sampling it instead of calculating it every frame, and figuring out occlusion culling (a render pass doesn't work well in VR, so I can't store the depth pass and have to find another way). Objects in the scene are already instanced to reduce draw calls. I'm assuming making some objects static might help, including the locators, but I don't know if that would stop the uniforms from updating every frame.
Is there anything else that can be done?
Also, I apologize for the structure of the question; I rarely post questions, thanks to Stack Overflow :p
I'm wondering about the transformation from lat, lon, alt values to 3D systems like ECEF (Earth-Centered, Earth-Fixed).
This can be implemented as follows (https://gist.github.com/1536054):
/*
 * WGS84 ellipsoid constant: semi-major axis (radius)
 */
private static final double a = 6378137;
/*
 * first eccentricity
 */
private static final double e = 8.1819190842622e-2;

private static final double asq = Math.pow(a, 2);
private static final double esq = Math.pow(e, 2);

void convert(double latitude, double longitude, double altitude) {
    double lat = Math.toRadians(latitude);
    double lon = Math.toRadians(longitude);
    double alt = altitude;

    // Prime vertical radius of curvature at this latitude
    double N = a / Math.sqrt(1 - esq * Math.pow(Math.sin(lat), 2));

    x = (N + alt) * Math.cos(lat) * Math.cos(lon);
    y = (N + alt) * Math.cos(lat) * Math.sin(lon);
    z = ((1 - esq) * N + alt) * Math.sin(lat);
}
What seems very strange to me is the fact that a small change of the altitude affects x, y and z, whereas I would expect it to affect just one axis. For example, if I have two GPS points with the same lat/lon values but different altitude values, I get three different x, y, z coordinates.
Can someone explain the "idea" behind this? It looks very curious to me... Is there any other 3D system in which only one of the values changes when I raise or lower my altitude?
Thanks a lot!
If you look at this picture (ECEF Coordinate System), then you see why.
ECEF is a Cartesian frame around the earth, centered at the earth's center.
If the altitude rises, you move outward, along the local "up" direction.
Lat/lon is an "angular" coordinate system where lat and lon are angles,
whereas ECEF is a Cartesian coordinate system!
Probably you thought ECEF is like lat/lon with the earth's center at altitude 0, but this is not the case.
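A quick numeric check makes this concrete (a small C++ sketch reusing the constants and formulas from the question; the sample latitude/longitude are arbitrary): raising the altitude by dh moves the point by dh along the local surface normal (cos lat * cos lon, cos lat * sin lon, sin lat), so in general all three ECEF axes change. Only at special locations (for example at a pole, where the normal is the z axis) does a single coordinate change.

#include <cmath>
#include <cstdio>
#include <initializer_list>

int main() {
    const double a   = 6378137.0;            // WGS84 semi-major axis
    const double e   = 8.1819190842622e-2;   // first eccentricity
    const double esq = e * e;
    const double deg = 3.14159265358979323846 / 180.0;

    double lat = 48.0 * deg, lon = 11.0 * deg; // arbitrary sample point
    double N = a / std::sqrt(1.0 - esq * std::sin(lat) * std::sin(lat));

    for (double alt : {0.0, 100.0}) {
        double x = (N + alt) * std::cos(lat) * std::cos(lon);
        double y = (N + alt) * std::cos(lat) * std::sin(lon);
        double z = ((1.0 - esq) * N + alt) * std::sin(lat);
        std::printf("alt = %6.1f m -> x = %.3f, y = %.3f, z = %.3f\n", alt, x, y, z);
    }
    // The two results differ by 100 * (cos(lat)cos(lon), cos(lat)sin(lon), sin(lat)):
    // "up" is not aligned with any ECEF axis, so x, y and z all change.
    return 0;
}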
What are successful strategies to optimize HLSL shader code in terms of computational complexity (meaning: minimizing runtime of the shader)?
I guess one way would be to minimize the number of arithmetic operations that result from compiling the shader.
How could this be done a) manually and b) using automated tools (if any exist)?
Collection of manual techniques (Updated)
Avoid branching (But how to do that best?)
Whenever possible: precompute outside shader and pass as argument.
An example code would be:
float2 DisplacementScroll;
// Parameter that limit the water effect
float glowHeight;
float limitTop;
float limitTopWater;
float limitLeft;
float limitRight;
float limitBottom;
sampler TextureSampler : register(s0); // Original color
sampler DisplacementSampler : register(s1); // Displacement
float fadeoutWidth = 0.05;
// External rumble displacement
int enableRumble;
float displacementX;
float displacementY;
float screenZoom;
float4 main(float4 color : COLOR0, float2 texCoord : TEXCOORD0) : COLOR0
{
    // Calculate minimal distance to next border
    float dx = min(texCoord.x - limitLeft, limitRight - texCoord.x);
    float dy = min(texCoord.y - limitTop, limitBottom - texCoord.y);

    ///////////////////////////////////////////////////////////////////////////////
    // RUMBLE                                                                     //
    ///////////////////////////////////////////////////////////////////////////////
    if (enableRumble != 0)
    {
        // Limit rumble strength by distance to HLSL-active region (think map)
        // The factor of 100 is chosen by hand and controls the slope with which dimfactor goes to 1
        float dimfactor = clamp(100.0f * min(dx, dy), 0, 1); // Maximum is 1.0 (do not amplify)

        // Shift texture coordinates by rumble
        texCoord.x += displacementX * dimfactor * screenZoom;
        texCoord.y += displacementY * dimfactor * screenZoom;
    }

    ///////////////////////////////////////////////////////////////////////////////
    // Water refraction (optical distortion) and water-like color tint            //
    ///////////////////////////////////////////////////////////////////////////////
    if (dx >= 0)
    {
        float dyWater = min(texCoord.y - limitTopWater, limitBottom - texCoord.y);
        if (dyWater >= 0)
        {
            // Look up the amount of displacement from texture
            float2 displacement = tex2D(DisplacementSampler, DisplacementScroll + texCoord / 3).xy;

            float finalFactor = min(dx, dyWater) / fadeoutWidth;
            if (finalFactor > 1) finalFactor = 1;

            // Apply displacement by water refraction
            texCoord.x += (displacement.x * 0.2 - 0.15) * finalFactor * 0.15 * screenZoom; // Why these strange numbers ?
            texCoord.y += (displacement.y * 0.2 - 0.15) * finalFactor * 0.15 * screenZoom;

            // Look up the texture color of the original underwater pixel.
            color = tex2D(TextureSampler, texCoord);

            // Additional color transformation (blue shift)
            color.r = color.r - 0.1f;
            color.g = color.g - 0.1f;
            color.b = color.b + 0.3f;
        }
        else if (dyWater > -glowHeight)
        {
            // No water distortion...
            color = tex2D(TextureSampler, texCoord);

            // Scales from 0 (upper glow limit) ... 1 (near water surface)
            float glowFactor = 1 - (dyWater / -glowHeight);

            // ... but bluish glow
            // Additional color transformation
            color.r = color.r - (glowFactor * 0.1); // 24 = 1/(30f/720f); // Prelim: depends on screen resolution, must fit to value in HLSL Update
            color.g = color.g - (glowFactor * 0.1);
            color.b = color.b + (glowFactor * 0.3);
        }
        else
        {
            // Return original color (no water distortion above and below)
            color = tex2D(TextureSampler, texCoord);
        }
    }
    else
    {
        // Return original color (no water distortion left or right)
        color = tex2D(TextureSampler, texCoord);
    }

    return color;
}
technique Refraction
{
    pass Pass0
    {
        PixelShader = compile ps_2_0 main();
    }
}
I'm not very familiar with the HLSL internals, but what I've learned from GLSL is: avoid branching. The GPU will probably execute both branches and then decide which of the results should be valid.
Also have a look at this and this.
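To make the "avoid branching" advice concrete: the usual trick is to compute both results and blend them with a 0/1 (or smoothly varying) weight. Here is a sketch in plain C++ scalar code (not taken from the shader above); the helper functions correspond to the HLSL intrinsics saturate, step and lerp, which is how you would write the same thing in the shader:

#include <algorithm>

// C++ stand-ins for the HLSL intrinsics of the same name.
static float saturate(float x)               { return std::min(std::max(x, 0.0f), 1.0f); }
static float step(float edge, float x)       { return x >= edge ? 1.0f : 0.0f; }
static float lerp(float a, float b, float t) { return a + (b - a) * t; }

// Branchy version:
//     if (dx >= 0) result = tinted; else result = original;
// Branchless version: compute both values and blend them with a 0/1 weight.
float blendInsteadOfBranch(float dx, float original, float tinted)
{
    float inside = step(0.0f, dx);        // 1 when dx >= 0, otherwise 0
    return lerp(original, tinted, inside);
}

// Variant with a smooth 0..1 ramp instead of a hard cut near the border.
float fadeInsteadOfBranch(float dx, float original, float tinted, float fadeWidth)
{
    float w = saturate(dx / fadeWidth);
    return lerp(original, tinted, w);
}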
As far as I know there are no automatic tools except the compiler itself. For very low level optimization you can use fxc with the /Fc parameter to get the assembly listing. The possible assembly instructions are listed here. One low level optimization which is worth mentioning is MAD: multiply and add. This may not be optimized to a MAD operation (I'm not sure, just try it out yourself):
a *= b;
a += c;
but this should be optimized to a MAD:
a = (a * b) + c;
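The same multiply-add shape exists on the CPU side as a fused multiply-add, which may help to see why the second form is easier for a compiler to map to one instruction (a small C++ illustration; std::fma is only loosely analogous to the GPU's MAD here):

#include <cmath>

// Two separate operations: a multiply followed by an add.
float mul_then_add(float a, float b, float c) {
    a *= b;
    a += c;
    return a;
}

// Written as one expression, (a * b) + c maps naturally onto a single
// multiply-add; std::fma makes that explicit.
float multiply_add(float a, float b, float c) {
    return std::fma(a, b, c);
}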
You can optimize your code using simple mathematical manipulation, for example:
// Shift texture coordinates by rumble
texCoord.x += displacementX * dimfactor * screenZoom;
texCoord.y += displacementY * dimfactor * screenZoom;
Here you multiply three values, but only one of them (dimfactor) is computed per pixel; the other two are shader constants, so you could pre-multiply them on the CPU and store the product in a single global constant.
// Shift texture coordinates by rumble
texCoord.x += dimfactor * pre_zoom_dispx; // displacementX * screenZoom
texCoord.y += dimfactor * pre_zoom_dispy; // displacementY * screenZoom
Another example:
// Apply displacement by water refraction
texCoord.x += (displacement.x * 0.2 - 0.15) * finalFactor * 0.15 * screenZoom; // Why these strange numbers ?
texCoord.y += (displacement.y * 0.2 - 0.15) * finalFactor * 0.15 * screenZoom;
0.15 * screenZoom <- can likewise be pre-multiplied into one global constant.
The HLSL compiler of Visual Studio 2012 has an option in the project properties to enable optimizations. But the best optimization you can make is to write the HLSL code as simply as possible and to use the intrinsic functions: http://msdn.microsoft.com/en-us/library/windows/desktop/ff471376(v=vs.85).aspx
Those functions are like memcpy in C: their bodies use assembly code that exploits hardware resources such as 128-bit registers (yes, CPUs have 128-bit registers: http://en.wikipedia.org/wiki/Streaming_SIMD_Extensions) and very fast operations.
I'm looking for a better way (or a note that this is the best way) to transform a pixel coordinate into its corresponding ray direction from an arbitrary camera position/direction.
My current method is as follows. I define a "camera" as a position vector, lookat vector, and up vector, named as such. (Note that the lookat vector is a unit vector in the direction the camera is facing, NOT a target point as in XNA's Matrix.CreateLookAt.) These three vectors uniquely define a camera pose. Here's the actual code (well, not really the actual code, a simplified abstracted version); the language is HLSL:
float xPixelCoordShifted = (xPixelCoord / screenWidth * 2 - 1) * aspectRatio;
float yPixelCoordShifted = yPixelCoord / screenHeight * 2 - 1;
float3 right = cross(lookat, up);
float3 actualUp = cross(right, lookat);
float3 rightShift = mul(right, xPixelCoordShifted);
float3 upShift = mul(actualUp, yPixelCoordShifted);
return normalize(lookat + rightShift + upShift);
(the return value is the direction of the ray)
So what I'm asking is this: what's a better way to do this, maybe using matrices, etc.? The problem with this method is that if you have too wide a viewing angle, the edges of the screen get sort of "radially stretched".
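For reference, the direct construction can be parameterized by the field of view: scaling the shifted pixel coordinates by tan(fovY / 2) makes it match what you would get by unprojecting through a standard perspective matrix with that FOV. Below is a sketch in plain C++ with assumed vector helpers, not the original HLSL; any remaining "radial stretching" at very wide angles is inherent to a rectilinear perspective projection.

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}
static Vec3 add(Vec3 a, Vec3 b)  { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3 mul(Vec3 v, float s) { return { v.x * s, v.y * s, v.z * s }; }

// Same construction as in the question, but with an explicit vertical field of view.
Vec3 pixelToRay(float xPixelCoord, float yPixelCoord, float screenWidth, float screenHeight,
                float fovY, float aspectRatio, Vec3 lookat, Vec3 up)
{
    float halfTan = std::tan(fovY * 0.5f);
    float xShift = (xPixelCoord / screenWidth * 2.0f - 1.0f) * aspectRatio * halfTan;
    float yShift = (yPixelCoord / screenHeight * 2.0f - 1.0f) * halfTan;

    Vec3 right = cross(lookat, up);
    Vec3 actualUp = cross(right, lookat);
    return normalize(add(lookat, add(mul(right, xShift), mul(actualUp, yShift))));
}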
You can calculate the ray in the pixel shader. HLSL code:
float4x4 WorldViewProjMatrix;    // World * View * Proj
float4x4 WorldViewProjMatrixInv; // (World * View * Proj)^(-1)

void VS( float4 vPos : POSITION,
         out float4 oPos : POSITION,
         out float4 pos  : TEXCOORD0 )
{
    oPos = mul(vPos, WorldViewProjMatrix);
    pos = oPos;
}
float4 PS( float4 pos : TEXCOORD0 ) : COLOR0
{
    float4 posWS = mul(pos, WorldViewProjMatrixInv);
    float3 ray = posWS.xyz / posWS.w; // world-space position of this pixel
    // Stub output: a real shader would use 'ray' (for example subtract the camera
    // position to get the ray direction) instead of returning a constant color.
    return float4(0, 0, 0, 1);
}
The information about your camera's position and direction is in the View matrix (Matrix.CreateLookAt).