Fish Eye Wide-angle with a Scene Kit Camera: Possible?

How do I get a distortion like what a fisheye lens does to a view with a SCNCamera in Scene Kit?
Something like this kind of "bowing" of the imagery:
(As Rickster pointed out, this kind of distortion is known as "barrel distortion".)
From the docs, this is the part that got me intrigued by the possibility of doing this kind of distortion with the camera:
If you compute your own projection transform matrix, you can use this
method to set it directly, overriding the transformation synthesized
from the camera’s geometric properties.
Unfortunately I know nothing about the powers and possibilities of computing one's own projection transform matrix. I'm hoping this kind of distortion is possible via it... but I don't know, hence the question.
Any other means via the camera is ideal, too. I want to avoid post-processing trickery and get the more "organic" look of this kind of distortion as the camera rotates and moves through the scene.
See any skateboarding video for how this looks in real life.

What you are looking for is called barrel distortion.
There are a few ways of doing this, all of them using GLSL shaders.
You can either use classic OpenGL code, such as this example for the Oculus Rift (you will need to change the shader a little bit), or my personal favorite: SCNTechnique.
Create a technique containing a barrel fragment shader (.fsh), and set its draw parameter to DRAW_QUAD. Then, simply apply the technique to your camera.
You can find an example of a barrel distortion shader here: http://www.geeks3d.com/20140213/glsl-shader-library-fish-eye-and-dome-and-barrel-distortion-post-processing-filters/2/
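Regarding the projectionTransform idea from the question: you can set a custom matrix on the camera, but a 4x4 projection is a linear (projective) map. It can widen or skew the view, yet straight lines always stay straight, so it cannot produce barrel distortion by itself. A minimal Swift sketch of overriding the projection (the 0.5 scale factors are arbitrary):

import SceneKit

// Widen the field of view by scaling the projection matrix directly.
// This is still a linear map, so no "bowing" of straight lines is possible.
let cameraNode = SCNNode()
cameraNode.camera = SCNCamera()
if let camera = cameraNode.camera {
    var projection = camera.projectionTransform
    projection.m11 *= 0.5 // scale the x focal term: wider horizontal FOV
    projection.m22 *= 0.5 // scale the y focal term: wider vertical FOV
    camera.projectionTransform = projection
}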
EDIT: here's some sample code:
barrel.json (this should go in your scnassets bundle)
{
    "passes" : {
        "barrel" : {
            "outputs" : {
                "color" : "COLOR"
            },
            "inputs" : {
                "colorSampler" : "COLOR",
                "noiseSampler" : "noiseSymbol",
                "a_position" : "a_position-symbol"
            },
            "program" : "art.scnassets/barrel",
            "draw" : "DRAW_QUAD"
        }
    },
    "sequence" : [
        "barrel"
    ],
    "symbols" : {
        "a_position-symbol" : {
            "semantic" : "vertex"
        },
        "noiseSymbol" : {
            "image" : "noise.png",
            "type" : "sampler2D"
        },
        "barrelPower" : {
            "type" : "float"
        }
    }
}
barrel.vsh
attribute vec4 a_position;
varying vec2 uv;

void main() {
    gl_Position = a_position;
    // map the clip-space quad position to 0..1 texture coordinates
    uv = (a_position.xy + 1.0) * 0.5;
}
barrel.fsh
// Adapted from:
// http://www.geeks3d.com/20140213/glsl-shader-library-fish-eye-and-dome-and-barrel-distortion-post-processing-filters/2/
uniform sampler2D colorSampler;
const float PI = 3.1415926535;
uniform float barrelPower;
varying vec2 uv;

vec2 Distort(vec2 p)
{
    float theta = atan(p.y, p.x);
    float radius = length(p);
    radius = pow(radius, barrelPower);
    p.x = radius * cos(theta);
    p.y = radius * sin(theta);
    return 0.5 * (p + 1.0);
}

void main() {
    // map the 0..1 texture coordinates back to -1..1
    vec2 xy = 2.0 * uv.xy - 1.0;
    vec2 uv2;
    float d = length(xy);
    if (d < 1.0) {
        // apply the distortion inside the unit circle
        uv2 = Distort(xy);
    } else {
        uv2 = uv.xy;
    }
    gl_FragColor = texture2D(colorSampler, uv2);
}
something.m
NSURL *url = [[NSBundle mainBundle] URLForResource:@"art.scnassets/barrel" withExtension:@"json"];
NSDictionary *tecDic = [NSJSONSerialization JSONObjectWithData:[NSData dataWithContentsOfURL:url] options:0 error:nil];
SCNTechnique *technique = [SCNTechnique techniqueWithDictionary:tecDic];
[technique setValue:[NSNumber numberWithFloat:0.5f] forKey:@"barrelPower"];
cameraNode.camera.technique = technique; // SCNCamera adopts SCNTechniqueSupport
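If you're working in Swift, the same setup might look roughly like this (a sketch, assuming the same barrel.json and shader files in art.scnassets):

import SceneKit

// Load the technique dictionary from the bundle and apply it to the camera.
func applyBarrelTechnique(to cameraNode: SCNNode) {
    guard let url = Bundle.main.url(forResource: "art.scnassets/barrel", withExtension: "json"),
          let data = try? Data(contentsOf: url),
          let dict = try? JSONSerialization.jsonObject(with: data) as? [String: Any],
          let technique = SCNTechnique(dictionary: dict) else {
        return
    }
    // "barrelPower" is declared in the technique's "symbols" dictionary
    technique.setValue(NSNumber(value: 0.5), forKey: "barrelPower")
    cameraNode.camera?.technique = technique
}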


Implement custom WebGL shaders in vuejs

I found a three.js example of an effect I'm attempting, but the implementation doesn't appear to be explained, and it may be from an older version of three.js. I'm also just getting into three.js/WebGL, so I may just be plain wrong about things.
vertexShader: document.getElementById( 'vertexShader' ).textContent,
fragmentShader: document.getElementById( 'fragmentShader' ).textContent,
The example uses "shaders", which are in script tags with the type="x-shader/x-vertex" attribute. I'm able to put the script tags in my head and everything works as advertised.
However, I'm using VueJS, and so I really would rather not put these in my index.html file. I tried to just inject the string directly to the material:
const vertexShader = `
    uniform vec3 viewVector;
    uniform float c;
    uniform float p;
    varying float intensity;
    void main()
    {
        vec3 vNormal = normalize( normalMatrix * normal );
        vec3 vNormel = normalize( normalMatrix * viewVector );
        intensity = pow( c - dot(vNormal, vNormel), p );
        gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
    }
`;
And:
vertexShader: vertexShader,
But I get a
THREE.WebGLProgram: Shader Error 0 - VALIDATE_STATUS false
Program Info Log: Fragment shader is not compiled.
So I tried to actually put it in a created script tag thinking the browser must be doing some magic:
const createShader = (text) => {
    const tag = document.createElement("script");
    tag.setAttribute("type", "x-shader/x-vertex");
    const tnode = document.createTextNode(text);
    tag.appendChild(tnode);
    document.head.appendChild(tag);
    return tag.textContent;
}
And:
vertexShader: createShader(vertexShader),
But that has the same issue.
Is there a clean way to make this happen, or is direct code on page load the only option?

Skeletal animation bug with Assimp in DirectX 12

I am using Assimp to load an FBX model with animation (created in Blender) into my DirectX 12 game, but I'm experiencing a very frustrating bug with the animation rendered by the game application.
The test model is a simple 'flagpole' containing four bones like so:
Bone0 -> Bone1 -> Bone2 -> Bone3
The model renders correctly in its rest pose when the keyframe animation is bypassed.
The model also renders and animates properly when the animation rotates the model only by the root bone (Bone0).
However, when importing a model that rotates at the first joint (i.e. at Bone1), the vertices clustered around each joint seem 'stuck' in their original positions, while the vertices surrounding the 'bones' proper appear to follow through with the correct animation.
The result is a crappy zigzag of stretched geometry like so:
Instead the model should resemble an 'allen-key' shape at the end of its animation pose, as shown by the same model rendered in the AssimpViewer utility tool:
Since the model is rendering correctly in AssimpViewer, it's reasonable to assume there are no issues with the FBX file exported by Blender. I then checked and confirmed that the vertices 'stuck' around the joints did indeed have their vertex weights correctly assigned by the game loading code.
The C++ model loading and animation code is based on the popular OGLDev tutorial: https://ogldev.org/www/tutorial38/tutorial38.html
Now the infuriating thing is, since the AssimpViewer tool was correctly rendering the model animation, I also copied in the SceneAnimator and AnimEvaluator classes from that tool to generate the final bone transforms via that code branch as well... only to end up with exactly the same zigzag bug in the game!
I'm reasonably confident there aren't any issues with finding the bone hierarchy structure at initialization, so here are the key functions that traverse the hierarchy and interpolate key frames each frame.
VOID Mesh::ReadNodeHeirarchy(FLOAT animationTime, CONST aiNode* pNode, CONST aiAnimation* pAnim, CONST aiMatrix4x4 parentTransform)
{
    std::string nodeName(pNode->mName.data);

    // nodeTransform is a relative transform to parent node space
    aiMatrix4x4 nodeTransform = pNode->mTransformation;
    CONST aiNodeAnim* pNodeAnim = FindNodeAnim(pAnim, nodeName);

    if (pNodeAnim)
    {
        // Interpolate scaling and generate scaling transformation matrix
        aiVector3D scaling(1.f, 1.f, 1.f);
        CalcInterpolatedScaling(scaling, animationTime, pNodeAnim);

        // Interpolate rotation and generate rotation transformation matrix
        aiQuaternion rotationQ(1.f, 0.f, 0.f, 0.f);
        CalcInterpolatedRotation(rotationQ, animationTime, pNodeAnim);

        // Interpolate translation and generate translation transformation matrix
        aiVector3D translat(0.f, 0.f, 0.f);
        CalcInterpolatedPosition(translat, animationTime, pNodeAnim);

        // build the SRT transform matrix
        nodeTransform = aiMatrix4x4(rotationQ.GetMatrix());
        nodeTransform.a1 *= scaling.x; nodeTransform.b1 *= scaling.x; nodeTransform.c1 *= scaling.x;
        nodeTransform.a2 *= scaling.y; nodeTransform.b2 *= scaling.y; nodeTransform.c2 *= scaling.y;
        nodeTransform.a3 *= scaling.z; nodeTransform.b3 *= scaling.z; nodeTransform.c3 *= scaling.z;
        nodeTransform.a4 = translat.x; nodeTransform.b4 = translat.y; nodeTransform.c4 = translat.z;
    }

    aiMatrix4x4 globalTransform = parentTransform * nodeTransform;

    if (m_boneMapping.find(nodeName) != m_boneMapping.end())
    {
        UINT boneIndex = m_boneMapping[nodeName];
        // the global inverse transform returns us to mesh space!!!
        m_boneInfo[boneIndex].FinalTransform = m_globalInverseTransform * globalTransform * m_boneInfo[boneIndex].BoneOffset;
        //m_boneInfo[boneIndex].FinalTransform = m_boneInfo[boneIndex].BoneOffset * globalTransform * m_globalInverseTransform;
        m_shaderTransforms[boneIndex] = aiMatrixToSimpleMatrix(m_boneInfo[boneIndex].FinalTransform);
    }

    for (UINT i = 0u; i < pNode->mNumChildren; i++)
    {
        ReadNodeHeirarchy(animationTime, pNode->mChildren[i], pAnim, globalTransform);
    }
}
VOID Mesh::CalcInterpolatedRotation(aiQuaternion& out, FLOAT animationTime, CONST aiNodeAnim* pNodeAnim)
{
    UINT rotationKeys = pNodeAnim->mNumRotationKeys;

    // we need at least two values to interpolate...
    if (rotationKeys == 1u)
    {
        CONST aiQuaternion& key = pNodeAnim->mRotationKeys[0u].mValue;
        out = key;
        return;
    }

    UINT rotationIndex = FindRotation(animationTime, pNodeAnim);
    UINT nextRotationIndex = (rotationIndex + 1u) % rotationKeys;
    assert(nextRotationIndex < rotationKeys);

    CONST aiQuatKey& key = pNodeAnim->mRotationKeys[rotationIndex];
    CONST aiQuatKey& nextKey = pNodeAnim->mRotationKeys[nextRotationIndex];

    FLOAT deltaTime = FLOAT(nextKey.mTime) - FLOAT(key.mTime);
    FLOAT factor = (animationTime - FLOAT(key.mTime)) / deltaTime;
    assert(factor >= 0.f && factor <= 1.f);

    aiQuaternion::Interpolate(out, key.mValue, nextKey.mValue, factor);
}
I've just included the rotation interpolation here, since the scaling and translation functions are identical. For those unaware, Assimp's aiMatrix4x4 type follows a column-vector math convention, so I haven't changed the original matrix multiplication order.
About the only deviation between my code and the two Assimp-based code branches I've adopted is the requirement to convert the final transforms from aiMatrix4x4 types into a DirectXTK SimpleMath Matrix (really an XMMATRIX) with this conversion function:
Matrix Mesh::aiMatrixToSimpleMatrix(CONST aiMatrix4x4 m)
{
    return Matrix(m.a1, m.a2, m.a3, m.a4,
                  m.b1, m.b2, m.b3, m.b4,
                  m.c1, m.c2, m.c3, m.c4,
                  m.d1, m.d2, m.d3, m.d4);
}
Because of the column-vector orientation of Assimp's aiMatrix4x4 matrices, the final bone transforms are not transposed for HLSL consumption. The array of final bone transforms is passed to the skinning vertex shader constant buffer as follows.
commandList->SetPipelineState(m_psoForwardSkinned.Get()); // set PSO

// Update vertex shader with current bone transforms
CONST std::vector<Matrix> transforms = m_assimpModel.GetShaderTransforms();
VSBonePassConstants vsBoneConstants{};
for (UINT i = 0; i < m_assimpModel.GetNumBones(); i++)
{
    // We do not transpose bone matrices for HLSL because the original
    // Assimp matrices are column-vector matrices.
    vsBoneConstants.boneTransforms[i] = transforms[i];
    //vsBoneConstants.boneTransforms[i] = transforms[i].Transpose();
    //vsBoneConstants.boneTransforms[i] = Matrix::Identity;
}
GraphicsResource vsBoneCB = m_graphicsMemory->AllocateConstant(vsBoneConstants);

vsPerObjects.gWorld = m_assimp_world.Transpose(); // vertex shader per object constant
vsPerObjectCB = m_graphicsMemory->AllocateConstant(vsPerObjects);

commandList->SetGraphicsRootConstantBufferView(RootParameterIndex::VSBoneConstantBuffer, vsBoneCB.GpuAddress());
commandList->SetGraphicsRootConstantBufferView(RootParameterIndex::VSPerObjConstBuffer, vsPerObjectCB.GpuAddress());
//commandList->SetGraphicsRootDescriptorTable(RootParameterIndex::ObjectSRV, m_shaderTextureHeap->GetGpuHandle(ShaderTexDescriptors::SuzanneDiffuse));
commandList->SetGraphicsRootDescriptorTable(RootParameterIndex::ObjectSRV, m_shaderTextureHeap->GetGpuHandle(ShaderTexDescriptors::DefaultDiffuse));

for (UINT i = 0; i < m_assimpModel.GetMeshSize(); i++)
{
    commandList->IASetVertexBuffers(0u, 1u, &m_assimpModel.meshEntries[i].GetVertexBufferView());
    commandList->IASetIndexBuffer(&m_assimpModel.meshEntries[i].GetIndexBufferView());
    commandList->IASetPrimitiveTopology(D3D_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
    commandList->DrawIndexedInstanced(m_assimpModel.meshEntries[i].GetIndexCount(), 1u, 0u, 0u, 0u);
}
Please note I am using the GraphicsResource memory-management helper from the DirectXTK12 library in the code above. Finally, here's the skinning vertex shader I'm using.
// Luna (2016) lighting model adapted from Moller
#define MAX_BONES 4

// vertex shader constant data that varies per object
cbuffer cbVSPerObject : register(b3)
{
    float4x4 gWorld;
    //float4x4 gTexTransform;
}

// vertex shader constant data that varies per frame
cbuffer cbVSPerFrame : register(b5)
{
    float4x4 gViewProj;
    float4x4 gShadowTransform;
}

// bone matrix constant data that varies per object
cbuffer cbVSBonesPerObject : register(b9)
{
    float4x4 gBoneTransforms[MAX_BONES];
}

struct VertexIn
{
    float3 posL        : SV_POSITION;
    float3 normalL     : NORMAL;
    float2 texCoord    : TEXCOORD0;
    float3 tangentU    : TANGENT;
    float4 boneWeights : BONEWEIGHT;
    uint4  boneIndices : BONEINDEX;
};

struct VertexOut
{
    float4 posH       : SV_POSITION;
    //float3 posW : POSITION;
    float4 shadowPosH : POSITION0;
    float3 posW       : POSITION1;
    float3 normalW    : NORMAL;
    float2 texCoord   : TEXCOORD0;
    float3 tangentW   : TANGENT;
};

VertexOut VS_main(VertexIn vin)
{
    VertexOut vout = (VertexOut)0.f;

    // Perform vertex skinning.
    // Ignore BoneWeights.w and instead calculate the last weight value
    // to ensure all bone weights sum to unity.
    float4 weights = vin.boneWeights;
    //weights.w = 1.f - dot(weights.xyz, float3(1.f, 1.f, 1.f));
    //float4 weights = { 0.f, 0.f, 0.f, 0.f };
    //weights.x = vin.boneWeights.x;
    //weights.y = vin.boneWeights.y;
    //weights.z = vin.boneWeights.z;
    weights.w = 1.f - (weights.x + weights.y + weights.z);

    float4 localPos = float4(vin.posL, 1.f);
    float3 localNrm = vin.normalL;
    float3 localTan = vin.tangentU;

    float3 objPos = mul(localPos, (float4x3)gBoneTransforms[vin.boneIndices.x]).xyz * weights.x;
    objPos += mul(localPos, (float4x3)gBoneTransforms[vin.boneIndices.y]).xyz * weights.y;
    objPos += mul(localPos, (float4x3)gBoneTransforms[vin.boneIndices.z]).xyz * weights.z;
    objPos += mul(localPos, (float4x3)gBoneTransforms[vin.boneIndices.w]).xyz * weights.w;

    float3 objNrm = mul(localNrm, (float3x3)gBoneTransforms[vin.boneIndices.x]) * weights.x;
    objNrm += mul(localNrm, (float3x3)gBoneTransforms[vin.boneIndices.y]) * weights.y;
    objNrm += mul(localNrm, (float3x3)gBoneTransforms[vin.boneIndices.z]) * weights.z;
    objNrm += mul(localNrm, (float3x3)gBoneTransforms[vin.boneIndices.w]) * weights.w;

    float3 objTan = mul(localTan, (float3x3)gBoneTransforms[vin.boneIndices.x]) * weights.x;
    objTan += mul(localTan, (float3x3)gBoneTransforms[vin.boneIndices.y]) * weights.y;
    objTan += mul(localTan, (float3x3)gBoneTransforms[vin.boneIndices.z]) * weights.z;
    objTan += mul(localTan, (float3x3)gBoneTransforms[vin.boneIndices.w]) * weights.w;

    vin.posL = objPos;
    vin.normalL = objNrm;
    vin.tangentU.xyz = objTan;
    //vin.posL = posL;
    //vin.normalL = normalL;
    //vin.tangentU.xyz = tangentL;
    // End vertex skinning

    // transform to world space
    float4 posW = mul(float4(vin.posL, 1.f), gWorld);
    vout.posW = posW.xyz;

    // assumes nonuniform scaling, otherwise needs inverse-transpose of world matrix
    vout.normalW = mul(vin.normalL, (float3x3)gWorld);
    vout.tangentW = mul(vin.tangentU, (float3x3)gWorld);

    // transform to homogenous clip space
    vout.posH = mul(posW, gViewProj);

    // pass texcoords to pixel shader
    vout.texCoord = vin.texCoord;
    //float4 texC = mul(float4(vin.TexC, 0.0f, 1.0f), gTexTransform);
    //vout.TexC = mul(texC, gMatTransform).xy;

    // generate projective tex-coords to project shadow map onto scene
    vout.shadowPosH = mul(posW, gShadowTransform);
    return vout;
}
Some last tests I tried before posting:
I tested the code with a Collada (DAE) model exported from Blender, only to observe the same distorted zigzagging in the Win32 desktop application.
I also confirmed the aiScene object for the loaded model returns an identity matrix for the global root transform (also verified in AssimpViewer).
I have stared at this code for about a week and am going out of my mind! Really hoping someone can spot what I have missed. If you need more code or info, please ask!
This seems to be a bug in the published code in the tutorials/documentation. It would be great if you could open an issue report on the Assimp project page on GitHub.
It's taken almost another two weeks of pain, but I finally found the bug. It was in my own code, and it was self-inflicted. Before I show the solution, I should explain the further troubleshooting I did to get there.
After losing faith with Assimp (even though the AssimpViewer tool was animating my model correctly), I turned to the FBX SDK. The FBX ViewScene command line utility tool that's available as part of the SDK was also showing and animating my model properly, so I had hope...
So after a few days reviewing the FBX SDK tutorials, and taking another week to write an FBX importer for my Windows desktop game, I loaded my model and... saw exactly the same zig-zag animation anomaly as the version loaded by Assimp!
This frustrating outcome meant I could at least eliminate Assimp and the FBX SDK as the source of the problem, and focus again on the vertex shader. The shader I'm using for vertex skinning was adopted from the 'Character Animation' chapter of Frank Luna's text. It was identical in every way, which led me to recheck the C++ vertex structure declared on the application side...
Here's the C++ vertex declaration for skinned vertices:
struct Vertex
{
    // added constructors
    Vertex() = default;
    Vertex(FLOAT x, FLOAT y, FLOAT z,
           FLOAT nx, FLOAT ny, FLOAT nz,
           FLOAT u, FLOAT v,
           FLOAT tx, FLOAT ty, FLOAT tz) :
        Pos(x, y, z),
        Normal(nx, ny, nz),
        TexC(u, v),
        Tangent(tx, ty, tz) {}
    Vertex(DirectX::SimpleMath::Vector3 pos,
           DirectX::SimpleMath::Vector3 normal,
           DirectX::SimpleMath::Vector2 texC,
           DirectX::SimpleMath::Vector3 tangent) :
        Pos(pos), Normal(normal), TexC(texC), Tangent(tangent) {}

    DirectX::SimpleMath::Vector3 Pos;
    DirectX::SimpleMath::Vector3 Normal;
    DirectX::SimpleMath::Vector2 TexC;
    DirectX::SimpleMath::Vector3 Tangent;
    FLOAT BoneWeights[4];
    BYTE  BoneIndices[4];
    //UINT BoneIndices[4]; <--- YOU HAVE CAUSED ME A MONTH OF PAIN
};
Quite early on, being confused by Luna's use of BYTE to store the array of bone indices, I changed this structure element to UINT, figuring this still matched the input declaration shown here:
static CONST D3D12_INPUT_ELEMENT_DESC inputElementDescSkinned[] =
{
    { "SV_POSITION", 0u, DXGI_FORMAT_R32G32B32_FLOAT, 0u, D3D12_APPEND_ALIGNED_ELEMENT, D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0u },
    { "NORMAL", 0u, DXGI_FORMAT_R32G32B32_FLOAT, 0u, D3D12_APPEND_ALIGNED_ELEMENT, D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0u },
    { "TEXCOORD", 0u, DXGI_FORMAT_R32G32_FLOAT, 0u, D3D12_APPEND_ALIGNED_ELEMENT, D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0u },
    { "TANGENT", 0u, DXGI_FORMAT_R32G32B32_FLOAT, 0u, D3D12_APPEND_ALIGNED_ELEMENT, D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0u },
    //{ "BINORMAL", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, D3D12_APPEND_ALIGNED_ELEMENT, D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0 },
    { "BONEWEIGHT", 0u, DXGI_FORMAT_R32G32B32A32_FLOAT, 0u, D3D12_APPEND_ALIGNED_ELEMENT, D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0u },
    { "BONEINDEX", 0u, DXGI_FORMAT_R8G8B8A8_UINT, 0u, D3D12_APPEND_ALIGNED_ELEMENT, D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0u },
};
Here was the bug. By declaring UINT in the vertex structure for bone indices, four bytes were being assigned to store each bone index. But in the vertex input declaration, the DXGI_FORMAT_R8G8B8A8_UINT format specified for "BONEINDEX" assigns one byte per index. I suspect this data type and format size mismatch meant only one valid bone index could fit in the BONEINDEX element, so only one index value was passed to the vertex shader for each vertex, instead of the whole array of four indices needed for correct bone transform lookups.
So now I've learned... the hard way... why Luna had declared an array of BYTE for bone indices in the original C++ vertex structure.
I hope this experience will be of value to someone else. Always be careful when changing code from your original learning sources.

How to make Three.js ShaderMaterial gradient to transparent

I want to make a two-color gradient transparent. In the image below you can see what I mean: on the left is the final mesh, and on the right a single face.
I'm trying to achieve this with a vertex shader and a fragment shader, but unfortunately I can't figure it out. Hopefully somebody can help me.
I have this so far:
var custom3Material = new this.$three.ShaderMaterial({
    uniforms: {
        vlak3color1: { value: new this.$three.Color('#31c7de') },
        vlak3color2: { type: 'vec2', value: new this.$three.Color('#de3c31') },
        positionVlak3: { value: -3.5 },
    },
    vertexShader: `
        varying vec3 vUv;
        void main() {
            vUv = position;
            gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
        }
    `,
    fragmentShader: `
        uniform vec3 vlak3color1;
        uniform vec3 vlak3color2;
        uniform float positionVlak3;
        varying vec3 vUv;
        void main() {
            gl_FragColor = vec4(mix(vlak3color1, vlak3color2, vUv.y - positionVlak3), 1);
        }
    `,
});
I would like to be able to adjust the position between the two colors and the transparency afterward.
Thanks in advance!
You must make your material transparent by adding transparent: true to its attributes.
vlak3color2: {type: 'vec2', value: new this.$three.Color('#de3c31')} is confusing. Why are you trying to make a color of type vec2? Just get rid of the type, you don't need it. Three.js automatically recognizes the type when it sees it's a Color.
The fourth value of gl_FragColor is the alpha. Right now you're setting it to a constant 1, so you're getting a fully opaque color. Try making it fade from 0 to 1 with smoothstep():
void main() {
    // y < 0 = transparent, > 1 = opaque
    float alpha = smoothstep(0.0, 1.0, vUv.y);
    // y < 1 = color1, > 2 = color2
    float colorMix = smoothstep(1.0, 2.0, vUv.y);

    gl_FragColor = vec4(mix(vlak3color1, vlak3color2, colorMix), alpha);
}

SceneKit – Drawing a line between two points

I have two points (let's call them pointA and pointB) of type SCNVector3. I want to draw a line between them. Seems like it should be easy, but can't find a way to do it.
I see two options, both have issues:
Use a SCNCylinder with a small radius, with length |pointA-pointB| and then position it/rotate it.
Use a custom SCNGeometry but not sure how; would have to define two triangles to form a very thin rectangle perhaps?
It seems like there should be an easier way of doing this, but I can't seem to find one.
Edit: Using the triangle method gives me this for drawing a line between (0,0,0) and (10,10,10):
CGFloat delta = 0.1;
SCNVector3 positions[] = { SCNVector3Make(0, 0, 0),
                           SCNVector3Make(10, 10, 10),
                           SCNVector3Make(0 + delta, 0 + delta, 0 + delta),
                           SCNVector3Make(10 + delta, 10 + delta, 10 + delta) };
int indices[] = {
    0, 2, 1,
    1, 2, 3
};
SCNGeometrySource *vertexSource = [SCNGeometrySource geometrySourceWithVertices:positions count:4];
NSData *indexData = [NSData dataWithBytes:indices length:sizeof(indices)];
SCNGeometryElement *element = [SCNGeometryElement geometryElementWithData:indexData primitiveType:SCNGeometryPrimitiveTypeTriangles primitiveCount:2 bytesPerIndex:sizeof(int)];
SCNGeometry *line = [SCNGeometry geometryWithSources:@[vertexSource] elements:@[element]];
SCNNode *lineNode = [SCNNode nodeWithGeometry:line];
[root addChildNode:lineNode];
But there are problems: due to the normals, you can only see this line from one side! It's invisible from the other side. Also, if "delta" is too small you can't see the line at all. As it is, it's technically a rectangle, rather than the line I was going for, which might result in small graphical glitches if I want to draw multiple joined up lines.
Here's a simple extension in Swift:
extension SCNGeometry {
    class func lineFrom(vector vector1: SCNVector3, toVector vector2: SCNVector3) -> SCNGeometry {
        let indices: [Int32] = [0, 1]
        let source = SCNGeometrySource(vertices: [vector1, vector2])
        let element = SCNGeometryElement(indices: indices, primitiveType: .line)
        return SCNGeometry(sources: [source], elements: [element])
    }
}
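Usage might look like this (a quick sketch, assuming a scene constant is in scope):

let line = SCNGeometry.lineFrom(vector: SCNVector3(0, 0, 0),
                                toVector: SCNVector3(10, 10, 10))
let lineNode = SCNNode(geometry: line)
scene.rootNode.addChildNode(lineNode)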
There are lots of ways to do this.
As noted, your custom geometry approach has some disadvantages. You should be able to correct the problem of it being invisible from one side by giving its material the doubleSided property. You still may have issues with it being two-dimensional, though.
You could also modify your custom geometry to include more triangles, so you get a tube shape with three or more sides instead of a flat rectangle. Or just have two points in your geometry source, and use the SCNGeometryPrimitiveTypeLine geometry element type to have Scene Kit draw a line segment between them. (Though you won't get as much flexibility in rendering styles with line drawing as with shaded polygons.)
You can also use the SCNCylinder approach you mentioned (or any of the other built-in primitive shapes). Remember that geometries are defined in their own local (aka Model) coordinate space, which Scene Kit interprets relative to the coordinate space defined by a node. In other words, you can define a cylinder (or box or capsule or plane or whatever) that's 1.0 units wide in all dimensions, then use the rotation/scale/position or transform of the SCNNode containing that geometry to make it long, thin, and stretching between the two points you want. (Also note that since your line is going to be pretty thin, you can reduce the segmentCounts of whichever built-in geometry you're using, because that much detail won't be visible.)
Yet another option is the SCNShape class that lets you create an extruded 3D object from a 2D Bézier path. Working out the right transform to get a plane connecting two arbitrary points sounds like some fun math, but once you do it you could easily connect your points with any shape of line you choose.
New code for a line from (0, 0, 0) to (10, 10, 10) below.
I'm not sure if it could be improved further.
SCNVector3 positions[] = {
    SCNVector3Make(0.0, 0.0, 0.0),
    SCNVector3Make(10.0, 10.0, 10.0)
};
int indices[] = {0, 1};
SCNGeometrySource *vertexSource = [SCNGeometrySource geometrySourceWithVertices:positions
                                                                          count:2];
NSData *indexData = [NSData dataWithBytes:indices
                                   length:sizeof(indices)];
SCNGeometryElement *element = [SCNGeometryElement geometryElementWithData:indexData
                                                            primitiveType:SCNGeometryPrimitiveTypeLine
                                                           primitiveCount:1
                                                            bytesPerIndex:sizeof(int)];
SCNGeometry *line = [SCNGeometry geometryWithSources:@[vertexSource]
                                            elements:@[element]];
SCNNode *lineNode = [SCNNode nodeWithGeometry:line];
[root addChildNode:lineNode];
Here's one solution
class func lineBetweenNodeA(nodeA: SCNNode, nodeB: SCNNode) -> SCNNode {
    let positions: [Float32] = [nodeA.position.x, nodeA.position.y, nodeA.position.z,
                                nodeB.position.x, nodeB.position.y, nodeB.position.z]
    let positionData = NSData(bytes: positions, length: MemoryLayout<Float32>.size * positions.count)
    let indices: [Int32] = [0, 1]
    let indexData = NSData(bytes: indices, length: MemoryLayout<Int32>.size * indices.count)

    let source = SCNGeometrySource(data: positionData as Data,
                                   semantic: SCNGeometrySource.Semantic.vertex,
                                   vectorCount: indices.count,
                                   usesFloatComponents: true,
                                   componentsPerVector: 3,
                                   bytesPerComponent: MemoryLayout<Float32>.size,
                                   dataOffset: 0,
                                   dataStride: MemoryLayout<Float32>.size * 3)
    let element = SCNGeometryElement(data: indexData as Data,
                                     primitiveType: SCNGeometryPrimitiveType.line,
                                     primitiveCount: indices.count / 2, // one line segment per pair of indices
                                     bytesPerIndex: MemoryLayout<Int32>.size)
    let line = SCNGeometry(sources: [source], elements: [element])
    return SCNNode(geometry: line)
}
If you would like to change the line width or other properties of the drawn line, you'll want to use one of the OpenGL calls in SceneKit's rendering callback:
func renderer(aRenderer: SCNSceneRenderer, willRenderScene scene: SCNScene, atTime time: NSTimeInterval) {
    // Makes the lines thicker
    glLineWidth(20)
}
Here is a Swift 5 version:
func lineBetweenNodes(positionA: SCNVector3, positionB: SCNVector3, inScene: SCNScene) -> SCNNode {
    let vector = SCNVector3(positionA.x - positionB.x, positionA.y - positionB.y, positionA.z - positionB.z)
    let distance = sqrt(vector.x * vector.x + vector.y * vector.y + vector.z * vector.z)
    let midPosition = SCNVector3(x: (positionA.x + positionB.x) / 2, y: (positionA.y + positionB.y) / 2, z: (positionA.z + positionB.z) / 2)

    let lineGeometry = SCNCylinder()
    lineGeometry.radius = 0.05
    lineGeometry.height = CGFloat(distance)
    lineGeometry.radialSegmentCount = 5
    lineGeometry.firstMaterial!.diffuse.contents = UIColor.green // any color

    let lineNode = SCNNode(geometry: lineGeometry)
    lineNode.position = midPosition
    lineNode.look(at: positionB, up: inScene.rootNode.worldUp, localFront: lineNode.worldUp)
    return lineNode
}
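A usage sketch (assuming scene is your SCNScene):

let lineNode = lineBetweenNodes(positionA: SCNVector3(0, 0, 0),
                                positionB: SCNVector3(10, 10, 10),
                                inScene: scene)
scene.rootNode.addChildNode(lineNode)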
So inside your ViewController.cs, define your vector points and call a draw function; the last line there just rotates the cylinder to look at point b.
var a = someVector3;
var b = someOtherVector3;
nfloat cLength = (nfloat)Vector3Helper.DistanceBetweenPoints(a, b);
var cylinderLine = CreateGeometry.DrawCylinderBetweenPoints(a, b, cLength, 0.05f, 10);
ARView.Scene.RootNode.Add(cylinderLine);
cylinderLine.Look(b, ARView.Scene.RootNode.WorldUp, cylinderLine.WorldUp);
Create a static CreateGeometry class and put this static method in there:
public static SCNNode DrawCylinderBetweenPoints(SCNVector3 a, SCNVector3 b, nfloat length, nfloat radius, int radialSegments)
{
    SCNNode cylinderNode;
    SCNCylinder cylinder = new SCNCylinder();
    cylinder.Radius = radius;
    cylinder.Height = length;
    cylinder.RadialSegmentCount = radialSegments;
    cylinderNode = SCNNode.FromGeometry(cylinder);
    cylinderNode.Position = Vector3Helper.GetMidpoint(a, b);
    return cylinderNode;
}
You may also want these utility methods in a static helper class:
public static double DistanceBetweenPoints(SCNVector3 a, SCNVector3 b)
{
    SCNVector3 vector = new SCNVector3(a.X - b.X, a.Y - b.Y, a.Z - b.Z);
    return Math.Sqrt(vector.X * vector.X + vector.Y * vector.Y + vector.Z * vector.Z);
}

public static SCNVector3 GetMidpoint(SCNVector3 a, SCNVector3 b)
{
    float x = (a.X + b.X) / 2;
    float y = (a.Y + b.Y) / 2;
    float z = (a.Z + b.Z) / 2;
    return new SCNVector3(x, y, z);
}
For all my Xamarin C# homies out there.
Here's a solution using triangles that works independently of the direction of the line.
It's constructed using the cross product to get points perpendicular to the line. So you'll need a small SCNVector3 extension, but it'll probably come in handy in other cases, too.
private func makeRect(startPoint: SCNVector3, endPoint: SCNVector3, width: Float) -> SCNGeometry {
    let dir = (endPoint - startPoint).normalized()
    // perpendicular to the line in the horizontal plane
    // (SCNNode.localUp is simd-typed, so use the equivalent SCNVector3 up axis here)
    let perp = dir.cross(SCNVector3(0, 1, 0)) * width / 2

    let firstPoint = startPoint + perp
    let secondPoint = startPoint - perp
    let thirdPoint = endPoint + perp
    let fourthPoint = endPoint - perp
    let points = [firstPoint, secondPoint, thirdPoint, fourthPoint]

    let indices: [UInt16] = [
        1, 0, 2,
        1, 2, 3
    ]

    let geoSource = SCNGeometrySource(vertices: points)
    let geoElement = SCNGeometryElement(indices: indices, primitiveType: .triangles)
    let geo = SCNGeometry(sources: [geoSource], elements: [geoElement])
    geo.firstMaterial?.diffuse.contents = UIColor.blue.cgColor
    return geo
}
SCNVector3 extension:
import Foundation
import SceneKit

extension SCNVector3
{
    /**
     * Returns the length (magnitude) of the vector described by the SCNVector3.
     */
    func length() -> Float {
        return sqrtf(x * x + y * y + z * z)
    }

    /**
     * Normalizes the vector described by the SCNVector3 to length 1.0 and returns
     * the result as a new SCNVector3.
     */
    func normalized() -> SCNVector3 {
        return self / length()
    }

    /**
     * Calculates the cross product between two SCNVector3.
     */
    func cross(_ vector: SCNVector3) -> SCNVector3 {
        return SCNVector3(y * vector.z - z * vector.y,
                          z * vector.x - x * vector.z,
                          x * vector.y - y * vector.x)
    }

    // The arithmetic operators used by makeRect(startPoint:endPoint:width:) above.
    static func + (left: SCNVector3, right: SCNVector3) -> SCNVector3 {
        return SCNVector3(left.x + right.x, left.y + right.y, left.z + right.z)
    }

    static func - (left: SCNVector3, right: SCNVector3) -> SCNVector3 {
        return SCNVector3(left.x - right.x, left.y - right.y, left.z - right.z)
    }

    static func * (vector: SCNVector3, scalar: Float) -> SCNVector3 {
        return SCNVector3(vector.x * scalar, vector.y * scalar, vector.z * scalar)
    }

    static func / (vector: SCNVector3, scalar: Float) -> SCNVector3 {
        return SCNVector3(vector.x / scalar, vector.y / scalar, vector.z / scalar)
    }
}
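A usage sketch for makeRect (the width value is illustrative, and the double-sided material addresses the one-sided visibility issue mentioned in the question):

let geometry = makeRect(startPoint: SCNVector3(0, 0, 0),
                        endPoint: SCNVector3(10, 0, 10),
                        width: 0.1)
geometry.firstMaterial?.isDoubleSided = true // visible from both sides
scene.rootNode.addChildNode(SCNNode(geometry: geometry))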
Swift version
To generate a line in the form of a cylinder with a certain position and orientation, let's implement an SCNGeometry extension with a cylinderLine() class method. The toughest part here is the trigonometry (defining the cylinder's direction). Here it is:
import SceneKit

extension SCNGeometry {
    class func cylinderLine(from: SCNVector3, to: SCNVector3,
                            segments: Int = 5) -> SCNNode {
        let x1 = from.x; let x2 = to.x
        let y1 = from.y; let y2 = to.y
        let z1 = from.z; let z2 = to.z

        let subExpr01 = Float((x2 - x1) * (x2 - x1))
        let subExpr02 = Float((y2 - y1) * (y2 - y1))
        let subExpr03 = Float((z2 - z1) * (z2 - z1))
        let distance = CGFloat(sqrtf(subExpr01 + subExpr02 + subExpr03))

        let cylinder = SCNCylinder(radius: 0.005, height: CGFloat(distance))
        cylinder.radialSegmentCount = segments
        cylinder.firstMaterial?.diffuse.contents = NSColor.systemYellow

        let lineNode = SCNNode(geometry: cylinder)
        lineNode.position = SCNVector3((x1 + x2) / 2, (y1 + y2) / 2, (z1 + z2) / 2)
        lineNode.eulerAngles = SCNVector3(x: CGFloat.pi / 2,
                                          y: acos((to.z - from.z) / CGFloat(distance)),
                                          z: atan2((to.y - from.y), (to.x - from.x)))
        return lineNode
    }
}
The rest is easy.
class ViewController: NSViewController {
    @IBOutlet var sceneView: SCNView!
    let scene = SCNScene()
    var startingPoint: SCNVector3!
    var endingPoint: SCNVector3!

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.scene = scene
        sceneView.backgroundColor = NSColor.black
        sceneView.allowsCameraControl = true
        self.startingPoint = SCNVector3Zero
        self.endingPoint = SCNVector3(1, 1, 1)
        self.lineInBetween()
    }

    func lineInBetween() {
        self.addSphereDot(position: startingPoint)
        self.addSphereDot(position: endingPoint)
        self.addLine(start: startingPoint, end: endingPoint)
    }

    func addSphereDot(position: SCNVector3) {
        let sphere = SCNSphere(radius: 0.03)
        sphere.firstMaterial?.diffuse.contents = NSColor.red
        let node = SCNNode(geometry: sphere)
        node.position = position
        scene.rootNode.addChildNode(node)
    }

    func addLine(start: SCNVector3, end: SCNVector3) {
        let lineNode = SCNGeometry.cylinderLine(from: start, to: end)
        scene.rootNode.addChildNode(lineNode)
    }
}

Three.js - Fisheye effect

So, I've messed around with three.js and it works out great. The only thing I can't figure out is how to make a camera with a real fisheye effect.
How is that possible? camera.setLens()?
The fish eye effect can be achieved using Giliam de Carpentier's shader for lens distortion.
Shader code:
function getDistortionShaderDefinition()
{
    return {
        uniforms: {
            "tDiffuse":         { type: "t", value: null },
            "strength":         { type: "f", value: 0 },
            "height":           { type: "f", value: 1 },
            "aspectRatio":      { type: "f", value: 1 },
            "cylindricalRatio": { type: "f", value: 1 }
        },

        vertexShader: [
            "uniform float strength;",          // s: 0 = perspective, 1 = stereographic
            "uniform float height;",            // h: tan(verticalFOVInRadians / 2)
            "uniform float aspectRatio;",       // a: screenWidth / screenHeight
            "uniform float cylindricalRatio;",  // c: cylindrical distortion ratio. 1 = spherical

            "varying vec3 vUV;",                // output to interpolate over screen
            "varying vec2 vUVDot;",             // output to interpolate over screen

            "void main() {",
                "gl_Position = projectionMatrix * (modelViewMatrix * vec4(position, 1.0));",

                "float scaledHeight = strength * height;",
                "float cylAspectRatio = aspectRatio * cylindricalRatio;",
                "float aspectDiagSq = aspectRatio * aspectRatio + 1.0;",
                "float diagSq = scaledHeight * scaledHeight * aspectDiagSq;",
                "vec2 signedUV = (2.0 * uv + vec2(-1.0, -1.0));",

                "float z = 0.5 * sqrt(diagSq + 1.0) + 0.5;",
                "float ny = (z - 1.0) / (cylAspectRatio * cylAspectRatio + 1.0);",

                "vUVDot = sqrt(ny) * vec2(cylAspectRatio, 1.0) * signedUV;",
                "vUV = vec3(0.5, 0.5, 1.0) * z + vec3(-0.5, -0.5, 0.0);",
                "vUV.xy += uv;",
            "}"
        ].join("\n"),

        fragmentShader: [
            "uniform sampler2D tDiffuse;",      // sampler of rendered scene's render target
            "varying vec3 vUV;",                // interpolated vertex output data
            "varying vec2 vUVDot;",             // interpolated vertex output data

            "void main() {",
                "vec3 uv = dot(vUVDot, vUVDot) * vec3(-0.5, -0.5, -1.0) + vUV;",
                "gl_FragColor = texture2DProj(tDiffuse, uv);",
            "}"
        ].join("\n")
    };
}
One way to set up the effect using the effect composer (assuming scene and renderer have been created):
// Create camera
camera = new THREE.PerspectiveCamera( 100, window.innerWidth / window.innerHeight, 1, 1000000 );
camera.position.z = 800;
// Create effect composer
composer = new THREE.EffectComposer( renderer );
composer.addPass( new THREE.RenderPass( scene, camera ) );
// Add distortion effect to effect composer
var effect = new THREE.ShaderPass( getDistortionShaderDefinition() );
composer.addPass( effect );
effect.renderToScreen = true;
// Setup distortion effect
var horizontalFOV = 140;
var strength = 0.5;
var cylindricalRatio = 2;
var height = Math.tan(THREE.Math.degToRad(horizontalFOV) / 2) / camera.aspect;
camera.fov = Math.atan(height) * 2 * 180 / 3.1415926535;
camera.updateProjectionMatrix();
effect.uniforms[ "strength" ].value = strength;
effect.uniforms[ "height" ].value = height;
effect.uniforms[ "aspectRatio" ].value = camera.aspect;
effect.uniforms[ "cylindricalRatio" ].value = cylindricalRatio;
The following scripts are needed; they can be found, for example, in the three.js GitHub repository:
<script src="examples/js/postprocessing/EffectComposer.js"></script>
<script src="examples/js/postprocessing/RenderPass.js"></script>
<script src="examples/js/postprocessing/MaskPass.js"></script>
<script src="examples/js/postprocessing/ShaderPass.js"></script>
<script src="examples/js/shaders/CopyShader.js"></script>
Link to Giliam's example: http://www.decarpentier.nl/downloads/lensdistortion-webgl/lensdistortion-webgl.html
Link to Giliam's article about lens distortion: http://www.decarpentier.nl/lens-distortion
Image of my test where the lens distortion effect is used:
Put a camera inside a reflective sphere. Make sure the sphere is double sided. Parent the camera and sphere together if you want to move it around your scene. Works like a charm:
http://tileableart.com/code/NOCosmos/test.html
borrowed from:
http://mrdoob.github.io/three.js/examples/webgl_materials_cubemap_dynamic2.html
cubeCamera = new THREE.CubeCamera( 1, 3000, 1024);
cubeCamera.renderTarget.minFilter = THREE.LinearMipMapLinearFilter;
scene.add( cubeCamera );
camParent.add(cubeCamera);
var material = new THREE.MeshBasicMaterial( { envMap: cubeCamera.renderTarget } );
material.side = THREE.DoubleSide;
sphere = new THREE.Mesh( new THREE.SphereGeometry( 2, 60, 30 ), material );
It's possible to get the fisheye effect with a high Field of View.
var fishCamera = new THREE.PerspectiveCamera( 110, window.innerWidth / window.innerHeight, 1, 1100 );
var normalCamera = new THREE.PerspectiveCamera( 50, window.innerWidth / window.innerHeight, 1, 1100 );
or set
camera.fov = 110
camera.updateProjectionMatrix();
Live Example here:
http://mrdoob.github.com/three.js/examples/canvas_geometry_panorama_fisheye.html
One way is to set a large field of view on the camera:
new THREE.PerspectiveCamera(140, ... )
This will not technically be a fisheye effect, but it may be the effect you're looking for.
In a real camera lens, getting a large field of view without distortion would likely make the lens pretty expensive, but in computer graphics, it's the easy way.
A real fisheye lens distorts the image so that straight lines become curved, like in this image:
If you want to create an actual fisheye effect with this kind of distortion, you would have to modify the geometry, as in Three.js's fisheye example. In that example, the geometry is actually modified beforehand, but for a more advanced scene, you'd want to use a vertex shader to update the vertices on the fly.
A wide-angle lens generally has a very short focal length.
To achieve an extreme wide angle, we need to reduce the focal length.
Note that a fisheye lens is an extreme wide-angle lens.
To reduce the focal length (or achieve extreme wide angles), one can simply increase the FOV (field of view), since FOV is inversely proportional to focal length.
example:
var camera_1 = new THREE.PerspectiveCamera( 45, width / height, 1, 1000 );
var camera_2 = new THREE.PerspectiveCamera( 80, width / height, 1, 1000 );
Here, camera_2 is the wider-angle setup.
Note: to achieve the desired effect, you may have to adjust the camera position.