Calculating forward kinematics using the D-H matrix for a robot arm

I have a 6-DOF robot arm model:
[image: robot arm structure]
I want to calculate the forward kinematics, so I use the D-H matrix. The D-H parameters are:
// theta (degrees)
static const std::vector<float> theta = {
0, 0, 90.0f, 0, -90.0f, 0 };
// d
static const std::vector<float> d = {
380.948f, 0, 0, -560.18f, 0, 0 };
// a
static const std::vector<float> a = {
-220.0f, 522.331f, 80.0f, 0, 0, 94.77f };
// alpha (degrees)
static const std::vector<float> alpha = {
90.0f, 0, 90.0f, -90.0f, -90.0f, 0 };
and the calculation:
glm::mat4 Robothand::armForKinematics() noexcept
{
glm::mat4 pose(1.0f);
float cos_theta, sin_theta, cos_alpha, sin_alpha;
for (auto i = 0; i < 6;i++)
{
cos_theta = cosf(glm::radians(theta[i]));
sin_theta = sinf(glm::radians(theta[i]));
cos_alpha = cosf(glm::radians(alpha[i]));
sin_alpha = sinf(glm::radians(alpha[i]));
glm::mat4 Ai = {
cos_theta, -sin_theta * cos_alpha,sin_theta * sin_alpha, a[i] * cos_theta,
sin_theta, cos_theta * cos_alpha, -cos_theta * sin_alpha,a[i] * sin_theta,
0, sin_alpha, cos_alpha, d[i],
0, 0, 0, 1 };
pose = pose * Ai;
}
return pose;
}
The problem I have is that I can't get the correct result. For example, to calculate the transformation matrix from the first joint to the 4th joint, I change the for loop condition to i < 3 to get the pose matrix, and then I get the origin of the 4th coordinate system with pose * (0,0,0,1). But the result (380.948, 382.331, 0) does not seem correct, because the frame should be translated along the x-axis, not the y-axis. I have read many books and materials about the D-H matrix, but I can't figure out what's wrong.

I have figured it out myself. The real problem is glm::mat4: glm matrices are column-major, so a brace initializer is consumed column by column, while I had written each Ai row by row, which effectively transposed it. I changed the code and got the correct result:
for (int i = 0; i < joint_num; ++i)
{
pose = glm::rotate(pose, glm::radians(degrees[i]), glm::vec3(0, 0, 1));
pose = glm::translate(pose,glm::vec3(0,0,d[i]));
pose = glm::translate(pose, glm::vec3(a[i], 0, 0));
pose = glm::rotate(pose,glm::radians(alpha[i]),glm::vec3(1,0,0));
}
Then I can get the position by:
auto pos = pose * glm::vec4(x,y,z,1);
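Equivalently, here is a minimal sketch (not the code I ended up using) that keeps the closed-form D-H matrix but compensates for glm's column-major constructor, by writing the matrix row by row and transposing it; it assumes the same theta/d/a/alpha vectors as above:
glm::mat4 armForKinematicsClosedForm() noexcept
{
    // Sketch only: the D-H parameter vectors above are assumed to be in scope.
    glm::mat4 pose(1.0f);
    for (int i = 0; i < 6; ++i)
    {
        const float ct = cosf(glm::radians(theta[i]));
        const float st = sinf(glm::radians(theta[i]));
        const float ca = cosf(glm::radians(alpha[i]));
        const float sa = sinf(glm::radians(alpha[i]));
        // Written row by row for readability; glm::mat4's constructor consumes
        // values column by column, so transpose before using it.
        const glm::mat4 rowMajorAi = {
            ct,   -st * ca,  st * sa, a[i] * ct,
            st,    ct * ca, -ct * sa, a[i] * st,
            0.0f,  sa,       ca,      d[i],
            0.0f,  0.0f,     0.0f,    1.0f };
        pose = pose * glm::transpose(rowMajorAi);
    }
    return pose;
}
Both versions should produce the same pose; the transpose only compensates for the order in which the constructor consumes its values.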

Related

DirectX 11 heightmap texture real-time modification problem

I'm making a terrain tool.
I made a 2D texture and am using it as a height map.
I want to change a specific part of the heightmap, but I'm having a problem: I modified only a small region, yet the whole landscape of the texture changed.
I would like to know the cause of this problem and how to solve it.
Thank you.
HeightMap ShaderResourceView creation code:
void TerrainRenderer::BuildHeightmapSRV(ID3D11Device* device)
{
ReleaseCOM(mHeightMapSRV);
ReleaseCOM(m_hmapTex);
D3D11_TEXTURE2D_DESC texDesc;
texDesc.Width = m_terrainData.HeightmapWidth; //basic value 2049
texDesc.Height = m_terrainData.HeightmapHeight; //basic value 2049
texDesc.MipLevels = 1;
texDesc.ArraySize = 1;
texDesc.Format = DXGI_FORMAT_R16_FLOAT;
texDesc.SampleDesc.Count = 1;
texDesc.SampleDesc.Quality = 0;
texDesc.Usage = D3D11_USAGE_DYNAMIC;
texDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
texDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
texDesc.MiscFlags = 0;
// HALF is defined in xnamath.h, for storing 16-bit float.
std::vector<HALF> hmap(mHeightmap.size());
//current mHeightmap is all zero.
std::transform(mHeightmap.begin(), mHeightmap.end(), hmap.begin(), XMConvertFloatToHalf);
D3D11_SUBRESOURCE_DATA data;
data.pSysMem = &hmap[0];
data.SysMemPitch = m_terrainData.HeightmapWidth * sizeof(HALF);
data.SysMemSlicePitch = 0;
HR(device->CreateTexture2D(&texDesc, &data, &m_hmapTex));
D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc;
srvDesc.Format = texDesc.Format;
srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
srvDesc.Texture2D.MostDetailedMip = 0;
srvDesc.Texture2D.MipLevels = -1;
HR(device->CreateShaderResourceView(m_hmapTex, &srvDesc, &mHeightMapSRV));
}
HeightMap texture modifying code:
D3D11_MAPPED_SUBRESOURCE mappedData;
//m_hmapTex is ID3D11Texture2D*
HR(m_texMgr.m_context->Map(m_hmapTex, D3D11CalcSubresource(0, 0, 1), D3D11_MAP_WRITE_DISCARD, 0, &mappedData));
HALF* heightMapData = reinterpret_cast<HALF*>(mappedData.pData);
D3D11_TEXTURE2D_DESC heightmapDesc;
m_hmapTex->GetDesc(&heightmapDesc);
UINT width = heightmapDesc.Width;
for (int row = 0; row < width/4; ++row)
{
for (int col = 0; col < width/4; ++col)
{
const UINT idx = (row * width) + col;
heightMapData[idx] = static_cast<HALF>(XMConvertFloatToHalf(200));
}
}
m_texMgr.m_context->Unmap(m_hmapTex, D3D11CalcSubresource(0,0,1));
Please refer to the picture below. The lower right area renders the heightmap texture.
I wanted to edit only 1/4 of the width and height, but all of it changed.
When the completed heightmap is applied, it works normally.
A texture does not always have the same width in memory as its description suggests: some texture strides (rows) are padded to a larger size. You have to use the mapped stride (row pitch) times the row index to calculate the offset you write into.
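As a rough sketch (reusing m_hmapTex, m_texMgr.m_context and width from the question, and assuming the DXGI_FORMAT_R16_FLOAT texture created above), the modifying loop could honour the row pitch reported by Map like this:
D3D11_MAPPED_SUBRESOURCE mappedData;
HR(m_texMgr.m_context->Map(m_hmapTex, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedData));
// Rows are RowPitch bytes apart in the mapped memory, which may be more than
// width * sizeof(HALF).
BYTE* base = reinterpret_cast<BYTE*>(mappedData.pData);
for (UINT row = 0; row < width / 4; ++row)
{
    HALF* rowData = reinterpret_cast<HALF*>(base + row * mappedData.RowPitch);
    for (UINT col = 0; col < width / 4; ++col)
    {
        rowData[col] = XMConvertFloatToHalf(200.0f);
    }
}
m_texMgr.m_context->Unmap(m_hmapTex, 0);
Also keep in mind that D3D11_MAP_WRITE_DISCARD throws away the previous contents of the texture, so if you only fill a quarter of it you have to rewrite the untouched region as well (or use a DEFAULT-usage texture with UpdateSubresource for partial updates).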

Optimizing code to generate static

I am learning p5.js and wanted to generate a "static/noise" texture.
This is the code:
for (let y = 0; y < height; y++) {
for (let x = 0; x < width; x++) {
noiseVal = random(0,1);
stroke(255, noiseVal*255);
point(x,y);
}
}
This produces the desired outcome but it's obviously pretty slow since it has to iterate over every single pixel. What would be a more efficient way of doing this?
Your code is really not the best way to do this with p5.js.
Take a look at p5's pixels array.
When I run the following code, the function that uses the pixels array runs about 100 times faster.
function setup() {
createCanvas(50, 50);
background(255);
let start, time;
start = performance.now();
noise_1();
time = performance.now() - start;
print("noise_1 : " + time);
start = performance.now();
noise_2();
time = performance.now() -start;
print("noise_2 : " + time);
}
// Your code
function noise_1() {
for (let y = 0; y < height; y++) {
for (let x = 0; x < width; x++) {
noiseVal = random(0,1);
stroke(noiseVal*255);
point(x,y);
}
}
}
// same with pixels array
function noise_2() {
loadPixels();
for (let i=0; i < pixels.length; i+=4){
noiseVal = random(0, 255);
pixels[i] = pixels[i+1] = pixels[i+2] = noiseVal;
}
updatePixels();
}
Output:
noise_1 : 495.1
noise_2 : 5.92
To generate a single frame of static, you're going to have to iterate over each pixel. You could make your blocks larger than a single pixel, but that will only reduce the problem, not get rid of it completely.
Instead, you can probably get away with pre-computing a few images of static (let's say 10 or so). Save these as a file or to an off-screen buffer (the createGraphics() function is your friend), and then draw those images instead of drawing each pixel every frame.

Bidirectional path tracing

I'm making a bidirectional path tracer and I have some trouble.
To be clear:
1) One point light
2) All objects are diffuse
3) All objects are spheres, even walls (they are very large)
4) NO MIS WEIGHTING
The light emission is a 3D vector. The BRDF of a sphere is a 3D vector. Both are hard-coded.
In the main function below I generate the EyePath and the LightPath, then I connect them. At least I try to.
In this post I will talk about the main function, then the EyePath, then the LightPath. The connecting function will be discussed once the EyePath and LightPath are good.
First questions:
Is the generation of the first light point good?
Do I need to compute this point according to the emission of the light source, or is it just the emission? The relevant lines are commented where I'm filling the Vertices structure.
Do I need to translate fromLight in order to put it on the sphere?
The code below is an excerpt from the main function. Above it there are two for loops going through all the pixels. camera.o is the eye; cameraRayDir is the direction to the current pixel.
// The light path's starting point is at the same position as the light
Ray fromLight(Vec(0, 24.3, 0), Vec());
Sphere light = spheres[7];
#define PDF 0.15915494309 // 1 / (2 * PI)
for(int i = 0; i < samps; ++i)
{
std::vector<Vertices> PathEye;
std::vector<Vertices> PathLight;
Vec cameraRayDir = cx * (double(x) / w - .5) + cy * (double(y) / h - .5) + camera.d;
Ray rayEye(camera.o, cameraRayDir.norm());
// Hemisphere oriented towards the top
fromLight.d = generateRayInHemisphere(fromLight.o,Vec(0,1,0)).d;
double f = clamp(n.dot(fromLight.d.norm()));
Vertices vert;
vert.d = fromLight.d;
vert.x = fromLight.o;
vert.id = 7;
vert.cos = f;
vert.n = Vec(0,1,0).norm();
// this one ?
//vert.couleur = spheres[7].e * f / PDF;
// Or this one ?
vert.couleur = spheres[7].e;
PathLight.push_back(vert);
int sizeEye = generateEyePath(PathEye, rayEye, maxDepth);
int sizeLight = generateLightPath(PathLight, fromLight, maxDepth);
for (int s = 0; s < sizeLight; ++s)
{
for (int t = 1; t < sizeEye; ++t)
{
int depth = t + s - 1;
if ((s == 0 && t == 0) || depth < 0 || depth > maxDepth)
continue;
pixelValue = pixelValue + connectPaths(PathEye, PathLight, s, t);
}
}
}
For the EyePath, I intersect the geometry, then compute the illumination according to the distance to the light. The colour is black if the point is in shadow.
Second question: for the eye path and the direct illumination, is the computation good? In a lot of code I've seen, people use the PDF even for direct illumination, but I'm only using a point light and spheres.
int generateEyePath(std::vector<Vertices>& v, Ray eye, int maxDepth)
{
double t;
int id = 0;
Vertices vert;
int RussianRoulette;
while(v.size() <= maxDepth)
{
if(distribRREye(generatorRREye) < 10)
break;
// Intersect all the geometry
// id is the id of the intersected geometry in an array
intersect(eye, t, id);
const Sphere& obj = spheres[id];
// Intersection point
Vec x = eye.o + eye.d * t;
// normal
Vec n = (x - obj.p).norm();
Vec direction = light.p - x;
// Shadow ray
Ray RaytoLight = Ray(x, direction.norm());
const float distance = direction.length();
// shadow
const bool visibility = intersect(RaytoLight, t, id);
const Sphere &lumiere = spheres[id];
float degree = clamp(n.dot((lumiere.p - x).norm()));
// If the intersected geometry is not a light, then in shadow
if(lumiere.e.x == 0)
{
vert.couleur = Vec();
}
else // else we compute the colour
// obj.c is the brdf, lumiere.e is the emission
vert.couleur = (obj.c).mult(lumiere.e / (distance * distance)) * degree;
vert.x = x;
vert.id = id;
vert.n = n;
vert.d = eye.d.norm();
vert.cos = degree;
v.push_back(vert);
eye = generateRayInHemisphere(x,n);
}
return v.size();
}
For the LightPath, I compute each point according to the previous one and the values at that point, like in common path tracing.
Third question: is the colour computation good?
int generateLightPath(std::vector<Vertices>& v, Ray fromLight, int maxDepth)
{
double t;
int id = 0;
Vertices vert;
Vec previous;
while(v.size() <= maxDepth)
{
if(distribRRLight(generatorRRLight) < 10)
break;
previous = v.back().couleur;
intersect(fromLight, t, id);
// intersected geometry
const Sphere& obj = spheres[id];
// Intersection point
Vec x = fromLight.o + fromLight.d * t;
// normal
Vec n = (x - obj.p).norm();
double f = clamp(n.dot(fromLight.d.norm()));
// obj.c is the brdf
vert.couleur = previous.mult(((obj.c / M_PI) * f) / PDF);
vert.x = x;
vert.id = id;
vert.n = n;
vert.d = fromLight.d.norm();
vert.cos = f;
v.push_back(vert);
fromLight = generateRayInHemisphere(x,n);
}
return v.size();
}
For the moment I get this result.
The connecting function will come once EyePath and LightPath are good.
Thank you all
Try the spherical reference scene mentioned in the paper below; I think you can then work out most of your questions by yourself, since it has an analytical solution.
https://www.researchgate.net/publication/221546261_Testing_Monte-Carlo_Global_Illumination_Methods_with_Analytically_Computable_Scenes
It would save you time to implement and verify your understanding of path tracing and light tracing separately first, and then try to combine them with weights.

Check if image is dark (only bottom part)

I am checking whether a UIImage is darker or lighter. I would like to use this method, but only to check the bottom third of the image, not all of it.
I wonder how exactly to change it to check that; I am not that familiar with the pixel-level stuff.
BOOL isDarkImage(UIImage* inputImage){
BOOL isDark = FALSE;
CFDataRef imageData = CGDataProviderCopyData(CGImageGetDataProvider(inputImage.CGImage));
const UInt8 *pixels = CFDataGetBytePtr(imageData);
int darkPixels = 0;
long length = CFDataGetLength(imageData);
int const darkPixelThreshold = (inputImage.size.width*inputImage.size.height)*.25;
//should I change the length here?
for(int i=0; i<length; i+=4)
{
int r = pixels[i];
int g = pixels[i+1];
int b = pixels[i+2];
//luminance calculation weights the channels for human perception (green weighted most)
float luminance = (0.299*r + 0.587*g + 0.114*b);
if (luminance<150) darkPixels ++;
}
if (darkPixels >= darkPixelThreshold)
isDark = YES;
CFRelease(imageData);
return isDark;
}
I could just crop that part of the image, but that would not be an efficient way and would waste time.
The solution marked correct here is a more thoughtful approach for getting the pixel data (more tolerant of differing formats) and also demonstrates how to address pixels. With a small adjustment, you can get the bottom of the image as follows:
+ (NSArray*)getRGBAsFromImage:(UIImage*)image
atX:(int)xx
andY:(int)yy
toX:(int)toX
toY:(int)toY {
// ...
int byteIndex = (bytesPerRow * yy) + xx * bytesPerPixel;
int byteIndexEnd = (bytesPerRow * toY) + toX * bytesPerPixel;
while (byteIndex < byteIndexEnd) {
// contents of the loop remain the same
// ...
}
To get the bottom third of the image, call this with xx=0, yy=2.0*image.height/3.0 and toX and toY equal to the image width and height, respectively. Loop over the colors in the returned array and compute the luminance as your post suggests.
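Alternatively, if you would rather keep your original CFData-based loop, here is a rough sketch of a hypothetical helper that restricts the scan to the bottom third. It assumes 4 bytes per pixel as your snippet does; the channel order (RGBA vs. BGRA) depends on how the image was created, so double-check it for your images, and bytesPerRow is taken from the CGImage because rows may be padded:
BOOL isBottomThirdDark(UIImage *inputImage) {
    CGImageRef cgImage = inputImage.CGImage;
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);
    size_t bytesPerRow = CGImageGetBytesPerRow(cgImage); // rows may be padded
    CFDataRef imageData = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));
    const UInt8 *pixels = CFDataGetBytePtr(imageData);

    size_t startRow = (height * 2) / 3; // first row of the bottom third
    int darkPixels = 0;
    int darkPixelThreshold = (int)(width * (height - startRow) * 0.25);

    for (size_t row = startRow; row < height; row++) {
        const UInt8 *rowStart = pixels + row * bytesPerRow;
        for (size_t col = 0; col < width; col++) {
            const UInt8 *p = rowStart + col * 4; // assumes 4 bytes per pixel
            float luminance = 0.299f * p[0] + 0.587f * p[1] + 0.114f * p[2];
            if (luminance < 150) darkPixels++;
        }
    }
    CFRelease(imageData);
    return darkPixels >= darkPixelThreshold;
}
This avoids cropping the image while still only touching the rows you care about.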

GLSL shader generation of normals

Hi, I am writing a 3D modeling app and I want to speed up rendering in OpenGL. Currently I use glBegin/glEnd, which is a really slow and deprecated approach. I need to draw flat-shaded models very fast. I generate normals on the CPU every single frame, which is very slow. I tried to use glDrawElements with indexed geometry, but there is a problem with normal generation, because normals are specified per vertex, not per triangle.
Another idea was to use GLSL to generate normals on the GPU in a geometry shader. I wrote this code for normal generation:
#version 120
#extension GL_EXT_geometry_shader4 : enable
vec3 NormalFromTriangleVertices(vec3 triangleVertices[3])
{
// now is same as RedBook (OpenGL Programming Guide)
vec3 u = triangleVertices[0] - triangleVertices[1];
vec3 v = triangleVertices[1] - triangleVertices[2];
return cross(v, u);
}
void main()
{
// no change of position
// computes normal from input triangle and front color for that triangle
vec3 triangleVertices[3];
vec3 computedNormal;
vec3 normal, lightDir;
vec4 diffuse;
float NdotL;
vec4 finalColor;
for(int i = 0; i < gl_VerticesIn; i += 3)
{
for (int j = 0; j < 3; j++)
{
triangleVertices[j] = gl_PositionIn[i + j].xyz;
}
computedNormal = NormalFromTriangleVertices(triangleVertices);
normal = normalize(gl_NormalMatrix * computedNormal);
// hardcoded light direction
vec4 light = gl_ModelViewMatrix * vec4(0.0, 0.0, 1.0, 0.0);
lightDir = normalize(light.xyz);
NdotL = max(dot(normal, lightDir), 0.0);
// hardcoded
diffuse = vec4(0.5, 0.5, 0.9, 1.0);
finalColor = NdotL * diffuse;
finalColor.a = 1.0; // final color ignores everything, except lighting
for (int j = 0; j < 3; j++)
{
gl_FrontColor = finalColor;
gl_Position = gl_PositionIn[i + j];
EmitVertex();
}
}
EndPrimitive();
}
When I integrated the shaders into my application, there was no speed improvement; it was worse than before. I am a newbie in GLSL and shaders in general, so I don't know what I did wrong.
I tried this code on a MacBook with a GeForce 9400M.
To be more clear, this is the code I want to replace:
- (void)drawAsCommandsWithScale:(Vector3D)scale
{
float frontDiffuse[4] = { 0.4, 0.4, 0.4, 1 };
CGFloat components[4];
[color getComponents:components];
float backDiffuse[4];
float selectedDiffuse[4] = { 1.0f, 0.0f, 0.0f, 1 };
for (uint i = 0; i < 4; i++)
backDiffuse[i] = components[i];
glMaterialfv(GL_BACK, GL_DIFFUSE, backDiffuse);
glMaterialfv(GL_FRONT, GL_DIFFUSE, frontDiffuse);
Vector3D triangleVertices[3];
float *lastDiffuse = frontDiffuse;
BOOL flip = scale.x < 0.0f || scale.y < 0.0f || scale.z < 0.0f;
glBegin(GL_TRIANGLES);
for (uint i = 0; i < triangles->size(); i++)
{
if (selectionMode == MeshSelectionModeTriangles)
{
if (selected->at(i))
{
if (lastDiffuse == frontDiffuse)
{
glMaterialfv(GL_FRONT_AND_BACK, GL_DIFFUSE, selectedDiffuse);
lastDiffuse = selectedDiffuse;
}
}
else if (lastDiffuse == selectedDiffuse)
{
glMaterialfv(GL_BACK, GL_DIFFUSE, backDiffuse);
glMaterialfv(GL_FRONT, GL_DIFFUSE, frontDiffuse);
lastDiffuse = frontDiffuse;
}
}
Triangle currentTriangle = [self triangleAtIndex:i];
if (flip)
currentTriangle = FlipTriangle(currentTriangle);
[self getTriangleVertices:triangleVertices fromTriangle:currentTriangle];
for (uint j = 0; j < 3; j++)
{
for (uint k = 0; k < 3; k++)
{
triangleVertices[j][k] *= scale[k];
}
}
Vector3D n = NormalFromTriangleVertices(triangleVertices);
n.Normalize();
for (uint j = 0; j < 3; j++)
{
glNormal3f(n.x, n.y, n.z);
glVertex3f(triangleVertices[j].x, triangleVertices[j].y, triangleVertices[j].z);
}
}
glEnd();
}
As you can see, it is very inefficient but working. triangles is an array of indices into the vertices array.
I tried to use the following code for drawing, but the problem is that glDrawElements takes only one index array, not two (one for the vertices and a second for the normals).
glEnableClientState(GL_VERTEX_ARRAY);
uint *trianglePtr = (uint *)(&(*triangles)[0]);
float *vertexPtr = (float *)(&(*vertices)[0]);
glVertexPointer(3, GL_FLOAT, 0, vertexPtr);
glDrawElements(GL_TRIANGLES, triangles->size() * 3, GL_UNSIGNED_INT, trianglePtr);
glDisableClientState(GL_VERTEX_ARRAY);
Now, how can I specify a pointer to the normals when some vertices are shared by different triangles and therefore need different normals?
So I finally managed to increase the rendering speed. I now recalculate normals on the CPU only when vertices or triangles change, which happens only when working on a single mesh, not on the whole scene.
It is not the solution I wanted, but in the real world it is better than the previous approaches.
I cache the whole geometry into separate normal and vertex arrays; indexed drawing cannot be used because I want flat shading (a similar problem to smoothing groups in 3ds Max).
I use a simple glDrawArrays call and a vertex shader for lighting, because in triangle mode I want one color for selected triangles and another for unselected ones, and there is no array of materials (I didn't find one).
You wouldn't usually calculate the normals every frame, only when the geometry changes.
And to have one normal per triangle, just set the same normal for each vertex of the triangle. That does mean you can't share vertices between adjacent triangles in your mesh, but that's not unusual at all for this kind of thing.
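For example, here is a rough sketch of that idea, reusing the helpers from your question (triangleAtIndex:, getTriangleVertices:fromTriangle: and NormalFromTriangleVertices) and assuming Vector3D is three tightly packed floats. It flattens the mesh into duplicated per-triangle vertices with one repeated face normal, then draws with glDrawArrays:
// Rebuild these two arrays only when the mesh changes, not every frame.
std::vector<Vector3D> flatVertices;
std::vector<Vector3D> flatNormals;
flatVertices.reserve(triangles->size() * 3);
flatNormals.reserve(triangles->size() * 3);
for (uint i = 0; i < triangles->size(); i++)
{
    Vector3D v[3];
    [self getTriangleVertices:v fromTriangle:[self triangleAtIndex:i]];
    Vector3D n = NormalFromTriangleVertices(v);
    n.Normalize();
    for (uint j = 0; j < 3; j++)
    {
        flatVertices.push_back(v[j]);
        flatNormals.push_back(n); // same face normal for all three vertices
    }
}
// Drawing: no index buffer, vertices are already duplicated per triangle.
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, &flatVertices[0]);
glNormalPointer(GL_FLOAT, 0, &flatNormals[0]);
glDrawArrays(GL_TRIANGLES, 0, (GLsizei)flatVertices.size());
glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
With this layout, the per-frame cost is essentially just the glDrawArrays call.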
Your question reminds me of the "Normals without Normals" blog post.