How does the diffraction formula nλ=dsinθ change when the diffraction grating isn't perpendicular to the light rays? - physics

If the diffraction grating is misaligned so that the light is not incident at 90º to the grating, is there a formula (similar to nλ=dsinθ) relating the wavelength of the light, the angle at which a bright fringe is found in the diffraction pattern, and the angle at which the grating is placed?

Prologue
Just a few days ago I needed exactly this while upgrading my spectroscopes from CD/DVD to BluRay gratings. The diffraction angles did not match my angular tweakables with the new gratings, so I needed to see where exactly the usable spectrum would be reflected. So I made a simulation to work out the new construction configuration for my devices...
The well known formula for reflective gratings is:
sin(a)+sin(b)=m*lambda/d
where:
a is the angle between the grating normal and the incoming light in [rad]
b is the angle between the grating normal and the diffracted or reflected light in [rad]
m is the order of diffraction (m=0 is simple reflection)
lambda is the light wavelength [m]
d is the distance between tracks of the grating in [m].
here is a simple C++ computation:
#include <math.h>

void grating(double lambda, double d, double a) // lambda [nm], d [nm], a [rad]
{
    double m, b;
    for (m = -3; m <= 3; m++)
        if (m) // skip m = 0 (simple mirror reflection)
        {
            b = (m*lambda/d) - sin(a);   // sin(b) from sin(a)+sin(b) = m*lambda/d
            if (fabs(b) > 1.0) continue; // this order does not exist for these parameters
            b = asin(b);
            // here `b` is the output light angle [rad] for order m
        }
}
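As a quick numeric sanity check (my own minimal sketch, not part of the original routine; the 532 nm wavelength is just an example value):

#include <math.h>
#include <stdio.h>

int main()
{
    const double rad2deg = 180.0 / 3.141592653589793;
    double lambda = 532.0;  // example: green laser [nm]
    double d = 1600.0;      // CD track pitch [nm]
    double a = 0.0;         // normal incidence [rad]
    double sb = (1.0*lambda/d) - sin(a);   // m = +1
    if (fabs(sb) <= 1.0)
        printf("m=+1 diffracted at %.1f deg\n", asin(sb)*rad2deg);
    return 0;
}

For a CD grating this lands at roughly 19-20 degrees for the first order.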
here is a preview of a simulation based on this for d=1.6um (grating from a CD):
Done using simple 2D ray casting and RGB values of the visible spectrum.
The horizontal white line going from the left to the grating is the light source. The diffracted rays' length indicates m; as you can see, some orders overlap. The higher the d, the more this occurs.
It looks like the formula and simulation match real gratings made from:
CD d=1.60um
DVD d=0.72um
BR d=0.32um
that I use for my home-made spectroscopes. Beware: non-linear tracks have a major impact on the diffraction result. The angles do not change, however the shape and clarity of the pattern do...
In case you have refractive gratings, I do not know whether the formula stays the same or whether Snell's law is applied on top of it. Sorry, but I do not have experience with such gratings...

Related

pose estimation: determine whether rotation and translation matrix are right

Recently I've been struggling with a pose estimation problem with a single camera. I have some 3D points and the corresponding 2D points on the image. Then I use solvePnP to get the rotation and translation vectors. The problem is: how can I determine whether the vectors are correct?
Now I use an indirect way to do this:
I use the rotation matrix, the translation vector and the world 3D coordinates of a certain point to obtain the coordinates of that point in the camera system. Then all I have to do is determine whether those coordinates are reasonable. I think I know the directions of the x, y and z axes of the camera system.
Is the camera center the origin of the camera system?
Now consider the x component of that point. Is x equivalent to the distance between the camera and the point in world space along the camera's x-axis direction (with the sign determined by which side of the camera the point lies on)?
The figure below is drawn in world space, while the axes depicted are those of the camera system.
======== How the camera and the point are placed in world space ========

  Camera --------------------------> Z axis
    |               |} Xw?
    |               P(Xw, Yw, Zw)
    |
    v x-axis
My rvec and tvec results seem partly right and partly wrong. For a given point the z value seems reasonable; I mean, if the point is about one meter away from the camera in the z direction, then the z value is about 1. But for x and y, according to the location of the point I think x and y should be positive, yet they are negative. What's more, the pattern detected in the original image is like this:
But using the points coordinates calculated in Camera system and the camera intrinsic parameters, I get an image like this:
The target keeps its pattern. But it moved from bottom right to top left. I cannot understand why.
Yes, the camera center is the origin of the camera coordinate system, which seems to be right according to this post.
In the case of camera pose estimation, the "value seems reasonable" check can be quantified as the backprojection error. That's a measure of how well your resulting rotation and translation map the 3D points to the 2D pixels. Unfortunately, solvePnP does not return a residual error measure, so one has to compute it:
cv::solvePnP(worldPoints, pixelPoints, camIntrinsics, camDistortion, rVec, tVec);

// Use the computed solution to project the 3D pattern back onto the image
std::vector<cv::Point2f> projectedPattern;
cv::projectPoints(worldPoints, rVec, tVec, camIntrinsics, camDistortion, projectedPattern);

// Compute the error of each 2D-3D correspondence
std::vector<float> errors;
for (size_t i = 0; i < pixelPoints.size(); ++i)
{
    float dx = pixelPoints[i].x - projectedPattern[i].x;
    float dy = pixelPoints[i].y - projectedPattern[i].y;
    // Euclidean distance between the projected and the measured pixel
    float err = std::sqrt(dx*dx + dy*dy);
    errors.push_back(err);
}
// Here, compute the max or the average of your "errors"
An average backprojection error for a calibrated camera might be in the range of 0-2 pixels. According to your two pictures, yours would be way more than that. To me, it looks like a scaling problem. If I am right, you compute the projection yourself. Maybe you can try cv::projectPoints() once and compare.
When it comes to transformations, I learned not to follow my imagination :) The first thing I do with the returned rVec and tVec is usually to create a 4x4 rigid transformation matrix out of them (I once posted code for that here). This makes things even less intuitive, but it is compact and handy.
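For reference, a minimal sketch of that conversion (my own illustration, assuming rVec and tVec are CV_64F as returned above; this is not the exact code from the linked post):

#include <opencv2/calib3d.hpp> // cv::Rodrigues
#include <opencv2/core.hpp>

// Build a 4x4 rigid transformation [R | t; 0 0 0 1] from the solvePnP output.
cv::Mat rigidTransform(const cv::Mat& rVec, const cv::Mat& tVec)
{
    cv::Mat R;
    cv::Rodrigues(rVec, R);                              // 3x1 rotation vector -> 3x3 rotation matrix

    cv::Mat T = cv::Mat::eye(4, 4, CV_64F);
    R.copyTo(T(cv::Rect(0, 0, 3, 3)));                   // top-left 3x3 block = rotation
    tVec.reshape(1, 3).copyTo(T(cv::Rect(3, 0, 1, 3)));  // last column = translation
    return T;                                            // maps world points into camera coordinates
}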
Now I know the answers.
Yes, the camera center is the origin of the camera coordinate system.
Consider that the coordinates of the point in the camera system are calculated as (xc, yc, zc). Then xc should be the distance between the camera and the point in the real world in the x direction.
Next, how to determine whether the output matrices are right?
1. As #eidelen points out, the backprojection error is one indicative measure.
2. Calculate the coordinates of the points in the camera system from their world coordinates and the matrices, and check whether they are reasonable.
So why did I get a wrong result (the pattern remained but moved to a different region of the image)?
The cameraMatrix parameter in solvePnP() supplies the camera's internal (intrinsic) parameters. In the camera matrix, you should use width/2 and height/2 of the image for cx and cy, while I had used the full width and height of the image. I think that caused the error. After I corrected that and re-calibrated the camera, everything seems fine.
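For illustration, a minimal sketch of such an intrinsic matrix (the focal length values are made-up placeholders; real ones come from calibration):

#include <opencv2/core.hpp>

// fx, fy: focal lengths in pixels; (cx, cy): principal point, usually near the image centre.
cv::Mat makeCameraMatrix(int width, int height)
{
    double fx = 800.0, fy = 800.0;           // hypothetical focal lengths [px]
    return (cv::Mat_<double>(3, 3) <<
        fx,  0.0, width  / 2.0,
        0.0, fy,  height / 2.0,
        0.0, 0.0, 1.0);
}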

Cosine factor in directional hemispherical reflectance

Directional hemispherical reflectance is the ratio of exitance to incoming irradiance. It is used to check the degree of energy conservation:
R(l) = exitance / irradiance
I can understand the relation between the BRDF and R(l). The BRDF targets a specific viewing angle, so the numerator is a radiance. R(l) can be obtained by integrating the BRDF over all viewing directions:
R(l) = integral(brdf(l, v) * cos(theta0) * dw0)
theta0 is the angle between n and v
My question is: where does the cosine factor come from?
The angle theta0 should be the angle between the incident light and the surface normal. The closer the incident light flux comes to the surface normal, the more energy the surface receives (the maximum being when cos(theta0) = 1).
The formula itself comes from the derivation of the BRDF.
The directional hemispherical reflectance is the albedo of a surface which is being illuminated by a single, directional light source. This means that there is only a single direction for which there is illumination reaching the surface.
Since the lighting term is a delta function, the dimensionality of the integral is reduced and we only have to integrate over the viewing directions. The cosine term projects the differential solid angle dw0 onto the surface.
So it's basically a stripped-down version of the rendering equation using the simplified light source.
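To spell out that reduction (my own sketch of the derivation, in LaTeX notation, with theta_l the angle between n and l and theta_v the angle between n and v, matching theta0 above):

% directional light: a delta distribution scaled so the surface irradiance is E_l
L_i(l') = E_l \, \frac{\delta(l' - l)}{\cos\theta_l}
% the rendering-equation integral over incoming directions collapses
L_o(v) = \int_\Omega f(l', v)\, L_i(l')\, \cos\theta_{l'}\, d\omega_{l'} = f(l, v)\, E_l
% exitance = outgoing radiance projected and integrated over all viewing directions
M = \int_\Omega L_o(v)\, \cos\theta_v\, d\omega_v
% hence the directional hemispherical reflectance
R(l) = M / E_l = \int_\Omega f(l, v)\, \cos\theta_v\, d\omega_v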

Maya Mel project lookAt target into place after motion capture import

I have a facial animation rig which I am driving in two different manners: it has an artist UI in the Maya viewports as is common for interactive animating, and I've connected it up with the FaceShift markerless motion capture system.
I envision a workflow where a performance is captured, imported into Maya, sample data is smoothed and reduced, and then an animator takes over for finishing.
Our face rig has the eye gaze controlled by a mini-hierarchy of three objects (global lookAtTarget and a left and right eye offset).
Because the eye gazes are controlled by this LookAt setup, they need to be disabled when eye-gaze-including motion capture data is imported.
After the motion capture data is imported, the eye gazes are now set with motion capture rotations.
I am seeking a short MEL routine that does the following: it marches through the motion capture eye rotation samples, back-calculates and sets each eye's LookAt target position, and averages the two to get the global LookAt target's position.
After that MEL routine is run, I can turn the eyes' LookAt constraints back on, eye gaze control returns to the rig, nothing has changed visually, and the animator will have their eye UI working in the Maya viewport again.
I'm thinking this should be common logic for anyone doing facial mocap. Anyone got anything like this already?
How good is the eye tracking in the mocap? There may be issues if the targets are far away: depending on the sampling of the data, you may get 'crazy eyes' which seem not to converge, or jumpy data. If that's the case you may need to junk the eye data altogether, or smooth it heavily before retargeting.
To find the convergence of the two eyes, you can try this (like #julian I'm using locators, etc., since doing all the math in MEL would be irritating):
1) Constrain a locator to one eye so that one axis is oriented along the look vector and the other lies in the plane of the second eye. Let's say the eye aims down Z and the second eye is in the XZ plane.
2) Make a second locator, parented to the first, and constrained to the second eye in the same way: pointing down Z, with the first eye in the XZ plane.
3) The local Y rotation of the second locator is the angle of convergence between the two eyes.
4) Figure out the focal distance using the law of sines and a cheat for the offset of the second eye relative to the first (see the sketch after this list). The local X distance of the second eye is one leg of a right triangle. The angles of the triangle are the convergence angle from #3 and 90 minus the convergence angle. In other words:
       focal distance              eye_locator2.tx
 -------------------------  =  ----------------------
 sin(90 - eye_locator2.ry)      sin(eye_locator2.ry)
so algebraically:
focal distance = eye_locator2.tx * sin(90 - eye_locator2.ry) / sin( eye_locator2.ry)
You'll have to subtract the local Z of eye2, since the triangle we're solving is shifted backwards or forwards by that much:
focal distance = (eye_locator2.tx * sin(90 - eye_locator2.ry) / sin( eye_locator2.ry)) - eye_locator2.tz
5) Position the target along the local Z direction of the eye locator at the distance derived above. It sounds like the actual control uses two look targets that can be moved apart to avoid cross-eyes; it's kind of a judgement call how much to use that vs. the actual convergence distance. For lots of real-world data the convergence may be way too far away for animator convenience: a target 30 meters away is pretty impractical to work with, but might be simulated with a target 10 meters away with a big spread. Unfortunately there's no empirical answer for that one; it's a judgement call.
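As a plain numeric illustration of step 4 (a sketch in C++ rather than MEL, with made-up locator values; tx, tz and ry stand for eye_locator2's local translateX, translateZ and rotateY):

#include <cmath>
#include <cstdio>

int main()
{
    const double PI = 3.141592653589793;
    // hypothetical values read from eye_locator2
    double tx = 6.5;                // local X offset of the second eye [cm]
    double tz = -0.4;               // local Z offset of the second eye [cm]
    double ry = 4.0 * PI / 180.0;   // convergence angle (4 degrees) in radians

    // law of sines: focal / sin(90 - ry) = tx / sin(ry), then correct by tz
    double focal = tx * std::sin(PI/2.0 - ry) / std::sin(ry) - tz;
    std::printf("focal distance: %f\n", focal);
    return 0;
}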
I don't have this script but it would be fairly simple. Can you provide an example maya scene? You don't need any math. Here's how you could go about it:
Assume the axis pointing through the pupil is positive X, and focal length is 10 units.
Create 2 locators. Parent one to each eye. Set their translations to (10, 0, 0).
Create 2 more locators in worldspace. Point constrain them to the others.
Create a plusMinusAverage node.
Connect the worldspace locators' translations to plusMinusAverage1 inputs 1 and 2.
Create another locator (the lookAt)
Connect the output of plusMinusAverage1 to the translation of the lookAt locator.
Bake the translation of the lookAt locator.
Delete the other 4 locators.
Aim constrain the eyes' X axes to the lookAt.
This can all be done in a script using commands: spaceLocator, createNode, connectAttr, setAttr, bakeSimulation, pointConstraint, aimConstraint, delete.
The solution ended up being quite simple. The situation is motion capture data on the rotation nodes of the eyes, while simultaneously wanting (non-technical) animator over-ride control for the eye gaze. Within Maya, constraints have a weight factor: a parametric 0-1 value controlling the influence of the constraint. The solution is for the animator to simply key the eyes' lookAt constraint weight to 1 when they want control over the eye gaze, key those same weights to 0 when they want the motion captured eye gaze, and use a smooth transition of those constraint weights to mask the transition. This is better than my original idea described above, because the original motion capture data remains in place, available as reference, allowing the animator to switch back and forth if need be.

OpenGL Diffuse Lighting Shader Bug?

The Orange book, section 16.2, lists implementing diffuse lighting as:
uniform vec3 lightPos; // light position (declared elsewhere in the book's listing)

void main()
{
    vec3 N = normalize(gl_NormalMatrix * gl_Normal);
    vec4 V = gl_ModelViewMatrix * gl_Vertex;
    vec3 L = normalize(lightPos - V.xyz);
    gl_FrontColor = gl_Color * vec4(max(0.0, dot(N, L)));
}
However, when I run this, the lighting changes when I move my camera.
On the other hand, when I change
vec3 N = normalize(gl_NormalMatrix * gl_Normal);
to
vec3 N = normalize(gl_Normal);
I get diffuse lighting that works like the fixed pipeline.
What is this gl_NormalMatrix, what did removing it do, and is this a bug in the Orange Book, or am I setting up my OpenGL code improperly?
[For completeness, the fragment shader just copies the color]
OK, I hope there's nothing wrong with answering your question after over half a year? :)
So there are two things to discuss here:
a) What should the shader look like
You SHOULD transform your normals by the modelview matrix - that's a given. Consider what would happen if you don't - your modelview matrix can contain some kind of rotation. Your cube would be rotated, but the normals would still point in the old direction! This is clearly wrong.
So: when you transform your vertices by the modelview matrix, you should also transform the normals. Your normals are vec3, not vec4, and you're not interested in translations (normals only contain a direction), so you can just multiply your normal by mat3(gl_ModelViewMatrix), which is the upper-left 3x3 submatrix.
Then: this is ALMOST correct, but still a bit wrong - the reasons are well described on Lighthouse 3D, so go have a read. Long story short: instead of mat3(gl_ModelViewMatrix), you have to multiply by its inverse transpose.
And OpenGL 2 is very helpful and precalculates this for you as gl_NormalMatrix. Hence, the correct code is:
vec3 N = normalize(gl_NormalMatrix * gl_Normal);
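If you ever have to build that matrix yourself (for example in a modern context where gl_NormalMatrix no longer exists), here is a minimal sketch using GLM, assuming a glm::mat4 modelView that you maintain yourself:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_inverse.hpp> // glm::inverseTranspose

// Equivalent of gl_NormalMatrix: inverse transpose of the upper-left 3x3.
glm::mat3 normalMatrix(const glm::mat4& modelView)
{
    // For purely rigid transforms (rotation + translation only),
    // glm::mat3(modelView) alone would give the same result.
    return glm::inverseTranspose(glm::mat3(modelView));
}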
b) But it's different from fixed pipeline, why?
The first thing which comes to my mind is that "something's wrong with your usage of fixed pipeline".
I'm not really keen on FP (long live shaders!), but as far as I can remember, when you specify your lights via glLightfv(GL_LIGHT0, GL_POSITION, ...), the position is transformed by the current modelview matrix. It was easy (at least for me :)) to make the mistake of specifying the light position (or light direction for directional lights) in the wrong coordinate system.
I'm not sure I remember correctly how that worked back then, since I use GL3 and shaders nowadays, but let me try... what was the state of your modelview matrix? I think it just might be possible that you specified the directional light direction in object space instead of eye space, so that your light rotates together with your object. I don't know if that's relevant here, but make sure to pay attention to it when using the fixed pipeline (a small sketch follows the list below). That's a mistake I remember often making myself when I was still using GL 1.1.
Depending on the modelview state, you could specify the light in:
eye (camera) space,
world space,
object space.
Make sure which one it is.
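For illustration, a minimal fixed-function sketch (my own example values; the point is only that the space the light ends up in depends on the modelview matrix that is current when glLightfv is called):

#include <GL/gl.h>
#include <GL/glu.h>

void setupLight()
{
    glMatrixMode(GL_MODELVIEW);

    glLoadIdentity();
    const GLfloat eyeSpacePos[4] = { 0.0f, 0.0f, 0.0f, 1.0f };
    glLightfv(GL_LIGHT0, GL_POSITION, eyeSpacePos);     // (a) identity modelview -> eye space

    gluLookAt(0.0, 2.0, 5.0,  0.0, 0.0, 0.0,  0.0, 1.0, 0.0);   // camera transform
    const GLfloat worldSpacePos[4] = { 10.0f, 5.0f, 0.0f, 1.0f };
    glLightfv(GL_LIGHT0, GL_POSITION, worldSpacePos);   // (b) after the camera -> world space

    glRotatef(45.0f, 0.0f, 1.0f, 0.0f);                 // model transform
    // (c) a glLightfv(GL_LIGHT0, GL_POSITION, ...) call here would be interpreted
    //     in object space, so the light would rotate together with the object
}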
Huh.. I hope that makes the topic more clear for you. The conclusions are:
always transform your normals along with your vertices in your vertex shaders, and
if it looks different from what you expect, think about how you specify your light positions. (Maybe you want to multiply the light position vector in a shader too? The remarks about light position coordinate systems still hold.)

Detecting special touches on the iPhone

I was asking myself if there are examples online which cover how you can, for instance, detect shapes in touch gestures,
for example a rectangle or a circle (or, more complex, a heart...),
or determine the speed of swiping (over time, like "I'm swiping my iPhone at 50mph").
For very simple gestures (horizontal vs. vertical swipe), calculate the difference in x and y between two touches.
dy = abs(y2 - y1)
dx = abs(x2 - x1)
f = dy/dx
An f close to zero is a horizontal swipe. An f close to 1 is a diagonal swipe. And a very large f is a vertical swipe (keep in mind that dx could be zero, so the above won't yield valid results for all x and y).
If you're interested in speed, Pythagoras can help. The distance travelled between two touches is:
l = sqrt(dx*dx + dy*dy)
If the touches happened at times t1 and t2, the speed is:
tdiff = abs(t2 - t1)
s = l/tdiff
It's up to you to determine which value of s you interpret as fast or slow.
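Putting the above together, a minimal sketch in C++ (the 0.5 and 2.0 thresholds are arbitrary illustration values; coordinates and timestamps would come from your touch events):

#include <cmath>
#include <cstdio>

// Classify a swipe between two touches and compute its speed.
void classifySwipe(float x1, float y1, float t1,
                   float x2, float y2, float t2)
{
    float dx = std::fabs(x2 - x1);
    float dy = std::fabs(y2 - y1);

    if (dx == 0.0f)
        std::printf("vertical swipe\n");          // avoid dividing by zero
    else
    {
        float f = dy / dx;
        if (f < 0.5f)        std::printf("horizontal swipe\n");
        else if (f < 2.0f)   std::printf("diagonal swipe\n");
        else                 std::printf("vertical swipe\n");
    }

    float l = std::sqrt(dx*dx + dy*dy);           // distance travelled
    float tdiff = std::fabs(t2 - t1);
    if (tdiff > 0.0f)
        std::printf("speed: %f points/sec\n", l / tdiff);
}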
You can extend this approach for more complex figures, e.g. your square shape could be a horizontal/vertical/horizontal/vertical swipe with start/end points where the previous swipe stopped.
For more complex figures, it's probably better to work with an idealized shape. One could consider a polygon shape as the ideal, and check if a range of touches
don't have too high a distance to their closest point on the polygon's outline, and
all touches follow the same direction along the polygon's outline.
You can refine things further from there.
Other methods exist for detecting non-simple touches on a touchscreen. Check out the $1 Unistroke Recognizer from the University of Washington: http://depts.washington.edu/aimgroup/proj/dollar/
It basically works like this:
Resample the recorded path into a fixed number of points that are evenly spaced along the path.
Rotate the path so that the first point is directly to the right of the path's center of mass.
Scale the path (non-uniformly) to a fixed height and width.
For each reference path, calculate the average distance between corresponding points in the reference and input paths. The reference path with the lowest average point distance is the match.
What’s great is that the output of steps 1-3 is a reference path that can be added to the array of known gestures. This makes it extremely easy to give your application gesture support and create your own set of custom gestures, as you see fit.
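To give a feel for step 1, a small sketch of the resampling idea in C++ (my own illustration, not code from the $1 project or the iOS port):

#include <cmath>
#include <vector>

struct Pt { float x, y; };

// Resample a recorded path into n points evenly spaced along its length.
std::vector<Pt> resample(std::vector<Pt> pts, int n)
{
    // total path length
    float length = 0.0f;
    for (size_t i = 1; i < pts.size(); ++i)
        length += std::hypot(pts[i].x - pts[i-1].x, pts[i].y - pts[i-1].y);

    float interval = length / (n - 1);   // spacing between resampled points
    float acc = 0.0f;                    // distance accumulated since the last output point
    std::vector<Pt> out{ pts.front() };

    for (size_t i = 1; i < pts.size(); ++i)
    {
        float d = std::hypot(pts[i].x - pts[i-1].x, pts[i].y - pts[i-1].y);
        if (acc + d >= interval && d > 0.0f)
        {
            // place a new point exactly where the interval is reached
            float t = (interval - acc) / d;
            Pt q{ pts[i-1].x + t * (pts[i].x - pts[i-1].x),
                  pts[i-1].y + t * (pts[i].y - pts[i-1].y) };
            out.push_back(q);
            pts.insert(pts.begin() + i, q);  // continue measuring from the new point
            acc = 0.0f;
        }
        else
            acc += d;
    }
    while ((int)out.size() < n) out.push_back(pts.back()); // guard against rounding
    return out;
}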
This has been ported to iOS by Adam Preble, repo on github:
http://github.com/preble/GLGestureRecognizer