Directional-hemispherical reflectance is the ratio of exitance to incoming irradiance. It is used to check the degree of energy conservation of a BRDF:
R(l) = exitance / irradiance
I can understand the relation between the BRDF and R(l). The BRDF is defined for a specific viewing direction, so its numerator is radiance. R(l) can be obtained by integrating the BRDF over all viewing directions:
R(l) = integral(brdf(l, v) * cos(theta0) * dw0)
where theta0 is the angle between n and v.
My question is: where does the cosine factor come from?
The angle theta0 should be the angle between the incident light and the surface normal. The closer the incident light comes to the surface normal, the more energy the surface receives (the maximum being cos(theta0) = 1).
The formula itself comes from the derivation of the BRDF.
The directional hemispherical reflectance is the albedo of a surface which is being illuminated by a single, directional light source. This means that there is only a single direction for which there is illumination reaching the surface.
Since the lighting term is a delta function, the dimensionality of the integral is reduced and we only have to integrate over the viewing directions. The cosine term projects the differential solid angle of the light onto the surface.
So it's basically a stripped down version of the rendering equation using the simplified light source.
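If it helps, here is a minimal numerical check of that integral (a sketch of my own, not from the original answer), using a Lambertian BRDF f = albedo / pi and uniform hemisphere sampling; an energy-conserving BRDF should give R(l) <= 1, and here the estimate converges to the albedo:

#include <cmath>
#include <cstdio>
#include <random>

int main()
{
    const double pi = 3.14159265358979323846;
    const double albedo = 0.8;                    // Lambertian BRDF: brdf(l, v) = albedo / pi
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> uni(0.0, 1.0);

    const int N = 1000000;
    double sum = 0.0;
    for (int i = 0; i < N; ++i)
    {
        // Uniform sampling of the hemisphere around n: cos(theta0) = n.v is uniform in [0, 1],
        // the pdf is 1 / (2*pi); the azimuth does not matter for this integrand.
        double cosTheta = uni(rng);
        sum += (albedo / pi) * cosTheta * (2.0 * pi);   // integrand / pdf
    }
    std::printf("R(l) ~= %f (expected %f)\n", sum / N, albedo);
    return 0;
}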
If the diffraction grating is misaligned, so that the incident angle towards the grating isn't 90°, is there a formula (similar to nλ = d·sinθ) relating the wavelength of the light, the angle at which a bright fringe is found in the diffraction pattern, and the angle at which the diffraction grating is placed?
Prologue
Just a few days ago I needed exactly this while upgrading my spectroscopes from CD/DVD to BluRay gratings. The diffraction angles of the new gratings did not match my adjustable angular range, so I needed to see where exactly the usable spectrum would be reflected. So I made a simulation to work out the new construction configuration for my devices...
The well-known formula for reflective gratings is:
sin(a)+sin(b)=m*lambda/d
where:
a is the angle between the grating normal and the incoming light [rad]
b is the angle between the grating normal and the diffracted or reflected light [rad]
m is the order of diffraction (m = 0 is simple reflection)
lambda is the light wavelength [m]
d is the distance between the tracks of the grating [m]
Here is a simple C++ computation:
#include <math.h>

void grating(double lambda,double d,double a) // wavelength [nm], track pitch [nm], incidence angle [rad]
{
    double m,b;
    for (m=-3;m<=3;m++)              // diffraction orders -3 .. +3
    {
        if (m==0.0) continue;        // m = 0 is just the simple reflection
        b=(m*lambda/d)-sin(a);       // sin(b) from sin(a)+sin(b) = m*lambda/d
        if (fabs(b)>1.0) continue;   // this order does not exist for this wavelength/angle
        b=asin(b);
        // here `b` is the output light angle [rad] for order m
    }
}
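As a quick sanity check of the formula (a hypothetical standalone example of mine, not from the original post), the first-order angle for green light on a CD grating at normal incidence comes out around 19°:

#include <math.h>
#include <stdio.h>

int main()
{
    const double pi=3.14159265358979323846;
    double lambda=532.0,d=1600.0,a=0.0;   // green light [nm], CD track pitch [nm], normal incidence [rad]
    double sb=(1.0*lambda/d)-sin(a);      // sin(b) for order m = +1
    if (fabs(sb)<=1.0)
        printf("m=+1 diffracted angle: %.2f deg\n",asin(sb)*180.0/pi);
    return 0;
}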
Here is a preview of a simulation based on this, for d = 1.6 µm (grating from a CD):
It was done using simple 2D ray casting and the RGB values of the visible spectrum.
The horizontal white line going from the left to the grating is the light source. The length of each diffracted ray indicates its order m; as you can see, some orders overlap. The higher the d, the more this occurs.
The formula and the simulation appear to match the real gratings made from:
CD d=1.60um
DVD d=0.72um
BR d=0.32um
that I use for my home-made spectroscopes. Beware: non-linear tracks have a major impact on the diffraction result. The angles do not change, but the shape and clarity of the pattern do...
In case you have transmission (refractive) gratings, I do not know whether the formula stays the same or whether Snell's law is applied on top of it. Sorry, but I do not have experience with such gratings...
I'm trying to use a VectorNav VN100 IMU to map a path through an underground tunnel (a GPS-denied environment) and am wondering what the best approach is.
I get lots of data points from the VN100; these include orientation/pose (Euler angles, quaternions) and acceleration and gyroscope values in three dimensions. The acceleration and gyro values are given in raw and filtered formats, where the filtered outputs have been filtered by an onboard Kalman filter.
In addition to the IMU measurements, I also measure GPS-RTK coordinates in three dimensions at the start and end points of the tunnel.
How should I approach this mapping problem? I'm quite new to this area and do not know how to extract position from the acceleration and orientation data. I know acceleration can be integrated once to give velocity and that in turn can be integrated again to get position, but how do I combine this data together with the orientation data (quaternions) to get the path?
In robotics, mapping means representing the environment using a perception sensor (like a 2D/3D laser scanner or cameras).
Once you have the map, it can be used by the robot to know its location (localization). The map is also used to find a path between locations in order to move from one place to another (path planning).
In your case you need a perception sensor to get a better location estimate. With only an IMU you can track the position using an Extended Kalman Filter (EKF), but it drifts quickly.
The Robot Operating System (ROS) has an EKF implementation you can refer to.
OK, so I came across a solution that gets me somewhat closer to my goal of finding the path travelled underground. Although it is by no means the final solution, I'm posting my algorithm here in the hope that it helps someone else.
My method is as follows:
Rotate the acceleration vector A = [Ax, Ay, Az] output by the VectorNav VN100 into the North-East-Down (NED) frame by multiplying by the quaternion the VectorNav outputs, Q = [q0, q1, q2, q3]. How to rotate a vector by a quaternion is outlined in this other post.
Basically you take the acceleration vector and extend it with a fourth component to act as the scalar term, then multiply by the quaternion and its conjugate. (N.B. the scalar terms in both quaternions should be in the same position; in this case the scalar quaternion term is the first term, so a zero scalar term should be added at the start of the acceleration vector, e.g. A = [0, Ax, Ay, Az].) Then perform the following multiplication:
A_ned = Q A Q*
where Q* is the complex conjugate of the quaternion (i, j, and k terms are negated).
Integrate the rotated acceleration vector to get the velocity vector: V_ned
Integrate the Velocity vector to get the position in north, east, down: R_ned
There is substantial drift in the velocity and position due to sensor bias. This can be corrected for somewhat if we know the start and end velocities and the start and end positions. In this case the start and end velocities were zero, so I used this to correct the drift in the velocity vector.
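A minimal C++ sketch of the method above (my own illustration, not the original code; the sample period dt, the Vec3/Quat types, and the assumption that gravity has already been removed from the acceleration are mine):

#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };
struct Quat { double w, x, y, z; };   // scalar-first, as in Q = [q0, q1, q2, q3]

// Rotate a body-frame vector into the NED frame: A_ned = Q A Q* for a unit quaternion.
Vec3 rotate(const Quat &q, const Vec3 &a)
{
    // Expanded form of the quaternion sandwich product: a + 2w(u x a) + u x (2(u x a)).
    double tx = 2.0 * (q.y * a.z - q.z * a.y);
    double ty = 2.0 * (q.z * a.x - q.x * a.z);
    double tz = 2.0 * (q.x * a.y - q.y * a.x);
    return { a.x + q.w * tx + (q.y * tz - q.z * ty),
             a.y + q.w * ty + (q.z * tx - q.x * tz),
             a.z + q.w * tz + (q.x * ty - q.y * tx) };
}

// Dead reckoning: integrate the NED acceleration twice, then remove the velocity drift
// linearly in time using the known zero start/end velocities (assumes a roughly constant bias).
std::vector<Vec3> deadReckon(const std::vector<Quat> &q, const std::vector<Vec3> &accBody, double dt)
{
    size_t n = accBody.size();
    std::vector<Vec3> vel(n, {0, 0, 0}), pos(n, {0, 0, 0});
    if (n < 2) return pos;
    for (size_t i = 1; i < n; ++i)
    {
        Vec3 a = rotate(q[i], accBody[i]);   // body frame -> NED (gravity assumed already removed)
        vel[i] = { vel[i-1].x + a.x * dt, vel[i-1].y + a.y * dt, vel[i-1].z + a.z * dt };
    }
    Vec3 drift = vel[n - 1];                 // end velocity should be zero; the residual is drift
    for (size_t i = 1; i < n; ++i)
    {
        double f = double(i) / double(n - 1);
        vel[i] = { vel[i].x - f * drift.x, vel[i].y - f * drift.y, vel[i].z - f * drift.z };
        pos[i] = { pos[i-1].x + vel[i].x * dt, pos[i-1].y + vel[i].y * dt, pos[i-1].z + vel[i].z * dt };
    }
    return pos;
}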
(Plots: uncorrected velocity vs. corrected velocity.)
My final comparison between the IMU position and the GPS position is shown here (read: there's still a long way to go).
(Plot: GPS-RTK data vs. VectorNav IMU data.)
Now I just need to come up with a sensor fusion algorithm to try to improve the position estimation...
I'm trying to implement different types of lights in my ray-tracer coded in C. I have successfully implemented spot, point, directional and rectangular area lights.
For a rectangular area light I define two vectors (U and V) in space and use them to move within the virtual (delimited) rectangle they form.
Depending on the intensity of the light I take several samples on the rectangle, then I calculate the amount of light reaching a point as though each sample were a single spot light.
With rectangles it is very easy to find the position of the various samples, but things get complicated when I try to do the same with a disk light.
I found little documentation about this, and most of it already relies on ready-made functions.
The only interesting thing I found is this document (https://graphics.pixar.com/library/DiskLightSampling/paper.pdf) but I'm unable to exploit it.
Would you know how to help me achieve a similar result (as in the following image) with vector operations (e.g. given the origin, orientation and radius of the disk, and the number of samples)?
Any advice or documentation in this regard would help me a lot.
This question reduces to:
How can I pick a uniformly-distributed random point on a disk?
A naive approach would be to generate random polar coordinates and transform them to Cartesian coordinates:
Randomly generate an angle θ between 0 and 2π
Randomly generate a distance d between 0 and the radius r of your disk
Transform to Cartesian coordinates with x = d cos θ and y = d sin θ
This is incorrect because it causes the points to bunch up in the center; for example:
A correct, but inefficient, way to do this is via rejection sampling:
Uniformly generate random x and y, each over [-1, 1]
If sqrt(x^2 + y^2) <= 1, return the point (scaled by the disk radius r)
Otherwise, go to 1
The correct way to do this is illustrated here:
Randomly generate an angle θ between 0 and 2π
Randomly generate a distance d between 0 and the radius r of your disk
Transform to Cartesian coordinates with x = sqrt(d*r) cos θ and y = sqrt(d*r) sin θ
The square root compensates for the fact that the area of the disk grows with the square of the radius, which is what makes the distribution uniform.
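To place the samples on an actual disk light in 3D, here is a sketch of my own (not from the Pixar paper), assuming the disk is described by its origin, a unit normal for its orientation, and its radius; the Vec3 type and the function name are placeholders:

#include <cmath>

struct Vec3 { double x, y, z; };

// Uniform sample on a disk light given its origin, unit normal and radius.
// u1, u2 are uniform random numbers in [0, 1).
Vec3 sampleDisk(Vec3 origin, Vec3 n, double radius, double u1, double u2)
{
    // Build an orthonormal tangent basis (t, b) perpendicular to n; pick a helper
    // axis that is not parallel to n so the cross product does not degenerate.
    Vec3 helper = std::fabs(n.x) < 0.9 ? Vec3{1, 0, 0} : Vec3{0, 1, 0};
    Vec3 t = { n.y * helper.z - n.z * helper.y,
               n.z * helper.x - n.x * helper.z,
               n.x * helper.y - n.y * helper.x };
    double len = std::sqrt(t.x * t.x + t.y * t.y + t.z * t.z);
    t = { t.x / len, t.y / len, t.z / len };
    Vec3 b = { n.y * t.z - n.z * t.y,
               n.z * t.x - n.x * t.z,
               n.x * t.y - n.y * t.x };

    // Uniform point on the disk: radius * sqrt(u1), angle 2*pi*u2 (the sqrt avoids bunching).
    double r = radius * std::sqrt(u1);
    double theta = 2.0 * 3.14159265358979323846 * u2;
    double px = r * std::cos(theta), py = r * std::sin(theta);

    return { origin.x + px * t.x + py * b.x,
             origin.y + px * t.y + py * b.y,
             origin.z + px * t.z + py * b.z };
}

You would call this once per sample with random (or stratified) (u1, u2) pairs, just as you already do for the rectangle, and treat each returned point as a single spot light.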
I'm trying to create a solar system simulation, and I'm having problems trying to figure out initial velocity vectors for random objects I've placed into the simulation.
Assume:
- I'm using the Gaussian gravitational constant, so all my units are AU / solar masses / days
- Using x,y,z for coordinates
- One star, fixed at (0,0,0), with a quasi-random mass
- I place a planet at a random (x,y,z) coordinate, with its own quasi-random mass
Before I start the nbody loop (using RK4), I would like the initial velocity of the planet to be such that it has a circular orbit around the star. Other placed planets will, of course, pull on it once the simulation starts, but I want to give it the chance to have a stable orbit...
So, in the end, I need an initial velocity vector (x,y,z) for the planet such that it would have a circular orbit around the star after one timestep.
Help? I've been beating my head against this for weeks and I don't believe I have any reasonable solution yet...
It is quite simple if you assume that the mass of the star M is much bigger than the total mass of all planets sum(m[i]). This simplifies the problem as it allows you to pin the star to the centre of the coordinate system. Also it is much easier to assume that the motion of all planets is coplanar, which further reduces the dimensionality of the problem to 2D.
First determine the magnitude of the circular orbit velocity given the magnitude of the radius vector r[i] (the radius of the orbit). It only depends on the mass of the star, because of the above mentioned assumption: v[i] = sqrt(mu / r[i]), where mu is the standard gravitational parameter of the star, mu = G * M.
Pick a random orbital phase parameter phi[i] by sampling uniformly from [0, 2*pi). Then the initial position of the planet in Cartesian coordinates is:
x[i] = r[i] * cos(phi[i])
y[i] = r[i] * sin(phi[i])
With circular orbits the velocity vector is always perpendicular to the radial vector, i.e. its direction is phi[i] +/- pi/2 (+pi/2 for counter-clockwise (CCW) rotation and -pi/2 for clockwise rotation). Let's take CCW rotation as an example. The Cartesian coordinates of the planet's velocity are:
vx[i] = v[i] * cos(phi[i] + pi/2) = -v[i] * sin(phi[i])
vy[i] = v[i] * sin(phi[i] + pi/2) = v[i] * cos(phi[i])
This easily extends to coplanar 3D motion by adding z[i] = 0 and vz[i] = 0, but it makes no sense, since there are no forces in the Z direction and hence z[i] and vz[i] would forever stay equal to 0 (i.e. you will be solving for a 2D subspace problem of the full 3D space).
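A minimal C++ sketch of this 2D construction (my own illustration; the function name and the use of <random> are assumptions):

#include <cmath>
#include <random>

struct State { double x, y, vx, vy; };

// Circular-orbit initial conditions for one planet, given the standard
// gravitational parameter mu = G * M of the star and the orbit radius r.
State circularOrbit2D(double mu, double r, std::mt19937 &rng)
{
    std::uniform_real_distribution<double> uni(0.0, 2.0 * 3.14159265358979323846);
    double phi = uni(rng);              // random orbital phase
    double v = std::sqrt(mu / r);       // circular-orbit speed
    // Velocity is perpendicular to the radius vector (counter-clockwise here).
    return { r * std::cos(phi), r * std::sin(phi),
             -v * std::sin(phi), v * std::cos(phi) };
}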
For a full 3D simulation, where each planet moves in a randomly inclined initial orbit, one can proceed as follows:
Determine the magnitude of the circular orbit velocity, v[i] = sqrt(mu / r[i]), exactly as in step 1 of the 2D case.
You need to pick an initial position on the surface of the unit sphere. See here for examples on how to do that in a uniformly random fashion. Then scale the unit sphere coordinates by the magnitude of r[i].
In the 3D case, instead of two possible perpendicular vectors, there is a whole tangential plane in which the planet velocity can lie. The tangential plane has its normal vector collinear with the radius vector, so dot(r[i], v[i]) = 0 = x[i]*vx[i] + y[i]*vy[i] + z[i]*vz[i]. One could pick any vector that is perpendicular to r[i], for example e1[i] = (-y[i], x[i], 0). This results in a null vector at the poles, so there one could pick e1[i] = (0, -z[i], y[i]) instead. Another perpendicular vector can then be found by taking the cross product of r[i] and e1[i]:
e2[i] = r[i] x e1[i] = (y[i]*e1[i].z - z[i]*e1[i].y, z[i]*e1[i].x - x[i]*e1[i].z, x[i]*e1[i].y - y[i]*e1[i].x)
Now e1[i] and e2[i] can be normalised by dividing them by their norms:
n1[i] = e1[i] / ||e1[i]||
n2[i] = e2[i] / ||e2[i]||
where ||a|| = sqrt(dot(a, a)) = sqrt(a.x^2 + a.y^2 + a.z^2). Now that you have an orthonormal basis in the tangential plane, you can pick one random angle omega in [0, 2*pi) and compute the velocity vector as v[i] * (cos(omega) * n1[i] + sin(omega) * n2[i]), where v[i] is the speed from step 1, or in Cartesian components:
vx[i] = v[i] * (cos(omega) * n1[i].x + sin(omega) * n2[i].x)
vy[i] = v[i] * (cos(omega) * n1[i].y + sin(omega) * n2[i].y)
vz[i] = v[i] * (cos(omega) * n1[i].z + sin(omega) * n2[i].z)
Note that by construction the basis in step 3 depends on the radius vector, but this does not matter since a random direction (omega) is added.
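And a sketch of the 3D construction (again my own illustration, not the answer's code); the uniform direction is drawn by normalising a 3D Gaussian sample, which is one standard way to pick a uniformly random point on the unit sphere:

#include <cmath>
#include <random>

struct Vec3 { double x, y, z; };

// Random circular-orbit initial conditions in 3D: position on a sphere of
// radius rmag, velocity of magnitude sqrt(mu / rmag) in the tangential plane.
void circularOrbit3D(double mu, double rmag, std::mt19937 &rng, Vec3 &pos, Vec3 &vel)
{
    std::normal_distribution<double> gauss(0.0, 1.0);
    std::uniform_real_distribution<double> uni(0.0, 2.0 * 3.14159265358979323846);

    // Uniformly random direction: normalise a 3D Gaussian sample, then scale by rmag.
    Vec3 u = { gauss(rng), gauss(rng), gauss(rng) };
    double ul = std::sqrt(u.x * u.x + u.y * u.y + u.z * u.z);
    pos = { rmag * u.x / ul, rmag * u.y / ul, rmag * u.z / ul };

    // e1 perpendicular to pos (switch formula near the poles to avoid a null vector).
    Vec3 e1 = (std::fabs(pos.z) < 0.9 * rmag) ? Vec3{ -pos.y, pos.x, 0.0 }
                                              : Vec3{ 0.0, -pos.z, pos.y };
    double l1 = std::sqrt(e1.x * e1.x + e1.y * e1.y + e1.z * e1.z);
    Vec3 n1 = { e1.x / l1, e1.y / l1, e1.z / l1 };

    // n2 = normalised (pos x n1) completes the tangential basis.
    Vec3 e2 = { pos.y * n1.z - pos.z * n1.y,
                pos.z * n1.x - pos.x * n1.z,
                pos.x * n1.y - pos.y * n1.x };
    double l2 = std::sqrt(e2.x * e2.x + e2.y * e2.y + e2.z * e2.z);
    Vec3 n2 = { e2.x / l2, e2.y / l2, e2.z / l2 };

    double v = std::sqrt(mu / rmag);    // circular-orbit speed
    double omega = uni(rng);            // random direction within the tangential plane
    vel = { v * (std::cos(omega) * n1.x + std::sin(omega) * n2.x),
            v * (std::cos(omega) * n1.y + std::sin(omega) * n2.y),
            v * (std::cos(omega) * n1.z + std::sin(omega) * n2.z) };
}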
As to the choice of units, in simulation science we always tend to keep things in natural units, i.e. units where all computed quantities are dimensionless and kept in [0, 1] or at least within 1-2 orders of magnitude and so the full resolution of the limited floating-point representation could be used. If you take the star mass to be in units of Solar mass, distances to be in AUs and time to be in years, then for an Earth-like planet at 1 AU around a Sun-like star, the magnitude of the orbital velocity would be 2*pi (AU/yr) and the magnitude of the radius vector would be 1 (AU).
Just let centripetal acceleration equal gravitational acceleration.
m1 * v^2 / r = G * m1 * m2 / r^2
v = sqrt( G m2 / r )
Of course the star mass m2 must be much greater than the planet mass m1 or you don't really have a one-body problem.
Units are a pain in the butt when setting up physics problems. I've spent days resolving errors in seconds vs timestep units. Your choice of AU/Solar Masses/Day is utterly insane. Fix that before anything else.
And, keep in mind that computers have inherently limited precision. An nbody simulation accumulates integration error, so after a million or a billion steps you will certainly not have a circle, regardless of the step duration. I don't know much about that math, but I think stable n-body systems keep themselves stable by resonances which absorb minor variations, whether introduced by nearby stars or by the FPU. So the setup might work fine for a stable, 5-body problem but still fail for a 1-body problem.
As Ed suggested, I would use the mks units, rather than some other set of units.
For the initial velocity, I would agree with part of what Ed said, but I would use the vector form of the centripetal acceleration:
(m1 * v^2 / r) r(hat) = (G * m1 * m2 / r^2) r(hat)
Set z to 0, and convert from polar coordinates to Cartesian coordinates (x, y). Then you can assign either the x or the y velocity component and compute the other so that the circular-orbit criterion is satisfied. This should give you an initial (Vx, Vy) that you can start your nbody problem from. There should also be quite a bit of literature out there on numerical recipes for nbody central-force problems.
I am currently trying to figure out a way to calculate the size of a given object with the Kinect,
since I have the following data:
the angular field of view of the lens
the distance to the object
and the width in pixels at an 800*600 resolution
I believe this should be possible to calculate. Does anyone have the math skills to give me a little help?
With some trigonometry, it should be possible to approximate.
If you draw a right triangle ABC, with the camera at vertex A, the object along the far leg BC, and the right angle at C, then the height of the object is the length of leg BC. The distance to the pixel might be the length of leg AC or of leg AB; the Kinect sensor specifications determine which. If you get the distance to the center of a pixel, it will be AC. If you have distances to pixel corners, the distance will be AB.
With A representing the angle at the camera that the pixel takes up, d the length of the hypotenuse, and y the length of the far leg (BC):
sin(A) = y / d
y = d sin(A)
y is the length of the pixel projected onto the object plane. You calculate it by multiplying the sine of the angle by the distance to the object.
Here I confess I do not know the API of the Kinect or what level of detail it provides. You say you have the angle of the field of view. You might assume each pixel of your 800x600 pixel grid takes up an equal angle of your camera's field of view. If you do, then you can break that field of view up into equal pieces to measure the linear size of your object in each pixel.
You also mentioned that you have the distance to the object. I was assuming that you have a distance map for each pixel of the 800x600 grid. If this is incorrect, some calculations can be done to approximate a distance grid for the pixels involving the object of interest if you make some assumptions about the object being measured.
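Putting the pieces together, here is a small C++ sketch of this approach (my own illustration; the 57-degree horizontal field of view and the equal-angle-per-pixel assumption are mine, not from the question):

#include <cmath>
#include <cstdio>

// Approximate the real-world width of an object from its width in pixels, assuming
// each pixel column spans an equal angle of the horizontal field of view.
double objectWidth(double fovDeg, int imageWidthPx, int objectWidthPx, double distance)
{
    double fovRad = fovDeg * 3.14159265358979323846 / 180.0;
    double anglePerPixel = fovRad / imageWidthPx;          // angle spanned by one pixel column
    double objectAngle = anglePerPixel * objectWidthPx;    // angle spanned by the whole object
    // Triangle as in the answer above: size = 2 * distance * tan(angle / 2).
    return 2.0 * distance * std::tan(objectAngle / 2.0);
}

int main()
{
    // Hypothetical numbers: 57 deg horizontal FOV, 800 px wide image,
    // object 120 px wide, 2.0 m away.
    std::printf("approx. width: %.3f m\n", objectWidth(57.0, 800, 120, 2.0));
    return 0;
}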