1D spherical grid in FiPy - mesh

I would like to solve the diffusion equation in FiPy in spherical coordinates on a 1D grid. I would also like the left boundary to be at r=0.1, not r=0.
I can't find a module for 1D spherical symmetry, only cylindrical. I figure I can do it with Grid1D and simply write the del^2 operator in spherical coordinates, then multiply through by r^2 (as mentioned here). However, I still don't know how to specify the locations of the boundaries.
Could someone advise me how to do this? Many thanks.

All FiPy meshes can be offset by a vector of the appropriate dimension, e.g.,
>>> import fipy as fp
>>> m = fp.Grid1D(nx=10, dx=.1) + [[1.5]]
>>> print(m.x)
[1.55 1.65 1.75 1.85 1.95 2.05 2.15 2.25 2.35 2.45]
A spherically symmetric mesh, mirrored on CylindricalUniformGrid1D, would be a welcome pull request.
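For the spherically symmetric diffusion equation itself, the approach sketched in the question (multiply through by r^2 and treat the radial coordinate as the x-axis of a Grid1D) might look roughly like the following; the diffusivity, grid resolution, and boundary values are made up for illustration:

import fipy as fp

D = 1.0                      # diffusivity (assumed value)
nr, dr = 90, 0.01            # 90 cells of width 0.01 -> outer edge at r = 1.0

# shift the mesh so the left boundary sits at r = 0.1
mesh = fp.Grid1D(nx=nr, dx=dr) + [[0.1]]

phi = fp.CellVariable(mesh=mesh, name="phi", value=0.)

r = mesh.cellCenters[0]      # radii at cell centers
rFace = mesh.faceCenters[0]  # radii at faces (for the diffusion coefficient)

# spherical diffusion multiplied through by r**2:
#   r**2 d(phi)/dt = d/dr (r**2 D d(phi)/dr)
eq = fp.TransientTerm(coeff=r**2) == fp.DiffusionTerm(coeff=D * rFace**2)

phi.constrain(1., where=mesh.facesLeft)   # assumed fixed value at r = 0.1

eq.solve(var=phi, dt=1.e-3)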

Is there a simple math solution to sample a disk area light? (Raytracing)

I'm trying to implement different types of lights in my ray-tracer coded in C. I have successfully implemented spot, point, directional and rectangular area lights.
For a rectangular area light I define two vectors (U and V) in space and use them to step across the virtual (bounded) rectangle they span.
Depending on the intensity of the light I take several samples on the rectangle, then I calculate the amount of light reaching a point as though each sample were a single spot light.
With rectangles it is very easy to find the position of the various samples, but things get complicated when I try to do the same with a disk light.
I found little documentation about this, and most of it relies on ready-made functions.
The only interesting thing I found is this document (https://graphics.pixar.com/library/DiskLightSampling/paper.pdf) but I'm unable to exploit it.
Would you know how to help me achieve a similar result with vector operations (e.g. given the origin, orientation, and radius of the disk and the number of samples)?
Any advice or documentation in this regard would help me a lot.
This question reduces to:
How can I pick a uniformly-distributed random point on a disk?
A naive approach would be to generate random polar coordinates and transform them to cartesian coordinates:
Randomly generate an angle θ between 0 and 2π
Randomly generate a distance d between 0 and the radius r of your disk
Transform to cartesian coordinates with x = d cos θ and y = d sin θ
This is incorrect because it causes the points to bunch up near the center of the disk.
A correct, but inefficient, way to do this is via rejection sampling:
Uniformly generate random x and y, each over [-1, 1]
If x^2 + y^2 <= 1, scale the point by the disk radius r and return it
Otherwise, repeat from the first step
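A small Python sketch of that loop (illustrative, not from any particular library):

import random

def rejection_sample_disk(r):
    # Draw from the bounding square until the point lands inside the
    # unit disk, then scale it to a disk of radius r.
    while True:
        x, y = random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0)
        if x * x + y * y <= 1.0:
            return r * x, r * y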
The correct way to do this directly, without rejection, is:
Randomly generate an angle θ between 0 and 2π
Randomly generate a value u uniformly between 0 and 1 and set the distance d = r √u
Transform to cartesian coordinates with x = d cos θ and y = d sin θ
The square root is what prevents the bunching: the area of a disk grows with the square of its radius, so the sampled distance has to grow with the square root of the uniform variable.
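Putting that together with the disk's origin, orientation, and radius from the question, a NumPy sketch could look like this (the function name and parameters are illustrative, not from any library):

import numpy as np

def sample_disk(center, normal, radius, n_samples):
    # Uniformly distributed sample points on a disk given its center,
    # unit normal, and radius.
    rng = np.random.default_rng()
    normal = np.asarray(normal, dtype=float)
    normal /= np.linalg.norm(normal)

    # Build two orthonormal vectors u, v spanning the disk's plane.
    helper = np.array([1., 0., 0.]) if abs(normal[0]) < 0.9 else np.array([0., 1., 0.])
    u = np.cross(normal, helper)
    u /= np.linalg.norm(u)
    v = np.cross(normal, u)

    theta = rng.uniform(0., 2. * np.pi, n_samples)        # random angle
    d = radius * np.sqrt(rng.uniform(0., 1., n_samples))  # sqrt keeps the area density uniform

    # center + d*cos(theta)*u + d*sin(theta)*v for every sample
    return center + np.outer(d * np.cos(theta), u) + np.outer(d * np.sin(theta), v)

Each returned point can then be treated as a single point light, exactly as with the rectangular area light.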

Is the IOU in Tensorflow Object Detection API wrong?

I just dug a bit through the Tensorflow Object Detection API code, especially the eval_util part, as I wanted to implement the COCO metrics.
But I noticed that the metrics are calculated solely from the bounding boxes, whose coordinates are normalized to [0, 1].
Neither aspect ratios nor absolute coordinates are used.
So doesn't this mean that the intersection-over-union values calculated from these results are incorrect?
Let's take a 200x100 pixel image as an example.
If the box were off by 20px to the left, that's 0.1 in normalized coordinates.
But if it were off by 20px toward the top, that would be 0.2 in normalized coordinates.
Doesn't that mean that being off toward the top penalizes the score more heavily than being off to the side?
I believe the predicted coordinates are resized to the absolute image coordinates in the eval binary.
But the other thing I would say is that IOU is scale invariant, in the sense that if you scale both boxes by the same factors they will still have the same IOU overlap. As an example, if we scale by 2 in the x-direction and by 3 in the y-direction:
If A is (x1, y1, x2, y2) and B is (u1, v1, u2, v2), then
IOU(A, B) = IOU((2*x1, 3*y1, 2*x2, 3*y2), (2*u1, 3*v1, 2*u2, 3*v2))
What this means is that evaluating in normalized coordinates should give the same result as evaluating in absolute coordinates.
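A quick numeric check of that claim in plain Python, with made-up boxes: the IOU of two boxes in normalized coordinates equals the IOU of the same boxes scaled up to a 200x100 image.

def iou(a, b):
    # a, b are axis-aligned boxes given as (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0., ix2 - ix1) * max(0., iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def scale(box, sx, sy):
    return (box[0] * sx, box[1] * sy, box[2] * sx, box[3] * sy)

a, b = (0.1, 0.1, 0.5, 0.5), (0.2, 0.1, 0.6, 0.5)
print(iou(a, b))                                     # 0.6 in normalized coordinates
print(iou(scale(a, 200, 100), scale(b, 200, 100)))   # 0.6 again in pixel coordinates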

How can I plot multiple vectors in Wolfram Alpha?

If I have the following vectors:
55 degrees, 200 magnitude, start at origin
35 degrees, 130 magnitude, start at head of vector 1
How do I visualize them?
I would expect there to be a more elegant way to express this, but while the following does not put the vectors "tip-to-tail" (sort of), it does compute and display the resulting vector:
vector (200*cos(55),200*sin(55))+(130*cos(35),130*sin(35))
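If stepping outside Wolfram Alpha is an option, a short matplotlib sketch can draw the two vectors genuinely tip-to-tail along with the resultant (assuming the angles are degrees measured from the x-axis):

import numpy as np
import matplotlib.pyplot as plt

v1 = 200 * np.array([np.cos(np.radians(55)), np.sin(np.radians(55))])
v2 = 130 * np.array([np.cos(np.radians(35)), np.sin(np.radians(35))])

origins = np.array([[0., 0.], v1, [0., 0.]])   # vector 2 starts at the head of vector 1
arrows = np.array([v1, v2, v1 + v2])           # the third arrow is the resultant

plt.quiver(origins[:, 0], origins[:, 1], arrows[:, 0], arrows[:, 1],
           angles='xy', scale_units='xy', scale=1, color=['C0', 'C1', 'C2'])
plt.axis('equal')
plt.xlim(-20, 350)
plt.ylim(-20, 350)
plt.show()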

Barycentric coordinates texture mapping

I want to map textures with correct perspective for 3D rendering. I am using barycentric coordinates to locate points on the faces of triangles. Simple affine mapping gave me the standard, weird-looking result. This is what I did to try to correct the perspective, but it seems to have only made the distortion worse:
three triangle vertices v1 v2 v3
vertex coordinates are v_.x v_.y v_.z
texture coordinates are v_.u v_.v
barycentric coordinates corresponding to vertices are b1 b2 b3
I am trying to get the correct texture coordinates U and V
z=b1/v1.z + b2/v2.z + b3/v3.z
U=(b1*v1.u/v1.z + b2*v2.u/v2.z + b3*v3.u/v3.z) / z
V=(b1*v1.v/v1.z + b2*v2.v/v2.z + b3*v3.v/v3.z) / z
This SHOULD work, shouldn't it? Why isn't it working?
EDIT: The response on this page looks useful, but I am unsure what the w coordinate is. Maybe somebody could just explain that, which would also likely solve my problem. http://www.gamedev.net/topic/593669-perspective-correct-barycentric-coordinates/
note: My tags were all wrong at first. That is now fixed.
Okay, this one I DID manage to solve on my own. I was dividing by the z coordinate in screen space. The solution is to divide by the homogeneous w coordinate instead.
Well, that took a while to figure out.
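For reference, a sketch of the corrected interpolation: it has the same structure as the formulas in the question, but divides by each vertex's clip-space w (the value the positions were divided by during projection) rather than by the screen-space z. The vertex fields here are hypothetical.

def perspective_correct_uv(b1, b2, b3, v1, v2, v3):
    # Each vertex carries texture coordinates (u, v) and the homogeneous
    # w it had in clip space before the perspective divide.
    one_over_w = b1 / v1.w + b2 / v2.w + b3 / v3.w
    U = (b1 * v1.u / v1.w + b2 * v2.u / v2.w + b3 * v3.u / v3.w) / one_over_w
    V = (b1 * v1.v / v1.w + b2 * v2.v / v2.w + b3 * v3.v / v3.w) / one_over_w
    return U, V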

Solving for optimal alignment of 3d polygonal mesh

I'm trying to implement a geometry templating engine. One of the parts is taking a prototypical polygonal mesh and aligning an instantiation with some points in the larger object.
So, the problem is this: given 3d point positions for some (perhaps all) of the verts in a polygonal mesh, find a scaled rotation that minimizes the difference between the transformed verts and the given point positions. I also have a centerpoint that can remain fixed, if that helps. The correspondence between the verts and the 3d locations is fixed.
I'm thinking this could be done by solving for the coefficients of a transformation matrix, but I'm a little unsure how to build the system to solve.
An example of this is a cube. The prototype would be the unit cube, centered at the origin, with vert indices:
4----5
|\    \
| 6----7
| |    |
0 |  1 |
 \|    |
  2----3
An example of the vert locations to fit:
v0: 1.243,2.163,-3.426
v1: 4.190,-0.408,-0.485
v2: -1.974,-1.525,-3.426
v3: 0.974,-4.096,-0.485
v5: 1.974,1.525,3.426
v7: -1.243,-2.163,3.426
So, given that prototype and those points, how do I find the single scale factor, and the rotation about x, y, and z that will minimize the distance between the verts and those positions? It would be best for the method to be generalizable to an arbitrary mesh, not just a cube.
Assuming you have all the points and their correspondences, you can fine-tune your match by solving the least squares problem:
minimize Norm(T*V - M)
where T is the transformation matrix you are looking for, V are the vertices to fit, and M are the vertices of the prototype. Norm refers to the Frobenius norm. M and V are 3xN matrices: each column of V is the 3-vector of a vertex in the fitting set, and the corresponding column of M is the matching prototype vertex. T is a 3x3 transformation matrix. The transformation matrix that minimizes the mean squared error is then M*transpose(V)*inverse(V*transpose(V)). The resulting matrix will in general not be orthogonal (you wanted a scaled rotation, i.e. no shear), so you can solve a matrix Procrustes problem to find the nearest orthogonal matrix via the SVD.
Now, if you don't know which given points will correspond to which prototype points, the problem you want to solve is called surface registration. This is an active field of research. See for example this paper, which also covers rigid registration, which is what you're after.
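As a rough NumPy sketch of those two steps (unconstrained least squares, then the Procrustes projection), where V and M are 3xN arrays of corresponding points assumed to be already translated so the fixed center point is at the origin:

import numpy as np

def fit_scaled_rotation(V, M):
    # Unconstrained least-squares transformation (may contain shear):
    T = M @ V.T @ np.linalg.inv(V @ V.T)

    # Procrustes step: nearest rotation matrix to T via the SVD.
    U, s, Wt = np.linalg.svd(T)
    R = U @ Wt
    if np.linalg.det(R) < 0:          # guard against picking a reflection
        U[:, -1] *= -1
        R = U @ Wt

    # Best single scale factor for that rotation, again in the least-squares sense.
    scale = np.trace(R.T @ M @ V.T) / np.trace(V @ V.T)
    return scale, R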
If you want to create a mesh on an arbitrary 3D geometry, this is not the way it's typically done.
You should look at octree mesh generation techniques. You'll have better success if you work with a true 3D primitive, which means tetrahedra instead of cubes.
If your geometry is a 3D body, all you'll have is a surface description to start with. Determining "optimal" interior points isn't meaningful, because you don't have any. You'll want them to be arranged in such a way that the tetrahedra inside aren't too distorted, but that's the best you'll be able to do.