Algorithm to define a 2d grid - optimization

Suppose a grid is defined by a set of grid parameters: its origin (x0, y0), the angle between one side and the x axis, and the increments along the two sides.
There are scattered points with known coordinates on the grid but they don’t exactly fall on grid intersections. Is there an algorithm to find a set of grid parameters to define the grid so that the points are best fit to grid intersections?
Suppose the known coordinates are:
(2, 5.464), (3.732, 6.464), (5.464, 7.464)
(3, 3.732), (4.732, 4.732), (6.464, 5.732)
(4, 2), (5.732, 3), (7.464, 4).
I expect the algorithm to find the origin (4, 2), the angle 30 degrees, and both increments equal to 2.

You can solve the problem by finding a matrix that transforms points from positions (0, 0), (0, 1), ... (2, 2) onto the given points.
Although the grid has only 5 degrees of freedom (position of the origin + angle + scale), it is easier to define the transformation using a 2x3 matrix A, because the problem then becomes linear.
Let a point with index (x0, y0) be transformed into point (x0', y0') on the grid, for example (0, 0) -> (2, 5.464), and let a_ij be the coefficients of matrix A. Then this pair of points yields 2 equations:
a_00 * x0 + a_01 * y0 + a_02 = x0'
a_10 * x0 + a_11 * y0 + a_12 = y0'
The unknowns are a_ij, so these equations can be written in form
a_00 * x0 + a_01 * y0 + a_02 + a_10 * 0 + a_11 * 0 + a_12 * 0 = x0'
a_00 * 0 + a_01 * 0 + a_02 * 0 + a_10 * x0 + a_11 * y0 + a_12 = y0'
or in matrix form
K0 * (a_00, a_01, a_02, a_10, a_11, a_12)^T = (x0', y0')^T
where
K0 = (
x0, y0, 1, 0, 0, 0
0, 0, 0, x0, y0, 1
)
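For example, the pair (0, 0) -> (2, 5.464) gives
K0 = (
0, 0, 1, 0, 0, 0
0, 0, 0, 0, 0, 1
)
with right-hand side (2, 5.464)^T, so this pair alone already pins down a_02 = 2 and a_12 = 5.464.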
These equations for each pair of points can be combined in a single equation
K * (a_00, a_01, a_02, a_10, a_11, a_12)^T = (x0', y0', x1', y1', ..., xn', yn')^T
or K * a = b
where
K = (
x0, y0, 1, 0, 0, 0
0, 0, 0, x0, y0, 1
x1, y1, 1, 0, 0, 0
0, 0, 0, x1, y1, 1
...
xn, yn, 1, 0, 0, 0
0, 0, 0, xn, yn, 1
)
and (xi, yi), (xi', yi') are pairs of corresponding points.
This can be solved as a non-homogeneous system of linear equations. The least-squares solution minimizes the sum of squared distances from each point to its corresponding grid intersection. The transform can also be seen as maximizing the overall likelihood under the assumption that the points are shifted from the grid intersections by normally distributed noise.
a = (K^T * K)^-1 * K^T * b
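In practice it is numerically safer to let a least-squares solver handle this instead of inverting K^T * K explicitly; a minimal equivalent sketch with NumPy:
import numpy as np

# Least-squares solution of K * a = b without forming the normal
# equations; better conditioned when K^T * K is close to singular.
a, residuals, rank, sv = np.linalg.lstsq(K, b, rcond=None)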
This algorithm is easy to implement if a linear algebra library is available. Below is an example in Python:
import numpy as np

n_points = 9
aligned_points = [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2), (2, 0), (2, 1), (2, 2)]
grid_points = [(2, 5.464), (3.732, 6.464), (5.464, 7.464), (3, 3.732), (4.732, 4.732), (6.464, 5.732), (4, 2), (5.732, 3), (7.464, 4)]

K = np.zeros((n_points * 2, 6))
b = np.zeros(n_points * 2)
for i in range(n_points):
    # Row 2i holds the x' equation, row 2i + 1 the y' equation.
    K[i * 2, 0] = aligned_points[i][0]
    K[i * 2, 1] = aligned_points[i][1]
    K[i * 2, 2] = 1
    K[i * 2 + 1, 3] = aligned_points[i][0]
    K[i * 2 + 1, 4] = aligned_points[i][1]
    K[i * 2 + 1, 5] = 1
    b[i * 2] = grid_points[i][0]
    b[i * 2 + 1] = grid_points[i][1]

# '@' is matrix multiplication; this evaluates the normal equations.
a = np.linalg.inv(K.T @ K) @ K.T @ b
A = a.reshape(2, 3)
print(A)
[[ 1.     1.732  2.   ]
 [-1.732  1.     5.464]]
Then the grid parameters can be extracted from this matrix:
import math

theta = math.degrees(math.atan2(A[1, 0], A[0, 0]))
scale_x = math.sqrt(A[1, 0] ** 2 + A[0, 0] ** 2)
scale_y = math.sqrt(A[1, 1] ** 2 + A[0, 1] ** 2)
origin_x = A[0, 2]
origin_y = A[1, 2]
theta = -59.99927221917264
scale_x = 1.99995599951599
scale_y = 1.9999559995159895
origin_x = 1.9999999999999993
origin_y = 5.464
However, one minor issue remains: matrix A corresponds to an affine transform, so the grid axes are not guaranteed to be perpendicular. If this is a problem, the first two columns of the matrix can be modified in such a way that the transform preserves angles.
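For example (a sketch, not part of the algorithm above), the nearest rotation-times-uniform-scale matrix can be obtained from the SVD of the 2x2 linear part, i.e. a polar decomposition:
import numpy as np

def closest_similarity(A):
    # Replace the 2x2 linear part of A with the nearest rotation times
    # uniform scale (polar decomposition via SVD); the translation
    # column is left untouched.
    M = A[:, :2]
    U, s, Vt = np.linalg.svd(M)
    R = U @ Vt            # nearest orthogonal matrix (check det(R) for flips)
    out = A.copy()
    out[:, :2] = s.mean() * R
    return out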

Update: I fixed the mistakes and resolved the sign ambiguities, so this algorithm now produces the expected result. However, it should still be tested to verify that all cases are handled correctly.
Here is another attempt at solving this problem. The idea is to decompose the transformation into a non-uniform scaling matrix and a rotation matrix, A = R * S, and then solve for the coefficients sx, sy, r1, r2 of these matrices under the restriction r1^2 + r2^2 = 1. The minimization problem is described here: How to find a transformation (non-uniform scaling and similarity) that maps one set of points to another?
import math

def shift_points(points):
    # Translate the points so that their centroid is at the origin.
    n_points = len(points)
    shift = tuple(sum(coords) / n_points for coords in zip(*points))
    shifted_points = [(point[0] - shift[0], point[1] - shift[1]) for point in points]
    return shifted_points, shift

n_points = 9
aligned_points = [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2), (2, 0), (2, 1), (2, 2)]
grid_points = [(2, 5.464), (3.732, 6.464), (5.464, 7.464), (3, 3.732), (4.732, 4.732), (6.464, 5.732), (4, 2), (5.732, 3), (7.464, 4)]

aligned_points, aligned_shift = shift_points(aligned_points)
grid_points, grid_shift = shift_points(grid_points)

c1, c2 = 0, 0
b11, b12, b21, b22 = 0, 0, 0, 0
for i in range(n_points):
    c1 += aligned_points[i][0] ** 2
    c2 += aligned_points[i][1] ** 2
    b11 -= 2 * aligned_points[i][0] * grid_points[i][0]
    b12 -= 2 * aligned_points[i][1] * grid_points[i][0]
    b21 -= 2 * aligned_points[i][0] * grid_points[i][1]
    b22 -= 2 * aligned_points[i][1] * grid_points[i][1]

k = (b11 ** 2 * c2 + b22 ** 2 * c1 - b21 ** 2 * c2 - b12 ** 2 * c1) / \
    (b21 * b11 * c2 - b12 * b22 * c1)

# r1_sqr and r2_sqr might need to be swapped
r1_sqr = 2 / (k ** 2 + 4 + k * math.sqrt(k ** 2 + 4))
r2_sqr = 2 / (k ** 2 + 4 - k * math.sqrt(k ** 2 + 4))

# Try all sign combinations and keep the one giving non-negative scales.
for sign1, sign2 in [(1, 1), (-1, 1), (1, -1), (-1, -1)]:
    r1 = sign1 * math.sqrt(r1_sqr)
    r2 = sign2 * math.sqrt(r2_sqr)
    scale_x = -b11 / (2 * c1) * r1 - b21 / (2 * c1) * r2
    scale_y = b12 / (2 * c2) * r2 - b22 / (2 * c2) * r1
    if scale_x >= 0 and scale_y >= 0:
        break

theta = math.degrees(math.atan2(r2, r1))
There might be ambiguities in choosing r1_sqr and r2_sqr. The origin can be estimated from aligned_shift and grid_shift, but I haven't implemented that yet.
theta = -59.99927221917264
scale_x = 1.9999559995159895
scale_y = 1.9999559995159895
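As a sketch of the missing origin estimation (my addition, following the model A = R * S fitted on the centered points): the centroids satisfy grid_centroid = R * S * aligned_centroid + t, so the origin t is
import numpy as np

R = np.array([[r1, -r2], [r2, r1]])   # rotation with cos(theta) = r1, sin(theta) = r2
S = np.diag([scale_x, scale_y])
t = np.array(grid_shift) - R @ S @ np.array(aligned_shift)  # image of index (0, 0)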

Related

Numpy: How To Vectorize Operations?

I have the following arrays
shape u_w: (50,)
shape Vt: (6, 50)
shape v: (50,)
and with them I perform the following calculations
w = np.tanh(u_w + Vt[0])
w_squared = w ** 2
z = np.dot(v, w)
s = sigmoid(np.dot(v, w))
J = -np.log(sigmoid(z))
dv = np.dot(sigmoid(z) - 1, w)
du_w = np.dot(s - 1, v, (1 - w_squared))
dVt = np.dot(s - 1, v, (1 - w_squared))
for vt in Vt[1:]:
    t = np.tanh(u_w + vt)
    svt = sigmoid(np.dot(-v, t))
    J -= np.log(svt)
    dv -= np.dot((svt - 1), t)
    du_w -= np.dot((svt - 1), v, (1 - t**2))
    dVt = np.vstack((dVt, -np.dot(svt - 1, v, (1 - t**2))))
How do I vectorize the calculations for J, dv, du_w and dVt, so that they work for a batch of S items with the following shapes?
shape(u_w) => (512, 50)
shape(Vt) => (512, 6, 50)
shape(v) => (50,)
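A sketch of one way to batch these expressions, assuming the three-argument np.dot calls above were intended as elementwise products (a third positional argument is otherwise interpreted as np.dot's out parameter):
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# u_w: (B, 50), Vt: (B, K, 50), v: (50,)
t = np.tanh(u_w[:, None, :] + Vt)                    # (B, K, 50)
signs = np.concatenate(([1.0], -np.ones(Vt.shape[1] - 1)))
z = np.einsum('bkf,f->bk', t, v) * signs             # v dot t_k, sign flipped for k > 0
s = sigmoid(z)                                       # (B, K)
J = -np.log(s).sum(axis=1)                           # (B,)
g = (s - 1.0) * signs                                # (B, K)
dv = np.einsum('bk,bkf->bf', g, t)                   # per-item gradient; sum over axis 0 if v is shared
dt = g[:, :, None] * v * (1.0 - t ** 2)              # (B, K, 50)
du_w = dt.sum(axis=1)                                # (B, 50)
dVt = dt                                             # (B, K, 50)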

Mask values inside given path (triangle, square etc) for a contourf plot

I am trying to mask specific regions (triangles, squares) in a contourf plot. I can build the mask based on the Z values, but I am finding it difficult to get it to work based on the x and y values. For the MWE below, I want to create a mask between given X, Y values (a triangle or square). Say, for the example below, I want to mask the values inside the triangle formed by the points (0,0), (2,0), (0,2). Basically, I want to be able to provide an enclosed path and mask everything inside it. I have tried the approach here, but I have to provide the logic for individual X and Y values, which becomes cumbersome for a complicated path.
import numpy as np
import matplotlib.pyplot as plt
origin = 'lower'
delta = 0.025
x = y = np.arange(-3.0, 3.01, delta)
X, Y = np.meshgrid(x, y)
Z1 = np.exp(-X**2 - Y**2)
Z2 = np.exp(-(X - 1)**2 - (Y - 1)**2)
Z = (Z1 - Z2) * 2
fig1, ax2 = plt.subplots(constrained_layout=True)
CS = ax2.contourf(X, Y, Z, 10, cmap=plt.cm.viridis, origin=origin, extend='both')
ax2.set_title('Random Plot')
ax2.set_xlabel('X Axis')
ax2.set_ylabel('Y Axis')
cbar = fig1.colorbar(CS)
A convex shape such as a triangle can be defined by the equations of the lines through its vertices. In this case the equations are quite simple: X >= 0 is the zone to the right of the line through (0,0) and (0,2). Similarly, Y >= 0 and X + Y <= 2 define the two other zones. The triangle is the intersection of these three zones.
Setting the corresponding Z values to NaN will create the empty triangle in the contour plot.
import numpy as np
import matplotlib.pyplot as plt
delta = 0.025
x = y = np.arange(-3.0, 3.01, delta)
X, Y = np.meshgrid(x, y)
Z1 = np.exp(-X ** 2 - Y ** 2)
Z2 = np.exp(-(X - 1) ** 2 - (Y - 1) ** 2)
Z = (Z1 - Z2) * 2
Z[(X >= 0) & (Y >= 0) & (X + Y <= 2)] = np.nan
fig1, ax2 = plt.subplots()
CS = ax2.contourf(X, Y, Z, 10, cmap=plt.cm.viridis, origin='lower', extend='both')
ax2.set_title('Random Plot missing a triangle')
ax2.set_xlabel('X Axis')
ax2.set_ylabel('Y Axis')
cbar = fig1.colorbar(CS)
plt.show()
PS: The equation of a line through two points x1,y1 and x2,y2 is
(X - x1) * (y2 - y1) - (Y - y1) * (x2 - x1) == 0
So, a more general code could look like:
def line_eq(X, Y, p1, p2):
    x1, y1 = p1
    x2, y2 = p2
    return (X - x1) * (y2 - y1) - (Y - y1) * (x2 - x1) >= 0

p = [(0, 0), (0, 2), (2, 0)]  # clockwise ordering
Z[line_eq(X, Y, p[0], p[1]) & line_eq(X, Y, p[1], p[2]) & line_eq(X, Y, p[2], p[0])] = np.nan
Note that when the vertices are ordered counterclockwise, the equation should be <= 0 to grab the interior convex shape.
Concave shapes can be created by taking the union (logical or) of several convex shapes:
def line_eq(X, Y, p1, p2):
    x1, y1 = p1
    x2, y2 = p2
    return (X - x1) * (y2 - y1) - (Y - y1) * (x2 - x1) >= 0

def convex_eq(X, Y, p):
    # Intersection of the half-planes spanned by consecutive vertices.
    mask = line_eq(X, Y, p[-1], p[0])
    for p1, p2 in zip(p[:-1], p[1:]):
        mask &= line_eq(X, Y, p1, p2)
    return mask

def multiple_convex_eq(X, Y, c):
    # Union (logical or) of several convex shapes.
    mask = convex_eq(X, Y, c[0])
    for ci in c:
        mask |= convex_eq(X, Y, ci)
    return mask

p = [(0, 2.5), (1.5, 1), (1, -2), (-1, -2), (-1.5, 1)]  # pentagon, clockwise ordering
five_triangles = [[(0, 0), p1, p2] for p1, p2 in zip(p, (p + p)[2:])]
Z[multiple_convex_eq(X, Y, five_triangles)] = np.nan
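As an alternative sketch (not from the answer above), matplotlib's own Path class can test arbitrary polygons, including concave ones, without hand-written half-plane logic:
from matplotlib.path import Path
import numpy as np

poly = Path([(0, 0), (2, 0), (0, 2)])                     # triangle vertices
pts = np.column_stack((X.ravel(), Y.ravel()))
Z[poly.contains_points(pts).reshape(X.shape)] = np.nan    # mask the interior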

How to use r kmeans cluster vector to repaint plot?

km = kmeans(FourA, 3)
km$cluster
[1] 1 1 1 2 1 1 2 2 2 2 3 2 ...
How do I use the km$cluster vector to create 3 new arrays so that I can plot the graph with the three clusters using a different character/color?
For your reference
x <- rbind(matrix(rnorm(100, sd = 0.3), ncol = 2),
           matrix(rnorm(100, mean = 1, sd = 0.3), ncol = 2))
cl <- kmeans(x, 3, nstart = 25)
plot(x, col = cl$cluster)
points(cl$centers, col = 1:3, pch = 8)

sum numpy array at given indices

I want to add the values of a vector:
a = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], dtype='d')
to the values of another vector:
c = np.array([10, 10, 10], dtype='d')
at positions given by another array (of the same size as a, with values 0 <= b[i] < len(c)):
b = np.array([2, 0, 1, 0, 2, 0, 1, 1, 0, 2], dtype='int32')
This is very simple to write in pseudo-code:
for I in range(b.shape[0]):
    J = b[I]
    c[J] += a[I]
Something like this, but vectorized (length of c is some hundreds in real case).
c[0] += np.sum(a[b==0]) # 27 (10 + 1 + 3 + 5 + 8)
c[1] += np.sum(a[b==1]) # 25 (10 + 2 + 6 + 7)
c[2] += np.sum(a[b==2]) # 23 (10 + 0 + 4 + 9)
My initial guess was:
c[b] += a
but only last values of a are summed.
You can use np.bincount to get ID-based weighted summations and then add the result to c, like so:
np.bincount(b, a) + c
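Two standard variants of the same idea: np.add.at performs the scatter-add in place, and bincount's minlength guards against bins that never occur in b:
import numpy as np

np.add.at(c, b, a)   # unbuffered in-place scatter-add, equivalent to the loop
# or: c += np.bincount(b, weights=a, minlength=len(c))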

Independent rotation of stereoscopic cameras

I have two cameras pointing at the same scene. When they are parallel to each other, I can convert from a real location to each screen coordinate and from two screen coordinates to a real location.
From a real location to each screen coordinate (the focal length f is known):
xl = XL / Z * f
yl = YL / Z * f
xr = XR / Z * f
yr = YR / Z * f
From two screen coordinates to a real location:
XL + XR = D
xl / f = XL / Z
xr / f = XR / Z
Z = f * D / (xl + xr)
XL = xl / f * Z
YL = yl / f * Z
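For instance, a quick numeric sketch of these formulas (the values are made up):
f, D = 0.5, 0.1                  # focal length and camera separation
xl, yl, xr = 0.02, 0.01, 0.03    # screen coordinates
Z = f * D / (xl + xr)            # = 1.0
XL = xl / f * Z                  # = 0.04
YL = yl / f * Z                  # = 0.02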
The cameras now have two independent three-axis rotations (α, β, ζ) and (α', β', ζ'): their yaw, pitch and roll. Each camera first rotates by α about the y axis, then by β about its new x axis, and finally by ζ about its new z axis.
I can still convert from a real location to each screen coordinate by rotating the real position and applying the same formula as the case above:
(AL, BL, CL) = rot33_axis3(ζ) * rot33_axis1(β) * rot33_axis2(α) * (XL, YL, Z)
(AR, BR, CR) = rot33_axis3(ζ') * rot33_axis1(β') * rot33_axis2(α') * (XR, YR, Z)
xl = AL / CL * f
yl = BL / CL * f
xr = AR / CR * f
yr = BR / CR * f
I have tested and the calculated coordinates match the screen.
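For reference, a NumPy sketch of this forward projection (the axis conventions are my reading of the formulas above):
import numpy as np

def rot_y(a):  # yaw: rotation about the y axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def rot_x(b):  # pitch: rotation about the x axis
    c, s = np.cos(b), np.sin(b)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def rot_z(z):  # roll: rotation about the z axis
    c, s = np.cos(z), np.sin(z)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def project(point, alpha, beta, zeta, f):
    # (A, B, C) = Rz(zeta) @ Rx(beta) @ Ry(alpha) @ (X, Y, Z), then divide by depth
    A, B, C = rot_z(zeta) @ rot_x(beta) @ rot_y(alpha) @ np.asarray(point, dtype=float)
    return f * A / C, f * B / C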
My problem now is to calculate the real location from the 2 screen coordinates. I'm doing:
(al, bl, cl) = rot33_axis2(-α) * rot33_axis1(-β) * rot33_axis3(-ζ) * (xl, yl, f)
(ar, br, cr) = rot33_axis2(-α') * rot33_axis1(-β') * rot33_axis3(-ζ') * (xr, yr, f)
XL + XR = D
al / f = XL / Z
ar / f = XR / Z
Z = f * D / (al + ar)
XL = al / f * Z
YL = bl / f * Z
Unfortunately, that doesn't work.
My idea is to take the screen coordinate, assign the z value to the focal length, apply the rotation matrices in reverse order with negated angles (at this point the screens have been rotated "back" to planes parallel to the line joining the two cameras) and apply the same formula as in the first case.
What am I doing wrong? Is it wrong to start with (xl, yl, f)?
EDIT 1:
Based on aledalgrande's answer, here is some OpenCV code:
//Image is 640x360, focal is 0.42
Matx33d camMat = Matx33d(
0.42f * 640.0f, 0.0f, 320.0f,
0.0f, 0.42f * 360.0f, 180.0f,
0.0f, 0.0f, 1.0f);
Matx41d distCoeffs = Matx41d(0.0f, 0.0f, 0.0f, 0.0f);
Matx31d rvec0, tvec0, rvec1, tvec1;
solvePnP(objPoints, imgPoints0, camMat, distCoeffs, rvec0, tvec0);
solvePnP(objPoints, imgPoints1, camMat, distCoeffs, rvec1, tvec1);
//Results make sense if I use projectPoints
Matx33d rot0;
Rodrigues(rvec0, rot0);
Matx34d P0 = Matx34d(
    rot0(0, 0), rot0(0, 1), rot0(0, 2), tvec0(0),
    rot0(1, 0), rot0(1, 1), rot0(1, 2), tvec0(1),
    rot0(2, 0), rot0(2, 1), rot0(2, 2), tvec0(2));
Matx33d rot1;
Rodrigues(rvec1, rot1);
Matx34d P1 = Matx34d(
    rot1(0, 0), rot1(0, 1), rot1(0, 2), tvec1(0),
    rot1(1, 0), rot1(1, 1), rot1(1, 2), tvec1(1),
    rot1(2, 0), rot1(2, 1), rot1(2, 2), tvec1(2));
Point u0_(353, 156);
Point u1_(331, 94);
Matx33d camMatInv = camMat.inv();
// Normalize the pixel coordinates with the inverse camera matrix.
Point2d u0, u1;
u0.x = u0_.x * camMatInv(0, 0) + u0_.y * camMatInv(0, 1) + 1.0f * camMatInv(0, 2);
u0.y = u0_.x * camMatInv(1, 0) + u0_.y * camMatInv(1, 1) + 1.0f * camMatInv(1, 2);
u1.x = u1_.x * camMatInv(0, 0) + u1_.y * camMatInv(0, 1) + 1.0f * camMatInv(0, 2);
u1.y = u1_.x * camMatInv(1, 0) + u1_.y * camMatInv(1, 1) + 1.0f * camMatInv(1, 2);
Matx14d A1(u0.x * P0(2, 0) - P0(0, 0), u0.x * P0(2, 1) - P0(0, 1), u0.x * P0(2, 2) - P0(0, 2), u0.x * P0(2, 3) - P0(0, 3));
Matx14d A2(u0.y * P0(2, 0) - P0(1, 0), u0.y * P0(2, 1) - P0(1, 1), u0.y * P0(2, 2) - P0(1, 2), u0.y * P0(2, 3) - P0(1, 3));
Matx14d A3(u1.x * P1(2, 0) - P1(0, 0), u1.x * P1(2, 1) - P1(0, 1), u1.x * P1(2, 2) - P1(0, 2), u1.x * P1(2, 3) - P1(0, 3));
Matx14d A4(u1.y * P1(2, 0) - P1(1, 0), u1.y * P1(2, 1) - P1(1, 1), u1.y * P1(2, 2) - P1(1, 2), u1.y * P1(2, 3) - P1(1, 3));
double normA1 = norm(A1), normA2 = norm(A2), normA3 = norm(A3), normA4 = norm(A4);
Matx44d A(
A1(0) / normA1, A1(1) / normA1, A1(2) / normA1, A1(3) / normA1,
A2(0) / normA2, A2(1) / normA2, A2(2) / normA2, A2(3) / normA2,
A3(0) / normA3, A3(1) / normA3, A3(2) / normA3, A3(3) / normA3,
A4(0) / normA4, A4(1) / normA4, A4(2) / normA4, A4(3) / normA4);
SVD svd;
Matx41d u;
svd.solveZ(A, u);
You cannot use the simplified formula for triangulation if the cameras are rotated (general case). You will have to resort to linear triangulation (or other methods if you want a more accurate result).
// points u0 and u1, projection matrices firstP and secondP
// "Multiple View Geometry in Computer Vision" 12.2 and 4.1.1
cv::Matx14d A1 = u0(0) * firstP.row(2) - firstP.row(0);
cv::Matx14d A2 = u0(1) * firstP.row(2) - firstP.row(1);
cv::Matx14d A3 = u1(0) * secondP.row(2) - secondP.row(0);
cv::Matx14d A4 = u1(1) * secondP.row(2) - secondP.row(1);
double normA1 = cv::norm(A1), normA2 = cv::norm(A2), normA3 = cv::norm(A3), normA4 = cv::norm(A4);
cv::Matx44d A(A1(0) / normA1, A1(1) / normA1, A1(2) / normA1, A1(3) / normA1,
A2(0) / normA2, A2(1) / normA2, A2(2) / normA2, A2(3) / normA2,
A3(0) / normA3, A3(1) / normA3, A3(2) / normA3, A3(3) / normA3,
A4(0) / normA4, A4(1) / normA4, A4(2) / normA4, A4(3) / normA4);
cv::SVD svd;
cv::Matx41d pointHomogeneous;
svd.solveZ(A, pointHomogeneous);
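For comparison, a minimal NumPy sketch of the same linear triangulation: P0 and P1 are the 3x4 projection matrices and u0, u1 the normalized image points; the solution is the right singular vector for the smallest singular value, dehomogenized by its last component:
import numpy as np

def triangulate(P0, P1, u0, u1):
    # Direct linear transform: each image coordinate contributes one row.
    A = np.vstack([
        u0[0] * P0[2] - P0[0],
        u0[1] * P0[2] - P0[1],
        u1[0] * P1[2] - P1[0],
        u1[1] * P1[2] - P1[1],
    ])
    A /= np.linalg.norm(A, axis=1, keepdims=True)   # normalize each row
    X = np.linalg.svd(A)[2][-1]                     # null-space vector (smallest singular value)
    return X[:3] / X[3]                             # dehomogenize to (X, Y, Z)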