how to perform coordinates affine transformation using python? part 2 - numpy

I have the same problem as described here:
how to perform coordinates affine transformation using python?
I was trying to use the method described there, but for some reason I get error messages.
The only change I made to the code was to replace the primary system and secondary system points. I created the secondary coordinate points by using a different origin. In the real case for which I am studying this topic, there will be some error when measuring the coordinates.
primary_system1 = (40.0, 1160.0, 0.0)
primary_system2 = (40.0, 40.0, 0.0)
primary_system3 = (260.0, 40.0, 0.0)
primary_system4 = (260.0, 1160.0, 0.0)
secondary_system1 = (610.0, 560.0, 0.0)
secondary_system2 = (610.0, -560.0, 0.0)
secondary_system3 = (390.0, -560.0, 0.0)
secondary_system4 = (390.0, 560.0, 0.0)
The error I get when executing is the following:
Traceback (most recent call last):
  File "affine_try.py", line 57, in <module>
    secondary_system3, secondary_system4 )
  File "affine_try.py", line 22, in solve_affine
    A2 = y * x.I
  File "/usr/lib/python2.7/dist-packages/numpy/matrixlib/defmatrix.py", line 850, in getI
    return asmatrix(func(self))
  File "/usr/lib/python2.7/dist-packages/numpy/linalg/linalg.py", line 445, in inv
    return wrap(solve(a, identity(a.shape[0], dtype=a.dtype)))
  File "/usr/lib/python2.7/dist-packages/numpy/linalg/linalg.py", line 328, in solve
    raise LinAlgError, 'Singular matrix'
numpy.linalg.linalg.LinAlgError: Singular matrix
What might be the problem?

The problem is that your matrix is singular, meaning it's not invertible. Since you're trying to take the inverse of it, that's a problem. The thread that you linked to is a basic solution to your problem, but it's not really the best solution. Rather than just inverting the matrix, what you actually want to do is solve a least-squares minimization problem to find the optimal affine transform matrix for your possibly noisy data. Here's how you would do that:
import numpy as np

primary = np.array([[40., 1160., 0.],
                    [40., 40., 0.],
                    [260., 40., 0.],
                    [260., 1160., 0.]])

secondary = np.array([[610., 560., 0.],
                      [610., -560., 0.],
                      [390., -560., 0.],
                      [390., 560., 0.]])

# Pad the data with ones, so that our transformation can do translations too
n = primary.shape[0]
pad = lambda x: np.hstack([x, np.ones((x.shape[0], 1))])
unpad = lambda x: x[:, :-1]
X = pad(primary)
Y = pad(secondary)

# Solve the least squares problem X * A = Y
# to find our transformation matrix A
A, res, rank, s = np.linalg.lstsq(X, Y)

transform = lambda x: unpad(np.dot(pad(x), A))

print "Target:"
print secondary
print "Result:"
print transform(primary)
print "Max error:", np.abs(secondary - transform(primary)).max()
The reason that your original matrix was singular is that your third coordinate is always zero, so there's no way to tell what the transform on that coordinate should be (zero times anything gives zero, so any value would work).
Printing the value of A tells you the transformation that least-squares has found:
A[np.abs(A) < 1e-10] = 0 # set really small values to zero
print A
results in
[[  -1.    0.    0.    0.]
 [   0.    1.    0.    0.]
 [   0.    0.    0.    0.]
 [ 650. -600.    0.    1.]]
which is equivalent to x2 = -x1 + 650, y2 = y1 - 600, z2 = 0 where x1, y1, z1 are the coordinates in your original system and x2, y2, z2 are the coordinates in your new system. As you can see, least-squares just set all the terms related to the third dimension to zero, since your system is really two-dimensional.
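Because lstsq performs a least-squares fit rather than an exact inversion, the same code also works when the measured secondary points are noisy; the fit then minimizes the residual over all points. Here is a minimal sketch of that (not part of the original answer; the noise level and seed are arbitrary assumptions), reusing primary, secondary, pad and unpad from the snippet above:

# Simulated measurement noise on the secondary coordinates (assumed scale of 0.5 units)
np.random.seed(0)
noisy_secondary = secondary + np.random.normal(scale=0.5, size=secondary.shape)

# Same least-squares fit as before, but against the noisy targets
A_noisy, res, rank, s = np.linalg.lstsq(pad(primary), pad(noisy_secondary))

# The per-point error stays on the order of the measurement noise
print(np.abs(noisy_secondary - unpad(np.dot(pad(primary), A_noisy))).max())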

Related

Forward Substitution Method

I have the following matrix equation [L][z] = [b],
where [L] is a 3x3 matrix, and z (the solution vector) and b (the right-hand-side vector) are 3x1 vectors.
I want to find the solution vector (i.e., the values of z0, z1, z2).
The value of z I am getting is the following:
z vector from forward substitution:
[ 106.8 177.2 -335.968]
Link: https://www.youtube.com/watch?v=5uNBrCw0M3g
import numpy as np

L = np.array([[1.,   0.,  0.],
              [2.56, 1.,  0.],
              [5.76, 3.5, 1.]], float)
print(L)

# Right-hand side vector
b = np.array([106.8, 177.2, 279.2], float)

n = len(b)
z = np.zeros(n, float)
z[0] = b[0] / L[0, 0]  # or z[0] = b[0], because L[0, 0] = 1
# print(z)

# Forward substitution method
for i in range(1, n):
    sum_Lz = 0
    for j in range(0, i-1):
        sum_Lz += L[i, j] * z[j]
    # print(sum_Lz)
    z[i] = (b[i] - sum_Lz) / L[i, i]  # L[i, i] = 1

print("z vector from forward substitution: ")
print(z)
Basically, in each iteration I want to calculate one entry of z separately (z0, z1, z2, one per row).
For the first iteration I have already defined z[0] = b[0]/L[0,0] separately. Then I run a for loop that starts from the second row, L[1,0]*z0 + L[1,1]*z1 = b[1], which gives the value of z1; here I want to use the z0 obtained from b[0]/L[0,0]. Similarly, the third row is L[2,0]*z0 + L[2,1]*z1 + L[2,2]*z2 = b[2], where I want to use the z1 calculated in the earlier iteration. This is how I get the solution vector z.
Change the index from i-1 to i in the j loop:
import numpy as np

L = np.array([[1.,   0.,  0.],
              [2.56, 1.,  0.],
              [5.76, 3.5, 1.]], float)
print(L)

# Right-hand side vector
b = np.array([106.8, 177.2, 279.2], float)

n = len(b)
z = np.zeros(n, float)
z[0] = b[0] / L[0, 0]  # or z[0] = b[0], because L[0, 0] = 1

# Forward substitution method
for i in range(1, n):
    sum_Lz = 0
    for j in range(0, i):        # upper bound is i, not i - 1
        sum_Lz += L[i, j] * z[j]
    z[i] = (b[i] - sum_Lz) / L[i, i]  # L[i, i] = 1

print("z vector from forward substitution: ")
print(z)
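To double-check the corrected loop, you can compare it against a library triangular solver. Below is a minimal sketch (not part of the original answer), assuming SciPy is available; scipy.linalg.solve_triangular with lower=True performs exactly this forward substitution.

import numpy as np
from scipy.linalg import solve_triangular

L = np.array([[1.,   0.,  0.],
              [2.56, 1.,  0.],
              [5.76, 3.5, 1.]])
b = np.array([106.8, 177.2, 279.2])

# Forward substitution done by SciPy; should match the loop above
z_ref = solve_triangular(L, b, lower=True)
print(z_ref)  # expected: [106.8, -96.208, 0.76]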

Can the y-axis value be greater than 1 after using density=True in the plt.hist function?

In the following code, using density=True gives a probability density function whose area under the curve is equal to 1. However, on the y-axis I am getting values greater than 1. Shouldn't they be in the range [0, 1]? I have used similar Ls data, and in some cases the area under the curve comes out as 1.00000002. Can we get values greater than 1 after using density=True?
import numpy as np
import matplotlib.pyplot as plt

Ls = np.array([0.28904281, 0.07674593, 0.1602232, 0.91125812, 0.31972387,
               0.44688592, 5.91075163, 0.52440172, 0.51376938, 0.16481705,
               0.35505426, 0.4354722, 5.79930666, 0.19999694, 6.86172285,
               0.44483234, 0.30679022, 0.68441839, 0.58445867, 0.32532547,
               0.3061434, 0.33319276, 0.26755858, 0.40010735, 0.53881366,
               0.07560947, 0.29215042, 0.39045618, 0.27220402, 0.8472669,
               0.11772259, 0.20980161, 0.31013296, 0.60890591, 5.78487512,
               0.5614211, 0.36864035])

fig = plt.figure()
ax1 = fig.add_subplot(111)
hist, bin_edges, patches = ax1.hist(Ls, bins=20, density=True)
print("hist: ", hist)
# hist: [1.67273235, 0.79653921, 0.15930784, 0., 0., 0., 0., 0., 0., 0.,
#        0., 0., 0., 0., 0., 0., 0.15930784, 0.07965392, 0., 0.07965392]
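For reference, density=True normalizes the histogram so that the total area of the bars is 1, not the individual bar heights, so heights above 1 are possible whenever the bin widths are smaller than 1. A minimal check of this (not part of the original post), reusing hist and bin_edges from above:

# Bar height times bin width sums to 1 (up to floating-point error),
# even though individual heights can exceed 1.
bin_widths = np.diff(bin_edges)
print(np.sum(hist * bin_widths))  # ~1.0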

High eigenvalues always for edge detection

I am trying to understand the Harris detector, using the explanation here. As per that explanation, I understand how the two eigenvalues are supposed to distinguish flat regions, edges, and corners.
However, when I calculate them, the eigenvalues are always high. Below is the main image from which I extract parts to calculate the eigenvalues.
For a flat area with no visible features, I get this distribution (rightmost plot), which is good, but the eigenvalues are still large:
260935.70201362, 434796.29798638
For a linear edge, I also get high eigenvalues:
16290305.45393251, 567780.54606749
For a corner, high values are expected, but given the cases above I am now doubtful whether these values are correct:
8958127.80563239, 10986758.19436761
Here is my method, translated from the MATLAB code here. The vals value is what I get directly from numpy's linear algebra library.
def plot_derivatives_1(img_rgb, mode=1):
    '''
    img_rgb = image in RGB color space (3 channels)
    '''
    img_1c = cv2.cvtColor(img_rgb, cv2.COLOR_BGR2GRAY)
    if mode == 1:  # method 1 derivative
        Ix = cv2.Sobel(img_1c, cv2.CV_64F, 1, 0, ksize=3)
        Iy = cv2.Sobel(img_1c, cv2.CV_64F, 0, 1, ksize=3)
    else:
        # another method of derivatives
        dx = np.array([
            [-1, 0, 1],
            [-1, 0, 1],
            [-1, 0, 1]
        ])
        dy = np.transpose(dx)
        Ix = signal.convolve2d(img_1c, dx, mode='valid')
        Iy = signal.convolve2d(img_1c, dy, mode='valid')
    Ix, Iy = Ix.astype(np.float64), Iy.astype(np.float64)  # else gaussian blur later is failing

    # yet to solve why we need A and eigen outputs
    A = np.array([
        [np.sum(Ix * Ix), np.sum(Ix * Iy)],
        [np.sum(Ix * Iy), np.sum(Iy * Iy)]
    ])
    vals, V = linalg.eig(A)
    lamb = vals / np.max(vals)
    print('lambda values: {}'.format(vals))

    fig, ax = plt.subplots(1, 4, figsize=(20, 5))
    ax[0].imshow(img_rgb); ax[0].set_title('Input Image')
    ax[1].imshow(Ix, cmap='gray'); ax[1].set_title('$I_x = \dfrac{\partial I}{\partial x}$')
    ax[2].imshow(Iy, cmap='gray'); ax[2].set_title('$I_y = \dfrac{\partial I}{\partial y}$')
    ax[3].scatter(Ix, Iy); ax[3].set_xlim([-200, 200]); ax[3].set_ylim([-200, 200])
    ax[3].set_aspect('equal'); ax[3].set_title('Derivatives Distribution')
    ax[3].set_xlabel('Ix'); ax[3].set_ylabel('Iy')
    ax[3].axvline(x=0, color='r'); ax[3].axhline(y=0, color='r')
    plt.tight_layout(); plt.show()
    return Ix, Iy
A sample call for one case (shown here for the corner):
img = cv2.imread(SRC_FOLDER + 'checkersandbooksmall_sample_6.jpg')
img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
Ix, Iy = plot_derivatives_1(img_rgb, mode=1)
I use a Jupyter notebook, and the code was built up piece by piece as I tried to understand the concept.
What am I doing wrong that gives high eigenvalues in all cases?
The sample images used for the above cases can be found here.
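To see the expected qualitative pattern (both eigenvalues near zero for a flat patch, one large eigenvalue for an edge, two comparable eigenvalues for a corner), here is a minimal sketch on tiny synthetic patches; it is not taken from the original post and uses np.gradient instead of the Sobel derivatives above:

import numpy as np

# Synthetic 8x8 patches: constant, vertical step edge, and a corner-like quadrant
flat = np.zeros((8, 8))
edge = np.zeros((8, 8)); edge[:, 4:] = 255.0
corner = np.zeros((8, 8)); corner[4:, 4:] = 255.0

def structure_tensor_eigenvalues(patch):
    # Central-difference gradients summed into the 2x2 structure tensor,
    # mirroring the A matrix built in plot_derivatives_1 above
    Iy, Ix = np.gradient(patch.astype(np.float64))
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    return np.linalg.eigvalsh(A)

for name, patch in [('flat', flat), ('edge', edge), ('corner', corner)]:
    print(name, structure_tensor_eigenvalues(patch))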

Possible tensorflow cholesky_solve inconsistency?

I am trying to solve a linear system of equations using tensorflow.cholesky_solve and I'm getting some unexpected results.
I wrote a script that solves a very simple linear system in three ways and compares the outputs: simple matrix inversion via tensorflow.matrix_inverse, the non-Cholesky-based solver tensorflow.matrix_solve, and tensorflow.cholesky_solve.
According to my understanding of the docs I've linked, these three cases should all yield a solution of the identity matrix divided by 2, but this is not the case for tensorflow.cholesky_solve. Perhaps I'm misunderstanding the docs?
import tensorflow as tf

I = tf.eye(2, dtype=tf.float32)
X = 2 * tf.eye(2, dtype=tf.float32)

X_inv = tf.matrix_inverse(X)
X_solve = tf.matrix_solve(X, I)
X_chol_solve = tf.cholesky_solve(tf.cholesky(X), I)

with tf.Session() as sess:
    for x in [X_inv, X_solve, X_chol_solve]:
        print('{}:\n{}'.format(x.name, sess.run(x)))
        print
yielding output:
MatrixInverse:0:
[[ 0.5  0. ]
 [ 0.   0.5]]
MatrixSolve:0:
[[ 0.5  0. ]
 [ 0.   0.5]]
cholesky_solve/MatrixTriangularSolve_1:0:
[[ 1.  0.]
 [ 0.  1.]]

Process finished with exit code 0
I think it's a bug. Notice how the result doesn't even depend on the RHS, unless RHS = 0, in which case you get nan instead of 0. Please report it on GitHub.
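For comparison (not from the original answer), solving the same system with SciPy's Cholesky routines does give the expected identity divided by 2, which matches the other two TensorFlow results above. A minimal sketch:

import numpy as np
from scipy.linalg import cho_factor, cho_solve

X = 2 * np.eye(2, dtype=np.float32)
I = np.eye(2, dtype=np.float32)

# Factor X and solve X @ result = I; the expected result is 0.5 * I
c, low = cho_factor(X)
print(cho_solve((c, low), I))
# [[0.5 0. ]
#  [0.  0.5]]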

Tensorflow R0.12 softmax_cross_entropy_with_logits ASSERT Error

I have been working on getting "softmax_cross_entropy_with_logits" working as part of my cost function for a 147 class problem. I have the code working with "sigmoid_cross_entropy_with_logits" but would like to move to softmax.
I have made a number of attempts to get the code working, including reshaping from rank 3 to rank 2 (which didn't help), and I am stuck. I have tried some toy code in a notebook, and there softmax_cross_entropy_with_logits does not raise an assertion error. I also tried casting the float32 tensors to float64 (as my notebook example used 64-bit and worked), but it still raises the error.
Here is the toy code:
# Toy code: y_hat (a 2x3 float64 logits tensor) and sess are defined earlier in the notebook
y_hat_softmax = tf.nn.softmax(y_hat)
sess.run(y_hat_softmax)
# array([[ 0.227863 , 0.61939586, 0.15274114],
# [ 0.49674623, 0.20196195, 0.30129182]])
y_true = tf.convert_to_tensor(np.array([[0.0, 1.0, 0.0],[0.0, 0.0, 1.0]]))
sess.run(y_true)
# array([[ 0., 1., 0.],
# [ 0., 0., 1.]])
loss_per_instance_2 = tf.nn.softmax_cross_entropy_with_logits(y_hat, y_true)
sess.run(loss_per_instance_2)
# array([ 0.4790107 , 1.19967598])
cross_ent = tf.nn.softmax_cross_entropy_with_logits(y_hat, y_true)
print sess.run(cross_ent)
#[ 0.4790107 1.19967598]
print y_hat
#Tensor("Const:0", shape=(2, 3), dtype=float64)
print y_true
#Tensor("Const_1:0", shape=(2, 3), dtype=float64)
total_loss_2 = tf.reduce_mean(cross_ent)
sess.run(total_loss_2)
# 0.83934333897877922
Here is my code fragment (the tensor sizes are printed in the error below):
self.error0 = tf.nn.softmax_cross_entropy_with_logits(tf.to_double(self.outputSplit0), tf.to_double(self.YactSplit0), "SoftMax0")
self.error1 = tf.nn.softmax_cross_entropy_with_logits(self.outputSplit1, self.YactSplit1, "SoftMax1")
self.error = self.error0 + self.error1
What I am trying to do: I have 2 encoded "words" for each result, so I am now trying to calculate the error separately for each word; this still didn't work. The error occurs on the first line above:
self.outputSplit0  Tensor("LSTM/Reshape_2:0", shape=(8000, 147), dtype=float32)
self.YactSplit0    Tensor("LSTM/Reshape_4:0", shape=(8000, 147), dtype=float32)

Traceback (most recent call last):
  File "modelbuilder.py", line 352, in <module>
    brain.create_variables()
  File "/home/greg/Model/LSTM_qnet.py", line 58, in create_variables
    self.error0 = tf.nn.softmax_cross_entropy_with_logits(tf.to_double(self.outputSplit0), tf.to_double(self.YactSplit0), "SoftMax0")
  File "/home/greg/tensorflow/_python_build/tensorflow/python/ops/nn_ops.py", line 1436, in softmax_cross_entropy_with_logits
    precise_logits = _move_dim_to_end(precise_logits, dim, input_rank)
  File "/home/greg/tensorflow/_python_build/tensorflow/python/ops/nn_ops.py", line 1433, in _move_dim_to_end
    0, [math_ops.range(dim_index), math_ops.range(dim_index + 1, rank),
  File "/home/greg/tensorflow/_python_build/tensorflow/python/ops/math_ops.py", line 1094, in range
    assert all(arg.dtype in dtype_hierarchy for arg in [start, limit, delta])
AssertionError
Any ideas what might be happening here? The error seems to come from the range function; I just can't figure out what I have done wrong.
The third positional argument of softmax_cross_entropy_with_logits is the dimension, but you are passing the name string there, which is what triggers the assertion. Pass the name as a keyword argument instead:
tf.nn.softmax_cross_entropy_with_logits(tf.to_double(self.outputSplit0), tf.to_double(self.YactSplit0), name = "SoftMax0")
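More generally, passing every argument by keyword avoids this kind of positional mix-up. A minimal sketch, assuming the r0.12 signature softmax_cross_entropy_with_logits(logits, labels, dim=-1, name=None):

self.error0 = tf.nn.softmax_cross_entropy_with_logits(
    logits=tf.to_double(self.outputSplit0),
    labels=tf.to_double(self.YactSplit0),
    name="SoftMax0")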