I wrote the following function to perform SVD according to page 45 of 'Deep Learning' by Ian Goodfellow and co.
import numpy as np

def SVD(A):
    # A^T
    AT = np.transpose(A)
    # AA^T
    AAT = A.dot(AT)
    # A^TA
    ATA = AT.dot(A)
    # Left singular vectors
    LSV = np.linalg.eig(AAT)[1]
    U = LSV  # some values of U have the wrong sign
    # Right singular vectors
    RSV = np.linalg.eig(ATA)[1]
    V = RSV
    V[:,0] = V[:,0]  # V isn't arranged properly
    values = np.sqrt(np.linalg.eig(ATA)[0])
    # descending order
    values = np.sort(values)[::-1]
    rows = A.shape[0]
    columns = A.shape[1]
    D = np.zeros((rows, columns))
    np.fill_diagonal(D, values)
    return U, D, V
However, for any given matrix the results are not the same as using
np.linalg.svd(A)
and I have no idea why.
I tested my algorithm by checking
abs(UDV^T - A) < 0.0001
to see whether the matrix decomposed properly, and it hasn't. The problem seems to lie with the U and V components, but I can't see what's going wrong. D seems to be correct.
If anyone can see the problem it would be much appreciated.
I think you have a problem with the order of the eigenpairs that eig(ATA) and eig(AAT) return. The documentation of np.linalg.eig says that no order is guaranteed. Replacing eig with eigh, which returns the eigenpairs in ascending order, should help. Also, don't rearrange the values.
By the way, eigh is specific to symmetric matrices, such as the ones you are passing, and will not return complex numbers if the original matrix is real.
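For reference, here is a minimal sketch of a working decomposition along those lines (my own illustration, not the book's code; it assumes A is real with full column rank, and derives U from V so the signs of the two bases stay consistent):
import numpy as np

def svd_via_eigh(A):
    # Sketch only: assumes A is real with full column rank.
    ATA = A.T.dot(A)
    # eigh returns eigenpairs in ascending order; reverse for descending.
    eigvals, V = np.linalg.eigh(ATA)
    eigvals, V = eigvals[::-1], V[:, ::-1]
    values = np.sqrt(np.clip(eigvals, 0.0, None))
    # A v_i = sigma_i u_i, so u_i = A v_i / sigma_i keeps the signs consistent.
    U = A.dot(V) / values
    D = np.diag(values)
    return U, D, V

A = np.random.rand(5, 4)
U, D, V = svd_via_eigh(A)
assert np.allclose(U.dot(D).dot(V.T), A)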
I am trying to calculate an intercepting vector based on the velocity, location, and time of two objects.
I found a post covering my problem but was left with some technical questions I could not ask because my reputation is below 50.
Calculating Intercepting Vector
The answer marked as best goes over the process of how to solve my problem; however, when I tried the calculation myself, I could not understand how the vectors of position and velocity are converted to a real number.
Using the data provided here for the positions and speeds of the target and the interceptor, the solving equation is the following:
|s_t + t*v_t - s_i| = v_i*t
Squaring both sides and collecting powers of t gives a quadratic; plugging in the numbers, its coefficients are:
s_t = [120, 40]; v_t = [5,2]; s_i = [80, 80]; v_i = 10;
a = dot(v_t, v_t)-10^2
b = 2*dot((s_t - s_i),v_t)
c = dot(s_t - s_i, s_t - s_i)
Solving for t with the quadratic formula yields:
delta = sqrt(b^2 - 4*a*c)
t1 = (-b + delta)/(2*a)
t2 = (-b - delta)/(2*a)
With the data at hand, t1 turns out to be negative, and can be discarded.
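For concreteness, here is a small NumPy sketch of the whole calculation with the sample numbers (my own illustration; the variable names follow the post):
import numpy as np

s_t = np.array([120.0, 40.0]); v_t = np.array([5.0, 2.0])
s_i = np.array([80.0, 80.0]); v_i = 10.0

d = s_t - s_i
a = v_t.dot(v_t) - v_i**2
b = 2 * d.dot(v_t)
c = d.dot(d)

delta = np.sqrt(b**2 - 4*a*c)
t1 = (-b + delta) / (2*a)
t2 = (-b - delta) / (2*a)
t = max(t1, t2)  # keep the positive root, discard the negative one

intercept = s_t + t * v_t   # where the two objects meet
v = (intercept - s_i) / t   # interceptor velocity vector; its norm equals v_i
print(t, intercept, v)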
I am trying to minimize the function || Cx - d ||_2^2 with constraints Ax <= b. Some information about their sizes is as such:
* C is a (138, 22) matrix
* d is a (138,) vector
* A is a (138, 22) matrix
* b is a (138, ) vector of zeros
So I have 138 equations and 22 free variables that I'd like to optimize. I am currently coding this in Python and am using the transpose C.T*C to form a square matrix. The entire code looks like this:
C = matrix(np.matmul(w, b).astype('double'))
b = matrix(np.matmul(w, np.log(dwi)).astype('double').reshape(-1))
P = C.T * C
q = -C.T * b
G = matrix(-constraints)
h = matrix(np.zeros(G.size[0]))
dt = np.array(solvers.qp(P, q, G, h)['x']).reshape(-1)
where np.matmul(w, b) is C and np.matmul(w, np.log(dwi)) is d (stored in the variable b). Variables P and q are C and d multiplied by the transpose C.T to form the square quadratic-term matrix and the linear-term vector, respectively. This works perfectly and I can find a solution.
I'd like to know whether my approach makes mathematical sense. From my limited knowledge of linear algebra I know that a square system produces a unique solution, but is there a way to run the same thing so that it solves the overdetermined system directly? I tried this, but solvers.qp said input Q needs to be a square matrix.
We can also pass a dims argument to solvers.qp, which I tried, but I received the error:
use of function valued P, G, A requires a user-provided kktsolver.
How do I correctly set up dims?
Thanks a lot for any help. I'll try to clarify any questions as best as I can.
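For what it's worth, forming P = C^T C and q = -C^T d is the standard way to pose a least-squares objective as a QP, since ||Cx - d||_2^2 expands to x^T (C^T C) x - 2 d^T C x + const, which has the same minimizer as (1/2) x^T P x + q^T x. A minimal self-contained sketch with random stand-in data (my own names, not the real w, b, dwi):
import numpy as np
from cvxopt import matrix, solvers

# minimize ||C x - d||_2^2  subject to  A x <= 0
np.random.seed(0)
C = np.random.randn(138, 22)
d = np.random.randn(138)
A = np.random.randn(138, 22)

P = matrix(C.T @ C)        # quadratic term
q = matrix(-C.T @ d)       # linear term
G = matrix(A)              # G x <= h encodes A x <= 0
h = matrix(np.zeros(138))

x = np.array(solvers.qp(P, q, G, h)['x']).reshape(-1)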
Coming from this question, I wonder whether a more generalized einsum is possible. Let us assume I had the problem
using PyCall
@pyimport numpy as np
a = rand(10,10,10)
b = rand(10,10)
c = rand(10,10,10)
Q = np.einsum("imk,ml,lkj->ij", a,b,c)
Or something similar: how would I solve this problem without looping over the sums myself?
With best regards
Edit/Update: This is now a registered package, so you can Pkg.add("Einsum") and you should be good to go (see the example below to get started).
Original Answer: I just created some very preliminary code to do this. It follows exactly what Matt B. described in his comment. Hope it helps, let me know if there are problems with it.
https://github.com/ahwillia/Einsum.jl
This is how you would implement your example:
using Einsum
a = rand(10,10,10)
b = rand(10,10)
c = rand(10,10,10)
Q = zeros(10,10)
@einsum Q[i,j] = a[i,m,k]*b[m,l]*c[l,k,j]
Under the hood the macro builds the following series of nested for loops and inserts them into your code before compile time. (Note this is not the exact code inserted; the macro also checks that the dimensions of the inputs agree. Use macroexpand to see the full code.)
for j = 1:size(Q,2)
    for i = 1:size(Q,1)
        s = 0
        for l = 1:size(b,2)
            for k = 1:size(a,3)
                for m = 1:size(a,2)
                    s += a[i,m,k] * b[m,l] * c[l,k,j]
                end
            end
        end
        Q[i,j] = s
    end
end
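As a sanity check, the contraction the macro performs can be compared against NumPy directly; here is a plain-Python sketch (not part of the package) with the einsum call next to the equivalent explicit loops:
import numpy as np

rng = np.random.default_rng(0)
a = rng.random((10, 10, 10))
b = rng.random((10, 10))
c = rng.random((10, 10, 10))

Q = np.einsum("imk,ml,lkj->ij", a, b, c)

# The same contraction as explicit loops over the summed indices m, k, l.
Q_loops = np.zeros((10, 10))
for i in range(10):
    for j in range(10):
        s = 0.0
        for l in range(10):
            for k in range(10):
                for m in range(10):
                    s += a[i, m, k] * b[m, l] * c[l, k, j]
        Q_loops[i, j] = s

assert np.allclose(Q, Q_loops)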
I am a newbie with Matlab and I have the following scenario (which is part of a larger problem).
Matrix A is 4754x1024 and matrix B is 6800x1024.
For every row in matrix A, I need to calculate the Euclidean distance to every row in matrix B. I am using the following technique to calculate the distance, but I find it very inefficient and very time consuming in Matlab.
for i = 1:row_A
    A_data = A_test(i,:);
    for j = 1:row_B
        B_data = B_train(j,:);
        X = [A_data; B_data];
        % calculate distance
        d = pdist(X, 'euclidean');
        dist(j,i) = d;
    end
end
Any suggestions to optimise this? The final step involves performing this operation on 50 such sets of A and B.
Thanks and Regards,
Bhavya
I'm not sure what your code is actually doing.
Assuming your data has the following properties
assert(size(A,2) == size(B,2))
Try
d = zeros(size(A,1), size(B,1));
for i = 1:size(A,1)
    d(i,:) = sqrt(sum(bsxfun(@minus, B, A(i,:)).^2, 2));
end
Or possibly better organised by columns (See "Store and Access Data in Columns" in http://www.mathworks.co.uk/company/newsletters/news_notes/june07/patterns.html):
At = A.'; Bt = B.';
d = zeros(size(At,2), size(Bt,2));
for i = 1:size(At,2)
    d(i,:) = sqrt(sum(bsxfun(@minus, Bt, At(:,i)).^2, 1));
end
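For reference, the same all-pairs distance computation in NumPy/SciPy (a cross-language sketch for comparison only; the sizes here are smaller stand-ins for the 4754x1024 and 6800x1024 matrices):
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
A = rng.random((400, 1024))
B = rng.random((600, 1024))

# One call for the whole distance matrix, shape (400, 600).
d = cdist(A, B, metric='euclidean')

# Row-by-row broadcasting, mirroring the bsxfun loop above.
d_loop = np.empty((A.shape[0], B.shape[0]))
for i in range(A.shape[0]):
    d_loop[i] = np.sqrt(((B - A[i]) ** 2).sum(axis=1))

assert np.allclose(d, d_loop)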
I am job hunting now and doing many algorithm exercises. Here is my problem:
Given two arrays a and b of the same length, the task is to make |sum(a) - sum(b)| minimal by swapping elements between a and b.
Here is my thought:
Assume we swap a[i] and b[j]; set Delt = sum(a) - sum(b) and x = a[i] - b[j].
Then Delt2 = sum(a) - a[i] + b[j] - (sum(b) - b[j] + a[i]) = Delt - 2*x,
so the improvement |Delt| - |Delt2| has the same sign as |Delt|^2 - |Delt2|^2 = 4*x*(Delt - x).
Based on the thought above I got the following code:
Delt = sum(a) - sum(b);
done = false;
while (!done)
{
    done = true;
    for (i = 0; i < n; i++)
    {
        for (j = 0; j < n; j++)
        {
            x = a[i] - b[j];
            change = x * (Delt - x);
            if (change > 0)
            {
                swap(a[i], b[j]);
                Delt = Delt - 2*x;
                done = false;
            }
        }
    }
}
However, does anybody have a much better solution? If you do, please tell me, and I will be very grateful!
This problem is basically the optimization version of the Partition Problem, with the extra constraint that the two parts have equal size. I'll prove that adding this constraint doesn't make the problem easier.
NP-Hardness proof:
Assume there were an algorithm A that solves this problem in polynomial time; then we could solve the Partition Problem in polynomial time.
Partition(S):
    for i in range(|S|):
        S += {0}
        result <- A(S\2, S\2)   // arbitrarily split S into 2 equal-size parts
        if result is a partition:   // simple to check, since Partition is in NP
            return true
    return false   // no partition
Correctness:
If there is a partition, denote it (S1, S2) [assume S2 has more elements]. On iteration |S2|-|S1| [i.e., after adding |S2|-|S1| zeros], the input to A will contain enough zeros that we can return two equal-length arrays, S2 and S1+{0,0,...,0}, which form a partition of S, and the algorithm will yield true.
If the algorithm yields true at iteration k, we had two arrays S2 and S1 with the same number of elements and equal sums. By removing the k added zeros from the arrays, we get a partition of the original S, so S had a partition.
Polynomial:
Assume A takes P(n) time; then the algorithm we produced takes n*P(n) time, which is also polynomial.
Conclusion:
If this problem were solvable in polynomial time, then so would be the Partition Problem, and thus P=NP. Based on this, this problem is NP-hard.
Because this problem is NP-hard, for an exact solution you will probably need an exponential algorithm. One such approach is simple backtracking [I leave implementing a backtracking solution as an exercise to the reader; a brute-force sketch follows below].
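For illustration only, here is a tiny brute-force Python sketch (exhaustive search over all equal-size splits rather than pruned backtracking); it is exponential, as expected for an NP-hard problem, and practical only for small inputs:
from itertools import combinations

def min_diff_equal_split(a, b):
    # Pool all values, try every equal-size split, and return the
    # smallest achievable |sum(a') - sum(b')|.
    pool = a + b
    n = len(a)
    total = sum(pool)
    best = abs(sum(a) - sum(b))
    for idx in combinations(range(len(pool)), n):
        s = sum(pool[i] for i in idx)
        best = min(best, abs(2 * s - total))
    return best

print(min_diff_equal_split([1, 7, 5], [2, 3, 9]))  # -> 1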
EDIT: As mentioned by @jpalecek, by simply creating the reduction S -> S + (0,0,...,0) [k zeros], one can prove NP-hardness directly by reduction. Polynomiality is trivial, and correctness is very similar to the partition correctness proof above: [if there is a partition, adding 'balancing' zeros is possible; the other direction is simply trimming those zeros].
Just a comment: through all this swapping you can arrange the contents of both arrays however you like, so it is unimportant which array the values start in.
I can't do it in my head, but I'm pretty sure there is a constructive solution. I think if you sort them first and then deal them out according to some rule, something along the lines of: if value > 0 and sum(a) > sum(b), then insert into a, else into b.
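A sketch of that 'sort then deal' idea in Python (my own interpretation of the comment; per the NP-hardness argument above, a greedy deal like this is only a heuristic, not guaranteed optimal):
def greedy_deal(values):
    # Deal the values out largest-first, always onto the side with the
    # smaller running sum, while keeping both sides the same length.
    n = len(values) // 2
    a, b = [], []
    for v in sorted(values, reverse=True):
        if len(a) < n and (sum(a) <= sum(b) or len(b) >= n):
            a.append(v)
        else:
            b.append(v)
    return a, b

a, b = greedy_deal([1, 7, 5, 2, 3, 9])
print(a, b, abs(sum(a) - sum(b)))  # e.g. [9, 3, 2] [7, 5, 1] 1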