Optimize BLAS-like operation - A'*B*A

Given two matrices, A and B, where B is symmetric (and positive semi-definite), what is the best (fastest) way to calculate A'*B*A?
Currently, using BLAS, I first compute C=B*A using dsymm (introducing a temporary matrix C) and then A'*C using dgemm.
Is there a better (faster, temporary-free) way to do this using BLAS and MKL?
Thanks.

I'll offer some kind of answer: compared to the general case A*B*C, you know that the end result is a symmetric matrix. After computing C=B*A with the BLAS subroutine dsymm, you want to compute A'*C, but you only need to compute the upper triangular part of the result and then copy the strictly upper triangular part to the lower triangular part.
Unfortunately there doesn't seem to be a BLAS routine where you can declare beforehand that, given two general matrices, the output matrix will be symmetric. I'm not sure whether it would be beneficial to write your own function for this; it probably depends on the size of your matrices and the implementation.
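For reference, here is a minimal sketch of the two-call approach from the question, written with SciPy's wrappers around the BLAS routines (the sizes and random test matrices are purely illustrative):

import numpy as np
from scipy.linalg.blas import dsymm, dgemm

n, m = 500, 300                                 # illustrative sizes only
rng = np.random.default_rng(0)
A = np.asfortranarray(rng.standard_normal((n, m)))
G = rng.standard_normal((n, n))
B = np.asfortranarray(G @ G.T)                  # symmetric positive semi-definite

C = dsymm(alpha=1.0, a=B, b=A)                  # C = B*A, exploiting the symmetry of B
M = dgemm(alpha=1.0, a=A, b=C, trans_a=True)    # M = A'*C = A'*B*A, a plain gemm

As noted above, roughly half of the work in the final dgemm call is redundant, since only one triangle of the symmetric result M is actually needed.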
EDIT:
This idea seems to be addressed recently here: A Matrix Multiplication Routine that Updates Only the Upper or Lower Triangular Part of the Result Matrix

Related

How to solve this quadratic optimization problem

My problem is described in this picture (it's like a pyramid structure):
The objective function is below:
In this problem, D is known and A is the object that I want to get. It is a layered structure: each block in the upper layer is divided into four sub-blocks in the layer below, and the value of an upper-layer node is equal to the sum of its four child nodes in the lower layer. In the above example, I used only 2 layers.
What I want to do is simulate the distribution of D with A, so the objective function compares the ratio of two adjacent squares in each row of A with the corresponding value in D. I do this comparison on each layer and sum the terms; that is my whole objective function. In the finest layer, however, the values in A have the constraint A<=1, i.e. each value in A can be a number between 0 and 1. I have tried to solve it using quadratic programming with the Python library CVXPY, but it seems to be slow.
So I want to solve it another way. Because this is a convex optimization problem, the global optimal solution is guaranteed. What I am wondering is whether it is possible to use derivatives: there are two unknown variables in each term (the two items with A in the formula), so I could take partial derivatives with respect to them, add the restriction A<=1, and then solve using a gradient descent method. Is this mathematically feasible (I don't know much about optimization), and if it is possible, how should I do it? If not, what other methods can I use?
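If the objective is differentiable (it appears to be a sum of squares), one standard option matching this idea is projected gradient descent: take a gradient step and then clip back into the box 0 <= A <= 1. A minimal sketch follows; the gradient function is only a placeholder, since the exact formula from the picture is not reproduced here:

import numpy as np

def projected_gradient_descent(grad_f, A0, step=1e-2, iters=5000):
    # Minimize a convex, differentiable objective over the box 0 <= A <= 1.
    # grad_f : callable returning the gradient of the (problem-specific) objective at A
    # A0     : initial guess for the finest-layer values
    A = A0.copy()
    for _ in range(iters):
        A = A - step * grad_f(A)      # gradient step
        A = np.clip(A, 0.0, 1.0)      # project back onto the box constraint
    return A

Because the problem is convex and the box is a convex set, this converges to a global optimum for a suitably small step size. In practice, scipy.optimize.minimize with method='L-BFGS-B' and bounds of (0, 1) on each variable handles the same box-constrained problem and typically converges in fewer iterations than a fixed-step scheme.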

Why is the inverse of the inverse of a matrix not itself in Python?

Why is the inverse of the inverse of a matrix not itself in Python?
(Deleted the previous answer, since I made a mistake copying the matrix)
Your matrix is perfectly singular, so the inverse does not actually exist. Due to limits of numerical precision, numpy.linalg.inv gives you a matrix with very large values that is the inverse of another (similar) matrix.
I don't know what Matlab does in this situation, but it's possible that it gives the Moore-Penrose pseudoinverse, which would behave as you describe. Note however that this is not the same thing as the inverse, which does not exist. In numpy, you can get the Moore-Penrose pseudoinverse as np.linalg.pinv.
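To illustrate the difference with a hypothetical rank-deficient matrix (not the one from the question): the pseudoinverse is an involution, so applying it twice recovers the original matrix even when the true inverse does not exist.

import numpy as np

M = np.array([[1.0, 2.0],
              [2.0, 4.0]])                  # rank 1, hence exactly singular

P = np.linalg.pinv(M)                       # Moore-Penrose pseudoinverse
print(np.allclose(np.linalg.pinv(P), M))    # True: pinv(pinv(M)) recovers M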

Explained variance calculation

My questions are specific to https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html#sklearn.decomposition.PCA.
I don't understand why you square eigenvalues
https://github.com/scikit-learn/scikit-learn/blob/55bf5d9/sklearn/decomposition/pca.py#L444
here?
Also, explained_variance is not computed for new transformed data, only for the original data that was used to compute the eigenvectors. Is that not normally done?
pca = PCA(n_components=2, svd_solver='full')
pca.fit(X)
pca.transform(Y)
In this case, wouldn't you calculate explained variance separately for data Y as well? For that purpose, I think we would have to use point 3 instead of using eigenvalues.
Explained variance can also be computed by taking the variance of each axis in the transformed space and dividing by the total variance. Any reason that is not done here?
Answers to your questions:
1) The square roots of the eigenvalues of the scatter matrix (e.g. XX.T) are the singular values of X (see here: https://math.stackexchange.com/a/3871/536826). So you square them. Important: the initial matrix X should be centered (data has been preprocessed to have zero mean) in order for the above to hold.
2) Yes this is the way to go. explained_variance is computed based on the singular values. See point 1.
3) It's the same but in the case you describe you HAVE to project the data and then do additional computations. No need for that if you just compute it using the eigenvalues / singular values (see point 1 again for the connection between these two).
Finally, keep in mind that not everyone really wants to project the data. Someone can get just the eigenvalues and immediately estimate the explained variance WITHOUT projecting the data, so that's the gold-standard way to do it.
EDIT 1:
Answer to edited Point 2
No. PCA is an unsupervised method. It only transforms the X data, not the Y (labels).
Again, the explained variance can be computed quickly, easily, and with half a line of code using the eigenvalues/singular values, OR, as you said, using the projected data, e.g. by estimating the covariance of the projected data; the variances of the PCs will then be on the diagonal.
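To illustrate points 1 and 2 with some random data (sizes arbitrary), both routes reproduce the explained_variance_ that scikit-learn reports for the training data:

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))

pca = PCA(n_components=2, svd_solver='full').fit(X)

# Route 1: squared singular values, no projection of the data needed
var_from_svals = pca.singular_values_**2 / (X.shape[0] - 1)

# Route 2: variance of each axis of the projected (transformed) data
Z = pca.transform(X)
var_from_projection = Z.var(axis=0, ddof=1)

print(np.allclose(var_from_svals, pca.explained_variance_))       # True
print(np.allclose(var_from_projection, pca.explained_variance_))  # True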

Fastest way to multiply X*X.transpose() in Eigen?

I want to multiply a matrix by its own transpose. The size of the matrix is about X[8, 100].
Right now it looks like "MatrixXf h = X*X.transpose();"
a) Is it possible to use a faster multiplication by exploiting these explicit facts:
The result matrix is symmetric.
The X matrix is used on both sides, so a custom multiplication procedure could be used.
b) Also, I can generate the X matrix already transposed and use X.transpose()*X; which should I prefer for my dimensions?
c) Any tips on faster multiplication of such matrices?
Thanks.
(a) Your matrix is too small to take advantage of the symmetry of the result, because if you do so you will lose vectorization. So there is not much you can do.
(b) The default column-major storage should be fine for that example.
(c) Make sure you compile with optimizations ON and that you have enabled SSE2 (this is the default on 64-bit systems). The devel branch is at least twice as fast for such sizes, and you can get additional speedup by enabling AVX.

Optimize MATLAB code (nested for loop to compute similarity matrix)

I am computing a similarity matrix based on Euclidean distance in MATLAB. My code is as follows:
for i=1:N   % x is of size MxN; I am computing the similarity matrix over its N columns
    for j=1:N
        D(i,j) = sqrt(sum((x(:,i)-x(:,j)).^2));   % D is the similarity matrix
    end
end
Can anyone help with optimizing this, i.e. reducing the for loops, as my matrix x is of dimension 256x30000?
Thanks a lot!
--Aditya
The function to do this in MATLAB is called pdist. Unfortunately it is painfully slow and doesn't take MATLAB's vectorization abilities into account.
The following is code I wrote for a project. Let me know what kind of speed up you get.
Qx=repmat(dot(x,x,2),1,size(x,1));
D=sqrt(Qx+Qx'-2*x*x');
Note though that this will only work if your data points are in the rows and your dimensions in the columns. So, for example, let's say I have 256 data points and 100000 dimensions; then on my Mac, using x=rand(256,100000), the above code produces a 256x256 matrix in about half a second.
There's probably a better way to do it, but the first thing I noticed was that you could cut the runtime in half by exploiting the symmetry D(i,j)==D(j,i).
You can also use the function norm(x(:,i)-x(:,j),2)
I think this is what you're looking for.
D=zeros(N);
jIndx=repmat(1:N,N,1); iIndx=jIndx';
D(:)=sqrt(sum((x(iIndx(:),:)-x(jIndx(:),:)).^2,2));
Here, I have assumed that the data matrix x is initialized as an NxM array, where M is the number of dimensions of the system and N is the number of points. So if your ordering is different, you'll have to make changes accordingly.
To start with, you are computing twice as much as you need to here, because D will be symmetric. You don't need to calculate the (i,j) entry and the (j,i) entry separately. Change your inner loop to for j=1:i, and add in the body of that loop D(j,i)=D(i,j);
After that, there's really not much redundancy left in what that code does, so your only room left for improvement is to parallelize it: if you have the Parallel Computing Toolbox, convert your outer loop to a parfor and before you run it, say matlabpool(n), where n is the number of threads to use.