Problem with optimizing matrix multiplication speed in NumPy/MATLAB

Currently, I'm in the process of optimizing a piece of code I have. I'm trying to speed up a bunch of matrix multiplications by reducing the size of the matrix dimensions. However, in some cases, using both NumPy and MATLAB, I'm not able to obtain the speed-ups I expect. Using MATLAB, I first defined two random matrices: bigger_mat, which is 10000x10000, and smaller_mat, which is 10000x100. I then created two smaller matrices by slicing bigger_mat and smaller_mat, such that I get matrix dimensions of 200x10000 (bigger_mat_sliced) and 10000x2 (smaller_mat_sliced).
% Defining full (big) array dimension
dim_big = 100;
% Defining sliced (small) array dimension.
dim_small = 2;
% Creating a 10000x10000 randomized array
bigger_mat = rand(dim_big^2, dim_big^2);
% Creating a 10000x100 randomized array
smaller_mat = rand(dim_big^2, dim_big);
% Slicing bigger_mat to obtain a 200x10000 array
bigger_mat_sliced = bigger_mat(1:dim_small * dim_big, :);
% Slicing smaller_mat to obtain a 10000x2 array
smaller_mat_sliced = smaller_mat(:, 1:dim_small);
I then measured the runtimes for the following 3 matrix multiplications:
bigger_mat x smaller_mat
bigger_mat_sliced x smaller_mat
bigger_mat x smaller_mat_sliced
My expectations were as follows:
Multiplication #1 should take the longest, since the unsliced (full) matrices are being multiplied
Multiplications #2 and #3 should take less time than #1, as in both #2 and #3 I'm multiplying a full matrix by a sliced matrix. Specifically, #2 and #3 should both take the same amount of time, and both should be 50 times faster than #1 (the sliced dimensions are scaled down by a factor of dim_big/dim_small = 100/2 = 50).
The timings I got were:
bigger_mat x smaller_mat: Elapsed time is 0.110538 seconds
bigger_mat_sliced x smaller_mat: Elapsed time is 0.002564 seconds
bigger_mat x smaller_mat_sliced: Elapsed time is 0.068878 seconds
While #2 is behaving as expected with a 43x speed-up compared to #1, #3 is only 1.6x faster than #1. I ran the same test in NumPy and got similar timings to those above.
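For reference, a minimal NumPy sketch of the same benchmark (names mirror the MATLAB code above; absolute timings will depend on your BLAS build):
import numpy as np
import time

dim_big, dim_small = 100, 2
bigger_mat = np.random.rand(dim_big**2, dim_big**2)
smaller_mat = np.random.rand(dim_big**2, dim_big)
bigger_mat_sliced = bigger_mat[:dim_small * dim_big, :]
smaller_mat_sliced = smaller_mat[:, :dim_small]

for a, b in [(bigger_mat, smaller_mat),
             (bigger_mat_sliced, smaller_mat),
             (bigger_mat, smaller_mat_sliced)]:
    start = time.perf_counter()
    a @ b
    print(a.shape, "x", b.shape, ":", time.perf_counter() - start, "s")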
It seems to me like when multiplying 2 matrices of unequal outer dimensions, for example A(i,k)*B(k,j) where i >> j, slicing the largest of the 2 outer dimensions (i) by some factor scales down the multiplication time as expected. However, for some reason, scaling down (or slicing) the smaller dimension (j) yields barely any speed-up. I'm really having a hard time understanding these results. I tried looking up matrix multiplication algorithms implemented in BLAS libraries, hoping to find an explanation, but soon I found myself out of my depth.
Lastly, is there a way to make multiplication #3 as fast as #2? Thanks!

How to calculate a very large correlation matrix

I have an np.array of observations z where z.shape is (100000, 60). I want to efficiently calculate the 100000x100000 correlation matrix and then write to disk the coordinates and values of just those elements > 0.95 (this is a very small fraction of the total).
My brute-force version of this looks like the following but is, not surprisingly, very slow:
for i1 in range(z.shape[0]):
    for i2 in range(i1+1):
        r = np.corrcoef(z[i1,:], z[i2,:])[0,1]
        if r > 0.95:
            file.write("%6d %6d %.3f\n" % (i1, i2, r))
I realize that the correlation matrix itself could be calculated much more efficiently in one operation using np.corrcoef(z), but the memory requirement is then huge. I'm also aware that one could break up the data set into blocks and calculate bite-size subportions of the correlation matrix at one time, but programming that and keeping track of the indices seems unnecessarily complicated.
Is there another way (e.g., using memmap or pytables) that is both simple to code and doesn't put excessive demands on physical memory?
After experimenting with the memmap solution proposed by others, I found that while it was faster than my original approach (which took about 4 days on my MacBook), it still took a very long time (at least a day), presumably due to inefficient element-by-element writes to the output file. That wasn't acceptable given my need to run the calculation numerous times.
In the end, the best solution (for me) was to sign in to Amazon Web Services EC2 portal, create a virtual machine instance (starting with an Anaconda Python-equipped image) with 120+ GiB of RAM, upload the input data file, and do the calculation (using the matrix multiplication method) entirely in core memory. It completed in about two minutes!
For reference, the code I used was basically this:
import numpy as np
import pickle
import h5py

# read nparray, dimensions (102000, 60)
with open(r'file.dat', 'rb') as infile:
    x = pickle.load(infile)

# z-normalize the data -- first compute means and standard deviations
xave = np.average(x, axis=1)
xstd = np.std(x, axis=1)

# transpose for the sake of broadcasting (doesn't seem to work otherwise!)
ztrans = x.T - xave
ztrans /= xstd

# transpose back
z = ztrans.T

# compute correlation matrix - shape = (102000, 102000)
arr = np.matmul(z, z.T)
arr /= z.shape[1]  # normalize by the number of observations (60), not the number of rows

# output to HDF5 file
with h5py.File('correlation_matrix.h5', 'w') as hf:
    hf.create_dataset("correlation", data=arr)
From my rough calculations, you want a correlation matrix that has 100,000^2 = 10^10 elements. That takes up around 40 GB of memory for 4-byte floats (80 GB for float64).
That probably won't fit in memory; otherwise you could just use corrcoef.
There's a fancy approach based on eigenvectors that I can't find right now, and that gets into the (necessarily) complicated category...
Instead, rely on the fact that for zero mean data the covariance can be found using a dot product.
z0 = z - np.mean(z, axis=1)[:, None]
cov = np.dot(z0, z0.T)
cov /= z.shape[-1]
And this can be turned into the correlation by normalizing by the standard deviations
sigma = np.std(z, axis=1)
corr = cov
corr /= sigma
corr /= sigma[:, None]
Of course memory usage is still an issue.
You can work around this with memory-mapped arrays (make sure the file is opened for reading and writing) and the out parameter of np.dot (for another example, see Optimizing my large data code with little RAM)
N = z.shape[0]
arr = np.memmap('corr_memmap.dat', dtype='float32', mode='w+', shape=(N, N))
z32 = z0.astype('float32')  # dot's out= array must match the result dtype
np.dot(z32, z32.T, out=arr)
arr /= z.shape[-1]  # same normalization by the number of observations as above
arr /= sigma
arr /= sigma[:, None]
Then you can loop through the resulting array and find the indices with a large correlation coefficient. (You may be able to find them directly with where(arr > 0.95), but the comparison will create a very large boolean array which may or may not fit in memory).
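For example, a minimal sketch of scanning the memmapped result in row blocks (the block size and output format are illustrative):

block = 1000
with open('high_corr.txt', 'w') as f:
    for start in range(0, N, block):
        rows = np.asarray(arr[start:start + block])  # pull one block of rows into RAM
        ii, jj = np.nonzero(rows > 0.95)
        for di, j in zip(ii, jj):
            i = start + di
            if j < i:  # keep each pair once and skip the diagonal
                f.write("%6d %6d %.3f\n" % (i, j, rows[di, j]))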
You can use scipy.spatial.distance.pdist with metric='correlation' to get all the correlations without the symmetric terms. Unfortunately this will still leave you with about 5e9 terms that will probably overflow your memory.
You could try reformulating a KDTree (which can theoretically handle cosine distance, and therefore correlation distance) to filter for higher correlations, but with 60 dimensions it's unlikely that would give you much speedup. The curse of dimensionality sucks.
Your best bet is probably brute-forcing blocks of data using scipy.spatial.distance.cdist(..., metric='correlation'), keeping only the high correlations in each block. Once you know how big a block your memory can handle without slowing down due to your computer's memory architecture, this should be much faster than doing one pair at a time.
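A rough sketch of that blockwise approach (the block size and output handling are illustrative; note that cdist returns the correlation distance d = 1 - r, so r > 0.95 corresponds to d < 0.05):

import numpy as np
from scipy.spatial.distance import cdist

block = 2000
for i in range(0, z.shape[0], block):
    for j in range(0, i + 1, block):
        d = cdist(z[i:i + block], z[j:j + block], metric='correlation')
        bi, bj = np.nonzero(d < 0.05)
        for a, b in zip(bi + i, bj + j):
            if b < a:  # keep each pair once and skip self-pairs
                print(a, b, 1.0 - d[a - i, b - j])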
Please check out the deepgraph package.
https://deepgraph.readthedocs.io/en/latest/tutorials/pairwise_correlations.html
I tried it on z.shape = (2500, 60), computing the Pearson r for all 2500 x 2500 pairs, and it was extremely fast.
I'm not sure about 100000 x 100000, but it's worth trying.

Zoom in on np.fft2 result

Is there a way to choose the x/y output axes range from np.fft2?
I have a piece of code computing the diffraction pattern of an aperture. The aperture is defined in a 2k x 2k pixel array. The diffraction pattern is basically the inner part of the 2D FT of the aperture. np.fft2 gives me an output array the same size as the input, but with some preset range of the x/y axes. Of course I can zoom in using the image viewer, but I have already lost detail. What is the solution?
Thanks,
Gert
import numpy as np
import matplotlib.pyplot as plt
r= 500
s= 1000
y,x = np.ogrid[-s:s+1, -s:s+1]
mask = x*x + y*y <= r*r
aperture = np.ones((2*s+1, 2*s+1))
aperture[mask] = 0
plt.imshow(aperture)
plt.show()
ffta= np.fft.fft2(aperture)
plt.imshow(np.log(np.abs(np.fft.fftshift(ffta))**2))
plt.show()
Unfortunately, much of the speed and accuracy of the FFT come from the outputs being the same size as the input.
The conventional way to increase the apparent resolution in the output Fourier domain is by zero-padding the input: np.fft.fft2(aperture, [4 * (2*s+1), 4 * (2*s+1)]) tells the FFT to pad your input to be 4 * (2*s+1) pixels tall and wide, i.e., make the input four times larger (sixteen times the number of pixels).
Begin aside I say "apparent" resolution because the actual amount of data you have hasn't increased, but the Fourier transform will appear smoother because zero-padding in the input domain causes the Fourier transform to interpolate the output. In the example above, any feature that could be seen with one pixel will be shown with four pixels. Just to make this fully concrete, this example shows that every fourth pixel of the zero-padded FFT is numerically the same as every pixel of the original unpadded FFT:
# Generate your `ffta` as above, then
N = 2 * s + 1
Up = 4
fftup = np.fft.fft2(aperture, [Up * N, Up * N])
relerr = lambda dirt, gold: np.abs((dirt - gold) / gold)
print(np.max(relerr(fftup[::Up, ::Up] , ffta))) # ~6e-12.
(That relerr is just a simple relative error, which you want to be close to machine precision, around 2e-16. The largest error between every 4th sample of the zero-padded FFT and the unpadded FFT is 6e-12 which is quite close to machine precision, meaning these two arrays are nearly numerically equivalent.) End aside
Zero-padding is the most straightforward way around your problem. But it does cost you a lot of memory. And it is frustrating because you might only care about a tiny, tiny part of the transform. There's an algorithm called the chirp z-transform (CZT, or colloquially the "zoom FFT") which can do this. If your input is N (for you 2*s+1) and you want just M samples of the FFT's output evaluated anywhere, it will compute three Fourier transforms of size N + M - 1 to obtain the desired M samples of the output. This would solve your problem too, since you can ask for M samples in the region of interest, and it wouldn't require prohibitively-much memory, though it would need at least 3x more CPU time. The downside is that a solid implementation of CZT isn't in Numpy/Scipy yet: see the scipy issue and the code it references. Matlab's CZT seems reliable, if that's an option; Octave-forge has one too and the Octave people usually try hard to match/exceed Matlab.
But if you have the memory, zero-padding the input is the way to go.
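For example, a small sketch of zooming in on the central region of the zero-padded transform (the half-width w is an arbitrary choice):

# zero-pad by 4x, shift DC to the center, then keep only the central region
N = 2 * s + 1
Up = 4
fftup = np.fft.fftshift(np.fft.fft2(aperture, [Up * N, Up * N]))
c = (Up * N) // 2  # index of the DC bin after fftshift
w = 200            # half-width of the region of interest, in padded bins
zoom = fftup[c - w:c + w, c - w:c + w]
plt.imshow(np.log(np.abs(zoom)**2))
plt.show()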

Time complexity for finding largest eigenvalue

I'm trying to figure out the time complexity for calculating the largest eigenvector in a whole bunch of small matrices.
Each matrix is the adjacency matrix of the 1-step neighborhood of a node in a weighted, undirected graph. So all values are positive and the matrix is symmetric.
E.g.
0 2 1 1
2 0 1 0
1 1 0 0
1 0 0 0
I've found the power iteration method, which is supposed to be O(n^2) per iteration.
So does that mean the complexity for finding largest eigenvector for the 1-step neighborhood for every node in a graph is O(n * p^2), where n is the number of nodes, and p is the average degree of the graph (i.e. number of edges / number of nodes)?
Off the top of my head I would say your best bet is to use an iterative randomised algorithm known as power iteration. It converges geometrically to the true largest eigenvalue, with a rate given by the ratio of the second-largest to the largest eigenvalue (in absolute value). So if your two largest eigenvalues are equal in magnitude, do not use this method; otherwise it works quite nicely. You actually get the largest eigenvalue and the respective eigenvector.
However, if your matrices are very small you might just as well perform a full eigendecomposition (as in PCA), because it will not be so expensive. I am not aware of any hard threshold for when you should switch between the two. It also depends on whether you are willing to accept small inaccuracies or need the absolute true value.
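To make the method concrete, here is a minimal NumPy sketch of power iteration (the iteration count and tolerance are arbitrary choices):

import numpy as np

def power_iteration(A, n_iter=1000, tol=1e-10):
    # returns the largest-magnitude eigenvalue of A and its eigenvector
    v = np.random.rand(A.shape[0])
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(n_iter):
        w = A @ v
        lam_new = v @ w  # Rayleigh quotient, since v has unit length
        v = w / np.linalg.norm(w)
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam, v

# the example adjacency matrix from the question
A = np.array([[0., 2., 1., 1.],
              [2., 0., 1., 0.],
              [1., 1., 0., 0.],
              [1., 0., 0., 0.]])
print(power_iteration(A))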

Multiply one fixed matrix by a huge number of vectors

I'll need to change the basis of some 10^7 vectors, each having 200 coordinates. So I will multiply one [200 x 200] matrix by 10^7 [200 x 1] vectors. I need it to run very fast, but I need to code it quickly (one day or less), and my CUDA is poor, so I don't want to code it from scratch in CUDA or OpenCL. Maybe some existing library can do it for me? Notice that, if the solution uses GPGPU, the matrix should be transferred to the GPU only once, otherwise the performance will be poor. Could I use OpenACC (or OpenMP, I don't know)? Is it possible to do this in a day?
I prefer open source solutions (for both convenience and ethical reasons) but I can tolerate a closed source solution, even paid (assuming it is not too costly).
This is for my dissertation.
Thank you for your attention.
You can put your vectors in a matrix; 200 * 10^7 is perhaps too much space at once depending on your system, so you can split it.
Then use any code that is optimized for matrix-matrix multiplication, like BLAS. There are many implementations on CPUs, GPUs (cuBLAS, MAGMA, ...), multicore (PLASMA, ...), or distributed memory.
Since you will have big matrices, you will get better performance than by doing individual matrix-vector multiplications.
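A minimal NumPy sketch of that batching idea (the sizes and chunking are illustrative; with 10^7 vectors you would stream the chunks from disk rather than hold them all in RAM):

import numpy as np

M = np.random.rand(200, 200)          # the fixed change-of-basis matrix
vectors = np.random.rand(200, 10**5)  # columns are vectors; 10^5 here for illustration

chunk = 10**4  # tune to the memory you have available
out = np.empty_like(vectors)
for j in range(0, vectors.shape[1], chunk):
    out[:, j:j + chunk] = M @ vectors[:, j:j + chunk]  # one BLAS matrix-matrix call per chunk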
You're going to multiply 10 million big vectors by a huge matrix that is the same for all of them.
It would be fastest if all possible decision-making could be compiled-out ahead of time.
In other words, there are lots of index calculations and loop testing that would be identically repeated millions of times.
This sounds like a perfect case for pre-compilation:
Write a small program that would take as input your 200x200 matrix data values, and have it print out a piece of program text defining a function capable of inputting the input vector and outputting the result vector.
It could look something like this:
void multTheMatrixByTheVector(double a[200], double b[200]){
b[0] = 0
+ a[0] * <a constant, the value of mat[0][0]>
+ a[1] * <a constant, the value of mat[1][0]>
...
+ a[199] * <a constant, the value of mat[199][0]>
;
b[1] = 0
+ a[0] * <a constant, the value of mat[0][1]>
+ a[1] * <a constant, the value of mat[1][1]>
...
+ a[199] * <a constant, the value of mat[199][1]>
;
...
b[199] = etc. etc.
}
You see, that function will be around 40000 lines long, but a decent compiler should be able to handle it.
Of course, if any of the matrix elements are zero, i.e. there's some sparsity, you can omit those lines (or let the compiler optimizer do it).
To do this on CUDA or vectorized instructions, you'd have to modify it accordingly, but that should be do-able.
When you include that function in your main program, it should be able to run about as fast as the machine can go.
It's not wasting any cycles doing index calculations, loop testing, or multiplying by empty matrix cells.
Then if it takes 10 ns per multiply and add, my back-of-the-envelope calculation says it should take 400 usec per vector, or 4000 seconds overall, a little over an hour.
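If it helps, a small sketch of such a generator in Python (the file name and function name are arbitrary):

import numpy as np

def emit_c_function(mat, name="multTheMatrixByTheVector"):
    # builds the text of a fully unrolled C function computing b[j] = sum_i a[i] * mat[i][j]
    n = mat.shape[0]
    lines = ["void %s(double a[%d], double b[%d]){" % (name, n, n)]
    for j in range(n):
        lines.append("  b[%d] = 0" % j)
        for i in range(n):
            if mat[i, j] != 0.0:  # omit zero entries (sparsity)
                lines.append("    + a[%d] * %r" % (i, float(mat[i, j])))
        lines.append("  ;")
    lines.append("}")
    return "\n".join(lines)

mat = np.random.rand(200, 200)
with open("mult.c", "w") as f:
    f.write(emit_c_function(mat))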

Units of frequency when using FFT in NumPy

I am using the FFT function in NumPy to do some signal processing. I have an array called signal which has one data point for each hour and a total of 576 data points. I use the following code on signal to look at its Fourier transform.
import numpy as np
import matplotlib.pyplot as plt

t = len(signal)
ft = np.fft.fft(signal, n=t)
mgft = np.abs(ft)
plt.plot(mgft[0:t//2+1])  # note the integer division; t/2 is not a valid index in Python 3
I see two peaks, but I am unsure what the units of the x axis are, i.e., how they map onto hours. Any help would be appreciated.
Given sampling rate FSample and transform blocksize N, you can calculate the frequency resolution deltaF, sampling interval deltaT, and total capture time capT using the relationships:
deltaT = 1/FSample = capT/N
deltaF = 1/capT = FSample/N
Keep in mind also that the FFT returns values from 0 to FSample, or equivalently from -FSample/2 to FSample/2. In your plot, you're already dropping the -FSample/2 to 0 part. NumPy includes a helper function to calculate all this for you: fftfreq.
For your values of deltaT = 1 hour and N = 576, you get deltaF = 0.001736 cycles/hour = 0.04167 cycles/day, from -0.5 cycles/hour to 0.5 cycles/hour. So if you have a magnitude peak at, say, bin 48 (and bin 528), that corresponds to a frequency component at 48*deltaF = 0.0833 cycles/hour = 2 cycles/day.
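As a quick check, fftfreq reproduces these numbers directly:

import numpy as np

freqs = np.fft.fftfreq(576, d=1.0)  # d = 1 hour between samples
print(freqs[1])   # deltaF = 0.001736... cycles/hour
print(freqs[48])  # 0.0833... cycles/hour = 2 cycles/day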
In general, you should apply a window function to your time domain data before calculating the FFT, to reduce spectral leakage. The Hann window is almost never a bad choice. You can also use the rfft function to skip the -FSample/2, 0 part of the output. So then, your code would be:
ft = np.fft.rfft(signal * np.hanning(len(signal)))
mgft = np.abs(ft)
xVals = np.fft.rfftfreq(len(signal), d=1.0)  # in cycles/hour; use d=1.0/24 for cycles/day
plt.plot(xVals, mgft)
The result of the FFT doesn't map to hours, but to the frequencies contained in your dataset. It would be beneficial to have your transformed graph so we can see where the spikes are.
You might be getting a spike at the beginning of the transformed buffer, since you didn't do any windowing.
In general, the dimensional units of frequency from an FFT are the same as the dimensional units of the sample rate attributed to the data fed to the FFT, for example: per meter, per radian, per second, or in your case, per hour.
The scaled units of frequency, per FFT result bin index, are theSampleRate / N, with the same dimensional units as above, where N is the length of the full FFT (you might only be plotting half of this length in the case of strictly real data).
Note that each FFT result peak bin represents a filter with a non-zero bandwidth, so you might want to add some uncertainty or error bounds to the result points you map onto frequency values. Or even use an interpolation estimation method, if needed and appropriate for the source data.
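For instance, a minimal sketch of one such estimate, quadratic (parabolic) interpolation around a magnitude peak at bin k (one common choice among several):

import numpy as np

def interpolated_peak_bin(mag, k):
    # fit a parabola through bins k-1, k, k+1 and return the fractional peak location
    a, b, c = mag[k - 1], mag[k], mag[k + 1]
    delta = 0.5 * (a - c) / (a - 2 * b + c)  # offset in bins, within (-0.5, 0.5)
    return k + delta

# frequency estimate for a peak near bin 48: interpolated_peak_bin(mgft, 48) * deltaF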