Increase performance when calculating feature matrix? - data-science

Does calculate_feature_matrix use any libraries such as numba to increase performance?

I am one of the maintainers of Featuretools. calculate_feature_matrix currently only uses functions from Pandas/Numpy/Scipy to increase performance over raw Python. There are several areas where using numba or Cython may help, particularly in the PandasBackend class and in individual feature computation functions.
However, doing so requires a C compiler or compiled C code, and so adds extra complexity to the installation. Because of this, it's currently not high on our priority list, but we may consider adding it in the future.
Instead, we are more focused on scalability to larger datasets, which involves parallelization rather than subroutine optimization.

Related

With numpy, what is the fastest way to compute one solution to an underdetermined linear system?

With numpy, what is the fastest way to compute one solution to an underdetermined linear system? I don't care which solution the method returns; I'd be happy with any solution.
In particular, I'm dealing with a 7x7 rank-6 matrix which describes the dynamics of a physical system. I'm noticing numpy.linalg.lstsq, numpy.linalg.qr, scipy.linalg.null_space, and scipy.linalg.lu run on the full matrix are all slower on my machine than numpy.linalg.solve run on a correctly-trimmed 6x6 full-rank matrix; solve is twice as fast as lstsq (14.8 µs vs 29.1 µs).
Is there any way to speed up the computation without some horrible C LAPACK-level hacking?
Numpy is not designed to be efficient on very small matrices. Its overheads (due to type checks, value checks, iterators, allocations, etc.) can be quite big on such matrices. In fact, dozens of microseconds is reasonable for such a Numpy function call. Numba can reduce the overheads thanks to fully compiled native code. That being said, Numba can still have a small overhead (due to the call from CPython, a few type checks and allocations), but it is generally reasonable unless you work on extremely small inputs. In that case, it is better to use Numba in the caller function, since the problem is actually the slow CPython interpreter. The lazy compilation of the Numba function makes the first execution significantly slower. You can provide the signature to Numba to make it faster (eager compilation).
import numpy as np
import numba as nb

# Eager compilation: providing the signature up front avoids paying the
# compilation cost on the first call.
@nb.njit('(float64[:,::1], float64[::1])')
def solve_nb(a, b):
    return np.linalg.solve(a, b)
On my machine, it is about 16% faster on a 7x7 matrix. It requires the matrices to be contiguous (working on non-contiguous ones is fundamentally inefficient, especially here). If this is not fast enough, then you can call dgesv directly for double-precision matrices (or sgesv for single precision).
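For reference, here is a minimal sketch (variable names are mine) of calling the LAPACK routine directly through SciPy's low-level wrappers:

import numpy as np
from scipy.linalg import lapack

a = np.random.rand(6, 6)   # the trimmed, full-rank 6x6 system
b = np.random.rand(6)
# dgesv returns the LU factors, the pivot indices, the solution and a status flag
lu, piv, x, info = lapack.dgesv(a, b)
assert info == 0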
Actually, solve does use dgesv internally. lstsq appears to use a singular value decomposition (SVD). An SVD is significantly slower than a QR decomposition, which is generally a bit slower than an LU decomposition.
I am not an expert on the numerical/mathematical side, but AFAIK solving this with an LU decomposition is less numerically stable than using a QR decomposition, which is in turn less stable than an SVD. Also, I think an SVD/QR method should be used instead of a plain LU decomposition for matrices that are not full rank.
The implementation of dgesv in the standard Netlib LAPACK uses an LU factorization followed by a call to dgetrs (see here). This latter call should be fast compared to the LU factorization. The code of LAPACK implementations is generally pretty generic, so it may have significant overhead on 7x7 matrices (AFAIK, the Intel implementation is one of the fastest for that).
An alternative solution is to write your own specialized LU decomposition and your own system solving using Numba or Cython. This solution is tedious, but it should be significantly faster since the compiler can unroll the loops if it knows the bounds, reducing the overheads. You can also perform one allocation instead of several.
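As a rough sketch of that idea (the function name and the no-pivoting simplification are mine, so this assumes the trimmed matrix is well conditioned with non-zero pivots), a specialized Numba solver could look like this:

import numpy as np
import numba as nb

@nb.njit('(float64[:,::1], float64[::1])')
def solve_small(a, b):
    n = a.shape[0]
    lu = a.copy()
    x = b.copy()
    # In-place LU factorization (Doolittle, no pivoting)
    for k in range(n):
        for i in range(k + 1, n):
            f = lu[i, k] / lu[k, k]
            lu[i, k] = f
            for j in range(k + 1, n):
                lu[i, j] -= f * lu[k, j]
    # Forward substitution (L has a unit diagonal)
    for i in range(n):
        for j in range(i):
            x[i] -= lu[i, j] * x[j]
    # Back substitution
    for i in range(n - 1, -1, -1):
        for j in range(i + 1, n):
            x[i] -= lu[i, j] * x[j]
        x[i] /= lu[i, i]
    return x

With fixed, known bounds (e.g. hard-coding n = 6), the compiler has a better chance of unrolling the loops, which is the point made above.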

Should I trust BLAS libraries unconditionally to improve performance?

I am working on a project that involves computationally intensive image processing algorithms with a lot of steps that could be handled by BLAS libraries (mostly level 1 routines). Since my data is quite large, it certainly makes sense to consider using BLAS.
I have seen examples where optimised BLAS libraries offer a tremendous increase in performance (a factor-10 speedup for matrix-matrix multiplications is nothing unusual).
Should I apply the BLAS functions whenever possible and blindly trust that they will yield better performance, or should I do a case-by-case analysis and only apply BLAS where it is necessary?
Blindly applying BLAS has the benefit that I save some time now since I don't have to profile my code in detail. On the other hand, carefully analysing each method might give me the best possible performance but I wonder if it is worth spending a few hours now just to gain half a second later when running the software.
A while ago, I read in a book: (1) golden rule of optimization: don't do it; (2) golden rule of optimization (for experts only): don't do it yet. In short, I'd recommend proceeding as follows:
step 1: implement the algorithms in the simplest / most legible way
step 2: measure performance
step 3: if (and only if) performance is not satisfactory, use a profiler to detect the hot spots. They are often not where we think!
step 4: try different alternatives for the hot spots only (measure performance for each alternative)
More specifically about your question: yes, a good implementation of BLAS can make a real difference (it may use AVX instruction sets and, for matrix-matrix multiply, decompose the matrices into blocks in a way that is more cache-friendly), but again, I would not "trust unconditionally" (it depends on the version of BLAS, on the data, on the target machine, etc.), so measuring performance and comparing is absolutely necessary.
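To make the "measure, don't assume" point concrete, here is a hedged Python sketch (sizes and names are mine) of the step 2/step 4 style of timing, comparing a legible hand-written loop against the BLAS-backed NumPy call; for your own code, only the numbers you actually measure should decide:

import timeit
import numpy as np

x = np.random.rand(1_000_000)
y = np.random.rand(1_000_000)

def dot_loop(u, v):
    # step 1: simplest / most legible implementation
    s = 0.0
    for a, b in zip(u, v):
        s += a * b
    return s

# step 2: measure both alternatives
print(timeit.timeit(lambda: dot_loop(x, y), number=3))   # pure-Python loop
print(timeit.timeit(lambda: np.dot(x, y), number=3))     # BLAS-backed dot product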

OpenMP with matrices and vectors

What is the best way to utilize OpenMP with a matrix-vector product? Would the for directive suffice (if so, where should I place it? I assume the outer loop would be more efficient) or would I need schedule clauses, etc.?
Also, how would I take advantage of different algorithms to perform this m-v product most efficiently?
Thanks
The first step you should take is the obvious one: wrap the outermost loop in a parallel for directive, as you assume. It's always worth experimenting a bit to get some evidence to support your (and my) assumptions, but if you were only allowed to make one change, that would be the one to make.
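OpenMP itself is used from C/C++/Fortran, so purely as an illustration of that first step in this page's Python-flavoured examples, here is a hedged sketch of parallelizing the outermost loop of a matrix-vector product with Numba's prange (which plays a role similar to a parallel for directive):

import numpy as np
import numba as nb

@nb.njit(parallel=True)
def matvec(A, x):
    m, n = A.shape
    y = np.empty(m)
    # The outermost loop is the one worth parallelizing: one row per iteration,
    # with no dependencies between iterations.
    for i in nb.prange(m):
        s = 0.0
        for j in range(n):
            s += A[i, j] * x[j]
        y[i] = s
    return y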
I don't know much about cache-oblivious algorithms but I understand that they, generally, work by recursive division of a problem into sub-problems. This doesn't seem to fit with the application of parallel for directives. I suspect you could implement such an algorithm with OpenMP's tasks, but I suspect that the overhead of doing this would outweigh any execution improvements on any m-v product of reasonable dimensions.
(If you demonstrate the falsity of this argument on m-v products of size N I will retort 'N's not a reasonable dimension'. As ever with these performance questions, evidence trumps argument every time.)
Finally, depending on your compiler and the availability of libraries, you may not need to use OpenMP for m-v calculations, you might find auto-parallelisation works efficiently, or already have a library implementation which multi-threads this sort of computation.

How to optimize MATLAB loops?

I have been working lately on a number of iterative algorithms in MATLAB, and been getting hit hard by MATLAB's performance (or lack thereof) when it comes to loops. I'm aware of the benefit of vectorizing code when possible, but are there any tools for optimization when you need the loop for your algorithm?
I am aware of the MEX-file option to write small subroutines in C/C++, although given my algorithms, this can be a very painful option given the data structures required. I mainly use MATLAB for the simplicity and speed of prototyping, so a syntactically complex, statically typed language is not ideal for my situation.
Are there any other suggestions? Even other languages (python?) which have relatively painless matrix tools are an option.
It was once true that vectorization would improve the speed of your MATLAB code. However, that is largely no longer true with the JIT accelerator.
This video demonstrating the MATLAB profiler might help.
The profiler is a very useful tool for finding bottlenecks in MATLAB code. It does not change your code of course, but it helps to find which functions/lines to optimize with vectorization or MEX.
http://www.mathworks.com/access/helpdesk/help/techdoc/ref/profile.html
http://www.mathworks.com/access/helpdesk/help/techdoc/matlab_env/f9-17018.html
If you have a choice, be sure to set up your loops so you scan the data column-wise, which is how the data in MATLAB are arranged. In addition, be sure to preallocate any output arrays before the loop and index into them instead of growing the array inside the for-loop.
If you can cast your code so your operations are called on the whole matrix then you will see great improvement in the speed of your code. Many functions are much quicker when operating on the whole matrix rather than in an element-wise fashion with loops.
You might want to investigate MATLAB's Parallel Computing Toolbox, which can make a big difference if you have the right hardware. I re-wrote about 12 lines of code and got a 4-6x speedup for one of our loop-intensive programs on an eight-core PC.

How much speed-up from converting 3D maths to SSE or other SIMD?

I am using 3D maths in my application extensively. How much speed-up can I achieve by converting my vector/matrix library to SSE, AltiVec or a similar SIMD code?
In my experience I typically see about a 3x improvement in taking an algorithm from x87 to SSE, and a better than 5x improvement in going to VMX/Altivec (because of complicated issues having to do with pipeline depth, scheduling, etc). But I usually only do this in cases where I have hundreds or thousands of numbers to operate on, not for those where I'm doing one vector at a time ad hoc.
That's not the whole story, but it's possible to get further optimizations using SIMD. Have a look at Miguel's PDC 2008 presentation about implementing SIMD instructions in Mono.
(Performance chart from Miguel's blog entry; source: tirania.org)
For some very rough numbers: I've heard some people on ompf.org claim 10x speed ups for some hand-optimized ray tracing routines. I've also had some good speed ups. I estimate I got somewhere between 2x and 6x on my routines depending on the problem, and many of these had a couple of unnecessary stores and loads. If you have a huge amount of branching in your code, forget about it, but for problems that are naturally data-parallel you can do quite well.
However, I should add that your algorithms should be designed for data-parallel execution.
This means that if you have a generic math library as you've mentioned, then it should take packed vectors rather than individual vectors, or you'll just be wasting your time.
E.g. something like:

#include <xmmintrin.h>

namespace SIMD {
    // Structure-of-arrays layout: each __m128 holds the same component
    // (x, y, z or w) of four different vectors, so a single SSE operation
    // processes four vectors at once.
    class PackedVec4d
    {
        __m128 x;
        __m128 y;
        __m128 z;
        __m128 w;
        //...
    };
}
Most problems where performance matters can be parallelized since you'll most likely be working with a large dataset. Your problem sounds like a case of premature optimization to me.
For 3D operations, beware of uninitialized data in your W component. I've seen cases where SSE ops (_mm_add_ps) would take 10x the normal time because of bad data in W.
The answer highly depends on what the library is doing and how it is used.
The gains can range from a few percentage points to "several times faster". The areas most likely to see gains are those where you're not dealing with isolated vectors or values, but with many vectors or values that have to be processed in the same way.
Another area is when you're hitting cache or memory limits, which, again, requires a lot of values/vectors to be processed.
The domains where gains can be the most drastic are probably those of image and signal processing, computational simulations, as well as general 3D maths operations on meshes (rather than isolated vectors).
These days all the good compilers for x86 generate SSE instructions for SP and DP float math by default. It's nearly always faster to use these instructions than the legacy x87 ones, even for scalar operations, so long as you schedule them correctly. This will come as a surprise to many, who in the past found SSE to be "slow" and thought compilers could not generate fast SSE scalar instructions. But now, you have to use a switch to turn off SSE generation and use x87. Note that x87 is effectively deprecated at this point and may be removed from future processors entirely. The one downside of this is that we may lose the ability to do 80-bit floating point in registers. But the consensus seems to be that if you are depending on 80-bit instead of 64-bit DP floats for precision, you should look for an algorithm that is more tolerant of precision loss.
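As a concrete illustration of that switch (the exact flag names are compiler specifics I'm adding, not from the answer): with GCC, -mfpmath=sse is the default on x86-64 and -mfpmath=387 forces the legacy x87 unit; MSVC exposes the same choice for 32-bit builds via /arch:SSE2 versus /arch:IA32.

gcc -O2 -mfpmath=sse code.c   # scalar float math through SSE (x86-64 default)
gcc -O2 -mfpmath=387 code.c   # fall back to the x87 FPU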
Everything above came as a complete surprise to me. It's very counter-intuitive. But data talks.
Most likely you will see only very small speedup, if any, and the process will be more complicated than expected. For more details see The Ubiquitous SSE vector class article by Fabian Giesen.
The Ubiquitous SSE vector class: Debunking a common myth
Not that important
First and foremost, your vector class is probably not as important for the performance of your program as you think (and if it is, it's more likely because you're doing something wrong than because the computations are inefficient). Don't get me wrong, it's probably going to be one of the most frequently used classes in your whole program, at least when doing 3D graphics. But just because vector operations will be common doesn't automatically mean that they'll dominate the execution time of your program.
(The remaining sections of the article are titled "Not so hot", "Not easy", "Not now", and "Not ever".)