Python numpy: does indexing take precedence?

If I have two arrays, and I do stuff like a*b[...], does the indexing take precedence over the multiplication? I.e. does b get indexed first, and then multiplied with a? Or does it index the product ab?
Same question for other operators (division, etc.).
Or do I have to say a * (b[...])?
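To answer the question above: in Python, indexing and slicing bind tighter than arithmetic operators, so `a*b[...]` always indexes `b` first; the extra parentheses in `a * (b[...])` are redundant. A quick check:

```python
import numpy as np

a = np.array([10, 20])
b = np.array([1, 2, 3])

# Indexing binds tighter than '*': b[1:] is evaluated first,
# then multiplied element-wise with a.
res = a * b[1:]        # identical to a * (b[1:])
print(res)             # [20 60]

# To index the product instead, parenthesize explicitly:
c = np.array([1, 2])
res2 = (a * c)[0]      # multiply first, then take element 0
print(res2)            # 10
```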

Related

Cache intermediate results of prior calls to lapacke gelsd

An iterative algorithm calls LAPACKE_sgelsd each iteration with a single column of B. Subsequent calls often use the same A matrix. I believe a substantial performance improvement would be to cache or somehow reuse intermediate results from the previous iteration when the A matrix has not changed. This should be somewhat similar to the gains possible when passing multiple columns for B. Is that correct? How difficult would it be to implement, and how could it be done? It uses OpenBLAS. Thank you.
Instead of caching intermediate results, the pseudoinverse can be computed and cached. It can be computed with the following approach:
Calculate the SVD.
Set all "small" singular values to zero.
Invert all non-zero singular values.
Multiply the three matrices again.
The pseudoinverse is the transpose of the result.
The solution is then the pseudoinverse * B.
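The steps above can be sketched in NumPy (an illustrative sketch, not the C/LAPACKE code; `pinv_via_svd` and the `rcond` cutoff are names chosen here):

```python
import numpy as np

def pinv_via_svd(A, rcond=1e-12):
    # 1. Calculate the SVD: A = U @ diag(s) @ Vt
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # 2. Treat singular values below a cutoff as zero,
    # 3. and invert the non-zero ones:
    cutoff = rcond * s.max()
    s_inv = np.where(s > cutoff, 1.0 / s, 0.0)
    # 4. Multiply the three matrices again: U @ diag(s_inv) @ Vt
    M = (U * s_inv) @ Vt
    # 5. The pseudoinverse is the transpose of the result:
    return M.T

rng = np.random.default_rng(42)
A = rng.standard_normal((6, 3))
A_pinv = pinv_via_svd(A)    # cache this while A is unchanged

b = rng.standard_normal(6)  # one column of B per iteration
x = A_pinv @ b              # each solve is now just a mat-vec
```

Once `A_pinv` is cached, each iteration costs only a matrix-vector product instead of a full gelsd factorization.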

julia index matrix with vector

Suppose I have a 20-by-10 matrix m
and a 20-by-1 vector v, where each element is an integer between 1 and 10.
Is there a smart indexing command, something like m[:,v],
that would give a vector where each element i is the element of m at index [i, v[i]]?
No, it seems that you cannot do it. The documentation (http://docs.julialang.org/en/stable/manual/arrays/) says:
If all the indices are scalars, then the result X is a single element from the array A. Otherwise, X is an array with the same number of dimensions as the sum of the dimensionalities of all the indices.
So, to get a 1-d result from an indexing operation, one of the indices would need to have dimensionality 0, i.e. be a plain scalar -- and then you won't get what you want.
Use a comprehension, as proposed in the comment on your question.
To be explicit about the comprehension approach:
[m[i,v[i]] for i = 1:length(v)]
This is concise and clear enough that having a special syntax seems unnecessary.
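For comparison only (a NumPy sketch, not Julia): the same per-row gather can be written with integer "fancy" indexing, pairing each row index with its column index:

```python
import numpy as np

rng = np.random.default_rng(0)
m = rng.integers(0, 100, size=(20, 10))
v = rng.integers(0, 10, size=20)   # one column index per row (0-based)

# Pair row index i with column v[i]; this gathers one element per
# row, the analogue of [m[i, v[i]] for i = 1:length(v)] in Julia:
result = m[np.arange(len(v)), v]
```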

Element-wise operations on arrays of different rank

How do I multiply two arrays of different rank, element-wise? For example, element-wise multiplying every row of a matrix with a vector.
real :: a(m,n), b(n)
My initial thought was to use spread(b,...), but it is my understanding that this tiles b in memory, which would make it undesirable for large arrays.
In MATLAB I would use bsxfun for this.
If the result of the expression is simply being assigned to another variable (versus being an intermediate in a more complicated expression or being used as an actual argument), then a loop (DO [CONCURRENT]) or FORALL assignment is likely to be best from the point of view of execution speed (though it will be processor dependent).
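For comparison (a NumPy sketch, not Fortran): broadcasting applies the vector to every row without materializing a tiled copy, which is the allocation that makes spread undesirable for large arrays:

```python
import numpy as np

m, n = 4, 3
a = np.arange(m * n, dtype=float).reshape(m, n)
b = np.array([10.0, 100.0, 1000.0])   # shape (n,), one entry per column

# b is broadcast across the rows of a; no (m, n) copy of b is
# created, unlike an explicit tile (the analogue of spread):
c = a * b
c_tiled = a * np.tile(b, (m, 1))      # same values, but tiles b in memory
```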

Working with columns of NumPy matrices

I've been unable to figure out how to access, add, multiply, replace, etc. single columns of a NumPy matrix. I can do this via looping over individual elements of the column, but I'd like to treat the column as a unit, something that I can do with rows.
When I've tried to search, I'm usually directed to answers that handle NumPy arrays, but that is not the same thing.
Can you provide code that's giving trouble? The operations on columns that you list are among the most basic operations that are supported and optimized in NumPy. Consider looking over the tutorial on NumPy for MATLAB users, which has many examples of accessing rows or columns, performing vectorized operations on them, and modifying them with copies or in-place.
NumPy for MATLAB Users
Just to clarify, suppose you have a 2-dimensional NumPy ndarray or matrix called a. Then a[:, 0] accesses the first column, just as a[0] or a[0, :] accesses the first row. Any operations that work for rows should work for columns as well, with some caveats for broadcasting rules and certain mathematical operations that depend on array alignment. You can also use the numpy.transpose(a) function (also exposed as a.T) to transpose a, turning its columns into rows.
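A minimal sketch of the column operations listed in the question (access, add, multiply, replace), assuming a 2-d array `a`:

```python
import numpy as np

a = np.arange(12, dtype=float).reshape(3, 4)

col0 = a[:, 0]              # access: a view of the first column
a[:, 1] *= 10               # multiply a column in place
a[:, 2] += 5                # add to a column in place
a[:, 3] = [7, 8, 9]         # replace a column
total = a[:, 0] + a[:, 2]   # treat columns as units in arithmetic
```

Note that `a[:, 0]` is a view, so writing to it modifies `a` itself; use `a[:, 0].copy()` when an independent column is needed.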

Row / column vs linear indexing speed (spatial locality)

I am using a spatial grid which can potentially get big (10^6 nodes) or even bigger. I will regularly have to perform displacement operations (like moving a particle from one node to another). I'm no expert in computer science, but I am beginning to understand the concepts of cache lines and spatial locality, though not well yet. So I was wondering whether it is preferable to use a 2D array (and if so, which one? I'd prefer to avoid Boost for now, but maybe I will link it later) and index the displacement for example like this:
Array[i][j] -> Array[i-1][j+2]
or, with a 1D array, if NX is the "equivalent" number of columns:
Array[i*NX+j] -> Array[(i-1)*NX+j+2]
Knowing that it will be done nearly one million times per iteration, with nearly one million iterations as well.
With modern compilers and optimization enabled, both of these will probably generate the exact same code:
Array[i-1][j+2] // Where Array is 2-dimensional
and
Array[(i-1)*NX+j+2] // Where Array is 1-dimensional
assuming NX is the dimension of the second subscript in the 2-dimensional Array (the number of columns).
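The equivalence of the two subscript forms rests on row-major layout, which can be illustrated with a small NumPy sketch (an analogue of the C++ arrays, not the original code; the names `grid2d`/`grid1d` are chosen here):

```python
import numpy as np

NX = 5                                  # number of columns
grid2d = np.arange(4 * NX).reshape(4, NX)
grid1d = grid2d.ravel()                 # same data, row-major (C) order

i, j = 2, 1
# The 2D subscript and the hand-computed linear index address the
# same element, which is why the compiler can emit identical code:
same = grid2d[i - 1, j + 2] == grid1d[(i - 1) * NX + j + 2]
```

In both layouts, elements of one row are contiguous, so a traversal that varies the column index fastest stays cache-friendly.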