Given
import numpy as np
import pandas as pd

df = pd.DataFrame({'x': [np.array(['1', '2.3']), np.array(['30', '99'])]},
                  index=pd.date_range('2020-01-01', '2020-01-02', freq='D'))
I would like to filter for np.array(['1', '2.3']). I can do
df[df['x'].apply(lambda x: np.array_equal(x, np.array(['1', '2.3'])))]
but is this the fastest way to do it?
EDIT:
Let's assume that all the elements inside the numpy array are strings, even though it's not good practice!
DataFrame length can go to 500k rows and the number of values in each numpy array can go to 10.
You can rely on list comprehension for performance:
df[np.array([np.array_equal(x, np.array(['1', '2.3'])) for x in df['x'].values])]
Performance via timeit (on my system, currently with 4 GB of RAM):
%timeit -n 2000 df[np.array([np.array_equal(x, np.array(['1', '2.3'])) for x in df['x'].values])]
#output:
425 µs ± 10.8 µs per loop (mean ± std. dev. of 7 runs, 2000 loops each)
%timeit -n 2000 df[df['x'].apply(lambda x: np.array_equal(x, np.array(['1', '2.3'])))]
#output:
875 µs ± 28.6 µs per loop (mean ± std. dev. of 7 runs, 2000 loops each)
My suggestion would be to do the following:
import numpy as np
mat = np.stack([np.array(["a","b","c"]),np.array(["d","e","f"])])
In reality this would be the actual data from the column of your DataFrame; make sure it is a single 2-D NumPy array.
Then do:
matching_rows = (np.array(["a","b","c"]) == mat).all(axis=1)
This gives you an array of bools indicating where the matches are located.
So you can then filter your rows like this:
df[matching_rows]
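Tying this back to the DataFrame from the question, a minimal sketch of the whole approach (assuming, as in the EDIT, that every cell holds an array of the same length):
import numpy as np
import pandas as pd

df = pd.DataFrame({'x': [np.array(['1', '2.3']), np.array(['30', '99'])]},
                  index=pd.date_range('2020-01-01', '2020-01-02', freq='D'))

# stack the per-row arrays into one 2-D array (requires equal-length arrays)
mat = np.stack(df['x'].to_numpy())

# boolean mask of the rows equal to the target array
matching_rows = (np.array(['1', '2.3']) == mat).all(axis=1)
print(df[matching_rows])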
I am trying to do random sampling in the most efficient way in Python. However, I am puzzled because using NumPy's np.random.choice() was slower than using the standard library's random.choices().
import numpy as np
import random
np.random.seed(12345)
# use gamma distribution
shape, scale = 2.0, 2.0
s = np.random.gamma(shape, scale, 1000000)
meansample = []
samplesize = 500
%timeit meansample = [ np.mean( np.random.choice( s, samplesize, replace=False)) for _ in range(500)]
23.3 s ± 229 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit meansample = [np.mean(random.choices(s, k=samplesize)) for x in range(0,500)]
152 ms ± 324 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
23 seconds vs. 152 ms is a big difference.
What am I doing wrong?
Two issues here. First, for the pure-Python random library, you probably mean to use sample instead of choices in order to sample without replacement; that alters the benchmark somewhat. Second, np.random.choice has better-performing alternatives for sampling without replacement. This is a known issue related to the legacy random generator API; you can use np.random.Generator to get better performance. My timings:
%timeit meansample = [ np.mean( np.random.choice( s, samplesize, replace=False)) for _ in range(500)]
# 1 loop, best of 3: 12.4 s per loop
%timeit meansample = [np.mean(random.choices(s, k=samplesize)) for x in range(0,500)]
# 10 loops, best of 3: 118 ms per loop
sl = s.tolist()
%timeit meansample = [np.mean(random.sample(sl, k=samplesize)) for x in range(0,500)]
# 1 loop, best of 3: 219 ms per loop
g = np.random.Generator(np.random.PCG64())
%timeit meansample = [ np.mean( g.choice( s, samplesize, replace=False)) for _ in range(500)]
# 10 loops, best of 3: 25 ms per loop
So, without replacement, random.sample outperforms np.random.choice but is slower than np.random.Generator.choice.
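As a side note (my addition, not part of the timings above), np.random.default_rng() is the usual convenience constructor for such a Generator:
import numpy as np

rng = np.random.default_rng(12345)   # wraps a PCG64 bit generator by default
s = rng.gamma(2.0, 2.0, 1_000_000)

# sample without replacement through the Generator API
meansample = [rng.choice(s, 500, replace=False).mean() for _ in range(500)]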
Hi, I have a file with approximately 6M comma-separated values, all on one line.
I am trying
import numpy as np
import pandas as pd

v = pd.read_csv(file_name, nrows=1, skiprows=3, header=None,
                verbose=True, dtype=np.float32)
with the file being
Name
Tue Nov 6 13:52:15 2018
Description
52.2269,52.2148,52.246,52.361,52.5263,52.7399,52.9738,53.1952,...45.4,
I get the output
Tokenization took: 0.00 ms
Type conversion took: 53023.43 ms
Parser memory cleanup took: 212.13 ms
v summary shows
1 rows × 6316057 columns
The file reading takes a lot longer than expected; I think it may be due to the data being in one row. Is there anything I can do to speed it up, or do I need a different library?
For my timings below, some dummy data:
import numpy as np
import pandas as pd

data = np.random.randn(1_000_000)
with open('tmp', 'wt') as f:
    f.write('dummy\n')
    f.write('dummy\n')
    f.write('dummy\n')
    for val in data:
        f.write(str(val) + ',')
    f.write('\n')
In general, the pandas parser is optimized for the 'long' data case rather than for a single very wide row like this. You can pre-process the data, turning the delimiter into newlines, which for my example is ~40x faster.
def parse_wide_to_long(f):
    from io import StringIO
    data = open(f).read().splitlines()[-1]
    data = data.replace(',', '\n')
    return pd.read_csv(StringIO(data), header=None)
In [33]: %timeit pd.read_csv('tmp', nrows=1, skiprows=3, header=None, dtype=np.float32)
20.6 s ± 2.04 s per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [39]: %timeit parse_wide_to_long('tmp')
484 ms ± 35.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
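If you want to skip pandas entirely for this one-line case, here is a rough sketch of the same idea (my own variant, not benchmarked above), using plain Python splitting plus NumPy:
import numpy as np

def parse_wide_numpy(f):
    # the last line holds the comma-separated values; drop a possible trailing ','
    line = open(f).read().splitlines()[-1].rstrip(',')
    return np.array(line.split(','), dtype=np.float32)

values = parse_wide_numpy('tmp')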
I am attempting to find the most performant method to find unique values in a NumPy array. NumPy's unique function is very slow because it sorts the values before finding the unique ones. Pandas hashes the values using the klib C library, which is much faster. I am looking for a Cython solution.
The simplest solution seems to be to iterate through the array and add each element to a Python set, like this:
cimport cython
cimport numpy as np
from numpy cimport ndarray
from cpython cimport set

@cython.wraparound(False)
@cython.boundscheck(False)
def unique_cython_int(ndarray[np.int64_t] a):
    cdef int i
    cdef int n = len(a)
    cdef set s = set()
    for i in range(n):
        s.add(a[i])
    return s
I also tried an unordered_set from C++:
from libcpp.unordered_set cimport unordered_set

@cython.wraparound(False)
@cython.boundscheck(False)
def unique_cpp_int(ndarray[np.int64_t] a):
    cdef int i
    cdef int n = len(a)
    cdef unordered_set[int] s
    for i in range(n):
        s.insert(a[i])
    return s
Performance
# create array of 1,000,000
a = np.random.randint(0, 50, 1000000)
# Pure Python
%timeit set(a)
86.4 ms ± 2.58 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
# Convert to list first
a_list = a.tolist()
%timeit set(a_list)
10.2 ms ± 74.8 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
# NumPy
%timeit np.unique(a)
32 ms ± 1.17 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
# Pandas
%timeit pd.unique(a)
5.3 ms ± 257 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
# Cython
%timeit unique_cython_int(a)
13.4 ms ± 1.02 ms per loop (mean ± std. dev. of 7 runs, 100 loops each)
# Cython - c++ unordered_set
%timeit unique_cpp_int(a)
17.8 ms ± 158 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Discussion
So pandas is about 2.5x faster than a Cythonized set, and its lead increases when there are more distinct elements. Surprisingly, a pure Python set (on a list) beats out a Cythonized set.
My question here: is there a faster way to do this in Cython than repeatedly calling the add method? And could the C++ unordered_set be improved?
Using Unicode strings
The story changes when we use Unicode strings. I believe I have to convert the numpy array to an object data type to properly type it for Cython.
@cython.wraparound(False)
@cython.boundscheck(False)
def unique_cython_str(ndarray[object] a):
    cdef int i
    cdef int n = len(a)
    cdef set s = set()
    for i in range(n):
        s.add(a[i])
    return s
And again I tried an unordered_set from C++:
from libcpp.string cimport string

@cython.wraparound(False)
@cython.boundscheck(False)
def unique_cpp_str(ndarray[object] a):
    cdef int i
    cdef int n = len(a)
    cdef unordered_set[string] s
    for i in range(n):
        s.insert(a[i])
    return s
Performance
Create an array of 1 million strings with 1,000 distinct values
s_1000 = []
for i in range(1000):
    s = np.random.choice(list('abcdef'), np.random.randint(5, 50))
    s_1000.append(''.join(s))
s_all = np.random.choice(s_1000, 1000000)
# s_all has numpy unicode as its data type. Must convert to object
s_unicode_obj = s_all.astype('O')
# c++ does not easily handle unicode. Convert to bytes and then to object
s_bytes_obj = s_all.astype('S').astype('O')
# Pure Python
%timeit set(s_all)
451 ms ± 5.94 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit set(s_unicode_obj)
71.9 ms ± 5.91 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
# using set on a list
s_list = s_all.tolist()
%timeit set(s_list)
63.1 ms ± 7.38 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
# NumPy
%timeit np.unique(s_unicode_obj)
1.69 s ± 97.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit np.unique(s_all)
633 ms ± 3.99 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
# Pandas
%timeit pd.unique(s_unicode_obj)
97.6 ms ± 6.61 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
# Cython
%timeit unique_cython_str(s_unicode_obj)
60 ms ± 5.81 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
# Cython - c++ unordered_set
%timeit unique_cpp_str(s_bytes_obj)
247 ms ± 8.45 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Discussion
So it appears that Python's set outperforms pandas for Unicode strings but not for integers. And again, iterating through the array in Cython doesn't really help us at all.
Cheating with integers
It's possible to circumvent sets if you know that the range of your integers isn't too crazy. You simply create a second array of all False values, flip a value's position to True the first time you encounter it, and append that value to a list. This is extremely fast since no hashing is done.
The following works for positive integer arrays. If you had negative integers, you would have to add a constant to shift the numbers up to 0.
@cython.wraparound(False)
@cython.boundscheck(False)
def unique_bounded(ndarray[np.int64_t] a):
    cdef int i, n = len(a)
    cdef ndarray[np.uint8_t, cast=True] unique = np.zeros(n, dtype=bool)
    cdef list result = []
    for i in range(n):
        if not unique[a[i]]:
            unique[a[i]] = True
            result.append(a[i])
    return result
%timeit unique_bounded(a)
1.18 ms ± 21.3 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
The downside is of course memory usage since your largest integer could force an extremely large array. But this method could work for floats too if you knew precisely how many significant digits each number had.
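For comparison, here is a pure-NumPy sketch of the same flag-array idea (my addition; note that it returns the unique values sorted rather than in first-appearance order, and it allocates max(a) + 1 flags):
import numpy as np

def unique_bounded_np(a):
    # one boolean flag per possible value; assumes non-negative integers
    seen = np.zeros(a.max() + 1, dtype=bool)
    seen[a] = True
    return np.flatnonzero(seen)

a = np.random.randint(0, 50, 1000000)
print(unique_bounded_np(a))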
Summary
Integers 50 unique of 1,000,000 total
Pandas - 5 ms
Python set of list - 10 ms
Cython set - 13 ms
'Cheating' with integers - 1.2 ms
Strings 1,000 unique of 1,000,000 total
Cython set - 60 ms
Python set of list - 63 ms
Pandas - 98 ms
Appreciate all the help making these faster.
I think the answer to your question "what is the fastest way to find unique elements" is "it depends". It depends on your data set and on your hardware.
For your scenarios (I mostly looked at the integer case), pandas (and the khash library it uses) does a pretty decent job. I was not able to match this performance using std::unordered_map.
However, google::dense_hash_set was slightly faster in my experiments than the pandas-solution.
Please read on for a more detailed explanation.
I would like to start out by explaining the results you are observing and use these insights later on.
I start with your int-example: there are only 50 unique elements but 1,000,000 in the array:
import numpy as np
import pandas as pd
a=np.random.randint(0,50, 10**6, dtype=np.int64)
As a baseline, the timings of np.unique() and pd.unique() on my machine:
%timeit np.unique(a)
>>>82.3 ms ± 539 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
%timeit pd.unique(a)
>>>9.4 ms ± 110 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
The pandas approach with a hash set (O(n)) is about 10 times faster than numpy's approach with sorting (O(n log n)). log n ≈ 20 for n = 10**6, so a factor of 10 is about the expected difference.
Another difference is that np.unique returns a sorted array, so one can use binary search to look up elements. pd.unique returns an unsorted array, so we need either to sort it (which might be O(n log n) if there are not many duplicates in the original data) or to transform it into a set-like structure.
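For illustration (my own sketch, not part of the original answer), a binary-search lookup against the sorted output of np.unique:
import numpy as np

a = np.random.randint(0, 50, 10**6, dtype=np.int64)
u = np.unique(a)                       # sorted, so binary search applies

queries = np.array([3, 17, 999])       # illustrative query values
pos = np.clip(np.searchsorted(u, queries), 0, len(u) - 1)
found = u[pos] == queries              # membership test, O(log n) per query
print(found)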
Let's take a look at the simple Python-Set:
%timeit set(a)
>>> 257 ms ± 21.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
First thing we must be aware of here: we are comparing apples and oranges. The previous unique functions return numpy arrays, which consist of lowly C integers. This one returns a set of full-fledged Python integers. Quite a different thing!
That means that for every element in the numpy array we must first create a Python object - quite an overhead - and only then can we add it to the set.
The conversion to Python-integers can be done in a preprocessing step - your version with list:
A=list(a)
%timeit set(A)
>>> 104 ms ± 952 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
%timeit set(list(a))
>>> 270 ms ± 23.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
More than 100 ms are needed just for the creation of the Python integers. However, Python integers are more complex than lowly C ints and thus handling them costs more. Using pd.unique on C ints and then promoting the result to a Python set is much faster.
And now your Cython version:
%timeit unique_cython_int(a)
31.3 ms ± 630 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
That I don't really understand. I would expect it to perform similarly to set(a): Cython would cut out the interpreter, but that would not explain a factor of 10. However, we have only 50 different integers (which are even in the small-integer pool because they are smaller than 256), so there is probably some optimization which plays a role here.
Let's try another data-set (there are now 10**5 different numbers):
b=np.random.randint(0, 10**5,10**6, dtype=np.int64)
%timeit unique_cython_int(b)
>>> 236 ms ± 31.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit set(b)
>>> 388 ms ± 15.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
A speed-up of less than 2 is what I would expect.
Let's take a look at the C++ version:
%timeit unique_cpp_int(a)
>>> 25.4 ms ± 534 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
%timeit unique_cpp_int(b)
>>> 100 ms ± 4.8 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
There is some overhead in copying the data from the C++ set to the Python set (as DavidW has pointed out), but otherwise the behavior is what I would expect given my experience with it: std::unordered_map is somewhat faster than Python, but not the greatest implementation around - pandas seems to beat it:
%timeit set(pd.unique(b))
>>> 45.8 ms ± 3.48 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
So it looks like, in situations where there are many duplicates and the hash function is cheap, the pandas solution is hard to beat.
One could maybe try out Google's data structures (see below).
However, when the data has only very few duplicates, numpy's sorting solution may become the faster one. The main reason is that numpy's unique needs only twice the memory (the original data and the output), while the pandas hash-set solution needs much more memory: the original data, the set, and the output. For huge datasets it might become the difference between having enough RAM and not having enough RAM.
How much memory overhead is needed depends on the set implementation, and it is always about the trade-off between memory and speed. For example, std::unordered_set needs at least 32 bytes to store an 8-byte integer. Some of Google's data structures can do better.
Running /usr/bin/time -fpeak_used_memory:%M python check_mem.py with pandas/numpy unique:
#check_mem.py
import numpy as np
import pandas as pd
c=np.random.randint(0, 2**63,5*10**7, dtype=np.int64)
#pd.unique(c)
np.unique(c)
shows 1.2 GB for numpy and 2.0GB for pandas.
Actually, on my Windows machine np.unique is faster than pd.unique if there are (next to) only unique elements in the array, even for "only" 10^6 elements (probably because of the rehashes needed as the set grows). This is however not the case on my Linux machine.
Another scenario in which pandas doesn't shine is when the calculation of the hash function is not cheap: consider long strings (say, of 1000 characters) as objects.
To calculate the hash value one needs to consider all 1000 characters (which means a lot of data, hence a lot of cache misses), whereas the comparison of two strings is usually decided after one or two characters - the probability is then already very high that we know the strings are different. So the log n factor of numpy's unique doesn't look that bad anymore.
It could be better to use a tree set instead of a hash set in this case.
Improving on cpp-unordered set:
The method using C++'s unordered_set could be improved via its reserve() method, which would eliminate the need for rehashing. But reserve() is not exposed by Cython's libcpp wrapper, so using it from Cython is quite cumbersome.
Reserving would however not have any impact on the runtimes for the data with only 50 unique elements, and at most a factor of 2 (amortized costs, due to the resize strategy used) for the data with almost all elements unique.
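For what it's worth, a rough sketch (my own, untested assumption of one workaround) of how reserve() could be made callable by declaring a minimal std::unordered_set interface by hand; to keep it short it only returns the count of unique elements:
%%cython -+ -c=-std=c++11
cimport cython
cimport numpy as np
from numpy cimport ndarray

# hand-written declaration so that reserve() is visible to Cython
cdef extern from "<unordered_set>" namespace "std" nogil:
    cdef cppclass unordered_set[T]:
        unordered_set() except +
        void insert(T&) except +
        void reserve(size_t) except +
        size_t size()

@cython.wraparound(False)
@cython.boundscheck(False)
def count_unique_cpp_int(ndarray[np.int64_t] a):
    cdef int i
    cdef int n = len(a)
    cdef unordered_set[np.int64_t] s
    s.reserve(n)                  # size the table once, no rehashing later
    for i in range(n):
        s.insert(a[i])
    return s.size()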
The hash function for ints is the identity (at least for gcc), so there is not much to gain here (I don't think a fancier hash function would help).
I see no way to tweak C++'s unordered_set so that it beats the khash implementation used by pandas, which seems to be quite good for this type of task.
For example, there are these pretty old benchmarks, which show that khash is somewhat faster than std::unordered_map, with only google_dense being even faster.
Using google dense map:
In my experiments, Google's dense hash map (from here) was able to beat khash; the benchmark code can be found at the end of the answer.
It was faster if there were only 50 unique elements:
#50 unique elements:
%timeit google_unique(a,r)
1.85 ms ± 8.26 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%timeit pd.unique(a)
3.52 ms ± 33.9 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
but also faster if there were only unique elements:
%timeit google_unique(c,r)
54.4 ms ± 375 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [3]: %timeit pd.unique(c)
75.4 ms ± 499 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
My few experiments have also shown that the Google hash set may use more memory (up to 20%) than khash, but more tests are needed to see whether this is really the case.
I'm not sure my answer helped you at all. My take-aways are:
If we need a set of Python integers, set(pd.unique(...)) seems to be a good starting point.
There are some cases for which numpy's sorting solution might be better (less memory, or when the hash calculation is too expensive).
Knowing more about the data can be used to tweak the solution, by making a better trade-off (e.g. using less/more memory, preallocating so we don't need to rehash, or using a bitset for look-up).
The pandas solution seems to be tuned pretty well for some usual cases, but for other cases another trade-off might be better - google_dense being the most promising candidate.
Listings for google-tests:
//google_hash.cpp
#include <cstdint>
#include <functional>
#include <sparsehash/dense_hash_set>

typedef int64_t lli;

void cpp_unique(lli *input, int n, lli *output){
    google::dense_hash_set<lli, std::hash<lli> > set;
    set.set_empty_key(-1);
    for (int i = 0; i < n; i++){
        set.insert(input[i]);
    }
    int cnt = 0;
    for (auto x : set)
        output[cnt++] = x;
}
the corresponding pyx-file:
#google.pyx
cimport numpy as np

cdef extern from "google_hash.cpp":
    void cpp_unique(np.int64_t *inp, int n, np.int64_t *output)

#out should have enough memory:
def google_unique(np.ndarray[np.int64_t, ndim=1] inp, np.ndarray[np.int64_t, ndim=1] out):
    cpp_unique(&inp[0], len(inp), &out[0])
the setup.py-file:
from distutils.core import setup, Extension
from Cython.Build import cythonize
import numpy as np
setup(ext_modules=cythonize(Extension(
    name='google',
    language='c++',
    extra_compile_args=['-std=c++11'],
    sources=["google.pyx"],
    include_dirs=[np.get_include()],
)))
IPython benchmark script, after calling python setup.py build_ext --inplace:
import numpy as np
import pandas as pd
from google import google_unique
a=np.random.randint(0,50,10**6,dtype=np.int64)
b=np.random.randint(0, 10**5,10**6, dtype=np.int64)
c=np.random.randint(0, 2**63,10**6, dtype=np.int64)
r=np.zeros((10**6,), dtype=np.int64)
%timeit google_unique(a,r)
%timeit pd.unique(a)
Other listings
Cython version after fixes:
%%cython
cimport cython
from numpy cimport ndarray
from cpython cimport set
cimport numpy as np

@cython.wraparound(False)
@cython.boundscheck(False)
def unique_cython_int(ndarray[np.int64_t] a):
    cdef int i
    cdef int n = len(a)
    cdef set s = set()
    for i in range(n):
        s.add(a[i])
    return s
C++ version after fixes:
%%cython -+ -c=-std=c++11
cimport cython
cimport numpy as np
from numpy cimport ndarray
from libcpp.unordered_set cimport unordered_set

@cython.wraparound(False)
@cython.boundscheck(False)
def unique_cpp_int(ndarray[np.int64_t] a):
    cdef int i
    cdef int n = len(a)
    cdef unordered_set[int] s
    for i in range(n):
        s.insert(a[i])
    return s
I want to calculate the row-wise dot product of two matrices of the same dimension as fast as possible. This is the way I am doing it:
import numpy as np
a = np.array([[1,2,3], [3,4,5]])
b = np.array([[1,2,3], [1,2,3]])
result = np.array([])
for row1, row2 in zip(a, b):
    result = np.append(result, np.dot(row1, row2))
print(result)
and of course the output is:
[ 14.  26.]
A straightforward way to do that is:
import numpy as np
a=np.array([[1,2,3],[3,4,5]])
b=np.array([[1,2,3],[1,2,3]])
np.sum(a*b, axis=1)
which avoids the python loop and is faster in cases like:
def npsumdot(x, y):
    return np.sum(x*y, axis=1)

def loopdot(x, y):
    result = np.empty((x.shape[0]))
    for i in range(x.shape[0]):
        result[i] = np.dot(x[i], y[i])
    return result
timeit npsumdot(np.random.rand(500000,50),np.random.rand(500000,50))
# 1 loops, best of 3: 861 ms per loop
timeit loopdot(np.random.rand(500000,50),np.random.rand(500000,50))
# 1 loops, best of 3: 1.58 s per loop
Check out numpy.einsum for another method:
In [52]: a
Out[52]:
array([[1, 2, 3],
[3, 4, 5]])
In [53]: b
Out[53]:
array([[1, 2, 3],
[1, 2, 3]])
In [54]: einsum('ij,ij->i', a, b)
Out[54]: array([14, 26])
Looks like einsum is a bit faster than inner1d:
In [94]: %timeit inner1d(a,b)
1000000 loops, best of 3: 1.8 us per loop
In [95]: %timeit einsum('ij,ij->i', a, b)
1000000 loops, best of 3: 1.6 us per loop
In [96]: a = random.randn(10, 100)
In [97]: b = random.randn(10, 100)
In [98]: %timeit inner1d(a,b)
100000 loops, best of 3: 2.89 us per loop
In [99]: %timeit einsum('ij,ij->i', a, b)
100000 loops, best of 3: 2.03 us per loop
Note: NumPy is constantly evolving and improving; the relative performance of the functions shown above has probably changed over the years. If performance is important to you, run your own tests with the version of NumPy that you will be using.
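As one concrete illustration of that note (my addition, not from the original answer): if inner1d is not importable on your NumPy version, the same row-wise dot product can be written with matmul:
import numpy as np

a = np.array([[1, 2, 3], [3, 4, 5]])
b = np.array([[1, 2, 3], [1, 2, 3]])

# (n, 1, 3) @ (n, 3, 1) -> (n, 1, 1); flatten back to shape (n,)
rowwise = (a[:, None, :] @ b[:, :, None]).ravel()
print(rowwise)   # [14 26]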
Played around with this and found inner1d the fastest. That function however is internal, so a more robust approach is to use
numpy.einsum("ij,ij->i", a, b)
Even better is to align your memory such that the summation happens in the first dimension, e.g.,
a = numpy.random.rand(3, n)
b = numpy.random.rand(3, n)
numpy.einsum("ij,ij->j", a, b)
For 10 ** 3 <= n <= 10 ** 6, this is the fastest method, and up to twice as fast as its untransposed equivalent. The maximum occurs when the level-2 cache is maxed out, at about 2 * 10 ** 4.
Note also that the transposed summation is much faster than its untransposed equivalent.
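A quick sketch (my own, just to make the transposed layout concrete) showing that both orientations compute the same values:
import numpy as np

n = 10**5
a = np.random.rand(n, 3)
b = np.random.rand(n, 3)

# contiguous transposed copies, shape (3, n)
aT = np.ascontiguousarray(a.T)
bT = np.ascontiguousarray(b.T)

r_rows = np.einsum("ij,ij->i", a, b)    # reduce over the last axis
r_cols = np.einsum("ij,ij->j", aT, bT)  # same values, reduce over the first axis
assert np.allclose(r_rows, r_cols)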
The plot was created with perfplot (a small project of mine)
import numpy
from numpy.core.umath_tests import inner1d
import perfplot
def setup(n):
    a = numpy.random.rand(n, 3)
    b = numpy.random.rand(n, 3)
    aT = numpy.ascontiguousarray(a.T)
    bT = numpy.ascontiguousarray(b.T)
    return (a, b), (aT, bT)

b = perfplot.bench(
    setup=setup,
    n_range=[2 ** k for k in range(1, 25)],
    kernels=[
        lambda data: numpy.sum(data[0][0] * data[0][1], axis=1),
        lambda data: numpy.einsum("ij, ij->i", data[0][0], data[0][1]),
        lambda data: numpy.sum(data[1][0] * data[1][1], axis=0),
        lambda data: numpy.einsum("ij, ij->j", data[1][0], data[1][1]),
        lambda data: inner1d(data[0][0], data[0][1]),
    ],
    labels=["sum", "einsum", "sum.T", "einsum.T", "inner1d"],
    xlabel="len(a), len(b)",
)
b.save("out1.png")
b.save("out2.png", relative_to=3)
You'll do better avoiding the append, but I can't think of a way to avoid the Python loop. A custom ufunc, perhaps? I don't think numpy.vectorize will help you here.
import numpy as np
a=np.array([[1,2,3],[3,4,5]])
b=np.array([[1,2,3],[1,2,3]])
result = np.empty((2,))
for i in range(2):
    result[i] = np.dot(a[i], b[i])
print(result)
EDIT
Based on this answer, it looks like inner1d might work if the vectors in your real-world problem are 1D.
from numpy.core.umath_tests import inner1d
inner1d(a,b) # array([14, 26])
I came across this answer and re-verified the results with Numpy 1.14.3 running in Python 3.5. For the most part the answers above hold true on my system, although I found that for very large matrices (see example below), all but one of the methods are so close to one another that the performance difference is meaningless.
For smaller matrices, I found that einsum was the fastest by a considerable margin, up to a factor of two in some cases.
My large matrix example:
import numpy as np
from numpy.core.umath_tests import inner1d
a = np.random.randn(100, 1000000) # 800 MB each
b = np.random.randn(100, 1000000) # pretty big.
def loop_dot(a, b):
    result = np.empty((a.shape[0],))
    for i, (row1, row2) in enumerate(zip(a, b)):
        result[i] = np.dot(row1, row2)
    return result
%timeit inner1d(a, b)
# 128 ms ± 523 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
%timeit np.einsum('ij,ij->i', a, b)
# 121 ms ± 402 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
%timeit np.sum(a*b, axis=1)
# 411 ms ± 1.99 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
%timeit loop_dot(a, b) # note the function call took negligible time
# 123 ms ± 342 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
So einsum is still the fastest on very large matrices, but by a tiny amount. It appears to be a statistically significant (tiny) amount though!