Range of Beta in Kaiser Window - numpy

Why is the largest usable value of beta in the Kaiser window function about 709? I get errors when I use values of 710 or more (in both Matlab and NumPy).
Seems like a stupid question, but I'm still not able to find an answer...
Thanks in advance!
This is the Kaiser window with a length of M=64 and Beta = 710.

For large x, i0(x) grows like exp(x), and exp(x) overflows in double precision once x exceeds about 709.78.
If you are programming in Python and you don't mind the dependency on SciPy, you can use the exponentially scaled Bessel function scipy.special.i0e to compute the required ratio of Bessel functions in a way that avoids the overflow:
In [46]: import numpy as np

In [47]: from scipy.special import i0e

In [48]: def i0_ratio(x, y):
    ...:     return i0e(x)*np.exp(x - y)/i0e(y)
    ...:
Verify that the function returns the same value as np.i0(x)/np.i0(y):
In [49]: np.i0(3)/np.i0(4)
Out[49]: 0.4318550956673735
In [50]: i0_ratio(3, 4)
Out[50]: 0.43185509566737346
An example where the naive implementation overflows, but i0_ratio does not:
In [51]: np.i0(650)/np.i0(720)
/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/numpy/lib/function_base.py:3359: RuntimeWarning: overflow encountered in exp
return exp(x) * _chbevl(32.0/x - 2.0, _i0B) / sqrt(x)
Out[51]: 0.0
In [52]: i0_ratio(650, 720)
Out[52]: 4.184118428217628e-31
In Matlab, to get the same scaled Bessel function as i0e(x), you can use besseli(0, x, 1).
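For completeness, here is a hedged sketch of how the same i0e trick could be used to evaluate the Kaiser window itself for a beta that would overflow np.i0. It uses the standard Kaiser definition w[n] = i0(beta*sqrt(1 - (2n/(M-1) - 1)**2)) / i0(beta); kaiser_safe is a name made up here, not a NumPy or SciPy function.

import numpy as np
from scipy.special import i0e

def kaiser_safe(M, beta):
    # Kaiser window computed via the exponentially scaled i0e to avoid overflow:
    # i0(arg)/i0(beta) == i0e(arg) * exp(arg - beta) / i0e(beta), with arg <= beta.
    n = np.arange(M)
    arg = beta * np.sqrt(1.0 - (2.0 * n / (M - 1) - 1.0) ** 2)
    return i0e(arg) * np.exp(arg - beta) / i0e(beta)

w = kaiser_safe(64, 710.0)   # finite values, no overflow warning

For moderate beta this should agree with np.kaiser(M, beta) up to rounding; the edge samples simply underflow to 0 for very large beta, which is the correct limiting value.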

Use of plt.plot vs plt.scatter with two variables (x and f(x,y))

I am new to Python and Stack Overflow, so please bear with me.
I was trying to plot using plt.plot and plt.scatter. The former works perfectly well, while the latter does not. Below is the relevant part of the code:
import numpy as np
import matplotlib.pyplot as plt

def vis_cal(u, a):
    return np.exp(2*np.pi*1j*u*np.cos(a))

u = np.array([[1, 2, 3, 4]])
u = u.reshape((4, 1))
a = np.array([[-np.pi, -np.pi/6]])

plt.figure(figsize=(10, 8))
plt.xlabel("Baseline")
plt.ylabel("Vij (Visibility)")
plt.scatter(u, vis_cal(u, a), 'o', color='blue', label="Vij_ind")
plt.legend(loc="lower left")
plt.show()
This returns an error: ValueError: x and y must be the same size
My questions here are
Why does the difference in array sizes not matter to plt.plot, but does matter to plt.scatter?
Does this mean that if I want to use plt.scatter I always need to make sure the arrays have the same size, and otherwise fall back to plt.plot?
Thank you very much
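A minimal sketch of one way to satisfy scatter's same-size requirement, assuming the intent is one point per (u, a) pair and that the magnitude of the complex visibility is what gets plotted (both of these are assumptions, not stated above):

import numpy as np
import matplotlib.pyplot as plt

def vis_cal(u, a):
    return np.exp(2 * np.pi * 1j * u * np.cos(a))

u = np.array([1.0, 2.0, 3.0, 4.0]).reshape(4, 1)
a = np.array([[-np.pi, -np.pi / 6]])

v = vis_cal(u, a)                      # broadcasts to shape (4, 2)
uu = np.broadcast_to(u, v.shape)       # give x the same shape as y

plt.scatter(uu.ravel(), np.abs(v).ravel(), marker='o', color='blue', label="Vij_ind")
plt.legend(loc="lower left")
plt.show()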

How to control the display precision of a NumPy float64 scalar?

I'm writing a teaching document that uses lots of examples of Python code and includes the resulting numeric output. I'm working from inside IPython and a lot of the examples use NumPy.
I want to avoid print statements, explicit formatting or type conversions. They clutter the examples and detract from the principles I'm trying to explain.
What I know:
From IPython I can use %precision to control the displayed precision of any float results.
I can use np.set_printoptions() to control the displayed precision of elements within a NumPy array.
What I'm looking for is a way to control the displayed precision of a NumPy float64 scalar, which doesn't respond to either of the above. Such scalars get returned by a lot of NumPy functions.
>>> x = some_function()
Out[2]: 0.123456789
>>> type(x)
Out[3]: numpy.float64
>>> %precision 2
Out[4]: '%.2f'
>>> x
Out[5]: 0.123456789
>>> float(x) # that precision works for regular floats
Out[6]: 0.12
>>> np.set_printoptions(precision=2)
>>> x # but doesn't work for the float64
Out[8]: 0.123456789
>>> np.r_[x] # does work if it's in an array
Out[9]: array([0.12])
What I want is
>>> # some formatting command
>>> x = some_function() # that returns a float64 = 0.123456789
Out[2]: 0.12
but I'd settle for:
a way of telling NumPy to give me plain Python float scalars by default, rather than float64.
a way of telling IPython how to handle a float64, kind of like what I can do with _repr_pretty_ for my own classes.
IPython has formatters (core/formatters.py) which contain a dict that maps a type to a format method. There seems to be some knowledge of NumPy in the formatters but not for the np.float64 type.
There are a bunch of formatters, for HTML, LaTeX etc. but text/plain is the one for consoles.
We first get the IPython formatter for console text output:
plain = get_ipython().display_formatter.formatters['text/plain']
and then register a formatter for the float64 type. We reuse the formatter that already exists for float, since it already knows about %precision:
plain.for_type(np.float64, plain.lookup_by_type(float))
Now
In [26]: a = float(1.23456789)
In [28]: b = np.float64(1.23456789)
In [29]: %precision 3
Out[29]: '%.3f'
In [30]: a
Out[30]: 1.235
In [31]: b
Out[31]: 1.235
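Putting the two lines together, a minimal sketch of how this registration could be done once at startup (the file path is an assumption; any IPython startup script would do):

# e.g. ~/.ipython/profile_default/startup/50-float64-precision.py (path is an assumption)
import numpy as np
from IPython import get_ipython

ip = get_ipython()
if ip is not None:
    plain = ip.display_formatter.formatters['text/plain']
    # Reuse the existing float formatter, which already respects %precision.
    plain.for_type(np.float64, plain.lookup_by_type(float))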
While digging through the implementation I also found that %precision calls np.set_printoptions() with a suitable format string. I didn't know it did this, and it is potentially problematic if the user has already set the print options themselves. Following the example above
In [32]: c = np.r_[a, a, a]
In [33]: c
Out[33]: array([1.235, 1.235, 1.235])
we see it is doing the right thing for array elements.
I can do this formatter initialisation explicitly in my own code, but a better fix might be to modify IPython's core/formatters.py around line 677
@default('type_printers')
def _type_printers_default(self):
    d = pretty._type_pprinters.copy()
    d[float] = lambda obj, p, cycle: p.text(self.float_format % obj)
    # suggested "fix"
    if 'numpy' in sys.modules:
        d[numpy.float64] = lambda obj, p, cycle: p.text(self.float_format % obj)
    # end suggested fix
    return d
to also handle np.float64 when NumPy has been imported. Happy for feedback on this; if I feel brave I might submit a PR.

NumPy vectorization with integration

I have a vector ws and wish to make another vector of the same length whose k-th component is the integral from -1 to 1 of f(x, w_k) * log(sum over j of f(x, w_j)) dx, with f as defined in the code below.
The question is: how can we vectorize this for speed? NumPy vectorize() is actually a for loop, so it doesn't count.
Veedrac pointed out that "There is no way to apply a pure Python function to every element of a NumPy array without calling it that many times". Since I'm using NumPy functions rather than "pure Python" ones, I suppose it's possible to vectorize, but I don't know how.
import numpy as np
from scipy.integrate import quad

ws = 2 * np.random.random(10) - 1
n = len(ws)
integrals = np.empty(n)

def f(x, w):
    if w < 0: return np.abs(x * w)
    else: return np.exp(x) * w

def temp(x):
    return np.array([f(x, w) for w in ws]).sum()

def integrand(x, w):
    return f(x, w) * np.log(temp(x))

## Python for loop
for k in range(n):
    integrals[k] = quad(integrand, -1, 1, args=ws[k])[0]

## NumPy vectorize
integrals = np.vectorize(quad)(integrand, -1, 1, args=ws)[0]
On a side note, is a Cython for loop always faster than NumPy vectorization?
The function quad executes an adaptive algorithm, which means the computations it performs depend on the specific thing being integrated. This cannot be vectorized in principle.
In your case, a for loop of length 10 is a non-issue. If the program takes long, it's because integration takes long, not because you have a for loop.
When you absolutely need to vectorize integration (not in the example above), use a non-adaptive method, with the understanding that precision may suffer. These can be applied directly to a 2D NumPy array obtained by evaluating all of your functions on some regularly spaced 1D array (a linspace), as sketched after the list below. You'll have to choose the linspace yourself, since the methods aren't adaptive.
numpy.trapz is the simplest and least precise
scipy.integrate.simps is equally easy to use and more precise (Simpson's rule requires an odd number of samples, but the method works around having an even number, too).
scipy.integrate.romb is in principle of higher accuracy than Simpson (for smooth data) but it requires the number of samples to be 2**n+1 for some integer n.
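For the example in the question, a minimal sketch of this non-adaptive, fully vectorized approach using numpy.trapz on a fixed grid (the grid size 1001 is an arbitrary choice, not from the question):

import numpy as np

ws = 2 * np.random.random(10) - 1
x = np.linspace(-1, 1, 1001)                  # fixed grid, chosen by hand

# f(x, w) for every (w, x) pair -> shape (len(ws), len(x))
fx = np.where(ws[:, None] < 0,
              np.abs(x * ws[:, None]),
              np.exp(x) * ws[:, None])

integrand = fx * np.log(fx.sum(axis=0))       # log term broadcasts over rows
integrals = np.trapz(integrand, x, axis=1)    # one integral per w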
@zaq's answer focusing on quad is spot on, so I'll look at some other aspects of the problem.
In a recent answer, https://stackoverflow.com/a/41205930/901925, I argue that vectorize is of most value when you need to apply the full broadcasting mechanism to a function that only takes scalar values. Your quad qualifies as taking scalar inputs, but you are only iterating over one array, ws. The x that is passed on to your functions is generated by quad itself. quad and integrand are still Python functions, even if they use numpy operations.
Cython improves low-level iteration, stuff that it can convert to C code. Your primary iteration is at a high level, calling an imported function, quad. Cython can't touch or rewrite that.
You might be able to speed up integrand (and on down) with cython, but first focus on getting the most speed from that with regular numpy code.
def f(x, w):
    if w < 0: return np.abs(x * w)
    else: return np.exp(x) * w
With the test if w < 0, w must be a scalar. Can f be written so that it works with an array w? If so, then
np.array([f(x, w) for w in ws]).sum()
could be rewritten as
fn(x, ws).sum()
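A hedged sketch of such an array-aware version (fn is a name introduced here, not from the question):

import numpy as np

def fn(x, ws):
    # Accepts scalar x and an array of weights ws; both np.where branches broadcast.
    ws = np.asarray(ws)
    return np.where(ws < 0, np.abs(x * ws), np.exp(x) * ws)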
Alternatively, since both x and w are scalars, you might get a bit of a speed improvement by using math.exp etc. instead of np.exp. The same goes for log and abs.
I'd try to write f(x,w) so it takes arrays for both x and w, returning a 2d result. If so, then temp and integrand would also work with arrays. Since quad feeds a scalar x, that may not help here, but with other integrators it could make a big difference.
If f(x,w) can be evaluated on a regular n-by-10 grid of x = np.linspace(-1, 1, n) and ws, then an integral (of sorts) just requires a couple of summations over that space.
You can use quadpy for fully vectorized computation. You'll have to adapt your function to allow for vector inputs first, but that is done rather easily:
import numpy as np
import quadpy

np.random.seed(0)
ws = 2 * np.random.random(10) - 1

def f(x):
    out = np.empty((len(ws), *x.shape))
    out0 = np.abs(np.multiply.outer(ws, x))
    out1 = np.multiply.outer(ws, np.exp(x))
    out[ws < 0] = out0[ws < 0]
    out[ws >= 0] = out1[ws >= 0]
    return out

def integrand(x):
    return f(x) * np.log(np.sum(f(x), axis=0))

val, err = quadpy.quad(integrand, -1, +1, epsabs=1.0e-10)
print(val)
[0.3266534 1.44001826 0.68767868 0.30035222 0.18011948 0.97630376
0.14724906 2.62169217 3.10276876 0.27499376]

Enforcing compatibility between numpy 1.8 and 1.9 nansum?

I have code that needs to behave identically independent of numpy version, but the underlying np.nansum function has changed behavior such that np.nansum([np.nan,np.nan]) is 0.0 in 1.9 and NaN in 1.8. The <=1.8 behavior is the one I would prefer, but the more important thing is that my code be robust against the numpy version.
The tricky thing is, the code applies an arbitrary numpy function (generally, a np.nan[something] function) to an ndarray. Is there any way to force the new or old numpy nan[something] functions to conform to the old or new behavior shy of monkeypatching them?
A possible solution I can think of is something like outarr[np.allnan(inarr, axis=axis)] = np.nan, but there is no np.allnan function. If this is the best approach, is the best implementation np.all(np.isnan(arr), axis=axis) (which would require only supporting numpy >= 1.7, but that's probably OK)?
In Numpy 1.8, nansum was defined as:
a, mask = _replace_nan(a, 0)
if mask is None:
    return np.sum(a, axis=axis, dtype=dtype, out=out, keepdims=keepdims)
mask = np.all(mask, axis=axis, keepdims=keepdims)
tot = np.sum(a, axis=axis, dtype=dtype, out=out, keepdims=keepdims)
if np.any(mask):
    tot = _copyto(tot, np.nan, mask)
    warnings.warn("In Numpy 1.9 the sum along empty slices will be zero.",
                  FutureWarning)
return tot
In Numpy 1.9, it is:
a, mask = _replace_nan(a, 0)
return np.sum(a, axis=axis, dtype=dtype, out=out, keepdims=keepdims)
I don't think there is a way to make the new nansum behave the old way, but given that the original nansum code isn't that long, can you just include a copy of that code (without the warning) if you care about preserving the pre-1.9 behavior?
Note that _copyto can be imported from numpy.lib.nanfunctions.
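If copying the old implementation feels too heavy, a minimal sketch of a version-independent wrapper built only from public functions (nansum_compat is a name introduced here; it assumes a floating-point input so NaN can be written into the result):

import numpy as np

def nansum_compat(a, axis=None):
    # Restore the <= 1.8 behaviour: an all-NaN slice sums to NaN, not 0.
    a = np.asarray(a, dtype=float)
    tot = np.nansum(a, axis=axis)
    all_nan = np.all(np.isnan(a), axis=axis)
    return np.where(all_nan, np.nan, tot)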

Specify the spherical covariance in numpy's multivariate_normal random sampling

In the numpy manual, it is said:
Instead of specifying the full covariance matrix, popular approximations include:
Spherical covariance (cov is a multiple of the identity matrix)
Has anybody ever specified a spherical covariance? I am trying to make it work to avoid building the full covariance matrix, which uses too much memory.
If you just have a diagonal covariance matrix, it is usually easier (and more efficient) to just scale standard normal variates yourself instead of using multivariate_normal().
>>> import numpy as np
>>> stdevs = np.array([3.0, 4.0, 5.0])
>>> x = np.random.standard_normal([100, 3])
>>> x.shape
(100, 3)
>>> x *= stdevs
>>> x.std(axis=0)
array([ 3.23973255, 3.40988788, 4.4843039 ])
While @RobertKern's approach is correct, you can let numpy handle all of that for you, as np.random.normal will broadcast over multiple means and standard deviations:
>>> np.random.normal(0, [1,2,3])
array([ 0.83227999, 3.40954682, -0.01883329])
To get more than a single random sample, you have to give it an appropriate size:
>>> x = np.random.normal(0, [1, 2, 3], size=(1000, 3))
>>> np.std(x, axis=0)
array([ 1.00034817, 2.07868385, 3.05475583])
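For the strictly spherical case the question asks about (cov equal to sigma**2 times the identity), the same idea reduces to a single scalar scale, so the full covariance matrix never needs to exist. A minimal sketch, where the mean, sigma, and sizes are example values chosen here:

import numpy as np

dim = 1000                       # would otherwise need a 1000 x 1000 covariance matrix
mean = np.zeros(dim)
sigma = 2.5                      # spherical covariance is sigma**2 * I

samples = mean + sigma * np.random.standard_normal((100, dim))
print(samples.shape)             # (100, 1000)
print(samples.std())             # close to sigma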