I am running ActiveState Python 3.2, and getting this cryptic error:
D:\code>python
ActivePython 3.2.1.2 (ActiveState Software Inc.) based on
Python 3.2.1 (default, Jul 18 2011, 14:31:09) [MSC v.1500 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy as np
>>> x = np.array([[1, 1], [2, 1], [3, 1]])
>>> y = np.array([3, 4, 5])
>>> be = np.linalg.lstsq(x,y)
MKL ERROR: Parameter 5 was incorrect on entry to DGELSD
MKL ERROR: Parameter 5 was incorrect on entry to DGELSD
>>>
Does anyone know what might be going on?
There seems to be no answer to this. The best I can do is provide the link to my bug report to ActiveState, which is now being looked into.
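In the meantime, a hedged diagnostic sketch (an assumption on my part, not a confirmed fix): solving the same least-squares problem via the normal equations avoids LAPACK's DGELSD entirely, which at least shows whether only that MKL code path is broken in this build.
import numpy as np

x = np.array([[1., 1.], [2., 1.], [3., 1.]])
y = np.array([3., 4., 5.])

# Normal equations: (x^T x) b = x^T y; fine here since x has full column rank
b = np.linalg.solve(np.dot(x.T, x), np.dot(x.T, y))
print(b)  # expected [1. 2.], i.e. y = 1*t + 2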
Given the following code:
import numpy as np
c = [
[1, 2, 3],
[4, 5, 6],
[7, 8, 9]
]
c = np.array(c)
print((c * c.transpose()).prod())
On my Windows machine it returns -1462091776 (not sure how it got a negative from all those positives).
On Ubuntu it returns 131681894400.
Anyone know what's going on here?
Edit: Apparently this is an overflow problem (thanks @rafaelc!).
But it is reproducible (also thanks to @richardec for testing that).
So now the question becomes: is this a bug I should report? Who do I report it to?
I have enough comments that I think an "answer" is warranted.
What happened?
Not sure how it got a negative from all those positives
As @rafaelc points out, you ran into an integer overflow. You can read more details at the Wikipedia link that was provided.
What caused the overflow?
According to this thread, numpy uses the operating system's C long type as the default dtype for integers. So when you write this line of code:
c = np.array(c)
The dtype defaults to numpy's default integer data type, which is the operating system's C long. A long in Microsoft's C implementation for Windows is 4 bytes (4 bytes × 8 bits/byte = 32 bits), so your dtype defaults to a 32-bit integer.
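You can check the default on each machine directly; a minimal sketch (assuming only that numpy is installed on both):
import numpy as np

# Prints the default integer dtype: typically int32 on Windows, int64 on Linux
print(np.array([1, 2, 3]).dtype)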
Why did this calculation overflow?
In [1]: import numpy as np
In [2]: np.iinfo(np.int32)
Out[2]: iinfo(min=-2147483648, max=2147483647, dtype=int32)
The largest number a 32-bit, signed integer data type can represent is 2147483647. If you take a look at your product across just one axis:
In [5]: c * c.T
Out[5]:
array([[ 1, 8, 21],
[ 8, 25, 48],
[21, 48, 81]])
In [6]: (c * c.T).prod(axis=0)
Out[6]: array([ 168, 9600, 81648])
In [7]: 168 * 9600 * 81648
Out[7]: 131681894400
You can see that 131681894400 >> 2147483647 (in mathematics, the notation >> means "is much, much larger"). Since 131681894400 is much larger than the maximum integer the 32-bit long can represent, an overflow occurs.
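You can actually reproduce both of your results on either platform by forcing the accumulator dtype yourself; a small sketch:
import numpy as np

c = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
p = c * c.T
print(p.prod(dtype=np.int32))  # 32-bit accumulation wraps around: -1462091776
print(p.prod(dtype=np.int64))  # 64-bit accumulation: 131681894400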
But it's fine in Linux
On 64-bit Linux, a long is 8 bytes (8 bytes × 8 bits/byte = 64 bits). Why? Here's an SO thread that discusses this in the comments.
"Is it a bug?"
No, although it's pretty annoying, I'll admit.
For what it's worth, it's usually a good idea to be explicit about your data types, so next time:
c = np.array(c, dtype='int64')
# or
c = np.array(c, dtype=np.int64)
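With the dtype pinned, the original snippet gives the same answer on Windows and Linux alike:
import numpy as np

c = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=np.int64)
print((c * c.transpose()).prod())  # 131681894400 on both platforms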
Who do I report a bug to?
Again, this isn't a bug, but if it were, you'd open an issue on the NumPy GitHub repository (where you can also peruse the source code). Somewhere in there is proof of how numpy uses the operating system's default C long, but I don't have it in me to go digging around to find it.
I am working on Ubuntu 18.04, a Linux distro.
When I use Python I have no problem producing my graphs, tables, and plots.
When I switch to IPython, instead of the expected plot I get
Figure size 432x288 with 1 Axes
This is the script I am using, from Dr. Hilpisch's Python for Finance (O'Reilly):
Python 3.8.5 (default, Sep 4 2020, 07:30:14)
Type 'copyright', 'credits' or 'license' for more information
IPython 7.18.1 -- An enhanced Interactive Python. Type '?' for help.
In [1]: import matplotlib as mpl
In [2]: mpl.__version__
Out[2]: '3.3.2'
In [3]: import matplotlib.pyplot as plt
In [4]: plt.style.use('seaborn')
In [5]: mpl.rcParams['font.family'] = 'serif'
In [6]: %matplotlib inline
In [7]: import numpy as np
In [8]: np.random.seed(1000)
In [9]: y = np.random.standard_normal(20)
In [10]: x = np.arange(len(y))
In [11]: plt.plot(x, y);
Figure size 432x288 with 1 Axes
Thank you for your help.
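(A hedged note, assuming the session above is a terminal IPython rather than a Jupyter notebook: %matplotlib inline can only render images inside a notebook, so in a plain terminal the figure's text repr is printed instead. A minimal sketch of the usual workaround:)
import matplotlib.pyplot as plt
import numpy as np

np.random.seed(1000)
y = np.random.standard_normal(20)
plt.plot(np.arange(len(y)), y)
plt.show()  # opens a window when a GUI backend (e.g. TkAgg, Qt5Agg) is active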
I'm trying to accelerate some numpy code with cupy, but I'm getting some unexpected results.
I'm running this on a Mac Pro Late 2013, OSX 10.13.6 using a NVIDIA GeForce GTX 1080 Ti.
I have been able to reproduce the issue in IPython, shown below. When computing a norm, the multiplication of the conjugate with itself should give a real number. In numpy this works as expected, but using cupy I end up with an imaginary part.
In [54]: import numpy as np
In [55]: import cupy as cp
In [56]: q = np.arange(4)
In [57]: q.shape=[2,2]
In [58]: q=(0.23+0.33j)*(q+0.43)
In [59]: np.dot(np.conj(q).flatten(),q.flatten())
Out[59]: (3.21975528+0j)
In [60]: q_gpu = cp.asarray(q)
In [61]: cp.dot(cp.conj(q_gpu).flatten(),q_gpu.flatten())
Out[61]: array(3.21975528-1.93612215e-17j)
In [62]: cp.sum(cp.abs(q_gpu)**2)
Out[62]: array(3.21975528)
In [63]: sys.version
Out[63]: '3.7.3 | packaged by conda-forge | (default, Jul 1 2019, 14:38:56) \n[Clang 4.0.1 (tags/RELEASE_401/final)]'
In [64]: sys.version_info
Out[64]: sys.version_info(major=3, minor=7, micro=3, releaselevel='final', serial=0)
I have noticed other inconsistencies in precision between running code in cupy vs. numpy.
What am I doing wrong?
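(A hedged note: the ~1e-17 imaginary part looks like ordinary floating-point rounding noise from a different summation order on the GPU, not a wrong result. A minimal sketch, under that assumption, of getting a real-valued norm by construction:)
import numpy as np
import cupy as cp

q = (0.23 + 0.33j) * (np.arange(4).reshape(2, 2) + 0.43)
q_gpu = cp.asarray(q)

print(cp.vdot(q_gpu, q_gpu).real)  # vdot conjugates its first argument
print(cp.linalg.norm(q_gpu) ** 2)  # norm returns a real dtype throughout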
import numpy as np
import math
print -1/2*np.log2(1/2)-1/2*np.log2(1/2)
prints nan
Can you explain?
Changing the Python version as well: the first is Python 2.7, the second is Python 3.5.
>>> import numpy as np
>>> print(-1/2*np.log2(1/2)-1/2*np.log2(1/2))
nan
>>> print(-1/2*np.log2(1/2)-1/2*np.log2(1/2))
1.0
More information requested...
>>> import numpy as np
>>> print(-1/2*np.log2(1/2)-1/2*np.log2(1/2))
__main__:1: RuntimeWarning: divide by zero encountered in log2
__main__:1: RuntimeWarning: invalid value encountered in double_scalars
nan
Now this can be avoided by floating your terms... the easiest way is to do it directly...
>>> import numpy as np
>>> print(-1/2.*np.log2(1/2.)-1/2.*np.log2(1/2.))
1.0
Same numpy version, just python has changed between 2.7 and 3.5
In Python 2.x, division between ints is floor division, so 1/2 is 0. np.log2(0) returns -inf (hence the divide-by-zero warning), and multiplying that -inf by 0 is what produces the nan (hence the invalid-value warning).
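A quick demonstration of where the nan actually comes from (the same in either Python version, once the zero is there):
import numpy as np

print(np.log2(0))      # -inf, with a divide-by-zero RuntimeWarning
print(0 * np.log2(0))  # nan: 0 * -inf is undefined (invalid-value warning)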
Using python 3:
Python 3.5.2 |Anaconda 4.2.0 (64-bit)| (default, Jul 2 2016, 17:53:06)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux
>>> 1/2
0.5
whereas in python 2:
Python 2.7.12 |Continuum Analytics, Inc.| (default, Jul 2 2016, 17:42:40)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
>>> 1/2
0
>>> 1./2
0.5
>>> from __future__ import division
>>> 1/2
0.5
I have included two ways to get true division in Python 2: using a float (1. instead of 1) or importing division from __future__.
I try to do the following
from scipy import *
from numpy import *
import scipy as s
import numpy as np
import math
import scipy.sparse as l
from plot import Graph3DSolution
import numpy.linalg as lin
currentSol=s.sparse.linalg.inv(I-C)*A*lastSol
I'm leaving out some code, but the issue is this:
Traceback (most recent call last):
File "explict1wave.py", line 62, in <module>
currentSol=s.sparse.linalg.inv(I-C)*A*lastSol
AttributeError: 'module' object has no attribute 'linalg'
Python 2.7.6 |Anaconda 1.9.1 (x86_64)| (default, Jan 10 2014, 11:23:15)
[GCC 4.0.1 (Apple Inc. build 5493)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import scipy
>>> scipy.__version__
'0.14.0'
>>>
I looked up the documentation and it seems these libraries have existed since 0.12. I don't know what the issue is, but I'm sure it's something simple I'm not seeing.
>>> import scipy as s
>>> s.sparse
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no attribute 'sparse'
>>>
>>> from scipy.sparse import linalg
>>> linalg.inv
<function inv at 0x19b1758>
>>>
SciPy does not import its submodules automatically: scipy.sparse.linalg only exists as an attribute after you import it explicitly (import scipy.sparse.linalg or from scipy.sparse import linalg). See the general recommendations for importing functions from scipy.
On a side note, it's best to avoid star imports: from scipy import * and from numpy import * are not recommended and are not needed here. The same goes for import scipy as s.
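A minimal sketch of the fix, with hypothetical stand-ins for the question's I, C, A, and lastSol (which aren't shown):
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as sla  # the submodule must be imported explicitly

n = 4
I = sp.identity(n, format='csc')
C = sp.diags([0.1] * n, format='csc')  # hypothetical stand-in
A = sp.diags([2.0] * n, format='csc')  # hypothetical stand-in
lastSol = np.ones(n)                   # hypothetical stand-in

currentSol = sla.inv(I - C) * A * lastSol  # the failing line, now resolvable
print(currentSol)
For large systems, sla.spsolve(I - C, A * lastSol) is usually preferable to forming the inverse explicitly.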