undefined symbol: clapack_sgesv - numpy

I have this little code:
from numpy import *
from scipy import signal, misc
import matplotlib.pyplot as plt
path="~/pics/"
band_1 = misc.imread(path + "foo.tif");
H = array((1/2.0, 1/4.0, 1/2.0));
signal.convolve2d(band_1.flatten(), H)
plt.figure()
plt.imshow(band_1)
plt.show()
Then I execute it with python foo.py and it throws this error:
Traceback (most recent call last):
File "foo.py", line 2, in <module>
from scipy import signal
File "/usr/lib/python2.6/site-packages/scipy/signal/__init__.py", line 10, in <module>
from filter_design import *
File "/usr/lib/python2.6/site-packages/scipy/signal/filter_design.py", line 12, in <module>
from scipy import special, optimize
File "/usr/lib/python2.6/site-packages/scipy/optimize/__init__.py", line 14, in <module>
from nonlin import *
File "/usr/lib/python2.6/site-packages/scipy/optimize/nonlin.py", line 113, in <module>
from scipy.linalg import norm, solve, inv, qr, svd, lstsq, LinAlgError
File "/usr/lib/python2.6/site-packages/scipy/linalg/__init__.py", line 9, in <module>
from basic import *
File "/usr/lib/python2.6/site-packages/scipy/linalg/basic.py", line 14, in <module>
from lapack import get_lapack_funcs
File "/usr/lib/python2.6/site-packages/scipy/linalg/lapack.py", line 15, in <module>
from scipy.linalg import clapack
ImportError: /usr/lib/python2.6/site-packages/scipy/linalg/clapack.so: undefined symbol: clapack_sgesv
What is wrong? It seems to come from from scipy import signal, but I am not sure.
I have checked other sources and forums, but there is no answer yet:
http://old.nabble.com/scipy.interpolate-imports---%3E-lapack-errors-td30343730.html
http://permalink.gmane.org/gmane.comp.python.scientific.user/27290
Thank you

On Debian, you can use update-alternatives, assuming you have more than the reference implementation installed.
From the Debian wiki:
update-alternatives --config liblapack.so.3
update-alternatives --config libblas.so.3
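As a quick sanity check (my addition, assuming NumPy is installed), you can print the BLAS/LAPACK configuration NumPy was built against. Note this shows what was detected at NumPy's build time; the symlinks managed by update-alternatives decide what gets loaded at run time.

```python
# Sketch: print the BLAS/LAPACK build configuration of the installed
# NumPy. Mismatches between this and the runtime libraries selected by
# update-alternatives are a common source of undefined-symbol errors.
import numpy

numpy.__config__.show()
print(numpy.__version__)
```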

I can't be certain since you didn't specify what distribution you're using, but I ran into the same issue on Gentoo.
/usr/lib and /usr/lib64 contain symlinks to the actual libraries. By default, they link to the reference implementations of libblas, libcblas, and liblapack, which do not export clapack_sgesv and many other routines.
To resolve this in Gentoo:
sudo emerge blas-atlas
eselect blas list
eselect cblas list
sudo eselect blas set X # Grab X from the result of
sudo eselect cblas set X # the 'list' lines above
sudo emerge lapack-atlas
eselect lapack list
sudo eselect lapack set X
sudo emerge --unmerge scipy numpy matplotlib
sudo emerge scipy numpy matplotlib (... whatever else ...)
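After rebuilding, a minimal sanity check (my sketch, not part of the original answer) is to solve a small linear system: numpy.linalg.solve goes through LAPACK's gesv driver, the same family as the missing clapack_sgesv symbol, so a successful solve suggests the LAPACK setup is healthy again.

```python
# Sketch: exercise LAPACK's gesv routine via NumPy. If the underlying
# BLAS/LAPACK libraries are broken, this typically fails at import or
# raises a linkage error rather than returning a solution.
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])
x = np.linalg.solve(A, b)  # solves A @ x == b
print(x)
```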

I got this problem after an upgrade from Ubuntu 12.04 to 12.10. The problem was that I had two versions of scipy installed in /usr/local/lib/python2.7/dist-packages. To fix the problem I did:
sudo apt-get remove python-scipy
sudo rm -fr /usr/local/lib/python2.7/dist-packages/scipy*
sudo apt-get install python-scipy

pipenv installed pip does not work with specified python version

On a Raspberry Pi OS Bullseye system, I tried to install numpy with pipenv using a specific python version and got this:
$ pipenv --python /opt/python/3.7/bin/python3 install numpy --verbose
Creating a virtualenv for this project…
Using /opt/python/3.7/bin/python3 (3.7.9) to create virtualenv…
created virtual environment CPython3.7.9.final.0-32 in 410ms
creator CPython3Posix(dest=/home/pi/.local/share/virtualenvs/deep-dregs-eaJke9eC, clear=False, no_vcs_ignore=False, global=False)
seeder FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=/home/pi/.local/share/virtualenv)
added seed packages: pip==20.3.4, pkg_resources==0.0.0, setuptools==44.1.1, wheel==0.34.2
activators BashActivator,CShellActivator,FishActivator,PowerShellActivator,PythonActivator,XonshActivator
Virtualenv location: /home/pi/.local/share/virtualenvs/deep-dregs-eaJke9eC
Installing numpy…
Installing 'numpy'
$ "/home/pi/.local/share/virtualenvs/deep-dregs-eaJke9eC/bin/pip" install --verbose "numpy" -i https://pypi.org/simple --exists-action w
Error: An error occurred while installing numpy!
Traceback (most recent call last):
File "/home/pi/.local/share/virtualenvs/deep-dregs-eaJke9eC/bin/pip", line 5, in <module>
from pip._internal.cli.main import main
File "/home/pi/.local/share/virtualenvs/deep-dregs-eaJke9eC/lib/python3.7/site-packages/pip/_internal/cli/main.py", line 10, in <module>
from pip._internal.cli.autocompletion import autocomplete
File "/home/pi/.local/share/virtualenvs/deep-dregs-eaJke9eC/lib/python3.7/site-packages/pip/_internal/cli/autocompletion.py", line 9, in <module>
from pip._internal.cli.main_parser import create_main_parser
File "/home/pi/.local/share/virtualenvs/deep-dregs-eaJke9eC/lib/python3.7/site-packages/pip/_internal/cli/main_parser.py", line 7, in <module>
from pip._internal.cli import cmdoptions
File "/home/pi/.local/share/virtualenvs/deep-dregs-eaJke9eC/lib/python3.7/site-packages/pip/_internal/cli/cmdoptions.py", line 23, in <module>
from pip._vendor.packaging.utils import canonicalize_name
ModuleNotFoundError: No module named 'pip._vendor.packaging'
Looking at the verbose output, I see that the path to the pip used by pipenv is /home/pi/.local/share/virtualenvs/deep-dregs-eaJke9eC/bin/pip.
Calling this pip directly indeed leads to the same error:
$ /home/pi/.local/share/virtualenvs/deep-dregs-eaJke9eC/bin/pip --version
Traceback (most recent call last):
File "/home/pi/.local/share/virtualenvs/deep-dregs-eaJke9eC/bin/pip", line 5, in <module>
from pip._internal.cli.main import main
File "/home/pi/.local/share/virtualenvs/deep-dregs-eaJke9eC/lib/python3.7/site-packages/pip/_internal/cli/main.py", line 10, in <module>
from pip._internal.cli.autocompletion import autocomplete
File "/home/pi/.local/share/virtualenvs/deep-dregs-eaJke9eC/lib/python3.7/site-packages/pip/_internal/cli/autocompletion.py", line 9, in <module>
from pip._internal.cli.main_parser import create_main_parser
File "/home/pi/.local/share/virtualenvs/deep-dregs-eaJke9eC/lib/python3.7/site-packages/pip/_internal/cli/main_parser.py", line 7, in <module>
from pip._internal.cli import cmdoptions
File "/home/pi/.local/share/virtualenvs/deep-dregs-eaJke9eC/lib/python3.7/site-packages/pip/_internal/cli/cmdoptions.py", line 23, in <module>
from pip._vendor.packaging.utils import canonicalize_name
ModuleNotFoundError: No module named 'pip._vendor.packaging'
Which python is used in that case? Looking at the shebang line it would seem it's the one I passed to pipenv initially:
$ head -n 1 /home/pi/.local/share/virtualenvs/deep-dregs-eaJke9eC/bin/pip
#!/home/pi/.local/share/virtualenvs/deep-dregs-eaJke9eC/bin/python
$ ls -l /home/pi/.local/share/virtualenvs/deep-dregs-eaJke9eC/bin/python
lrwxrwxrwx 1 pi pi 27 Dec 11 11:00 /home/pi/.local/share/virtualenvs/deep-dregs-eaJke9eC/bin/python -> /opt/python/3.7/bin/python3
But when I explicitly use that exact interpreter there is no error:
$ /opt/python/3.7/bin/python3 /home/pi/.local/share/virtualenvs/deep-dregs-eaJke9eC/bin/pip --version
pip 20.1.1 from /opt/python/3.7/lib/python3.7/site-packages/pip (python 3.7)
The difference seems to be that in the case it goes wrong, the pip installation in /home/pi/.local/share/virtualenvs/deep-dregs-eaJke9eC/lib/python3.7/site-packages/pip is used while in the working case it's the one in /opt/python/3.7/lib/python3.7/site-packages/pip.
But why? My understanding of the shebang is that it points to the interpreter to be used. In the working example, all I do is call that interpreter explicitly myself. Why is there a difference in behaviour?
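For what it's worth, the interpreter binary alone doesn't determine the module search path: CPython derives sys.prefix from the path it was launched as (a pyvenv.cfg next to that executable redirects it into the venv), which is why running through the venv's symlink and running the target binary directly can end up using different site-packages. A small stdlib-only sketch to inspect this:

```python
# Sketch: how CPython decides which site-packages to use.
# sys.executable is the path Python was launched as; sys.prefix is
# derived from it (a pyvenv.cfg next to the executable points it at a
# venv); sys.base_prefix is the base installation behind a venv.
import sys

print(sys.executable)   # e.g. the venv symlink vs. /opt/python/3.7/bin/python3
print(sys.prefix)       # the environment whose site-packages is used
print(sys.base_prefix)  # differs from sys.prefix inside a venv
```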
And also, why did pipenv even install its own pip in /home/pi/.local/share/virtualenvs/deep-dregs-eaJke9eC/lib/python3.7/site-packages/pip ? Why didn't it reuse the pip that comes with the python version I passed? And if that's just how pipenv works, why is its pip broken? What's going on? And how can I fix it?
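As a point of comparison (my example, not part of the original question), the stdlib venv module behaves the same way: each new environment is seeded with its own copy of pip via ensurepip instead of reusing the base interpreter's.

```python
# Demonstration (assumes a Unix "lib/pythonX.Y/site-packages" layout):
# a freshly created venv gets its own private pip package, seeded by
# ensurepip. virtualenv/pipenv seed their environments analogously.
import pathlib
import tempfile
import venv

env_dir = tempfile.mkdtemp()
venv.EnvBuilder(with_pip=True).create(env_dir)

site_packages = next(pathlib.Path(env_dir, "lib").glob("python*/site-packages"))
print(sorted(p.name for p in site_packages.iterdir()))  # includes 'pip'
```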
EDIT
When I use my system Python 3.9 installation, it works fine.

ModuleNotFoundError: No module named 'pandas._libs.interval' | Installed pandas from git in Docker container

This is not a duplicate of existing questions because:
I'm contributing to the pandas repository itself.
I've installed pandas using the git repo and not pip.
I've used a Docker container as suggested by pandas to create the development environment.
The pandas installation is successful and no files are missing. I've manually verified that pandas._libs.interval is present.
When I tried to import from pandas, I'd get this error:
ImportError while loading conftest '/workspaces/pandas/pandas/conftest.py'.
../../../__init__.py:22: in <module>
from pandas.compat import is_numpy_dev as _is_numpy_dev
../../../compat/__init__.py:15: in <module>
from pandas.compat.numpy import (
../../../compat/numpy/__init__.py:7: in <module>
from pandas.util.version import Version
../../../util/__init__.py:1: in <module>
from pandas.util._decorators import ( # noqa
../../../util/_decorators.py:14: in <module>
from pandas._libs.properties import cache_readonly # noqa
../../../_libs/__init__.py:13: in <module>
from pandas._libs.interval import Interval
E ModuleNotFoundError: No module named 'pandas._libs.interval'
The solution is to rebuild the C extensions:
1. python setup.py clean (optional; use if step 2 alone doesn't work)
2. python setup.py build_ext -j 4
Credits: @MarcoGorelli from the pandas community on Gitter.
More on why this solution works:
I suspect that while Docker was building the remote container, there were some issues due to an unreliable internet connection.
As all modules were indeed present, one of the only possibilities was that they couldn't be accessed by Python. The most plausible reason is an issue in the C compilation step, something related to Cython (interval is a .pyx file).
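A hedged, stdlib-only way to check whether a module actually resolves to a compiled extension rather than pure Python (the generic mechanism, not pandas-specific; _socket and json are stand-ins here):

```python
# Sketch: report whether a module resolves to a compiled C extension
# (.so/.pyd). This is the kind of check that distinguishes a properly
# built Cython module from one whose build step failed.
import importlib.machinery
import importlib.util

def is_compiled_extension(module_name: str) -> bool:
    spec = importlib.util.find_spec(module_name)
    if spec is None or spec.origin is None:
        return False
    return any(spec.origin.endswith(suffix)
               for suffix in importlib.machinery.EXTENSION_SUFFIXES)

print(is_compiled_extension("_socket"))  # stdlib C extension on CPython
print(is_compiled_extension("json"))     # pure-Python package
```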
Also see: https://pandas.pydata.org/docs/development/contributing_environment.html#creating-a-python-environment

from numba import cuda, numpy_support and ImportError: cannot import name 'numpy_support' from 'numba'

I am switching from pandas to cuDF to speed up aggregation and reduce processing time. I found a library that provides a pandas-like API on the GPU:
cuDF: https://github.com/rapidsai/cudf
When I ran the command below to install it in my project, it gave an error. I also tried many versions of numba.
conda install -c rapidsai -c nvidia -c numba -c conda-forge \
cudf=0.13 python=3.7 cudatoolkit=10.2
Traceback
Traceback (most recent call last):
File "/home/khawar/deepface/tests/Ensemble-Face-Recognition.py", line 5, in <module>
import cudf
File "/home/khawar/anaconda3/envs/deepface/lib/python3.7/site-packages/cudf/__init__.py", line 7, in <module>
from cudf import core, datasets
File "/home/khawar/anaconda3/envs/deepface/lib/python3.7/site-packages/cudf/core/__init__.py", line 3, in <module>
from cudf.core import buffer, column
File "/home/khawar/anaconda3/envs/deepface/lib/python3.7/site-packages/cudf/core/column/__init__.py", line 1, in <module>
from cudf.core.column.categorical import CategoricalColumn # noqa: F401
File "/home/khawar/anaconda3/envs/deepface/lib/python3.7/site-packages/cudf/core/column/categorical.py", line 11, in <module>
import cudf._libxx as libcudfxx
File "/home/khawar/anaconda3/envs/deepface/lib/python3.7/site-packages/cudf/_libxx/__init__.py", line 5, in <module>
from . import (
File "cudf/_libxx/aggregation.pxd", line 9, in init cudf._libxx.reduce
File "cudf/_libxx/aggregation.pyx", line 11, in init cudf._libxx.aggregation
File "/home/khawar/anaconda3/envs/deepface/lib/python3.7/site-packages/cudf/utils/cudautils.py", line 7, in <module>
from numba import cuda, numpy_support
ImportError: cannot import name 'numpy_support' from 'numba' (/home/khawar/anaconda3/envs/deepface/lib/python3.7/site-packages/numba/__init__.py)
When trying to install cuDF 0.13, conda is apparently finding a numba version that is incompatible with cuDF 0.13.
cuDF 0.13 is out of date. The current stable release is 0.17 and the nightly is 0.18. We'll update the README, as it should provide installation instructions for the current version.
We recommend creating a fresh conda environment. Please try the following conda install command, found here:
conda create -n rapids-0.17 -c rapidsai -c nvidia -c conda-forge \
-c defaults rapids-blazing=0.17 python=3.7 cudatoolkit=10.2

Python3: Namespace AppIndicator3 not available

OS: Kubuntu 18.04
I have a Python program (program.py) that has this at the beginning:
import shlex
import sys
import notify2
import os
import gi
gi.require_version("Gtk", "3.0")
from gi.repository import Gtk
gi.require_version("AppIndicator3", "0.1")
from gi.repository import AppIndicator3
When I run it, this is what happens:
$ python3 /path/to/program.py
Traceback (most recent call last):
File "/path/to/program.py", line 34, in <module>
gi.require_version('AppIndicator3', '0.1')
File "/home/linuxbrew/.linuxbrew/Cellar/python@3.8/3.8.5/lib/python3.8/site-packages/gi/__init__.py", line 129, in require_version
raise ValueError('Namespace %s not available' % namespace)
ValueError: Namespace AppIndicator3 not available
Relevant info:
$ python3 --version
Python 3.8.5
$ pip3 freeze
dbus-python==1.2.16
docutils==0.16
formiko==1.4.3
libxml2-python==2.9.10
meson==0.55.1
notify2==0.3.1
pycairo==1.19.1
PyGObject==3.36.1
$ brew install gobject-introspection
Warning: gobject-introspection 1.64.1_2 is already installed and up-to-date
To reinstall 1.64.1_2, run `brew reinstall gobject-introspection`
I also have the following installed:
libappindicator3-1 is already the newest version (12.10.1+18.04.20180322.1-0ubuntu1).
gir1.2-appindicator3-0.1 is already the newest version (12.10.1+18.04.20180322.1-0ubuntu1).
python3-gi is already the newest version (3.26.1-2ubuntu1).
What might be keeping AppIndicator3 from being found?
Try taking off line 9 (from gi.repository import AppIndicator3) and importing it directly.

Sklearn will not run/compile due to numpy errors

I would not be posting this question if I had not researched this problem thoroughly. I run python server.py (it uses sklearn), which gives me:
Traceback (most recent call last):
File "server.py", line 34, in <module>
from lotusApp.lotus import lotus
File "/Users/natumyers/Desktop/.dev/qq/lotusApp/lotus.py", line 2, in <module>
from sklearn import datasets
File "/Library/Python/2.7/site-packages/sklearn/__init__.py", line 57, in <module>
from .base import clone
File "/Library/Python/2.7/site-packages/sklearn/base.py", line 11, in <module>
from .utils.fixes import signature
File "/Library/Python/2.7/site-packages/sklearn/utils/__init__.py", line 10, in <module>
from .murmurhash import murmurhash3_32
File "numpy.pxd", line 155, in init sklearn.utils.murmurhash (sklearn/utils/murmurhash.c:5029)
ValueError: numpy.dtype has the wrong size, try recompiling
I next tried everything I could; nothing helped. I ran:
sudo -H pip uninstall numpy
sudo -H pip uninstall pandas
sudo -H pip install numpy
sudo -H pip install pandas
All of which gave me errors such as OSError: [Errno 1] Operation not permitted.
I tried sudo -H easy_install --upgrade numpy and got a list of errors like:
_configtest.c:13:5: note: 'ctanl' is a builtin with type '_Complex long double (_Complex long double)'
_configtest.c:14:5: warning: incompatible redeclaration of library function 'cpowl' [-Wincompatible-library-redeclaration]
int cpowl (void);
^
Edit: Perhaps part of the issue was that I wasn't running in the virtual environment. So I got that going, and when I type python server.py, I get this error:
from sklearn import datasets
ImportError: No module named sklearn
sudo -H pip install -U scikit-learn doesn't install because of another error....
I was using deprecated Python. I updated everything to Python 3 and used pip3.
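A minimal guard (my sketch, not from the original answer) to fail fast when a script is accidentally run under a deprecated Python 2 interpreter, instead of hitting confusing ABI/import errors like the ones above:

```python
# Sketch: abort early with a clear message if the wrong interpreter
# version is used, before any third-party imports can fail cryptically.
import sys

if sys.version_info < (3,):
    raise RuntimeError("Python 3 required, found " + sys.version.split()[0])

print("Running under Python %d.%d" % sys.version_info[:2])
```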