cblas ImportError after installing sklearn using dokku - numpy

I'm currently using this Heroku buildpack to install numpy and scipy. Then, I am able to just pip install sklearn from my requirements.txt file during the push deployment.
It appears to install fine, but I then run into various import errors at runtime.
They all ultimately yield an ImportError of:
undefined symbol: cblas_dscal
or
undefined symbol: cblas_dasum
Does anyone have any insight on how to resolve this?
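One thing worth checking before digging further: cblas_dscal and cblas_dasum come from the CBLAS interface, so it helps to see which BLAS numpy and scipy were actually built against in the deployed environment. A small diagnostic sketch (assuming numpy and scipy themselves import fine):
# Sketch: print which BLAS/LAPACK libraries numpy and scipy report being
# built against; if no CBLAS provider (ATLAS, OpenBLAS, MKL) is listed,
# sklearn's compiled extensions will fail to resolve cblas_* symbols.
import numpy
import scipy

print("numpy", numpy.__version__, "from", numpy.__file__)
numpy.show_config()
print("scipy", scipy.__version__, "from", scipy.__file__)
scipy.show_config()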

Related

rqt_graph pyqt binding of qt_gui_cpp library

I am using ROS Melodic on Ubuntu 18.04 LTS. I am getting this error:
Could not import "pyqt" bindings of qt_gui_cpp library - so C++ plugins will not be available:
Traceback (most recent call last):
File "/opt/ros/melodic/lib/python2.7/dist-packages/qt_gui_cpp/cpp_binding_helper.py", line 43, in <module>
from . import libqt_gui_cpp_sip
ImportError: dynamic module does not define module export function (PyInit_libqt_gui_cpp_sip)
However, I have installed pyqt5 and pydot manually, and rqt_graph shows up but with this warning. What can I do to get it right? Is this going to be a problem in the future? rqt_graph is loading now, but I don't know whether this will mess things up or leave anything missing from the graph.
Try uninstalling duplicate packages, if any, using:
pip3 uninstall PyQt5-sip PyQt5
Then try importing PyQt5 in python3 to check whether any other version is still present.
If the import is successful, try running the command again:
rosrun rqt_graph rqt_graph
If the error still exists, install:
pip3 install PyQt5==5.12
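Before rerunning rqt_graph, a minimal check in python3 that PyQt5 imports from a single place (a sketch; the version attributes are part of PyQt5.QtCore):
# Sketch: confirm PyQt5 imports, print where it comes from and its
# version, to spot a leftover duplicate installation.
from PyQt5 import QtCore

print("PyQt5 from:", QtCore.__file__)
print("PyQt version:", QtCore.PYQT_VERSION_STR)
print("Qt version:", QtCore.QT_VERSION_STR)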

numpy installation into pypy3 virtual env : `undefined symbol: cblas_sgemm`

I am trying to install numpy into a pypy3 virtualenv, but I'm stuck with this error at import time:
venv_pypy/site-packages/numpy-1.16.0.dev0+1d38e41-py3.5-linux-x86_64.egg/numpy/core/_multiarray_umath.pypy3-60-x86_64-linux-gnu.so: undefined symbol: cblas_sgemm
I am on an up-to-date Arch Linux; numpy works fine with CPython, but I have a project using pandas (which depends on numpy) that I need to test on pypy.
I first tried the recommended method (pip install numpy in the venv), but it didn't work (the install is fine, but the same error appears at execution).
I then tried what is suggested in https://stackoverflow.com/a/14391693/1745291 (linked from Numpy multiarray.so: undefined symbol: cblas_sgemm): since I didn't install ATLAS (an AUR package on Arch that I don't want to install), I tried building with OpenBLAS. Still not working (same error, and the method could be outdated since it's from 2013).
And finally, I tried a build with no accelerated libraries (at least, that is what is claimed), following https://docs.scipy.org/doc/numpy-1.15.0/user/building.html#disabling-atlas-and-other-accelerated-libraries
...but still the same result.
What am I doing wrong?
You can try uninstalling it with pip and installing it from apt (if you are using Ubuntu, etc.).
This approach solved my problem:
pip3 uninstall numpy
sudo apt-get install python3-numpy
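Whichever interpreter you end up using (CPython or the pypy3 venv), a quick way to check that the freshly installed numpy works and which BLAS it was built against (a small sketch):
# Sketch: confirm numpy imports in this interpreter, show its BLAS/LAPACK
# build configuration, and exercise a call that uses BLAS when available;
# a missing CBLAS provider is what produces the undefined cblas_sgemm symbol.
import numpy

print(numpy.__version__, "from", numpy.__file__)
numpy.show_config()

a = numpy.ones((3, 3))
print(numpy.dot(a, a))  # exercises the dot path that uses BLAS when available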

TensorFlow pip installation issue: cannot import name 'descriptor'

I'm seeing the following error when installing TensorFlow:
ImportError: Traceback (most recent call last):
File ".../graph_pb2.py", line 6, in
from google.protobuf import descriptor as _descriptor
ImportError: cannot import name 'descriptor'
This error signals a mismatch between protobuf and TensorFlow versions.
Take the following steps to fix this error:
Uninstall TensorFlow.
Uninstall protobuf (if protobuf is installed).
Reinstall TensorFlow, which will also install the correct protobuf dependency.
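After following those steps, a quick way to confirm the two packages now import together and to see their versions (a sketch; nothing here is specific to one TensorFlow release):
# Sketch: 'cannot import name descriptor' usually means the installed
# protobuf does not match what this tensorflow build expects, so print
# both versions and where they import from.
import tensorflow as tf
from google import protobuf

print("tensorflow", tf.__version__, "from", tf.__file__)
print("protobuf", protobuf.__version__, "from", protobuf.__file__)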
I faced a similar issue; after trial and error, I used the command below to get the program running:
pip install --upgrade --no-deps --force-reinstall tensorflow
This makes sure to uninstall and reinstall the package from scratch. It works!
I would be extra careful before uninstalling/reinstalling other packages such as protobuf. What I think is most likely the issue is a difference in versions. As of writing this, the most recent release of Python is 3.7, while TensorFlow is only compatible up to 3.6.
If you're using a 3rd party distribution like Anaconda, this can get hidden from you. In this case I would recommend creating a new environment in Anaconda, with python 3.6 and then installing tensorflow: https://conda.io/projects/conda/en/latest/user-guide/getting-started.html#managing-python
Try this:
pip uninstall protobuf
brew install protobuf
mkdir -p /Users/alexeibendebury/Library/Python/2.7/lib/python/site-packages
echo 'import site; site.addsitedir("/usr/local/lib/python2.7/site-packages")' >> /Users/alexeibendebury/Library/Python/2.7/lib/python/site-packages/homebrew.pth
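To see which protobuf actually gets imported after the .pth change, a small check for the same Python 2.7 setup (a sketch):
# Sketch: print where the google.protobuf package is loaded from, so you
# can tell whether the Homebrew copy under /usr/local is the one in use.
import google.protobuf as protobuf

print(protobuf.__file__)
print(protobuf.__version__)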

lxml on python-3.3.0 ImportError: undefined symbol: xmlBufContent

I am having a hard time installing lxml (3.1.0) on Python 3.3.0. It installs without errors and I can see the lxml-3.1.0-py3.3-linux-i686.egg in the correct folder (/usr/local/lib/python3.3/site-packages/), but when I try to import etree, I get this:
from lxml import etree
Traceback (most recent call last):
File "", line 1, in
ImportError: /usr/local/lib/python3.3/site-packages/lxml-3.1.0-py3.3-linux-i686.egg/lxml/etree.cpython-33m.so: undefined symbol: xmlBufContent
I did try to install with apt-get, I tried "python3 setup.py install", and I tried via easy_install. I should mention that I have 3 Python versions installed (2.7, 3.2.3 and 3.3.0), but I am too much of a beginner to tell whether this has anything to do with it.
I did search all over, but I could not find any solution to this.
Any help is greatly appreciated!
best,
Uhru
You should probably mention the specific operating system you're trying to install on, but I'll assume it's some form of Linux, perhaps Ubuntu or Debian since you mention apt-get.
The error message you mention is typical of lxml when the libxml2 and/or libxslt libraries it needs to link against are not installed. For whatever reason, the install procedure does not detect when these are missing and can give the impression that the install has succeeded even though those dependencies are not satisfied.
If you issue apt-get install libxml2 libxml2-dev libxslt libxslt-dev, that should eliminate this error.
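Once the libraries and dev headers are installed and lxml is rebuilt against them, a quick sanity check is to compare the library versions lxml was compiled against with the ones it finds at runtime (these constants are part of lxml.etree):
# Sketch: an undefined symbol such as xmlBufContent typically means the
# libxml2 loaded at runtime is older than the one lxml was compiled
# against, so the version tuples below should match (or be close).
from lxml import etree

print("lxml:", etree.LXML_VERSION)
print("libxml2 compiled:", etree.LIBXML_COMPILED_VERSION, "runtime:", etree.LIBXML_VERSION)
print("libxslt compiled:", etree.LIBXSLT_COMPILED_VERSION, "runtime:", etree.LIBXSLT_VERSION)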

Theano fails due to NumPy Fortran mixup under Ubuntu

I installed Theano on my machine, but the nosetests break with a NumPy/Fortran-related error message. It looks to me like NumPy was compiled with a different Fortran compiler than Theano. I already reinstalled Theano (sudo pip uninstall theano + sudo pip install --upgrade --no-deps theano) and NumPy / SciPy (apt-get install --reinstall python-numpy python-scipy), but this did not help.
What steps would you recommend?
Complete error message:
ImportError: ('/home/Nick/.theano/compiledir_Linux-2.6.35-31-generic-x86_64-with-Ubuntu-10.10-maverick--2.6.6/tmpIhWJaI/0c99c52c82f7ddc775109a06ca04b360.so: undefined symbol: _gfortran_st_write_done'
My research:
The Installing SciPy / Building General page says the following about the undefined symbol: _gfortran_st_write_done error:
If you see an error message
ImportError: /usr/lib/atlas/libblas.so.3gf: undefined symbol: _gfortran_st_write_done
when building SciPy, it means that NumPy picked up the wrong Fortran compiler during build (e.g. ifort).
Recompile NumPy using:
python setup.py build --fcompiler=gnu95
or whichever is appropriate (see python setup.py build --help-fcompiler).
But:
Nick#some-serv2:/usr/local/lib/python2.6/dist-packages/numpy$ python setup.py build --help-fcompiler
This is the wrong setup.py file to run
Used software versions:
scipy 0.10.1 (scipy.test() works)
NumPy 1.6.2 (numpy.test() works)
theano 0.5.0 (several tests fail with undefined symbol: _gfortran_st_write_done)
python 2.6.6
Ubuntu 10.10
[UPDATE]
So I removed numpy and scipy from my system with apt-get remove and, using find -name XXX -delete, cleaned up what was left.
Then I installed numpy and scipy from the GitHub sources with sudo python setup.py install.
Afterwards I again ran sudo pip uninstall theano and sudo pip install --upgrade --no-deps theano.
Error persists :/
I also tried the apt-get source ... + apt-get build-dep ... approach, but for my old Ubuntu (10.10) it installs versions of numpy and scipy that are too old for theano: ValueError: numpy >= 1.4 is required (detected 1.3.0 from /usr/local/lib/python2.6/dist-packages/numpy/__init__.pyc)
I had the same problem, and after reviewing the source code, user212658's answer seemed like it would work (I have not tried it). I then looked for a way to deploy user212658's hack without modifying the source code.
Put these lines in your theanorc file:
[blas]
ldflags = -lblas -lgfortran
This worked for me.
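To confirm Theano actually picked up the flags (a sketch; to my understanding, theano.config.blas.ldflags is the option that the [blas] section of .theanorc controls):
# Sketch: print the BLAS link flags Theano will pass to the compiler;
# after the .theanorc change this should include -lblas -lgfortran.
import theano

print(theano.config.blas.ldflags)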
Have you tried to recompile NumPy from the sources?
I'm not familiar with the Ubuntu package system, so I can't check what's in your dist-packages/numpy. With a clean archive of the NumPy sources, you should have a setup.py at the same level as the directories numpy, tools and benchmarks (among others). I'm pretty sure that's the one you want to use for a python setup.py build.
[EDIT]
Now that you have recompiled numpy with the proper --fcompiler option, perhaps you could try to do the same with Theano, that is, compile directly from sources without relying on apt-get or even pip. You will have better control over the build process that way, which will make debugging and finding a solution easier.
I had the same problem. The solution I found is to add a hack in theano/gof/cmodule.py to link against gfortran whenever 'blas' is in the libs. That fixed it.
class GCC_compiler(object):
    ...
    @staticmethod
    def compile_str(module_name, src_code, location=None,
                    include_dirs=None, lib_dirs=None, libs=None,
                    preargs=None):
        ...
        cmd.extend(['-l%s' % l for l in libs])
        # hack: also link against gfortran whenever blas is in the libs
        if 'blas' in libs:
            cmd.append('-lgfortran')
A better fix is to remove ATLAS and install OpenBLAS. OpenBLAS is faster than ATLAS. Also, OpenBLAS doesn't require gfortran and is the library numpy was linked with, so it will work out of the box.