Julia can't find module even though it knows it is installed

I am trying to run a simulator called FLOWUnsteady. At one point, Julia complains:
ERROR: LoadError: InitError: PyError ($(Expr(:escape, :(ccall(#= C:\Users\dsfjk\.julia\packages\PyCall\L0fLP\src\pyfncall.jl:43 =# #pysym(:PyObject_Call), PyPtr, (PyPtr, PyPtr, PyPtr), o, pyargsptr, kw))))) <class 'ModuleNotFoundError'>
ModuleNotFoundError("No module named 'scipy'")
but at the same time:
julia> Pkg.add("SciPy")
Resolving package versions...
No Changes to C:\Users\dsfjk\.julia\environments\v1.7\Project.toml
No Changes to C:\Users\dsfjk\.julia\environments\v1.7\Manifest.toml
How does it not see the package it itself installed?

Looking at the error, Julia tries to load the Python scipy module via PyCall and does not find it.
The easiest way to replicate it would be:
using PyCall
pyimport("scipy")
Assuming you see the same error, the problem is that the Julia package SciPy.jl does not install the Python scipy package until it is used for the first time. This can be solved easily by loading the module:
julia> using SciPy
[ Info: Installing scipy via the Conda scipy package...
[ Info: Running `conda install -y scipy` in root environment
...
Another option is to add the Python scipy package manually to the Python distribution used by your Julia installation:
using Conda
Conda.add("scipy")

Related

numpy typing works locally but not in circleci

I have numpy 1.22.4 installed locally and I changed the return type annotation of the relevant methods from
def _predict(self, X: pd.DataFrame) -> np.array:
to
def _predict(self, X: pd.DataFrame) -> np.typing.ndarray:
When I run mypy feature_engine locally, the checks pass. But when I commit the code to the repo and the tests run in CircleCI, they do not pass, because of the following error:
AttributeError: module 'numpy' has no attribute 'typing'
When I check the CircleCI environment, it seems to be working with numpy 1.22.4 as well.
Any idea why this could be happening?
For reference, this is the PR in question. These are the versions I have locally:
Python version: 3.10.3
Numpy version: 1.22.4
Pandas version: 1.4.2
Scikit-learn version: 1.1.1
Scipy version: 1.8.1
Statsmodels version: 0.13.2
Mypy version: 0.961
And the error thrown locally when I do not update from np.array to np.typing.ndarray can be seen here:
I found that I have to do something like: import numpy.typing as npt
in addition to my standard numpy import. I can then use the npt alias for all my type hints. Before I made that change I would get the ImportError whenever I tried to run my code.
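For illustration, a rough sketch of what this answer describes (the _predict signature is taken from the question; the body is only a placeholder, and npt.NDArray requires numpy >= 1.21):
import numpy as np
import numpy.typing as npt   # explicit import so the numpy.typing namespace is available
import pandas as pd

def _predict(X: pd.DataFrame) -> npt.NDArray:
    # Placeholder body; the real method lives on the estimator class.
    return np.asarray(X)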

Failed to install Numpy 1.20.2 with Poetry on Python 3.9

When I try to install the Numpy 1.20.2 module with the Poetry 1.1.4 package manager (poetry add numpy) in a Python 3.9.0 virtual environment, I get:
ERROR: Failed building wheel for numpy
Failed to build numpy
ERROR: Could not build wheels for numpy which use PEP 517 and cannot be installed directly
I read a few threads like this one, but since then it seems the latest Numpy versions are supposed to be buildable with Python 3.9 (see this official Numpy release doc, and this answer).
Did I miss something?
EDIT: using pip 21.0.1 (latest)

Could not install pandas on macOS Catalina due to error "most likely due to using a buggy Accelerate backend"

I'm using macOS Catalina and tried to install pandas with
pip3 install pandas
But when I try to import pandas:
python3 -c "import pandas"
I get this error:
RuntimeError: Polyfit sanity test emitted a warning, most likely due to using a buggy Accelerate backend. If you compiled yourself, see site.cfg.example for information. Otherwise report this to the vendor that provided NumPy.
RankWarning: Polyfit may be poorly conditioned
I've been facing the same issue. There's quite a bit of info here:
https://github.com/numpy/numpy/issues/15947
As I understand it, it's because your Python build is running into the 'buggy Accelerate backend' issue in NumPy (NumPy is installed along with pandas).
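If you want to check whether your NumPy build is affected, here is a rough sketch of the same polyfit sanity test (it mirrors the check NumPy/pandas run at import time, under the assumption that a well-conditioned fit should emit no warning):
import warnings
import numpy as np

# A well-conditioned linear fit; on a healthy BLAS backend this emits no RankWarning.
with warnings.catch_warnings():
    warnings.simplefilter("error", np.RankWarning)  # turn the warning into an error
    np.polyfit([0.0, 1.0, 2.0], [0.0, 1.0, 2.0], deg=1)
print("polyfit sanity check passed")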
I was running Python 3.9.0 and was not able to fix it. However, I was able to bypass the issue by using Python 3.8.6. I used this guide to install 3.8.6:
https://opensource.com/article/19/5/python-3-default-mac
See the part about
pyenv install
which I modified to:
pyenv install 3.8.6
After restarting, the debugger in VS Code now shows 3.8.6. I added all required modules again, including NumPy, and it's working for me.
Good luck!

numpy installation into pypy3 virtual env : `undefined symbol: cblas_sgemm`

I am trying to install numpy into a pypy3 virtualenv, but I'm stuck with this error at import time:
venv_pypy/site-packages/numpy-1.16.0.dev0+1d38e41-py3.5-linux-x86_64.egg/numpy/core/_multiarray_umath.pypy3-60-x86_64-linux-gnu.so: undefined symbol: cblas_sgemm
I am on an up-to-date Arch Linux; numpy works fine with CPython, but I have a project using pandas (which depends on numpy) that I need to test on pypy.
I first tried the recommended method (pip install numpy in the venv), but it didn't work (the install succeeds, but the same error appears at import time).
Then I tried what is suggested in https://stackoverflow.com/a/14391693/1745291 (linked from Numpy multiarray.so: undefined symbol: cblas_sgemm), building with OpenBLAS instead, since I haven't installed ATLAS (an AUR package on Arch that I don't want to install). But it is still not working (same error, and the method could be outdated since it's from 2013).
And finally, I tried a build with no accelerated libraries (at least, that is what is claimed), following https://docs.scipy.org/doc/numpy-1.15.0/user/building.html#disabling-atlas-and-other-accelerated-libraries
...but still the same result.
What am I doing wrong?
You can try uninstalling it from pip and installing it from apt (if you are using Ubuntu etc.).
This approach solved my problem:
pip3 uninstall numpy
sudo apt-get install python3-numpy
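Whichever route you take, it can help to confirm which BLAS/LAPACK libraries the installed NumPy is actually linked against; a quick sketch to run inside the pypy3 virtualenv:
import numpy as np

np.show_config()   # prints the blas/lapack sections NumPy was built with

# A small matrix product exercises the cblas_*gemm routines that were failing.
a = np.ones((2, 2))
print(a @ a)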

Theano fails due to NumPy Fortran mixup under Ubuntu

I installed Theano on my machine, but the nosetests break with a NumPy/Fortran-related error message. To me it looks like NumPy was compiled with a different Fortran compiler than Theano. I already reinstalled Theano (sudo pip uninstall theano + sudo pip install --upgrade --no-deps theano) and NumPy / SciPy (apt-get install --reinstall python-numpy python-scipy), but this did not help.
What steps would you recommend?
Complete error message:
ImportError: ('/home/Nick/.theano/compiledir_Linux-2.6.35-31-generic-x86_64-with-Ubuntu-10.10-maverick--2.6.6/tmpIhWJaI/0c99c52c82f7ddc775109a06ca04b360.so: undefined symbol: _gfortran_st_write_done'
My research:
The Installing SciPy / BuildingGeneral page says this about the undefined symbol: _gfortran_st_write_done error:
If you see an error message
ImportError: /usr/lib/atlas/libblas.so.3gf: undefined symbol: _gfortran_st_write_done
when building SciPy, it means that NumPy picked up the wrong Fortran compiler during build (e.g. ifort).
Recompile NumPy using:
python setup.py build --fcompiler=gnu95
or whichever is appropriate (see python setup.py build --help-fcompiler).
But:
Nick#some-serv2:/usr/local/lib/python2.6/dist-packages/numpy$ python setup.py build --help-fcompiler
This is the wrong setup.py file to run
Used software versions:
scipy 0.10.1 (scipy.test() works)
NumPy 1.6.2 (numpy.test() works)
theano 0.5.0 (several tests fail with undefined symbol: _gfortran_st_write_done)
python 2.6.6
Ubuntu 10.10
[UPDATE]
So I removed numpy and scipy from my system with apt-get remove and deleted what was left with find -name XXX -delete.
Then I installed numpy and scipy from the GitHub sources with sudo python setup.py install.
Afterwards I ran sudo pip uninstall theano and sudo pip install --upgrade --no-deps theano again.
Error persists :/
I also tried the apt-get source ... + apt-get build-dep ... approach, but for my old Ubuntu (10.10) it installs versions of numpy and scipy that are too old for theano: ValueError: numpy >= 1.4 is required (detected 1.3.0 from /usr/local/lib/python2.6/dist-packages/numpy/__init__.pyc)
I had the same problem, and after reviewing the source code, user212658's answer seemed like it would work (I have not tried it). I then looked for a way to deploy user212658's hack without modifying the source code.
Put these lines in your theanorc file:
[blas]
ldflags = -lblas -lgfortran
This worked for me.
Have you tried to recompile NumPy from the sources?
I'm not familiar with the Ubuntu package system, so I can't check what's in your dist-packages/numpy. With a clean archive of the NumPy sources, you should have a setup.py at the same level as the directories numpy, tools and benchmarks (among others). I'm pretty sure that's the one you want to use for a python setup.py build.
[EDIT]
Now that you have recompiled numpy with the proper --fcompiler option, perhaps you could try to do the same with Theano, that is, compile directly from sources without relying on apt-get or even pip. You should have better control over the build process that way, which will make debugging/trying to find a solution easier.
I had the same problem. The solution I found is to add a hack in theano/gof/cmodule.py to link against gfortran whenever 'blas' is in the libs. That fixed it.
class GCC_compiler(object):
    ...
    @staticmethod
    def compile_str(module_name, src_code, location=None,
                    include_dirs=None, lib_dirs=None, libs=None,
                    preargs=None):
        ...
        cmd.extend(['-l%s' % l for l in libs])
        # Link against gfortran whenever blas is among the libs so that
        # the _gfortran_* symbols get resolved.
        if 'blas' in libs:
            cmd.append('-lgfortran')
A better fix is to remove ATLAS and install OpenBLAS. OpenBLAS is faster than ATLAS. Also, OpenBLAS doesn't require gfortran and is the BLAS library NumPy was linked with, so it will work out of the box.