How to fix Illegal instruction (core dumped) - tensorflow

Hi, I am trying to fix this issue: when I run python3 brain.py (code below), I get this error:
Illegal instruction (core dumped)
from imageai.Prediction import ImagePrediction
import os

execution_path = os.getcwd()

prediction = ImagePrediction()
prediction.setModelTypeAsSqueezeNet()
prediction.setModelPath(os.path.join(execution_path, "squeezenet_weights_tf_dim_ordering_tf_kernels.h5"))
prediction.loadModel()

predictions, probabilities = prediction.predictImage(os.path.join(execution_path, "giraffe.jpg"), result_count=5)
for eachPrediction, eachProbability in zip(predictions, probabilities):
    print(eachPrediction, " : ", eachProbability)
I have tried to downgrade TensorFlow to 1.5.0, but when I run that I get these errors:
[ons mar 25 23:11:45] Jonathan@Whats next?:~/ReallySmartBrain$ pip3 install tensorflow==1.5.0
Defaulting to user installation because normal site-packages is not writeable
ERROR: Could not find a version that satisfies the requirement tensorflow==1.5.0 (from versions: 1.13.0rc1, 1.13.0rc2, 1.13.1, 1.13.2, 1.14.0rc0, 1.14.0rc1, 1.14.0, 1.15.0rc0, 1.15.0rc1, 1.15.0rc2, 1.15.0rc3, 1.15.0, 1.15.2, 2.0.0a0, 2.0.0b0, 2.0.0b1, 2.0.0rc0, 2.0.0rc1, 2.0.0rc2, 2.0.0, 2.0.1, 2.1.0rc0, 2.1.0rc1, 2.1.0rc2, 2.1.0, 2.2.0rc0, 2.2.0rc1)
ERROR: No matching distribution found for tensorflow==1.5.0
The other solution is to compile it from source code, but I don't have any idea how to do that.
Can I fix this any other way?

I had the same problem. It happens on older CPUs that lack AVX support: the prebuilt TensorFlow wheels from 1.6 onward are compiled with AVX instructions, so they crash with "Illegal instruction" on such processors. As you said, one solution is a downgrade to TensorFlow 1.5.0.
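If you want to confirm first that missing AVX is really the culprit, a minimal check on Linux (this sketch only assumes /proc/cpuinfo is readable) is:

with open("/proc/cpuinfo") as f:
    cpuinfo = f.read()
# Collect the CPU feature flags and look for "avx";
# prebuilt TensorFlow wheels from 1.6 onward require it.
flags = set()
for line in cpuinfo.splitlines():
    if line.startswith("flags"):
        flags.update(line.split(":", 1)[1].split())
print("AVX supported" if "avx" in flags else "No AVX support")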
The other solution (that one that worked for me) is to build tensorflow from source.
I compiled version 2.1.0; it took me around 25 hours with an Intel(R) Pentium(R) Dual CPU T2370 @ 1.73GHz and 2GB RAM.
You would need to install the proper version of Bazel. Find below the complete instructions from tensorflow:
https://www.tensorflow.org/install/source
I needed to add a 4GB swap file; otherwise you will run out of memory during the compilation.
Anyway, I have uploaded my .whl file in case you don't want to spend 25 hours (or more) compiling your own:
https://drive.google.com/open?id=1ISgMcDiCw5W5MFvS5Zbme6pNBbA7xWMH
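Once that wheel (or your own) is installed with pip3, a one-line import check confirms the build runs without the illegal instruction:

import tensorflow as tf
print(tf.__version__)  # should print 2.1.0 and, crucially, not crash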

I had a similar problem with an old CPU in a vintage Mac that now runs Linux, because no recent macOS can run on it. I tried to compile from source and hit a bunch of problems (Bazel, compiler, flags, dependencies, ...). In the end, I lost a few hours only to learn that it can be a real nightmare. Good advice: don't even try it!

Related

Library not loaded: @rpath/libtbb.dylib in Prophet / Python

I'm on an M1 Mac running Monterey.
I've installed Prophet and run into this issue when trying to fit a model:
RuntimeError: Error during optimization: console log output:
dyld[90668]: Library not loaded: @rpath/libtbb.dylib
Referenced from: /Users/{username}/opt/anaconda3/lib/python3.9/site-packages/prophet/stan_model/prophet_model.bin
Reason: tried: '/private/var/folders/cd/dfrqgp4s4ll55cwb7rtgccbw0000gq/T/pip-install-rjpuj450/prophet_d7e4cce10e414c89a572fe3605ae9269/build/lib.macosx-11.1-arm64-cpython-39/prophet/stan_model/cmdstan-2.26.1/stan/lib/stan_math/lib/tbb/libtbb.dylib' (no such file), '/private/var/folders/cd/dfrqgp4s4ll55cwb7rtgccbw0000gq/T/pip-install-rjpuj450/prophet_d7e4cce10e414c89a572fe3605ae9269/build/lib.macosx-11.1-arm64-cpython-39/prophet/stan_model/cmdstan-2.26.1/stan/lib/stan_math/lib/tbb/libtbb.dylib' (no such file), '/usr/local/lib/libtbb.dylib' (no such file), '/usr/lib/libtbb.dylib' (no such file)
I know this has to do with the wrong paths being searched. I can find the dylib in
/Users/{user}/opt/anaconda3/lib/python3.9/site-packages/prophet/stan_model/cmdstan-2.26.1/stan/lib/stan_math/lib/tbb/
But it seems Prophet doesn't know to look there. How can I update or fix the rpath, or is there another solution?
I tried to create a symbolic link with sudo ln -s, but don't have permissions on the laptop.
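One idea I've seen elsewhere, but haven't been able to verify on this machine, is to point the dynamic loader at that directory from inside the script before Prophet spawns its Stan binary (sketch below; the path is simply the one where I found libtbb.dylib):

import os

# Untested idea: make dyld search the TBB directory that actually exists
# before Prophet launches prophet_model.bin as a subprocess.
tbb_dir = os.path.expanduser(
    "~/opt/anaconda3/lib/python3.9/site-packages/prophet/stan_model/"
    "cmdstan-2.26.1/stan/lib/stan_math/lib/tbb"
)
os.environ["DYLD_LIBRARY_PATH"] = tbb_dir + ":" + os.environ.get("DYLD_LIBRARY_PATH", "")

from prophet import Prophet  # import only after the environment is set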
TIA!
I got it to work on Apple Silicon (M1 Max in my case) by installing older versions of both pystan and prophet:
pip install pystan==2.19.1.1
pip install prophet==1.0
The other important piece of the puzzle is that you should use Python 3.8 to get it working.
Installing older versions of the libraries and using Python 3.8 are both discussed in issue #2002 on GitHub, but there's not really an explanation of the libtbb.dylib error message there.
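As a quick smoke test after pinning those versions on Python 3.8 (the tiny synthetic series below is only for illustration), fitting should now complete without the dyld error:

import pandas as pd
from prophet import Prophet

# Any dataframe with ds/y columns works for the check.
df = pd.DataFrame({
    "ds": pd.date_range("2021-01-01", periods=30, freq="D"),
    "y": range(30),
})

m = Prophet()
m.fit(df)  # this is the step that previously failed with the libtbb error
future = m.make_future_dataframe(periods=5)
print(m.predict(future)[["ds", "yhat"]].tail())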

Error in Power BI while importing pandas library in Python script

Below is the error I get while importing the pandas library in a Python script in Power BI.
Details: "ADO.NET: Python script error.
C:\USERS\YADAVP\ANACONDA3\lib\site-packages\numpy\__init__.py:140: UserWarning: mkl-service package failed to import, therefore Intel(R) MKL initialization ensuring its correct out-of-the box operation under condition when Gnu OpenMP had already been loaded by Python process is not assured. Please install mkl-service package, see http://github.com/IntelPython/mkl-service
from . import _distributor_init
Traceback (most recent call last):
File "PythonScriptWrapper.PY", line 2, in <module>
import os, pandas, matplotlib
File "C:\USERS\YADAVP\ANACONDA3\lib\site-packages\pandas\__init__.py", line 17, in <module>
"Unable to import required dependencies:\n" + "\n".join(missing_dependencies)
ImportError: Unable to import required dependencies:
numpy:
IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE!
Importing the numpy c-extensions failed.
- Try uninstalling and reinstalling numpy.
- If you have already done that, then:
1. Check that you expected to use Python3.7 from "C:\USERS\YADAVP\ANACONDA3\python.exe",
and that you have no directories in your PATH or PYTHONPATH that can
interfere with the Python and numpy version "1.18.1" you're trying to use.
2. If (1) looks fine, you can open a new issue at
https://github.com/numpy/numpy/issues. Please include details on:
- how you installed Python
- how you installed numpy
- your operating system
- whether or not you have multiple versions of Python installed
- if you built from source, your compiler versions and ideally a build log
- If you're working with a numpy git repository, try `git clean -xdf`
(removes all files not under version control) and rebuild numpy.
Note: this error has many possible causes, so please don't comment on
an existing issue about this - open a new one instead.
Original error was: DLL load failed: The specified module could not be found.
How can I resolve this kind of error in Power BI?
Forget Anaconda and use WinPython.
I tried Anaconda for days with all the workarounds available on Stack Overflow and other forums, and they got me nowhere.
Then I tried WinPython, and it worked immediately. Of course, you will need to change the PowerBI options accordingly.
To install WinPython: https://github.com/winpython/winpython
To change the detected Python home directory: https://learn.microsoft.com/en-us/power-bi/connect-data/desktop-python-scripts#enable-python-scripting
If you consider my answer, you won't need to downgrade Python, PBI, or anything else.
I had the same error. Unfortunately, Power BI won't work with the Jupyter Notebook Python.
So you have to install a "normal" Python: https://www.python.org/downloads/
Then configure the Python you want to use in Power BI and install the libraries you need via pip.
Edit: Please use Python 3.8, because NumPy doesn't support 3.9 yet.
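Whichever interpreter you point Power BI at, a small script step like this sketch (it only assumes Power BI's usual behaviour of offering any dataframe defined in the script for import) makes it easy to confirm which Python and NumPy are actually being loaded:

import sys
import numpy
import pandas

# One-row table reporting the interpreter and library versions in use;
# if numpy imports here, the pandas import error should be gone too.
check = pandas.DataFrame({
    "python": [sys.executable],
    "numpy": [numpy.__version__],
    "pandas": [pandas.__version__],
})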

Python 3.7 + Visual Studio 2017 + Boost 1.69

I am trying to get Boost 1.69 working with Visual Studio 2017. My goal is to use NumPy in C++.
When I add #include <boost/python/numpy.hpp>, the error I am getting is:
Searching C:\boost_1_69_0\stage\lib\boost_python37-vc141-mt-gd-x32-1_69.lib:
1>LINK : fatal error LNK1104: cannot open file 'boost_numpy37-vc141-mt-gd-x32-1_69.lib'
I am pretty sure that I have this file in this directory.
My architecture in the project is x86, 32-bit addressing.
I built boost as follows:
.\bootstrap.bat
.\b2 -j8 --toolset=msvc-14.1 --build-type=complete link=static runtime-link=static architecture=x86 address-model=32 stage --with-python
I added the include and link folders to the project.
I do not use precompiled headers
Is there anything that I am missing?
Thanks
I had the same problem. It seems like Boost.Python does not support Python 3.7 very well.
Using Python 3.6 will solve this problem.
I had been looking at this issue for months and finally figured out the root cause and a solution. The root cause is that Boost.Numpy is not built because NumPy cannot be imported when ./b2 checks for it. As suggested in the post "Using boost numpy with visual studio 2019 and python 3.8", you can append --debug-configuration to see the debugging output of the Boost.Python build process; on my PC it looks like this:
notice: [python-cfg] Checking for NumPy...
notice: [python-cfg] running command 'C:/Anaconda3_Install_Root/envs/my_envs/python -c "import sys; sys.stderr = sys.stdout; import numpy; print(numpy.get_include())"'
And the check fails with an ImportError for some reason:
ImportError: DLL load failed while importing _multiarray_umath: The specified module could not be found.
After looking at the post "numpy is already installed with Anaconda but I get an ImportError (DLL load failed: The specified module could not be found)", I found that this import has to run inside a Python environment with all the required PATH entries set, such as an activated conda environment or a PyCharm terminal (both work on my PC). Now I can generate the NumPy static library with Python 3.8, VS 2019, Boost 1.74 and Windows 10. The command I use to build Boost.Python is .\b2 --with-python python-debugging=off threading=multi variant=release link=static address-model=64 stage --debug-configuration. Hopefully that will work for you too.
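In other words, the quick test is to run, from the same shell you will launch .\b2 from, the exact NumPy check quoted in the --debug-configuration output above:

import sys
sys.stderr = sys.stdout  # b2 merges stderr into its log the same way
import numpy
print(numpy.get_include())  # if this prints a path, b2's NumPy check should pass too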

How to let TensorFlow XLA know the CUDA path

I installed TensorFlow nightly build version via the command
pip install tf-nightly-gpu --prefix=/tf/install/path
When I tried to run any XLA example, TensorFlow gave the error "Unable to find libdevice dir. Using '.' Failed to compile ptx to cubin. Will attempt to let GPU driver compile the ptx. Not found: /usr/local/cuda-10.0/bin/ptxas not found".
So apparently TensorFlow cannot find my CUDA path. On my system, CUDA is installed in /cm/shared/apps/cuda/toolkit/10.0.130. Since I didn't build TensorFlow from source, XLA by default searches the folder /usr/local/cuda-*; since I do not have this folder, it issues an error.
Currently my workaround is to create a symbolic link. I checked the TensorFlow source code in tensorflow/compiler/xla/service/gpu/nvptx_compiler.cc. There is a comment in the file: "// CUDA location explicitly specified by user via --xla_gpu_cuda_data_dir has highest priority." So how do I pass a value to this flag? I tried the following two environment variables, but neither of them works:
export XLA_FLAGS="--xla_gpu_cuda_data_dir=/cm/shared/apps/cuda10.0/toolkit/10.0.130/"
export TF_XLA_FLAGS="--xla_gpu_cuda_data_dir=/cm/shared/apps/cuda10.0/toolkit/10.0.130/"
So how do I use the flag "--xla_gpu_cuda_data_dir"? Thanks.
You can run export XLA_FLAGS=--xla_gpu_cuda_data_dir=/path/to/cuda in the terminal before starting your program.
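If you prefer to keep it inside the script, setting the variable before TensorFlow is imported should be equivalent to the export above (a sketch; the CUDA path is the one from the question, so adjust it to your install):

import os

# XLA reads XLA_FLAGS from the environment, so set it before TensorFlow
# initializes its GPU backend.
os.environ["XLA_FLAGS"] = "--xla_gpu_cuda_data_dir=/cm/shared/apps/cuda/toolkit/10.0.130"

import tensorflow as tf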
There is a code change for this issue, but it's not clear how to use it. Check here: https://github.com/tensorflow/tensorflow/issues/23783

bazel build error when following tf tutorial 'usage with c++ api'

I have the following error messages
~/tensorflow$ bazel build tensorflow/examples/label_image/...
ERROR: /home/dooseop/tensorflow/tensorflow/tensorflow.bzl:497:19: name
'DATA_CFG' is not defined.
ERROR: error loading package '': Extension 'tensorflow/tensorflow.bzl'
has errors.
INFO: Elapsed time: 0.144s
when I try to follow the tutorial provided at https://www.tensorflow.org/tutorials/image_recognition
Some people (who got the same error messages when building TensorFlow with Bazel) say "upgrade Bazel and try again".
However, that advice doesn't work for me. Can anyone tell me how to solve the problem?
Note that I installed 1) Bazel 0.5.0 and 2) TensorFlow 1.1.0 under Ubuntu 16.04.
That is indeed strange, HOST_CFG was removed from Bazel 0.4.4, so a while ago. As far as I know Tensorflow already fixed its uses.
If you used an older version of Bazel before, try running bazel clean --expunge, just in case the old Bazel left an inconsistent output tree. We did fix a bug there recently.
Or try the latest TensorFlow, 1.2rc0, or GitHub HEAD.