I want to call my Python script through pybind11 in gem5. The kafka-python library will be used in my Python script. How can I install the kafka-python library when building gem5?
I am using a MacBook Pro with M1 processor, macOS version 11.0.1, Python 3.8 in PyCharm, Tensorflow version 2.4.0rc4 (also tried 2.3.0, 2.3.1, 2.4.0rc0). I am trying to run the following code:
import tensorflow
This causes the error message:
Process finished with exit code 132 (interrupted by signal 4: SIGILL)
The code runs fine on my Windows and Linux machines.
What does the error message mean and how can I fix it?
It seems this problem happens when you have multiple Python interpreters installed, some of them for different architectures (x86_64 vs arm64). You need to make sure the correct interpreter is being used; if you installed Apple's version of TensorFlow, that probably requires an arm64 interpreter.
If you use Rosetta (Apple's x86_64 emulator), then you need an x86_64 Python interpreter; if you somehow load the arm64 interpreter instead, you will get the illegal instruction error (which makes sense).
If you use any script that installs new Python interpreters, you need to make sure the interpreter it installs matches your architecture (most likely arm64).
Overall, I think this problem happens because the Python environment setup is not made for systems that can run multiple instruction sets/architectures. pip does check the architecture of packages against the host system, but it seems you can still run an x86_64 interpreter to load a package meant for arm64, and this produces the problem.
For reference, there is an issue in the tensorflow_macos repo that people can check.
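A quick way to see which architecture the active interpreter actually is (nothing TensorFlow-specific is assumed here) is to ask Python itself:

```shell
# Print the architecture of whichever python3 is first on PATH;
# on Apple Silicon a native interpreter reports arm64, while one
# running under Rosetta reports x86_64.
python3 -c "import platform; print(platform.machine())"
```

If the output doesn't match the architecture the installed TensorFlow wheel was built for, that mismatch is the likely source of the SIGILL.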
For M1 Macs, the following from the Apple developer page worked:
First, download Conda Env from here and then follow these instructions (assuming the script is downloaded to ~/Downloads folder)
chmod +x ~/Downloads/Miniforge3-MacOSX-arm64.sh
sh ~/Downloads/Miniforge3-MacOSX-arm64.sh
source ~/miniforge3/bin/activate
Then reload the shell and do:
python -m pip uninstall tensorflow-macos
python -m pip uninstall tensorflow-metal
conda install -c apple tensorflow-deps
python -m pip install tensorflow-macos
python -m pip install tensorflow-metal
If the above doesn't work for some reason, there are some edge cases and additional information provided on the Apple developer page.
Installing Tensorflow version 1.15 fixed this for me.
$ conda install tensorflow==1.15
I have been able to resolve this issue by using Miniforge instead of Anaconda as the Python environment. Anaconda doesn't support the arm64 architecture yet.
I had the same issue.
This is because of the M1 chip. There is now a pre-release that delivers hardware-accelerated TensorFlow and TensorFlow Addons for macOS 11.0+. Native hardware acceleration is supported on M1 Macs and Intel-based Macs through Apple's ML Compute framework.
You need to install the TensorFlow build that supports the M1 chip. Simply pull this tensorflow_macos repository and run ./scripts/download_and_install.sh
I want to build TensorFlow with Python libraries. Currently I have TensorFlow installed, but I do not see Python packages in /usr/local/lib/python3.6/dist-packages, so when I try to import the tensorflow module in a Python terminal, it fails. However, there is a library in /usr/lib and C++ programs work.
What flag/target is needed in the bazel build?
You can get the TensorFlow Python package by:
Directly doing pip install tensorflow: it installs a precompiled version by downloading a wheel file.
Building from the source code on GitHub using bazel build.
For the second approach, the following steps are needed:
Get the source code from GitHub.
Compile the code using bazel build. It will produce a ".whl" file.
Do pip install /tmp/TensorFlow-version-tags.whl
For detailed installation steps, follow this.
I am trying to compile TensorFlow (tried both full and Lite) on an Odroid XU4 (16 GB eMMC, Ubuntu 16), but I am getting the errors shown in these figures: https://www.dropbox.com/sh/j86ysncze1q0eka/AAB8RZtUTkaytqfEGivbev_Ga?dl=0
I am using FlytOS as the OS (http://docs.flytbase.com/docs/FlytOS/GettingStarted/OdroidGuide.html). It's a customized Ubuntu 16 with OpenCV and ROS set up, which takes 11 GB after installation, so I have only 2.4 GB free. Therefore, I added a 16 GB USB drive as swap memory.
I have installed Bazel without using the swap memory. I tried both the full TensorFlow version and Lite, but both failed to compile. However, I downloaded a compiled TensorFlow Lite for the Pi and successfully installed it on the Odroid. Since the Odroid is octa-core, to make the best use of the available processing power I need to compile TensorFlow on the Odroid itself.
Please let me know if anyone has TensorFlow compiled on an Odroid XU4.
Check this guide out: Build Tensorflow on Odroid.
It gives a detailed step-by-step guide and also has some troubleshooting procedures.
Summarizing the steps here:
Install the prerequisites, including g++, gcc-4.8, python-pip, python-dev, numpy and Oracle Java (not OpenJDK).
Use a USB/flash drive to add some swap memory.
Build Bazel. In the compile.sh shell script, modify the run line to add memory flags:
run "${JAVAC}" -J-Xms256m -J-Xmx384m -classpath "${classpath}" -sourcepath "${sourcepath}"
Get TensorFlow v1.4 specifically, run ./configure, and select the relevant options. Disable XLA, as it's causing some problems.
Finally run Bazel command.
bazel build -c opt --copt="-funsafe-math-optimizations" --copt="-ftree-vectorize" --copt="-fomit-frame-pointer" --local_resources 8192,8.0,1.0 --verbose_failures tensorflow/tools/pip_package:build_pip_package
Now build the wheel and install it:
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
sudo pip2 install /tmp/tensorflow_pkg/tensorflow-1.4.0-cp27-cp27mu-linux_armv7l.whl --upgrade --ignore-installed
Test the install
python
import tensorflow
print(tensorflow.__version__)
1.4.0
I was able to compile it successfully by following the steps given there.
Just to say it upfront: I'm aware of all the answers that require bazel, and they didn't work for me. I'm using virtualenv, as the tensorflow website recommends.
(tensorflow27)name#computersname:~$ bazel build --linkopt='-lrt' -c opt --copt=-mavx --copt=-msse4.2 --copt=-msse4.1 --copt=-msse3 -k //tensorflow/tools/pip_package:build_pip_package
will output
ERROR: The 'build' command is only supported from within a workspace.
Basically I followed all steps from here
But when I run this validation I get
2017-09-02 11:46:52.613368: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-09-02 11:46:52.613396: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-09-02 11:46:52.613416: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
I do NOT want to suppress the warnings; I actually want to use SSE 4.2 and AVX (my processor supports both). However, I wasn't able to find any instructions anywhere on how to compile tensorflow inside a virtual environment such that support for SSE and AVX is enabled from scratch. It's not even listed in their common installation problems section.
Btw, the system I use for this is Ubuntu 14.04, and I don't have an Nvidia graphics card (so no CUDA for now), but I plan to get one in the future.
I'm a little bit disappointed that tensorflow doesn't detect CPU capabilities before compilation.
Edit: Untrue, later I figured out it actually does
PS: I have set up two virtual environments, one for Python 2.7 and the other for Python 3.0. Ideally the solution would work for both, as I haven't decided yet which Python version I will end up using.
OK, so it turned out my problems were pretty much independent of which virtual environment I chose. The bazel build failed simply because I was in the wrong directory: I needed to pull tensorflow from git and then cd into it. Then I could build from there, using the default build option -march=native, which already detects my CPU capabilities.
Setting up the two different virtual environments for the two Python versions in advance was useful. I used those environments to compile directly inside them, which results in automatic Python version detection: building in the 2.7 environment produces a build for Python 2.7, and so on.
To be somewhat more detailed:
First I installed bazel. Then I installed the dependencies mentioned on the tensorflow page like this:
sudo apt-get install python3-numpy python3-dev python3-pip python3-wheel
sudo apt-get install python-numpy python-dev python-pip python-wheel
In order to use OpenCL in the build later, I had to download (and compile, if I remember correctly) ComputeCpp-CE-0.3.1-Ubuntu.14.04-64bit.tar.gz (for Ubuntu 16 it would be ComputeCpp-CE-0.3.1-Ubuntu.16.04-64bit.tar.gz). After compilation I had to move the build result to /usr/local/computecpp:
sudo mkdir /usr/local/computecpp
sudo cp -r ./Downloads/ComputeCpp*/* /usr/local/computecpp
If I remember correctly, at this point I activated the virtual environment with the Python version I wanted to compile against, so that ./configure would recognize the desired Python version.
Then I pulled tensorflow from git and configured it:
git clone https://github.com/tensorflow/tensorflow
cd tensorflow
./configure
During the configuration routine I only answered yes to jemalloc and OpenCL, as I don't have a CUDA card right now. When saying yes to OpenCL, I was prompted for the ComputeCpp path, which was the location I created under /usr/local/computecpp.
Then, while staying in the tensorflow directory I did
bazel build --config=opt --config=mkl //tensorflow/tools/pip_package:build_pip_package
People who activated CUDA during ./configure should also add --config=cuda to this bazel build command. If you have gcc > 5, you will also need to add --cxxopt="-D_GLIBCXX_USE_CXX11_ABI=0".
The --config=mkl part activates some additional libraries provided by Intel specifically to speed up certain calculations on their processors, so TensorFlow can make use of them. If you don't have an Intel processor, it's probably wise to remove that option.
Btw, at first I also compiled MKL by hand, but it turns out that's not necessary: the bazel build will automatically pull MKL from an online source if it's missing.
Then I created the final package at /tmp/tensorflow_pkg:
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
Depending on the python version in your environment you will see a file name reflecting it:
/tmp/tensorflow_pkg/tensorflow-1.3.0-cp27-cp27mu-linux_x86_64.whl
/tmp/tensorflow_pkg/tensorflow-1.3.0-cp36-cp36m-linux_x86_64.whl
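The cpXY and platform parts of the wheel filename have to match the interpreter you install into. A generic way to print both for whatever interpreter is currently active (nothing TensorFlow-specific is assumed here):

```shell
# Print the CPython tag (e.g. cp27, cp36) and platform string for the
# interpreter on PATH; the wheel you pip install must match both.
python3 -c "import sys, sysconfig; print('cp%d%d' % sys.version_info[:2], sysconfig.get_platform())"
```

If the printed tag differs from the one in the wheel name, you are in the wrong environment.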
Now I could go to the appropriate environment and install the wheel using pip. If you are using conda, the tensorflow website still recommends "pip install" rather than "conda install". So if I were in a virtual environment with Python 2.7 (conda or not), I would type:
pip install --ignore-installed --upgrade /tmp/tensorflow_pkg/tensorflow-1.3.0-cp27-cp27mu-linux_x86_64.whl
And if I were under python 3.6 I would type:
pip install --ignore-installed --upgrade /tmp/tensorflow_pkg/tensorflow-1.3.0-cp36-cp36m-linux_x86_64.whl
There was one last hoop I needed to jump through: if you stay in the tensorflow directory (the one you pulled from git) and then launch python, you won't be able to import tensorflow. I think it's some kind of bug. So it's important to leave the tensorflow directory before starting python.
cd ..
Now you can start your python inside your virtual environment and import tensorflow. Hopefully without further errors or any warnings about not using the SSE or AVX capabilities of your processor. At least in my case it worked.
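The import failure inside the source tree is ordinary Python path shadowing: when you launch python, the current directory is searched before site-packages, so the tensorflow/ source folder shadows the installed package. The same mechanism can be demonstrated with a throwaway dummy package (the name json and the /tmp paths here are made up purely for illustration):

```shell
# Create an empty package that shadows the stdlib json module
mkdir -p /tmp/shadow_demo/json
touch /tmp/shadow_demo/json/__init__.py
cd /tmp/shadow_demo
python3 -c "import json; print(json.__file__)"   # resolves to the local ./json package, not the stdlib
cd /tmp
python3 -c "import json; print(json.__file__)"   # now resolves to the real stdlib json
```

Exactly the same thing happens with the tensorflow source checkout, which is why cd-ing out of it fixes the import.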
The GTK version on my Linux PC is 2.18.
But I can't find a corresponding PyGTK at http://ftp.gnome.org/pub/GNOME/sources/pygtk/
The GTK library is one thing; the PyGTK interface to that library is another.
The GTK lib lives at http://www.gtk.org/download/index.php and it's usually enough to compile it into a folder like /usr/GTK and add that folder to your path.
PyGTK lives at http://www.pygtk.org/ but I prefer to install it from Synaptic: sudo apt-get install python-gtk2
There is no need for PyGTK to have the same version number as GTK.