`conda search PKG --info` shows different dependencies than what conda wants to install? - numpy

I'm building a new conda environment using python=3.9 for the
osx-arm64 architecture.
conda create -n py39 python=3.9 numpy
conda list
...
numpy 1.21.1 py39h1a24bff_2
...
python 3.9.7 hc70090a_1
So far so good: numpy=1.21.1 is the one I want. Now I want to add
scipy, and the first one seems to fit the bill:
conda search scipy --info
scipy 1.7.1 py39h2f0f56f_2
--------------------------
file name : scipy-1.7.1-py39h2f0f56f_2.conda
name : scipy
version : 1.7.1
build : py39h2f0f56f_2
build number: 2
size : 14.8 MB
license : BSD 3-Clause
subdir : osx-arm64
url : https://repo.anaconda.com/pkgs/main/osx-arm64/scipy-1.7.1-py39h2f0f56f_2.conda
md5 : edbd5a5399e973d1d0325147b7118f79
timestamp : 2021-08-25 16:12:39 UTC
dependencies:
- blas * openblas
- libcxx >=12.0.0
- libgfortran 5.*
- libgfortran5 >=11.1.0
- libopenblas >=0.3.17,<1.0a0
- numpy >=1.19.5,<2.0a0
- python >=3.9,<3.10.0a0
In particular, python >=3.9 and numpy >=1.19 seem just right.
But when I try the install:
conda install scipy
...
The following packages will be DOWNGRADED:
numpy 1.21.1-py39h1a24bff_2 --> 1.19.5-py39habd9f23_3
(I have bumped into various constraints with numpy=1.19 (numba,
pandas, ...) and am trying to avoid it.)
Why isn't the scipy package happy with the numpy=1.21 version I
have?!
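One diagnostic worth running (a sketch; conda search accepts a version spec, and the grep is mine, just to condense the output): conda search can return several builds of scipy 1.7.1, each with its own numpy pin, and the solver is free to pick any of them, so comparing their dependency lists may explain the downgrade:
conda search 'scipy==1.7.1' --info | grep -E 'build |numpy'
# each build carries its own numpy constraint; the solver may have
# selected a build other than the one shown above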
The only possible clue is that conda reports a different python
version (3.8.11) than the v3.9 I specified for this environment:
conda info
active environment : py39
active env location : .../miniconda3/envs/py39
shell level : 1
user config file : .../.condarc
populated config files : .../.condarc
conda version : 4.11.0
conda-build version : not installed
python version : 3.8.11.final.0 <-------------------
virtual packages : __osx=12.1=0
...
but all the environment's pointers seem to be set correctly:
(py39) % which python
.../miniconda3/envs/py39/bin/python
(py39) % python
Python 3.9.7 (default, Sep 16 2021, 23:53:23)
[Clang 12.0.0 ] :: Anaconda, Inc. on darwin
Thanks, any hints as to what's broken will be greatly appreciated!

I now have things working, but I'm afraid I can't point to a satisfying "answer." Others (e.g. @merv) don't seem to be hitting the same problems, and I can't identify the difference.
The one thing I did find that seemed to create issues in my install looks like a mislabeling of the pandas package: pandas v1.3.5 breaks with numpy==1.19.5, which is the only numpy version I've been able to push the solve through with. I posted a comment on a pandas issue.
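If anyone else hits the same downgrade, a possible workaround (a sketch, not a confirmed root-cause fix) is to tell the solver explicitly that numpy must stay new, either per-command or via the environment's pin file:
conda install scipy "numpy>=1.21"
# or pin numpy permanently for this environment:
echo "numpy>=1.21" >> "$CONDA_PREFIX/conda-meta/pinned"
conda install scipy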

Related

ValueError: invalid literal for int() with base 10: '' while building tensorflow from source with gpu support [duplicate]

When I install tensorflow-gpu through Conda, it gives me the following output:
conda install tensorflow-gpu
Collecting package metadata (current_repodata.json): done
Solving environment: done
## Package Plan ##
environment location: /home/psychotechnopath/anaconda3/envs/DeepLearning3.6
added / updated specs:
- tensorflow-gpu
The following packages will be downloaded:
package | build
---------------------------|-----------------
_tflow_select-2.1.0 | gpu 2 KB
cudatoolkit-10.1.243 | h6bb024c_0 347.4 MB
cudnn-7.6.5 | cuda10.1_0 179.9 MB
cupti-10.1.168 | 0 1.4 MB
tensorflow-2.1.0 |gpu_py36h2e5cdaa_0 4 KB
tensorflow-base-2.1.0 |gpu_py36h6c5654b_0 155.9 MB
tensorflow-gpu-2.1.0 | h0d30ee6_0 3 KB
------------------------------------------------------------
Total: 684.7 MB
The following NEW packages will be INSTALLED:
cudatoolkit pkgs/main/linux-64::cudatoolkit-10.1.243-h6bb024c_0
cudnn pkgs/main/linux-64::cudnn-7.6.5-cuda10.1_0
cupti pkgs/main/linux-64::cupti-10.1.168-0
tensorflow-gpu pkgs/main/linux-64::tensorflow-gpu-2.1.0-h0d30ee6_0
I see that installing tensorflow-gpu automatically triggers the installation of the cudatoolkit and cudnn. Does this mean that I no longer need to install CUDA and CUDNN manually to be able to use tensorflow-gpu? Where does this conda installation of CUDA reside?
I first installed CUDA and CuDNN the old way (e.g. by following these installation instructions: https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html )
And then I noticed that tensorflow-gpu was also installing cuda and cudnn.
Do I now have two versions of CUDA/CuDNN installed, and how do I check this?
Do I now have two versions of CUDA installed and how do I check this?
No.
conda installs the bare minimum of redistributable library components required to support the CUDA-accelerated packages it offers. The package name cudatoolkit is a complete misnomer: it is nothing of the sort. Even though it is now greatly expanded in scope from what it used to be (literally 5 files; I think at some point they must have gotten a licensing deal from NVIDIA, because some of this wasn't/isn't on the official "freely redistributable" list AFAIK), it is still basically just a handful of libraries.
You can check this for yourself:
cat /opt/miniconda3/conda-meta/cudatoolkit-10.1.168-0.json
{
"build": "0",
"build_number": 0,
"channel": "https://repo.anaconda.com/pkgs/main/linux-64",
"constrains": [],
"depends": [],
"extracted_package_dir": "/opt/miniconda3/pkgs/cudatoolkit-10.1.168-0",
"features": "",
"files": [
"lib/cudatoolkit_config.yaml",
"lib/libcublas.so",
"lib/libcublas.so.10",
"lib/libcublas.so.10.2.0.168",
"lib/libcublasLt.so",
"lib/libcublasLt.so.10",
"lib/libcublasLt.so.10.2.0.168",
"lib/libcudart.so",
"lib/libcudart.so.10.1",
"lib/libcudart.so.10.1.168",
"lib/libcufft.so",
"lib/libcufft.so.10",
"lib/libcufft.so.10.1.168",
"lib/libcufftw.so",
"lib/libcufftw.so.10",
"lib/libcufftw.so.10.1.168",
"lib/libcurand.so",
"lib/libcurand.so.10",
"lib/libcurand.so.10.1.168",
"lib/libcusolver.so",
"lib/libcusolver.so.10",
"lib/libcusolver.so.10.1.168",
"lib/libcusparse.so",
"lib/libcusparse.so.10",
"lib/libcusparse.so.10.1.168",
"lib/libdevice.10.bc",
"lib/libnppc.so",
"lib/libnppc.so.10",
"lib/libnppc.so.10.1.168",
"lib/libnppial.so",
"lib/libnppial.so.10",
"lib/libnppial.so.10.1.168",
"lib/libnppicc.so",
"lib/libnppicc.so.10",
"lib/libnppicc.so.10.1.168",
"lib/libnppicom.so",
"lib/libnppicom.so.10",
"lib/libnppicom.so.10.1.168",
"lib/libnppidei.so",
"lib/libnppidei.so.10",
"lib/libnppidei.so.10.1.168",
"lib/libnppif.so",
"lib/libnppif.so.10",
"lib/libnppif.so.10.1.168",
"lib/libnppig.so",
"lib/libnppig.so.10",
"lib/libnppig.so.10.1.168",
"lib/libnppim.so",
"lib/libnppim.so.10",
"lib/libnppim.so.10.1.168",
"lib/libnppist.so",
"lib/libnppist.so.10",
"lib/libnppist.so.10.1.168",
"lib/libnppisu.so",
"lib/libnppisu.so.10",
"lib/libnppisu.so.10.1.168",
"lib/libnppitc.so",
"lib/libnppitc.so.10",
"lib/libnppitc.so.10.1.168",
"lib/libnpps.so",
"lib/libnpps.so.10",
"lib/libnpps.so.10.1.168",
"lib/libnvToolsExt.so",
"lib/libnvToolsExt.so.1",
"lib/libnvToolsExt.so.1.0.0",
"lib/libnvblas.so",
"lib/libnvblas.so.10",
"lib/libnvblas.so.10.2.0.168",
"lib/libnvgraph.so",
"lib/libnvgraph.so.10",
"lib/libnvgraph.so.10.1.168",
"lib/libnvjpeg.so",
"lib/libnvjpeg.so.10",
"lib/libnvjpeg.so.10.1.168",
"lib/libnvrtc-builtins.so",
"lib/libnvrtc-builtins.so.10.1",
"lib/libnvrtc-builtins.so.10.1.168",
"lib/libnvrtc.so",
"lib/libnvrtc.so.10.1",
"lib/libnvrtc.so.10.1.168",
"lib/libnvvm.so",
"lib/libnvvm.so.3",
"lib/libnvvm.so.3.3.0"
]
.....
I.e., what you get is (keeping in mind that most of those "files" above are just symlinks):
CUBLAS runtime
The CUDA runtime library
CUFFT runtime
CUrand runtime
CUsparse runtime
CUsolver runtime
NPP runtime
nvblas runtime
NVTX runtime
NVgraph runtime
NVjpeg runtime
NVRTC/NVVM runtime
The CUDNN package that conda installs is the redistributable binary distribution, identical to what NVIDIA distributes: exactly two files, a header file and a library.
You would still need a supported NVIDIA driver installation to make the tensorflow that conda installs work.
If you want to actually compile and build CUDA code, you need to install a separate CUDA toolkit, which contains all the development components that conda deliberately omits from its distribution.
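To answer the "do I now have two copies?" check concretely, a quick way (a sketch; the paths below are typical defaults and the env location comes from the question, so adjust to your system) is to look for libcudart in both places:
# system-wide CUDA toolkit installed the "old way":
ls /usr/local/cuda*/lib64/libcudart.so* 2>/dev/null
# conda's redistributable copy inside the environment:
ls ~/anaconda3/envs/DeepLearning3.6/lib/libcudart.so* 2>/dev/null
If both commands print files, you have two independent copies; the one your process uses is decided by the runtime linker, not by conda.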

Conda package bug? binary incompatibility

I'm working in a remote Jupyter notebook on a system where I don't have root access, or even a shell in which to make many adjustments. I can retrieve packages from Conda's archive and run functions in notebook cells that install packages like this
!conda install /path/to/package-vvv.tar.bz2
I've run into situations where I guess wrong on the version number and install something that is incompatible. The error messages are like the one I reproduce below: a binary incompatibility in numpy or mkl.
Now I'm re-tracing the problem on an Ubuntu 20.10 notebook where I have admin access, and I have a reproducible example to show and share.
Create an environment with the same versions of python, numpy, and pandas as we have on the remote machine:
$ conda create -n cenv-py368 python=3.6.8 pandas=1.1.2 numpy=1.15.4
Solving environment: done
==> WARNING: A newer version of conda exists. <==
current version: 4.5.12
latest version: 4.9.2
Please update conda by running
$ conda update -n base -c defaults conda
## Package Plan ##
environment location: /home/pauljohn/LinuxDownloads/miniconda3/envs/cenv-py368
added / updated specs:
- numpy=1.15.4
- pandas=1.1.2
- python=3.6.8
The following packages will be downloaded:
package | build
---------------------------|-----------------
libffi-3.2.1 | hf484d3e_1007 52 KB
python-3.6.8 | h0371630_0 34.4 MB
libgcc-ng-9.1.0 | hdf63c60_0 8.1 MB
libstdcxx-ng-9.1.0 | hdf63c60_0 4.0 MB
blas-1.0 | mkl 6 KB
_libgcc_mutex-0.1 | main 3 KB
------------------------------------------------------------
Total: 46.6 MB
The following NEW packages will be INSTALLED:
_libgcc_mutex: 0.1-main
blas: 1.0-mkl
ca-certificates: 2021.1.19-h06a4308_0
certifi: 2020.12.5-py36h06a4308_0
intel-openmp: 2020.2-254
libedit: 3.1.20191231-h14c3975_1
libffi: 3.2.1-hf484d3e_1007
libgcc-ng: 9.1.0-hdf63c60_0
libgfortran-ng: 7.3.0-hdf63c60_0
libstdcxx-ng: 9.1.0-hdf63c60_0
mkl: 2020.2-256
mkl-service: 2.3.0-py36he8ac12f_0
mkl_fft: 1.2.0-py36h23d657b_0
mkl_random: 1.1.1-py36h0573a6f_0
ncurses: 6.2-he6710b0_1
numpy: 1.15.4-py36h7e9f1db_0
numpy-base: 1.15.4-py36hde5b4d6_0
openssl: 1.1.1i-h27cfd23_0
pandas: 1.1.2-py36he6710b0_0
pip: 20.3.3-py36h06a4308_0
python: 3.6.8-h0371630_0
python-dateutil: 2.8.1-pyhd3eb1b0_0
pytz: 2021.1-pyhd3eb1b0_0
readline: 7.0-h7b6447c_5
setuptools: 52.0.0-py36h06a4308_0
six: 1.15.0-pyhd3eb1b0_0
sqlite: 3.33.0-h62c20be_0
tk: 8.6.10-hbc83047_0
wheel: 0.36.2-pyhd3eb1b0_0
xz: 5.2.5-h7b6447c_0
zlib: 1.2.11-h7b6447c_3
Proceed ([y]/n)? y
Downloading and Extracting Packages
libffi-3.2.1 | 52 KB | ##################################### | 100%
python-3.6.8 | 34.4 MB | ##################################### | 100%
libgcc-ng-9.1.0 | 8.1 MB | ##################################### | 100%
libstdcxx-ng-9.1.0 | 4.0 MB | ##################################### | 100%
blas-1.0 | 6 KB | ##################################### | 100%
_libgcc_mutex-0.1 | 3 KB | ##################################### | 100%
Preparing transaction: done
Verifying transaction: done
Executing transaction: done
#
# To activate this environment, use
#
# $ conda activate cenv-py368
#
# To deactivate an active environment, use
#
# $ conda deactivate
Activate that environment.
Install, for example, the package called "fastparquet":
(cenv-py368) $ conda install fastparquet
Solving environment: done
==> WARNING: A newer version of conda exists. <==
current version: 4.5.12
latest version: 4.9.2
Please update conda by running
$ conda update -n base -c defaults conda
## Package Plan ##
environment location: /home/pauljohn/LinuxDownloads/miniconda3/envs/cenv-py368
added / updated specs:
- fastparquet
The following packages will be downloaded:
package | build
---------------------------|-----------------
pyparsing-2.4.7 | pyhd3eb1b0_0 59 KB
packaging-20.9 | pyhd3eb1b0_0 35 KB
------------------------------------------------------------
Total: 95 KB
The following NEW packages will be INSTALLED:
fastparquet: 0.5.0-py36h6323ea4_1
libllvm10: 10.0.1-hbcb73fb_5
llvmlite: 0.34.0-py36h269e1b5_4
numba: 0.51.2-py36h0573a6f_1
packaging: 20.9-pyhd3eb1b0_0
pyparsing: 2.4.7-pyhd3eb1b0_0
thrift: 0.11.0-py36hf484d3e_0
Proceed ([y]/n)? y
Downloading and Extracting Packages
pyparsing-2.4.7 | 59 KB | ##################################### | 100%
packaging-20.9 | 35 KB | ##################################### | 100%
Preparing transaction: done
Verifying transaction: done
Executing transaction: done
Observe the failure of the import:
(cenv-py368) $ python
Python 3.6.8 |Anaconda, Inc.| (default, Dec 30 2018, 01:22:34)
[GCC 7.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import fastparquet
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/pauljohn/LinuxDownloads/miniconda3/envs/cenv-py368/lib/python3.6/site-packages/fastparquet/__init__.py", line 5, in <module>
from .core import read_thrift
File "/home/pauljohn/LinuxDownloads/miniconda3/envs/cenv-py368/lib/python3.6/site-packages/fastparquet/core.py", line 9, in <module>
from . import encoding
File "/home/pauljohn/LinuxDownloads/miniconda3/envs/cenv-py368/lib/python3.6/site-packages/fastparquet/encoding.py", line 13, in <module>
from .speedups import unpack_byte_array
File "fastparquet/speedups.pyx", line 1, in init fastparquet.speedups
ValueError: numpy.ufunc size changed, may indicate binary incompatibility. Expected 216 from C header, got 192 from PyObject
Do you agree I found a bug?
Seems like either Conda should work, or it should say there is no compatible version of fastparquet.
That error usually indicates that the installed NumPy is older than the one the library using it (in this case fastparquet) was compiled against. Try updating Python to 3.7 or 3.8; Python 3.6 and NumPy 1.15 are not within the recommended versions today. (Updating Python to 3.7+ should also pull in a newer NumPy; this does not usually happen when you just run conda update ....) Some recipes pin NumPy to >= some minimum version; this one did not seem to.
https://numpy.org/neps/nep-0029-deprecation_policy.html#support-table
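A minimal sketch of that suggestion (assuming the defaults channel has compatible builds; the exact versions the solver picks may differ):
conda create -n cenv-py38 python=3.8 pandas numpy
conda activate cenv-py38
conda install fastparquet   # now resolves against the newer numpy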
It is a flaw in the preparation of some of the Python libraries you are importing. When the authors of a package like fastparquet do not correctly declare the minimum compatible versions of numpy or python for their package, Conda's environment reconciliation has no way to know the metadata is wrong. Conda offers up the package as a solution, although in fact it is not.
In a larger sense, this is a flaw in the way Conda finds compatible packages. Perhaps it is working as intended, so it is not a bug. But it is a flaw in the sense that when the user pins numpy=1.15, the correct answer from Conda should be "there is no compatible package". However, because Conda relies on the version dependencies declared by contributed packages, it is not able to say so.
I've not encountered the same problem with packaging for RedHat or Debian Linux systems; they tend to report "nothing" rather than provide an inaccurate match.

python tensorflow module dependency on glibc

I successfully built bazel and tensorflow from source, but when using the tensorflow module I am getting the following error:
./new_python/bin/python
>>>import tensorflow as tf
Error MSG: File "/home/niraj/Ansible/new_python/lib/python2.7/site-packages/tensorflow/python/pywrap_tensorflow.py", line 28, in <module> _pywrap_tensorflow = swig_import_helper()
ImportError: /lib64/libc.so.6: version `GLIBC_2.14' not found (required by /home/niraj/Ansible/new_python/lib/python2.7/site-packages/tensorflow/python/_pywrap_tensorflow.so)
I am using RHEL6 machine. Any idea how to fix this ?
I found two bug reports on GitHub regarding this very problem:
https://github.com/tensorflow/tensorflow/issues/110
https://github.com/bazelbuild/bazel/issues/760
My impression from those two bug reports is that getting tensorflow to work on RHEL 6 is 'difficult' at best - some commenters claim they got it to work, with some limitations - if not, for now, impossible.
At least for Ubuntu 12.04 and CentOS 6.7 there are solutions; the second answer (the one that mentions CentOS) should work on RHEL 6 as well.
Old/First answer:
According to the link I gathered from this answer, RHEL 6 ships with libc 2.12, not 2.14.
You would have to compile the tensorflow stuff again and link it to an existing libc 2.14 on your system. I'm not quite sure how you were able to compile it without already having libc 2.14 somewhere on your system.
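Before rebuilding anything, it is worth confirming what your system's libc actually provides (a quick check, assuming the standard /lib64 layout on RHEL):
/lib64/libc.so.6                          # executing glibc directly prints its version
strings /lib64/libc.so.6 | grep GLIBC_    # list the version symbols it exports
If GLIBC_2.14 is absent from that list, the import error above is expected.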
What did the trick for me was updating glibc (in my case to version 2.17):
wget http://copr-be.cloud.fedoraproject.org/results/mosquito/myrepo-el6/epel-6-x86_64/glibc-2.17-55.fc20/glibc-2.17-55.el6.x86_64.rpm
wget http://copr-be.cloud.fedoraproject.org/results/mosquito/myrepo-el6/epel-6-x86_64/glibc-2.17-55.fc20/glibc-common-2.17-55.el6.x86_64.rpm
wget http://copr-be.cloud.fedoraproject.org/results/mosquito/myrepo-el6/epel-6-x86_64/glibc-2.17-55.fc20/glibc-devel-2.17-55.el6.x86_64.rpm
wget http://copr-be.cloud.fedoraproject.org/results/mosquito/myrepo-el6/epel-6-x86_64/glibc-2.17-55.fc20/glibc-headers-2.17-55.el6.x86_64.rpm
sudo rpm -Uvh glibc-2.17-55.el6.x86_64.rpm \
glibc-common-2.17-55.el6.x86_64.rpm \
glibc-devel-2.17-55.el6.x86_64.rpm \
glibc-headers-2.17-55.el6.x86_64.rpm --force --nodeps
I link the original answer.

Unable to import matplotlib._png (pylab)

I am unable to import matplotlib._png:
import matplotlib._png as _png
ImportError: /home/james/opt/python/virtualenvs/work/lib/python2.7/site-packages/matplotlib-1.3.x-py2.7-linux-x86_64.egg/matplotlib/_png.so: undefined symbol: png_set_longjmp_fn
This error prevents me from running import pylab (since this ultimately imports matplotlib._png).
I installed matplotlib from source, and made sure to add the path with local installations (/home/james/local) to basedir in setupext.py before running python setup.py install.
REQUIRED DEPENDENCIES AND EXTENSIONS
numpy: yes [version 1.7.1]
dateutil: yes [using dateutil version 2.1]
tornado: yes [using tornado version 3.0.1]
pyparsing: yes [using pyparsing version 1.5.7]
pycxx: yes [Couldn't import. Using local copy.]
libagg: yes [pkg-config information for 'libagg' could not
be found Using local copy.]
freetype: yes [version 16.0.10]
png: yes [version 1.2.10]
My research so far:
As can be seen above, matplotlib seems to find version 1.2.10 even though the version that I have under /home/james/local is 1.6.2:
$ find . -iname '*libpng*'
./libpng16.so.16.1.0
./libpng16.so
./libpng16.so.16
./libpng16.a
./libpng.a
./libpng.so
./libpng16.la
./pkgconfig/libpng.pc
./pkgconfig/libpng16.pc
./libpng.la
More specifically, I modified the following line in setupext.py:
return basedir_map.get(sys.platform, ['/home/james/local', '/usr/local', '/usr'])
but matplotlib seems to have found the system version:
$ locate libpng
/usr/lib/libpng.so
/usr/lib/libpng.so.3
/usr/lib/libpng.so.3.10.0
/usr/lib/libpng12.a
/usr/lib/libpng12.so
/usr/lib/libpng12.so.0
/usr/lib/libpng12.so.0.10.0
Could this be the problem? Why am I unable to import matplotlib._png?
Update:
Looking at setupext.py, it looks like python setup.py install queries pkg-config (through the SetupPackage method _check_for_pkg_config) to determine the version of libpng I have installed. It turns out that pkg-config is returning the system installation:
$ pkg-config --libs libpng
-lpng12
even though I have updated basedir in matplotlib's setupext.py, and LD_LIBRARY_PATH, to make them point to the more recent version of libpng that I have locally installed.
Any ideas on how to have pkg-config return the right version?
It's a pkg-config issue; matplotlib's installation is (unfortunately, or perhaps not) relying too much on pkg-config's output.
Assuming you have built libpng the normal way, there should be a pkgconfig subdirectory in your /home/james/local/lib containing libpng.pc (and libpng16.pc). When setupext.py runs pkg-config, the latter should of course try to pick up the correct .pc file for libpng. For that, use the PKG_CONFIG_PATH environment variable and point it to the pkgconfig subdirectory:
$ export PKG_CONFIG_PATH=/home/james/local/lib/pkgconfig
Then, install matplotlib again, and see that it now finds the correct libpng version:
$ python setup.py build
basedirlist is: ['/usr/local', '/usr']
============================================================================
BUILDING MATPLOTLIB
matplotlib: 1.1.0
python: 2.7.4 (default, Apr 8 2013, 16:36:47) [GCC 4.4.5]
platform: linux2
REQUIRED DEPENDENCIES
numpy: 1.7.0
freetype2: 12.0.6
OPTIONAL BACKEND DEPENDENCIES
libpng: 1.6.1
Tkinter: Tkinter: 81008, Tk: 8.4, Tcl: 8.4
(For me, with a different PKG_CONFIG_PATH of course. Yes, I may want to upgrade some dependencies.)
Note that I didn't even alter basedirlist; it's just at its default.
In case pkg-config fails to now pick up some other package, just add more directories to PKG_CONFIG_PATH with colons in between. But I guess this should be enough.
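To confirm the fix before rebuilding (a sketch; --modversion and --libs are standard pkg-config queries), check that pkg-config now resolves the local copy:
export PKG_CONFIG_PATH=/home/james/local/lib/pkgconfig
pkg-config --modversion libpng   # should now print 1.6.x instead of 1.2.x
pkg-config --libs libpng         # should now print -lpng16 instead of -lpng12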
Try
export LD_LIBRARY_PATH=/home/james/local/lib
and then run matplotlib; that would point matplotlib at your local version.

Eye of Gnome Python plugins won't autogen because check for PYGTK fails

Presenting symptom: autogen disables the build of slideshowshuffle and pythonconsole, reporting "no python support." Platform is Ubuntu 9.04, Jaunty Jackalope; Gnome 2.26.1.
Log extract:
checking for a Python interpreter with version >= 2.3... python
checking for python... /usr/bin/python
checking for python version... 2.6
checking for python platform... linux2
checking for python script directory... ${prefix}/lib/python2.6/site-packages
checking for python extension module directory... ${exec_prefix}/lib/python2.6/site-packages
checking for PYGTK... no
configure: WARNING: Python not found, disabling python support
Evidence that both python and pygtk are installed:
Python 2.6.2 (release26-maint, Apr 19 2009, 01:56:41)
[GCC 4.3.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import pygtk
>>>
I note the capitalization of PYGTK, which is common for environment variables. There is no PYGTK environment variable.
Your search - "PYGTK environment
variable" - did not match any
documents.
A grep for PYGTK in the tree rooted from /usr/share/doc/python-gtk2-doc/html returned no rows.
Try installing "python-gtk2-dev" package. You can make sure you have it with
pkg-config --list-all | grep pygtk-2.0
I think the one you're using from Python is "python-gtk2".
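On Ubuntu that would be something like the following (a sketch; the package name is the one suggested above, and the pkg-config check mirrors what the configure script does internally):
sudo apt-get install python-gtk2-dev
pkg-config --list-all | grep pygtk-2.0   # should now print a pygtk-2.0 entry
./autogen.sh                             # re-run; the PYGTK check should pass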