pip install tensorflow 0.10.0rc0 failed - message about non-existent easy-install.pth file? - tensorflow

I was not able to install tensorflow 0.10.0rc0. I got an error, apparently from pip, suggesting that the install assumed there was an easy-install.pth file, but I don't have one. Is this a bug in tensorflow? I am using pip to install into a new conda environment; since almost everything else is a conda package, I guess there was never a need for an easy-install.pth file. I think I need to easy_install something, or make a dummy easy-install.pth file.
Gory details below:
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.10.0rc0-cp27-none-linux_x86_64.whl
(psreldev) psel701: /reg/g/psdm/sw/conda/inst $ pip install --upgrade $TF_BINARY_URL
Collecting tensorflow==0.10.0rc0 from https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.10.0rc0-cp27-none-linux_x86_64.whl
Downloading https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.10.0rc0-cp27-none-linux_x86_64.whl (33.3MB)
100% |################################| 33.3MB 20kB/s
Requirement already up-to-date: numpy>=1.8.2 in ./miniconda2-mlearn/lib/python2.7/site-packages (from tensorflow==0.10.0rc0)
Collecting mock>=2.0.0 (from tensorflow==0.10.0rc0)
Downloading mock-2.0.0-py2.py3-none-any.whl (56kB)
100% |################################| 61kB 1.5MB/s
Collecting protobuf==3.0.0b2 (from tensorflow==0.10.0rc0)
Using cached protobuf-3.0.0b2-py2.py3-none-any.whl
Requirement already up-to-date: wheel in ./miniconda2-mlearn/lib/python2.7/site-packages (from tensorflow==0.10.0rc0)
Requirement already up-to-date: six>=1.10.0 in ./miniconda2-mlearn/lib/python2.7/site-packages (from tensorflow==0.10.0rc0)
Collecting funcsigs>=1; python_version < "3.3" (from mock>=2.0.0->tensorflow==0.10.0rc0)
Downloading funcsigs-1.0.2-py2.py3-none-any.whl
Collecting pbr>=0.11 (from mock>=2.0.0->tensorflow==0.10.0rc0)
Downloading pbr-1.10.0-py2.py3-none-any.whl (96kB)
100% |################################| 102kB 1.2MB/s
Collecting setuptools (from protobuf==3.0.0b2->tensorflow==0.10.0rc0)
Downloading setuptools-25.1.1-py2.py3-none-any.whl (442kB)
100% |################################| 450kB 1.1MB/s
Installing collected packages: funcsigs, pbr, mock, setuptools, protobuf, tensorflow
Found existing installation: setuptools 23.0.0
Cannot remove entries from nonexistent file /reg/g/psdm/sw/conda/inst/miniconda2-mlearn/lib/python2.7/site-packages/easy-install.pth

From the TensorFlow documentation:
If using pip make sure to use the --ignore-installed flag to prevent errors about easy_install.
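The documented fix is the --ignore-installed flag above. Failing that, the asker's own idea of creating a dummy easy-install.pth can be sketched like this (a workaround sketch, assuming a standard site-packages layout; it only gives pip's uninstall bookkeeping a file to edit):

```python
# Workaround sketch: create an empty easy-install.pth so pip's
# uninstall step has a file to edit. Try --ignore-installed first;
# this only papers over the missing file.
import os
import site

site_packages = site.getsitepackages()[0]  # first site-packages dir of this env
pth_file = os.path.join(site_packages, "easy-install.pth")
if not os.path.exists(pth_file):
    open(pth_file, "a").close()  # touch an empty file
print(pth_file)
```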


How to view .mdb dataset on kaggle?

I load this dataset in this way:
Add data > Search for the dataset name (clova deep text ...) > Add
After the dataset is loaded and visible in the sidebar, I found a data.mdb and a lock.mdb file inside every subfolder. I need to examine the contents, view the images, view the labels... What should I do to open / view / modify this weird format?
Based on Luke's suggestion I tried apt install mbtools; the installation starts and I'm prompted to enter y/n, but I can't respond because the notebook cell doesn't accept input. If I pass -y I get an "unrecognized argument" error. Then I tried the following, which complains about a missing package.
pip install mbtools meza
from meza import io
records = io.read('/kaggle/input/clova-deeptext/clova_deeptext/data_lmdb_release/training/ST/data.mdb')
print(next(records))
result:
Collecting mbtools
Downloading mbtools-0.1.1-py3-none-any.whl (3.7 kB)
Collecting meza
Downloading meza-0.42.5-py2.py3-none-any.whl (55 kB)
|████████████████████████████████| 55 kB 165 kB/s eta 0:00:01
Collecting ijson<3.0.0,>=2.3
Downloading ijson-2.6.1-cp37-cp37m-manylinux1_x86_64.whl (65 kB)
|████████████████████████████████| 65 kB 352 kB/s eta 0:00:01
Collecting python-slugify<2.0.0,>=1.2.5
Downloading python-slugify-1.2.6.tar.gz (6.8 kB)
Collecting dbfread==2.0.4
Downloading dbfread-2.0.4-py2.py3-none-any.whl (19 kB)
Requirement already satisfied: beautifulsoup4<5.0.0,>=4.6.0 in /opt/conda/lib/python3.7/site-packages (from meza) (4.10.0)
Collecting pygogo<0.15.0,>=0.13.2
Downloading pygogo-0.13.2-py2.py3-none-any.whl (20 kB)
Requirement already satisfied: requests<3.0.0,>=2.18.4 in /opt/conda/lib/python3.7/site-packages (from meza) (2.25.1)
Collecting xlrd<2.0.0,>=1.1.0
Downloading xlrd-1.2.0-py2.py3-none-any.whl (103 kB)
|████████████████████████████████| 103 kB 1.0 MB/s eta 0:00:01
Requirement already satisfied: PyYAML<6.0.0,>=4.2b1 in /opt/conda/lib/python3.7/site-packages (from meza) (5.4.1)
Collecting chardet<4.0.0,>=3.0.4
Downloading chardet-3.0.4-py2.py3-none-any.whl (133 kB)
|████████████████████████████████| 133 kB 987 kB/s eta 0:00:01
Requirement already satisfied: python-dateutil<3.0.0,>=2.7.2 in /opt/conda/lib/python3.7/site-packages (from meza) (2.8.0)
Requirement already satisfied: soupsieve>1.2 in /opt/conda/lib/python3.7/site-packages (from beautifulsoup4<5.0.0,>=4.6.0->meza) (2.2.1)
Requirement already satisfied: six>=1.5 in /opt/conda/lib/python3.7/site-packages (from python-dateutil<3.0.0,>=2.7.2->meza) (1.15.0)
Requirement already satisfied: Unidecode>=0.04.16 in /opt/conda/lib/python3.7/site-packages (from python-slugify<2.0.0,>=1.2.5->meza) (1.2.0)
Requirement already satisfied: idna<3,>=2.5 in /opt/conda/lib/python3.7/site-packages (from requests<3.0.0,>=2.18.4->meza) (2.10)
Requirement already satisfied: certifi>=2017.4.17 in /opt/conda/lib/python3.7/site-packages (from requests<3.0.0,>=2.18.4->meza) (2021.5.30)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /opt/conda/lib/python3.7/site-packages (from requests<3.0.0,>=2.18.4->meza) (1.26.6)
Building wheels for collected packages: python-slugify
Building wheel for python-slugify (setup.py) ... done
Created wheel for python-slugify: filename=python_slugify-1.2.6-py2.py3-none-any.whl size=4609 sha256=8c4763108a666b347806541ae6fa0fb59656f9ea38406507f7c83fd06d7621e9
Stored in directory: /root/.cache/pip/wheels/c5/02/83/9904a9436aa0205c8daa9127109e9ed50d3eab25a5ea2fcb9f
Successfully built python-slugify
Installing collected packages: chardet, xlrd, python-slugify, pygogo, ijson, dbfread, meza, mbtools
Attempting uninstall: chardet
Found existing installation: chardet 4.0.0
Uninstalling chardet-4.0.0:
Successfully uninstalled chardet-4.0.0
Attempting uninstall: python-slugify
Found existing installation: python-slugify 5.0.2
Uninstalling python-slugify-5.0.2:
Successfully uninstalled python-slugify-5.0.2
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
caip-notebooks-serverextension 1.0.0 requires google-cloud-bigquery-storage, which is not installed.
jupyterlab-git 0.11.0 requires nbdime<2.0.0,>=1.1.0, but you have nbdime 3.1.0 which is incompatible.
gcsfs 2021.7.0 requires fsspec==2021.07.0, but you have fsspec 2021.8.1 which is incompatible.
earthengine-api 0.1.283 requires google-api-python-client<2,>=1.12.1, but you have google-api-python-client 1.8.0 which is incompatible.
aiobotocore 1.4.1 requires botocore<1.20.107,>=1.20.106, but you have botocore 1.21.44 which is incompatible.
Successfully installed chardet-3.0.4 dbfread-2.0.4 ijson-2.6.1 mbtools-0.1.1 meza-0.42.5 pygogo-0.13.2 python-slugify-1.2.6 xlrd-1.2.0
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
You must install [mdbtools](http://sourceforge.net/projects/mdbtools/) in order to use this function
Try using the following snippet:
!sudo apt install -y mdbtools # Library that allows access to MDB files programmatically (-y skips the prompt the notebook can't answer)
!pip install meza # A Python toolkit for processing tabular data
from meza import io
records = io.read('database.mdb') # Use only file path, not file objects
print(next(records))

Building wheel for pandas on Ubuntu 20.04 takes more than 20 minutes, but not on 18.04

I have an installation script for ERPNext that works just fine on Ubuntu 18.04.
When I run the same script on 20.04 I am obliged to wait more than 20 minutes for it to complete, whereas it takes around 30 secs on 18.04.
My script includes these two lines:
./env/bin/pip install numpy==1.18.5
./env/bin/pip install pandas==0.24.2
Their output is:
Collecting numpy==1.18.5
Downloading numpy-1.18.5-cp38-cp38-manylinux1_x86_64.whl (20.6 MB)
|████████████████████████████████| 20.6 MB 138 kB/s
Installing collected packages: numpy
Successfully installed numpy-1.18.5
Collecting pandas==0.24.2
Downloading pandas-0.24.2.tar.gz (11.8 MB)
|████████████████████████████████| 11.8 MB 18.0 MB/s
Requirement already satisfied: python-dateutil>=2.5.0 in ./env/lib/python3.8/site-packages (from pandas==0.24.2) (2.8.1)
Requirement already satisfied: pytz>=2011k in ./env/lib/python3.8/site-packages (from pandas==0.24.2) (2019.3)
Requirement already satisfied: numpy>=1.12.0 in ./env/lib/python3.8/site-packages (from pandas==0.24.2) (1.18.5)
Requirement already satisfied: six>=1.5 in ./env/lib/python3.8/site-packages (from python-dateutil>=2.5.0->pandas==0.24.2) (1.13.0)
Building wheels for collected packages: pandas
Building wheel for pandas (setup.py) ... done
Created wheel for pandas: filename=pandas-0.24.2-cp38-cp38-linux_x86_64.whl size=43655329 sha256=0067caf3a351f263bec1f4aaa3e11c5857d0434db7f56bec7135f3c3f16c8c2b
Stored in directory: /home/erpdev/.cache/pip/wheels/3d/17/1e/85f3aefe44d39a0b4055971ba075fa082be49dcb831db4e4ae
Successfully built pandas
Installing collected packages: pandas
Successfully installed pandas-0.24.2
The line "Building wheel for pandas (setup.py) ... /" is where the 20 min delay occurs.
This is all run from within the Frappe/ERPnext command directory, which has an embedded copy of pip3, like this:
erpdev@erpserver:~$ cd ~/frappe-bench/
erpdev@erpserver:~/frappe-bench$ ./env/bin/pip --version
pip 20.1.1 from /home/erpdev/frappe-bench/env/lib/python3.8/site-packages/pip (python 3.8)
erpdev@erpserver:~/frappe-bench$
I would be most grateful for any suggestions how to speed it up.
I just updated pip using pip install --upgrade pip and that solved it.
Your issue may be less to do with your distribution and more to be with the Python version in your virtualenv. Ubuntu 20.04 has its default Python pointing to 3.8.
pip searches the pandas project listing on PyPI for a build that's compatible with your system, as provided by the project maintainers.
It seems you're using CPython 3.8. pandas==0.24.2 does not have wheels built for that version, so your system builds one from source each time. You can check the available download files from here.
Possible Solutions:
While creating your env, check out this answer to generate a virtual environment for a different Python version. It seems your options are 3.5, 3.6 and 3.7.
Build a wheel for CPython 3.8 yourself and ship it along with your script. You can then install the package from that wheel.
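The compatibility check described above can be sketched as follows; the set of tags with prebuilt wheels is read off the pandas 0.24.2 file list on PyPI, so treat it as an assumption to verify there:

```python
# Sketch: decide whether pip will find a prebuilt pandas 0.24.2 wheel
# for the running interpreter, or fall back to building from source.
import sys

def wheel_tag(major, minor):
    """CPython tag as used in wheel filenames, e.g. cp38."""
    return f"cp{major}{minor}"

# Tags with prebuilt pandas 0.24.2 Linux wheels (per its PyPI file list).
PREBUILT = {"cp27", "cp35", "cp36", "cp37"}

tag = wheel_tag(*sys.version_info[:2])
print(tag, "prebuilt" if tag in PREBUILT else "will build from source")
```

On Ubuntu 20.04's default Python this prints cp38, which is why the 20-minute source build kicks in.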

pandas installation error using pip installer

I am getting the following error repeatedly while installing pandas through the pip installer for Python 3.7 in the command prompt:
Using cached https://files.pythonhosted.org/packages/26/fc/d0509d445d2724fbc5f9c9a6fc9ce7da794873469739b6c94afc166ac2a2/pandas-0.23.4-cp37-cp37m-win32.whl
Collecting pytz>=2011k (from pandas)
Downloading https://files.pythonhosted.org/packages/61/28/1d3920e4d1d50b19bc5d24398a7cd85cc7b9a75a490570d5a30c57622d34/pytz-2018.9-py2.py3-none-any.whl (510kB)
100% |████████████████████████████████| 512kB 204kB/s
Collecting python-dateutil>=2.5.0 (from pandas)
Downloading https://files.pythonhosted.org/packages/74/68/d87d9b36af36f44254a8d512cbfc48369103a3b9e474be9bdfe536abfc45/python_dateutil-2.7.5-py2.py3-none-any.whl (225kB)
100% |████████████████████████████████| 235kB 187kB/s
Collecting numpy>=1.9.0 (from pandas)
Downloading https://files.pythonhosted.org/packages/94/b5/f4bdf7bce5f8b35a2a83a0b70c545ca061a50b54724b5287505064906b14/numpy-1.16.0-cp37-cp37m-win32.whl (10.0MB)
100% |████████████████████████████████| 10.0MB 139kB/s
Could not install packages due to an EnvironmentError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\Users\\LENOVO\\AppData\\Local\\Temp\\pip-req-tracker-hwub07hg\\29e06807d5aed8dd372ea37c64d1e88dc172ee212d473a412d5e638c'
Consider using the `--user` option or check the permissions.
I have run the command as administrator but it didn't work.
Try to upgrade as Jonny suggested. Restart your computer, upgrade pip, and try installing pandas with the Unofficial Windows Binaries for Python Extension Packages, or pip install --user <package_name>.
Also take a look at PermissionError: [WinError 32] The process cannot access the file because it is being used by another process.
Hope it was helpful.

Make colab use the latest installation of a library

I am trying to use the bleeding-edge version of sklearn, installing it from their GitHub as shown on line 2 in the image below. Line 5 imports some functions from this version of sklearn. This works locally but not on Google Colab. Am I missing something to hint the tool to use the latest installed version and not its cached one?
I am not sure why that's happening, but if you uninstall scikit-learn before installing the latest dev version, it works:
[1] !pip uninstall scikit-learn -y
Uninstalling scikit-learn-0.19.1:
Successfully uninstalled scikit-learn-0.19.1
[2] !pip install Cython
!pip install git+git://github.com/scikit-learn/scikit-learn.git
Requirement already satisfied: Cython in /usr/local/lib/python3.6/dist-packages (0.28.2)
Collecting git+git://github.com/scikit-learn/scikit-learn.git
Cloning git://github.com/scikit-learn/scikit-learn.git to /tmp/pip-req-build-d59ukisw
Requirement already satisfied: numpy>=1.8.2 in /usr/local/lib/python3.6/dist-packages (from scikit-learn==0.20.dev0) (1.14.3)
Requirement already satisfied: scipy>=0.13.3 in /usr/local/lib/python3.6/dist-packages (from scikit-learn==0.20.dev0) (0.19.1)
Building wheels for collected packages: scikit-learn
Running setup.py bdist_wheel for scikit-learn ... done
Stored in directory: /tmp/pip-ephem-wheel-cache-is88dk15/wheels/a1/50/0e/316ef2ff8d4cfade292bd20b49efda94727688a153382745a6
Successfully built scikit-learn
Installing collected packages: scikit-learn
Successfully installed scikit-learn-0.20.dev0
[3] !pip freeze | grep scikit
scikit-image==0.13.1
scikit-learn==0.20.dev0
[4] from sklearn.preprocessing import CategoricalEncoder
[5] import sklearn
sklearn.__version__
'0.20.dev0'
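If uninstalling first isn't an option, a stale import can sometimes be refreshed in place. Note the caveat, though: importlib.reload only refreshes the top-level module object, so restarting the Colab runtime (Runtime > Restart runtime) after reinstalling is the reliable fix. A minimal sketch, using json as a stand-in for sklearn so it runs anywhere:

```python
# Sketch: re-import an already-loaded module in place. Submodules and
# names bound via "from x import y" keep their old objects, which is
# why a runtime restart is usually safer after reinstalling a package.
import importlib
import json  # stand-in for sklearn, so the sketch runs anywhere

json = importlib.reload(json)
print(json.__name__)
```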

Whl file for tensorflow installation on windows

I installed pip via get-pip.py script
$ python get-pip.py --proxy="proxy.intranet.com:8080"
Collecting pip
Downloading pip-8.1.2-py2.py3-none-any.whl (1.2MB)
100% |████████████████████████████████| 1.2MB 559kB/s
Collecting wheel
Downloading wheel-0.29.0-py2.py3-none-any.whl (66kB)
100% |████████████████████████████████| 71kB 2.9MB/s
Installing collected packages: pip, wheel
Successfully installed pip-8.1.2 wheel-0.29.0
It worked fine, and when I tried upgrading, it was already the latest version.
$ python -m pip install -U pip
Requirement already up-to-date: pip in /usr/lib/python2.7/site-packages
Now, when I try to install TensorFlow on Windows using the command below, it doesn't work:
$ pip install --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.5.0-cp27-none-linux_x86_64.whl
**tensorflow-0.5.0-cp27-none-linux_x86_64.whl is not a supported wheel on this platform**.
I tried to search for a wheel file for Windows but couldn't find one. Does anyone know the location of the whl file? Thanks in advance!
TensorFlow is only compatible with 64-bit Python; ensure that your Python installation is not 32-bit. Also note that the wheel you are pointing at is a Linux build (linux_x86_64 in the filename), which will never install on Windows.
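A quick way to confirm whether the running Python is a 32- or 64-bit build:

```python
# Prints 64 on a 64-bit Python build, 32 on a 32-bit one.
import struct

bits = struct.calcsize("P") * 8  # pointer size in bits
print(bits)
```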
TensorFlow is now available on Windows, from version 0.12 onwards. You can install the pip package from PyPI using the following command (for the CPU-only build):
C:\> pip install tensorflow
...or the following command if you have a CUDA 8.0-compatible GPU:
C:\> pip install tensorflow-gpu