How to view a .mdb dataset on Kaggle?

I loaded this dataset as follows:
Add data > Search for the dataset name (clova deep text ...) > Add
After the dataset was loaded and visible in the sidebar, I found a data.mdb and a lock.mdb inside every subfolder. I need to examine the contents: view the images, view the labels, and so on. What should I do to open, view, or modify this unusual format?
Based on Luke's suggestion I tried apt install mbtools: the installation starts, but I'm prompted to enter y/n and can't respond because the notebook cell doesn't accept input. If I try passing -y I get an "unrecognized argument" error. Then I tried the following, which complains about a missing package.
pip install mbtools meza
from meza import io
records = io.read('/kaggle/input/clova-deeptext/clova_deeptext/data_lmdb_release/training/ST/data.mdb')
print(next(records))
result:
Collecting mbtools
Downloading mbtools-0.1.1-py3-none-any.whl (3.7 kB)
Collecting meza
Downloading meza-0.42.5-py2.py3-none-any.whl (55 kB)
|████████████████████████████████| 55 kB 165 kB/s eta 0:00:01
Collecting ijson<3.0.0,>=2.3
Downloading ijson-2.6.1-cp37-cp37m-manylinux1_x86_64.whl (65 kB)
|████████████████████████████████| 65 kB 352 kB/s eta 0:00:01
Collecting python-slugify<2.0.0,>=1.2.5
Downloading python-slugify-1.2.6.tar.gz (6.8 kB)
Collecting dbfread==2.0.4
Downloading dbfread-2.0.4-py2.py3-none-any.whl (19 kB)
Requirement already satisfied: beautifulsoup4<5.0.0,>=4.6.0 in /opt/conda/lib/python3.7/site-packages (from meza) (4.10.0)
Collecting pygogo<0.15.0,>=0.13.2
Downloading pygogo-0.13.2-py2.py3-none-any.whl (20 kB)
Requirement already satisfied: requests<3.0.0,>=2.18.4 in /opt/conda/lib/python3.7/site-packages (from meza) (2.25.1)
Collecting xlrd<2.0.0,>=1.1.0
Downloading xlrd-1.2.0-py2.py3-none-any.whl (103 kB)
|████████████████████████████████| 103 kB 1.0 MB/s eta 0:00:01
Requirement already satisfied: PyYAML<6.0.0,>=4.2b1 in /opt/conda/lib/python3.7/site-packages (from meza) (5.4.1)
Collecting chardet<4.0.0,>=3.0.4
Downloading chardet-3.0.4-py2.py3-none-any.whl (133 kB)
|████████████████████████████████| 133 kB 987 kB/s eta 0:00:01
Requirement already satisfied: python-dateutil<3.0.0,>=2.7.2 in /opt/conda/lib/python3.7/site-packages (from meza) (2.8.0)
Requirement already satisfied: soupsieve>1.2 in /opt/conda/lib/python3.7/site-packages (from beautifulsoup4<5.0.0,>=4.6.0->meza) (2.2.1)
Requirement already satisfied: six>=1.5 in /opt/conda/lib/python3.7/site-packages (from python-dateutil<3.0.0,>=2.7.2->meza) (1.15.0)
Requirement already satisfied: Unidecode>=0.04.16 in /opt/conda/lib/python3.7/site-packages (from python-slugify<2.0.0,>=1.2.5->meza) (1.2.0)
Requirement already satisfied: idna<3,>=2.5 in /opt/conda/lib/python3.7/site-packages (from requests<3.0.0,>=2.18.4->meza) (2.10)
Requirement already satisfied: certifi>=2017.4.17 in /opt/conda/lib/python3.7/site-packages (from requests<3.0.0,>=2.18.4->meza) (2021.5.30)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /opt/conda/lib/python3.7/site-packages (from requests<3.0.0,>=2.18.4->meza) (1.26.6)
Building wheels for collected packages: python-slugify
Building wheel for python-slugify (setup.py) ... done
Created wheel for python-slugify: filename=python_slugify-1.2.6-py2.py3-none-any.whl size=4609 sha256=8c4763108a666b347806541ae6fa0fb59656f9ea38406507f7c83fd06d7621e9
Stored in directory: /root/.cache/pip/wheels/c5/02/83/9904a9436aa0205c8daa9127109e9ed50d3eab25a5ea2fcb9f
Successfully built python-slugify
Installing collected packages: chardet, xlrd, python-slugify, pygogo, ijson, dbfread, meza, mbtools
Attempting uninstall: chardet
Found existing installation: chardet 4.0.0
Uninstalling chardet-4.0.0:
Successfully uninstalled chardet-4.0.0
Attempting uninstall: python-slugify
Found existing installation: python-slugify 5.0.2
Uninstalling python-slugify-5.0.2:
Successfully uninstalled python-slugify-5.0.2
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
caip-notebooks-serverextension 1.0.0 requires google-cloud-bigquery-storage, which is not installed.
jupyterlab-git 0.11.0 requires nbdime<2.0.0,>=1.1.0, but you have nbdime 3.1.0 which is incompatible.
gcsfs 2021.7.0 requires fsspec==2021.07.0, but you have fsspec 2021.8.1 which is incompatible.
earthengine-api 0.1.283 requires google-api-python-client<2,>=1.12.1, but you have google-api-python-client 1.8.0 which is incompatible.
aiobotocore 1.4.1 requires botocore<1.20.107,>=1.20.106, but you have botocore 1.21.44 which is incompatible.
Successfully installed chardet-3.0.4 dbfread-2.0.4 ijson-2.6.1 mbtools-0.1.1 meza-0.42.5 pygogo-0.13.2 python-slugify-1.2.6 xlrd-1.2.0
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
You must install [mdbtools](http://sourceforge.net/projects/mdbtools/) in order to use this function

Try using the following snippet:
!sudo apt install mdbtools # Library that allows access to MDB files programmatically
!pip install meza # A Python toolkit for processing tabular data
from meza import io
records = io.read('database.mdb') # Use only file path, not file objects
print(next(records))
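If mdbtools and meza still can't make sense of the file, note that a data.mdb / lock.mdb pair inside a folder named data_lmdb_release is also the standard on-disk layout of an LMDB (Lightning Memory-Mapped Database) environment rather than a Microsoft Access database. Below is a minimal sketch for inspecting it that way with the lmdb package; the path is the one from the question, and the key layout you'll actually see depends on how the dataset was written, so nothing here is taken from the dataset docs:
!pip install lmdb
import lmdb

# Open the folder that contains data.mdb and lock.mdb, not the .mdb file itself
path = '/kaggle/input/clova-deeptext/clova_deeptext/data_lmdb_release/training/ST'
env = lmdb.open(path, readonly=True, lock=False, readahead=False, meminit=False)
with env.begin(write=False) as txn:
    for key, value in txn.cursor():
        print(key, len(value))  # raw key and value size in bytes; decode values as needed
        break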

Related

Getting an error installing a package in the Terminal to use Hugging Face in VS Code

I am following the steps from the Hugging Face website (https://huggingface.co/docs/transformers/installation) to start using Hugging Face in Visual Studio Code and install the transformers library.
I was on the last step, where I had to type "pip install transformers[flax]", and got an error, so I installed rust-lang; however, I still end up getting an error:
Requirement already satisfied: transformers[flax] in c:\users\user\desktop\artificial intelligence\.env\lib\site-packages (4.22.2)
Requirement already satisfied: filelock in c:\users\user\desktop\artificial intelligence\.env\lib\site-packages (from transformers[flax]) (3.8.0)
Requirement already satisfied: requests in c:\users\user\desktop\artificial intelligence\.env\lib\site-packages (from transformers[flax]) (2.28.1)
Requirement already satisfied: tokenizers!=0.11.3,<0.13,>=0.11.1 in c:\users\user\desktop\artificial intelligence\.env\lib\site-packages (from transformers[flax]) (0.12.1)
Requirement already satisfied: huggingface-hub<1.0,>=0.9.0 in c:\users\user\desktop\artificial intelligence\.env\lib\site-packages (from transformers[flax]) (0.10.0)
Requirement already satisfied: packaging>=20.0 in c:\users\user\desktop\artificial intelligence\.env\lib\site-packages (from transformers[flax]) (21.3)
Requirement already satisfied: tqdm>=4.27 in c:\users\user\desktop\artificial intelligence\.env\lib\site-packages (from transformers[flax]) (4.64.1)
Requirement already satisfied: regex!=2019.12.17 in c:\users\user\desktop\artificial intelligence\.env\lib\site-packages (from transformers[flax]) (2022.9.13)
Requirement already satisfied: numpy>=1.17 in c:\users\user\desktop\artificial intelligence\.env\lib\site-packages (from transformers[flax]) (1.23.3)
Requirement already satisfied: pyyaml>=5.1 in c:\users\user\desktop\artificial intelligence\.env\lib\site-packages (from transformers[flax]) (6.0)
Collecting transformers[flax]
Using cached transformers-4.22.1-py3-none-any.whl (4.9 MB)
Using cached transformers-4.22.0-py3-none-any.whl (4.9 MB)
Using cached transformers-4.21.3-py3-none-any.whl (4.7 MB)
Using cached transformers-4.21.2-py3-none-any.whl (4.7 MB)
Using cached transformers-4.21.1-py3-none-any.whl (4.7 MB)
Using cached transformers-4.21.0-py3-none-any.whl (4.7 MB)
Using cached transformers-4.20.1-py3-none-any.whl (4.4 MB)
Using cached transformers-4.20.0-py3-none-any.whl (4.4 MB)
Using cached transformers-4.19.4-py3-none-any.whl (4.2 MB)
Using cached transformers-4.19.3-py3-none-any.whl (4.2 MB)
Using cached transformers-4.19.2-py3-none-any.whl (4.2 MB)
Using cached transformers-4.19.1-py3-none-any.whl (4.2 MB)
Using cached transformers-4.19.0-py3-none-any.whl (4.2 MB)
Using cached transformers-4.18.0-py3-none-any.whl (4.0 MB)
Collecting sacremoses
Using cached sacremoses-0.0.53-py3-none-any.whl
Collecting jax!=0.3.2,>=0.2.8
Using cached jax-0.3.21.tar.gz (1.1 MB)
Preparing metadata (setup.py) ... done
Collecting flax>=0.3.5
Using cached flax-0.6.1-py3-none-any.whl (185 kB)
Collecting optax>=0.0.8
Using cached optax-0.1.3-py3-none-any.whl (145 kB)
Collecting transformers[flax]
Using cached transformers-4.17.0-py3-none-any.whl (3.8 MB)
Using cached transformers-4.16.2-py3-none-any.whl (3.5 MB)
Using cached transformers-4.16.1-py3-none-any.whl (3.5 MB)
Using cached transformers-4.16.0-py3-none-any.whl (3.5 MB)
Using cached transformers-4.15.0-py3-none-any.whl (3.4 MB)
Collecting tokenizers<0.11,>=0.10.1
Using cached tokenizers-0.10.3.tar.gz (212 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Collecting transformers[flax]
Using cached transformers-4.14.1-py3-none-any.whl (3.4 MB)
Using cached transformers-4.13.0-py3-none-any.whl (3.3 MB)
Using cached transformers-4.12.5-py3-none-any.whl (3.1 MB)
Using cached transformers-4.12.4-py3-none-any.whl (3.1 MB)
Using cached transformers-4.12.3-py3-none-any.whl (3.1 MB)
Using cached transformers-4.12.2-py3-none-any.whl (3.1 MB)
Using cached transformers-4.12.1-py3-none-any.whl (3.1 MB)
Using cached transformers-4.12.0-py3-none-any.whl (3.1 MB)
Using cached transformers-4.11.3-py3-none-any.whl (2.9 MB)
Using cached transformers-4.11.2-py3-none-any.whl (2.9 MB)
Using cached transformers-4.11.1-py3-none-any.whl (2.9 MB)
Using cached transformers-4.11.0-py3-none-any.whl (2.9 MB)
Using cached transformers-4.10.3-py3-none-any.whl (2.8 MB)
Using cached transformers-4.10.2-py3-none-any.whl (2.8 MB)
Using cached transformers-4.10.1-py3-none-any.whl (2.8 MB)
Using cached transformers-4.10.0-py3-none-any.whl (2.8 MB)
Using cached transformers-4.9.2-py3-none-any.whl (2.6 MB)
Collecting huggingface-hub==0.0.12
Using cached huggingface_hub-0.0.12-py3-none-any.whl (37 kB)
Collecting transformers[flax]
Using cached transformers-4.9.1-py3-none-any.whl (2.6 MB)
Using cached transformers-4.9.0-py3-none-any.whl (2.6 MB)
Using cached transformers-4.8.2-py3-none-any.whl (2.5 MB)
Using cached transformers-4.8.1-py3-none-any.whl (2.5 MB)
Using cached transformers-4.8.0-py3-none-any.whl (2.5 MB)
Using cached transformers-4.7.0-py3-none-any.whl (2.5 MB)
Collecting huggingface-hub==0.0.8
Using cached huggingface_hub-0.0.8-py3-none-any.whl (34 kB)
Collecting transformers[flax]
Using cached transformers-4.6.1-py3-none-any.whl (2.2 MB)
Using cached transformers-4.6.0-py3-none-any.whl (2.3 MB)
Using cached transformers-4.5.1-py3-none-any.whl (2.1 MB)
Using cached transformers-4.5.0-py3-none-any.whl (2.1 MB)
Using cached transformers-4.4.2-py3-none-any.whl (2.0 MB)
Using cached transformers-4.4.1-py3-none-any.whl (2.1 MB)
Using cached transformers-4.4.0-py3-none-any.whl (2.1 MB)
Using cached transformers-4.3.3-py3-none-any.whl (1.9 MB)
Using cached transformers-4.3.2-py3-none-any.whl (1.8 MB)
Using cached transformers-4.3.1-py3-none-any.whl (1.8 MB)
Using cached transformers-4.3.0-py3-none-any.whl (1.8 MB)
Using cached transformers-4.2.2-py3-none-any.whl (1.8 MB)
Collecting tokenizers==0.9.4
Using cached tokenizers-0.9.4.tar.gz (184 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Collecting transformers[flax]
Using cached transformers-4.2.1-py3-none-any.whl (1.8 MB)
Using cached transformers-4.2.0-py3-none-any.whl (1.8 MB)
Using cached transformers-4.1.1-py3-none-any.whl (1.5 MB)
Using cached transformers-4.1.0-py3-none-any.whl (1.5 MB)
Using cached transformers-4.0.1-py3-none-any.whl (1.4 MB)
Collecting flax==0.2.2
Using cached flax-0.2.2-py3-none-any.whl (148 kB)
Collecting transformers[flax]
Using cached transformers-4.0.0-py3-none-any.whl (1.4 MB)
Using cached transformers-3.5.1-py3-none-any.whl (1.3 MB)
Requirement already satisfied: protobuf in c:\users\user\desktop\artificial intelligence\.env\lib\site-packages (from transformers[flax]) (3.19.6)
Collecting sentencepiece==0.1.91
Using cached sentencepiece-0.1.91.tar.gz (500 kB)
Preparing metadata (setup.py) ... done
Collecting tokenizers==0.9.3
Using cached tokenizers-0.9.3.tar.gz (172 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Collecting transformers[flax]
Using cached transformers-3.5.0-py3-none-any.whl (1.3 MB)
Using cached transformers-3.4.0-py3-none-any.whl (1.3 MB)
Collecting tokenizers==0.9.2
Using cached tokenizers-0.9.2.tar.gz (170 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Collecting sentencepiece!=0.1.92
Using cached sentencepiece-0.1.97-cp310-cp310-win_amd64.whl (1.1 MB)
Collecting transformers[flax]
Using cached transformers-3.3.1-py3-none-any.whl (1.1 MB)
WARNING: transformers 3.3.1 does not provide the extra 'flax'
Collecting tokenizers==0.8.1.rc2
Using cached tokenizers-0.8.1rc2.tar.gz (97 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: colorama in c:\users\user\desktop\artificial intelligence\.env\lib\site-packages (from tqdm>=4.27->transformers[flax]) (0.4.5)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in c:\users\user\desktop\artificial intelligence\.env\lib\site-packages (from packaging>=20.0->transformers[flax]) (3.0.9)
Requirement already satisfied: idna<4,>=2.5 in c:\users\user\desktop\artificial intelligence\.env\lib\site-packages (from requests->transformers[flax]) (3.4)
Requirement already satisfied: charset-normalizer<3,>=2 in c:\users\user\desktop\artificial intelligence\.env\lib\site-packages (from requests->transformers[flax]) (2.1.1)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in c:\users\user\desktop\artificial intelligence\.env\lib\site-packages (from requests->transformers[flax]) (1.26.12)
Requirement already satisfied: certifi>=2017.4.17 in c:\users\user\desktop\artificial intelligence\.env\lib\site-packages (from requests->transformers[flax]) (2022.9.24)
Collecting joblib
Using cached joblib-1.2.0-py3-none-any.whl (297 kB)
Requirement already satisfied: six in c:\users\user\desktop\artificial intelligence\.env\lib\site-packages (from sacremoses->transformers[flax]) (1.16.0)
Collecting click
Using cached click-8.1.3-py3-none-any.whl (96 kB)
Building wheels for collected packages: tokenizers
Building wheel for tokenizers (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for tokenizers (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [48 lines of output]
C:\Users\user\AppData\Local\Temp\pip-build-env-hhrbpvks\overlay\Lib\site-packages\setuptools\dist.py:530: UserWarning: Normalizing '0.8.1.rc2' to '0.8.1rc2'
warnings.warn(tmpl.format(**locals()))
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win-amd64-cpython-310
creating build\lib.win-amd64-cpython-310\tokenizers
copying tokenizers\__init__.py -> build\lib.win-amd64-cpython-310\tokenizers
creating build\lib.win-amd64-cpython-310\tokenizers\models
copying tokenizers\models\__init__.py -> build\lib.win-amd64-cpython-310\tokenizers\models
creating build\lib.win-amd64-cpython-310\tokenizers\decoders
copying tokenizers\decoders\__init__.py -> build\lib.win-amd64-cpython-310\tokenizers\decoders
creating build\lib.win-amd64-cpython-310\tokenizers\normalizers
copying tokenizers\normalizers\__init__.py -> build\lib.win-amd64-cpython-310\tokenizers\normalizers
creating build\lib.win-amd64-cpython-310\tokenizers\pre_tokenizers
copying tokenizers\pre_tokenizers\__init__.py -> build\lib.win-amd64-cpython-310\tokenizers\pre_tokenizers
creating build\lib.win-amd64-cpython-310\tokenizers\processors
copying tokenizers\processors\__init__.py -> build\lib.win-amd64-cpython-310\tokenizers\processors
creating build\lib.win-amd64-cpython-310\tokenizers\trainers
copying tokenizers\trainers\__init__.py -> build\lib.win-amd64-cpython-310\tokenizers\trainers
creating build\lib.win-amd64-cpython-310\tokenizers\implementations
copying tokenizers\implementations\base_tokenizer.py -> build\lib.win-amd64-cpython-310\tokenizers\implementations
copying tokenizers\implementations\bert_wordpiece.py -> build\lib.win-amd64-cpython-310\tokenizers\implementations
copying tokenizers\implementations\byte_level_bpe.py -> build\lib.win-amd64-cpython-310\tokenizers\implementations
copying tokenizers\implementations\char_level_bpe.py -> build\lib.win-amd64-cpython-310\tokenizers\implementations
copying tokenizers\implementations\sentencepiece_bpe.py -> build\lib.win-amd64-cpython-310\tokenizers\implementations
copying tokenizers\implementations\__init__.py -> build\lib.win-amd64-cpython-310\tokenizers\implementations
copying tokenizers\__init__.pyi -> build\lib.win-amd64-cpython-310\tokenizers
copying tokenizers\models\__init__.pyi -> build\lib.win-amd64-cpython-310\tokenizers\models
copying tokenizers\decoders\__init__.pyi -> build\lib.win-amd64-cpython-310\tokenizers\decoders
copying tokenizers\normalizers\__init__.pyi -> build\lib.win-amd64-cpython-310\tokenizers\normalizers
copying tokenizers\pre_tokenizers\__init__.pyi -> build\lib.win-amd64-cpython-310\tokenizers\pre_tokenizers
copying tokenizers\processors\__init__.pyi -> build\lib.win-amd64-cpython-310\tokenizers\processors
copying tokenizers\trainers\__init__.pyi -> build\lib.win-amd64-cpython-310\tokenizers\trainers
running build_ext
running build_rust
error: can't find Rust compiler
If you are using an outdated pip version, it is possible a prebuilt wheel is available for this package but pip is not able to install from it. Installing from the wheel would avoid the need for a Rust compiler.
To update pip, run:
pip install --upgrade pip
and then retry package installation.
If you did intend to build this package from source, try installing a Rust compiler from your system package manager and
ensure it is on the PATH during installation. Alternatively, rustup (available at https://rustup.rs) is the recommended way to
download and update the Rust compiler toolchain.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for tokenizers
Failed to build tokenizers
ERROR: Could not build wheels for tokenizers, which is required to install pyproject.toml-based projects
Do you know how I can successfully install this into VS Code and use Hugging Face properly?
If you did intend to build this package from source, try installing a Rust compiler from your system package manager and
ensure it is on the PATH during installation. Alternatively, rustup (available at https://rustup.rs) is the recommended way to
download and update the Rust compiler toolchain.
[end of output]
That's the primary error you're hitting: you're going to need to install the Rust compiler (rust-lang) in order to finish the install.
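A minimal sketch of that fix on Windows, assuming you use rustup from https://rustup.rs (run rustup-init.exe first, then these in a fresh terminal so the compiler is on PATH):
rustup default stable
pip install --upgrade pip
pip install "transformers[flax]"
Upgrading pip first is worth trying on its own: as the build output above notes, a newer pip may find a prebuilt tokenizers wheel and skip the Rust build entirely.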

No module named horovod

I have already run
pip install horovod in cmd, but the package failed to install.
How do I deal with this? Please help.
Using cached horovod-0.25.0.tar.gz (3.4 MB)
Requirement already satisfied: cloudpickle in c:\users\hp\anaconda3\lib\site-packages (from horovod) (2.0.0)
Requirement already satisfied: psutil in c:\users\hp\anaconda3\lib\site-packages (from horovod) (5.8.0)
Requirement already satisfied: pyyaml in c:\users\hp\anaconda3\lib\site-packages (from horovod) (6.0)
Requirement already satisfied: cffi>=1.4.0 in c:\users\hp\anaconda3\lib\site-packages (from horovod) (1.15.0)
Requirement already satisfied: pycparser in c:\users\hp\anaconda3\lib\site-packages (from cffi>=1.4.0->horovod) (2.21)
Building wheels for collected packages: horovod
Building wheel for horovod (setup.py) ... error
ERROR: Command errored out with exit status 1:

Building wheel for pandas on Ubuntu 20.04 takes more than 20 minutes, but not on 18.04

I have an installation script for ERPNext that works just fine on Ubuntu 18.04.
When I run the same script on 20.04, I have to wait more than 20 minutes for it to complete, where it takes around 30 seconds on 18.04.
My script includes these two lines:
./env/bin/pip install numpy==1.18.5
./env/bin/pip install pandas==0.24.2
Their output is:
Collecting numpy==1.18.5
Downloading numpy-1.18.5-cp38-cp38-manylinux1_x86_64.whl (20.6 MB)
|████████████████████████████████| 20.6 MB 138 kB/s
Installing collected packages: numpy
Successfully installed numpy-1.18.5
Collecting pandas==0.24.2
Downloading pandas-0.24.2.tar.gz (11.8 MB)
|████████████████████████████████| 11.8 MB 18.0 MB/s
Requirement already satisfied: python-dateutil>=2.5.0 in ./env/lib/python3.8/site-packages (from pandas==0.24.2) (2.8.1)
Requirement already satisfied: pytz>=2011k in ./env/lib/python3.8/site-packages (from pandas==0.24.2) (2019.3)
Requirement already satisfied: numpy>=1.12.0 in ./env/lib/python3.8/site-packages (from pandas==0.24.2) (1.18.5)
Requirement already satisfied: six>=1.5 in ./env/lib/python3.8/site-packages (from python-dateutil>=2.5.0->pandas==0.24.2) (1.13.0)
Building wheels for collected packages: pandas
Building wheel for pandas (setup.py) ... done
Created wheel for pandas: filename=pandas-0.24.2-cp38-cp38-linux_x86_64.whl size=43655329 sha256=0067caf3a351f263bec1f4aaa3e11c5857d0434db7f56bec7135f3c3f16c8c2b
Stored in directory: /home/erpdev/.cache/pip/wheels/3d/17/1e/85f3aefe44d39a0b4055971ba075fa082be49dcb831db4e4ae
Successfully built pandas
Installing collected packages: pandas
Successfully installed pandas-0.24.2
The line "Building wheel for pandas (setup.py) ... /" is where the 20 min delay occurs.
This is all run from within the Frappe/ERPnext command directory, which has an embedded copy of pip3, like this:
erpdev@erpserver:~$ cd ~/frappe-bench/
erpdev@erpserver:~/frappe-bench$ ./env/bin/pip --version
pip 20.1.1 from /home/erpdev/frappe-bench/env/lib/python3.8/site-packages/pip (python 3.8)
erpdev@erpserver:~/frappe-bench$
I would be most grateful for any suggestions on how to speed this up.
I just updated pip using pip install --upgrade pip and that solved it.
Your issue may have less to do with your distribution and more to do with the Python version in your virtualenv. Ubuntu 20.04's default Python points to 3.8.
From the pandas project listing on PyPI, pip looks for a version that's compatible with your system, as provided by the project maintainers.
It seems you're using CPython 3.8. pandas==0.24.2 does not have wheels built for that version, so your system builds one for itself each time. You can check the available download files here.
Possible solutions:
While creating your env, check out this answer to generate a virtual environment with a different Python version. It seems your options are 3.5, 3.6, or 3.7.
Build a wheel for CPython 3.8 and ship it along with your script; you can then install your package from that wheel (see the sketch below).
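For that second option, a rough sketch (run the first command once on a machine with the same CPython 3.8, then ship the wheels directory alongside your script; paths mirror the frappe-bench layout above):
./env/bin/pip wheel pandas==0.24.2 -w ./wheels
./env/bin/pip install --no-index --find-links=./wheels pandas==0.24.2
The second command installs from the prebuilt wheel instead of compiling pandas from source each time.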

TFCOREML install error

When I try to install tfcoreml (a package to convert TensorFlow files to Core ML), it gives me this error.
I have tried installing coremltools separately in a Python virtual environment... it still doesn't work.
Rorys-MBP:~ roryhodgson$ cd tf-coreml
Rorys-MBP:tf-coreml roryhodgson$ pip install -e .
Obtaining file:///Users/roryhodgson/tf-coreml
Requirement already satisfied: numpy>=1.6.2 in /Users/roryhodgson/anaconda3/lib/python3.7/site-packages (from tfcoreml==0.3.0) (1.15.4)
Requirement already satisfied: protobuf>=3.1.0 in /Users/roryhodgson/anaconda3/lib/python3.7/site-packages (from tfcoreml==0.3.0) (3.7.0)
Requirement already satisfied: six>=1.10.0 in /Users/roryhodgson/anaconda3/lib/python3.7/site-packages (from tfcoreml==0.3.0) (1.12.0)
Requirement already satisfied: tensorflow>=1.5.0 in /Users/roryhodgson/anaconda3/lib/python3.7/site-packages (from tfcoreml==0.3.0) (1.13.1)
Collecting coremltools>=0.8 (from tfcoreml==0.3.0)
ERROR: Could not find a version that satisfies the requirement coremltools>=0.8 (from tfcoreml==0.3.0) (from versions: none)
ERROR: No matching distribution found for coremltools>=0.8 (from tfcoreml==0.3.0)
At this point it appears you need to use Python 3.6 with coremltools; it doesn't work with Python 3.7 yet.
To solve this, it's easiest to install Anaconda (the latest version, which ships with Python 3.7) and then create a new virtual environment that uses Python 3.6. You can then install coremltools / tfcoreml into that virtual environment.
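A minimal sketch of that setup (the environment name is arbitrary):
conda create -n tfcoreml-py36 python=3.6
conda activate tfcoreml-py36
pip install coremltools tfcoreml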

pip install tensorflow 0.10.0rc0 failed - message about a nonexistent easy-install.pth file?

I was not able to install tensorflow 0.10.0rc0. I got an error, possibly from pip, suggesting that the install assumed an easy-install.pth file exists, but I don't have one. Is this a bug in TensorFlow? I am using pip to install into a new conda environment; since almost everything else is a conda package, I guess there was no need for easy-install.pth files. I think I need to easy_install something, or create a dummy easy-install.pth file.
Gory details below:
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.10.0rc0-cp27-none-linux_x86_64.whl
(psreldev) psel701: /reg/g/psdm/sw/conda/inst $ pip install --upgrade $TF_BINARY_URL
Collecting tensorflow==0.10.0rc0 from https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.10.0rc0-cp27-none-linux_x86_64.whl
Downloading https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.10.0rc0-cp27-none-linux_x86_64.whl (33.3MB)
100% |################################| 33.3MB 20kB/s
Requirement already up-to-date: numpy>=1.8.2 in ./miniconda2-mlearn/lib/python2.7/site-packages (from tensorflow==0.10.0rc0)
Collecting mock>=2.0.0 (from tensorflow==0.10.0rc0)
Downloading mock-2.0.0-py2.py3-none-any.whl (56kB)
100% |################################| 61kB 1.5MB/s
Collecting protobuf==3.0.0b2 (from tensorflow==0.10.0rc0)
Using cached protobuf-3.0.0b2-py2.py3-none-any.whl
Requirement already up-to-date: wheel in ./miniconda2-mlearn/lib/python2.7/site-packages (from tensorflow==0.10.0rc0)
Requirement already up-to-date: six>=1.10.0 in ./miniconda2-mlearn/lib/python2.7/site-packages (from tensorflow==0.10.0rc0)
Collecting funcsigs>=1; python_version < "3.3" (from mock>=2.0.0->tensorflow==0.10.0rc0)
Downloading funcsigs-1.0.2-py2.py3-none-any.whl
Collecting pbr>=0.11 (from mock>=2.0.0->tensorflow==0.10.0rc0)
Downloading pbr-1.10.0-py2.py3-none-any.whl (96kB)
100% |################################| 102kB 1.2MB/s
Collecting setuptools (from protobuf==3.0.0b2->tensorflow==0.10.0rc0)
Downloading setuptools-25.1.1-py2.py3-none-any.whl (442kB)
100% |################################| 450kB 1.1MB/s
Installing collected packages: funcsigs, pbr, mock, setuptools, protobuf, tensorflow
Found existing installation: setuptools 23.0.0
Cannot remove entries from nonexistent file /reg/g/psdm/sw/conda/inst/miniconda2-mlearn/lib/python2.7/site-packages/easy-install.pth
From the TensorFlow documentation:
If using pip, make sure to use the --ignore-installed flag to prevent errors about easy_install.
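Applied to the failing command above, that would be (same $TF_BINARY_URL as before):
pip install --upgrade --ignore-installed $TF_BINARY_URL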