Rapids on colab - google-colaboratory

I have always used the following commands to install RAPIDS on Colab (from https://colab.research.google.com/drive/1rY7Ln6rEE1pOlfSHCYOVaqt8OvDO35J0#forceEdit=true&offline=true&sandboxMode=true):
!git clone https://github.com/rapidsai/rapidsai-csp-utils.git
!python rapidsai-csp-utils/colab/env-check.py
!bash rapidsai-csp-utils/colab/update_gcc.sh
import os
os._exit(00)  # restart the Colab kernel so the updated toolchain takes effect
import condacolab
condacolab.install()  # installs conda; this restarts the kernel again
import condacolab
condacolab.check()  # run in a fresh cell after the restart
# Installing RAPIDS is now 'python rapidsai-csp-utils/colab/install_rapids.py <release> <packages>'
# The <release> options are 'stable' and 'nightly'. Leaving it blank or adding any other words will default to stable.
!python rapidsai-csp-utils/colab/install_rapids.py stable
import os
os.environ['NUMBAPRO_NVVM'] = '/usr/local/cuda/nvvm/lib64/libnvvm.so'
os.environ['NUMBAPRO_LIBDEVICE'] = '/usr/local/cuda/nvvm/libdevice/'
os.environ['CONDA_PREFIX'] = '/usr/local'
It always worked, but lately I get:
An HTTP error occurred when trying to retrieve this URL.
HTTP errors are often intermittent, and a simple retry will get you on your way.
CondaHTTPError: HTTP 403 FORBIDDEN for url <https://conda.anaconda.org/rapidsai/linux-64/ucx-1.11.2+gef2bbcf-cuda11.2_0.tar.bz2>
Elapsed: 00:00.358595
I have retried several times, but it doesn't work. How can I solve it?
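One way to check whether the 403 really is intermittent is to probe the exact URL from the CondaHTTPError directly. This is only a minimal diagnostic sketch, not a fix:
import urllib.request, urllib.error
url = 'https://conda.anaconda.org/rapidsai/linux-64/ucx-1.11.2+gef2bbcf-cuda11.2_0.tar.bz2'
try:
    with urllib.request.urlopen(url) as resp:
        print('HTTP', resp.status)  # 200 would mean a plain retry should eventually work
except urllib.error.HTTPError as e:
    print('HTTP', e.code, e.reason)  # a 403 on every run points at a server-side block, not flakiness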

Related

Problem Using UCREL USAS in Google Colaboratory

I want to use the USAS semantic tagger in Google Colab, following the instructions here:
pip install https://github.com/UCREL/pymusas-models/releases/download/en_dual_none_contextual-0.3.1/en_dual_none_contextual-0.3.1-py3-none-any.whl
python -m spacy download en_core_web_sm
import spacy
I install the package using the above code, and then run the following:
# We exclude the following components as we do not need them.
nlp = spacy.load('en_core_web_sm', exclude=['parser', 'ner'])
# Load the English PyMUSAS rule based tagger in a separate spaCy pipeline
english_tagger_pipeline = spacy.load('en_dual_none_contextual')
# Adds the English PyMUSAS rule based tagger to the main spaCy pipeline
nlp.add_pipe('pymusas_rule_based_tagger', source=english_tagger_pipeline)
I encounter the following error for the second line:
TypeError: load_model_from_init_py() got an unexpected keyword
argument 'enable'
Note that this error doesn't show up when running the same code on my local machine. Only in Google Colab...
My spaCy version was too high. Downgrading it is how I solved it:
!pip3 install spacy==3.2.3
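For completeness, a minimal sketch of the fixed flow, assuming the Colab runtime has been restarted after the downgrade so the pinned version is actually picked up:
import spacy
assert spacy.__version__ == '3.2.3', spacy.__version__  # confirm the pin took effect
nlp = spacy.load('en_core_web_sm', exclude=['parser', 'ner'])
english_tagger_pipeline = spacy.load('en_dual_none_contextual')  # the line that previously raised TypeError
nlp.add_pipe('pymusas_rule_based_tagger', source=english_tagger_pipeline)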

Unable to load fasttext-wiki-news-subwords-300

Starting with the gensim api:
import gensim.downloader as api
api.load('fasttext-wiki-news-subwords-300')
I get the error:
FileNotFoundError: [Errno 2] No such file or directory: '/Users/user.name/gensim-data/fasttext-wiki-news-subwords-300/fasttext-wiki-news-subwords-300.gz'
I also tried the CLI:
python3 -m gensim.downloader --download fasttext-wiki-news-subwords-300
and when I check the ~/gensim-data/fasttext-wiki-news-subwords-300 folder it only contains:
__init__.py
__pycache__
Have there been any changes to the API or the dataset in the last few months?
Note:
I am using Python 3.8 and gensim==4.2.0.
I have checked that the certificates are installed ('Install Certificates.command').
I ended up deleting the ~/gensim-data folder and downgrading gensim to 3.8.3; it seems to be working now. Leaving the question and answer here because (1) the error message was a red herring and (2) the solution was not straightforward.
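A sketch of that workaround, assuming gensim has already been downgraded with pip install gensim==3.8.3 and the interpreter restarted:
import os, shutil
shutil.rmtree(os.path.expanduser('~/gensim-data'), ignore_errors=True)  # drop the stale, partially-downloaded cache
import gensim.downloader as api
model = api.load('fasttext-wiki-news-subwords-300')  # downloads again from scratch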

Version mismatch with the 'cffi' package: RAPIDS on Colab

# Install RAPIDS
!git clone https://github.com/rapidsai/rapidsai-csp-utils.git
!bash rapidsai-csp-utils/colab/rapids-colab.sh stable
import sys, os, shutil
sys.path.append('/usr/local/lib/python3.7/site-packages/')
os.environ['NUMBAPRO_NVVM'] = '/usr/local/cuda/nvvm/lib64/libnvvm.so'
os.environ['NUMBAPRO_LIBDEVICE'] = '/usr/local/cuda/nvvm/libdevice/'
os.environ["CONDA_PREFIX"] = "/usr/local"
for so in ['cudf', 'rmm', 'nccl', 'cuml', 'cugraph', 'xgboost', 'cuspatial']:
    fn = 'lib'+so+'.so'
    source_fn = '/usr/local/lib/'+fn
    dest_fn = '/usr/lib/'+fn
    if os.path.exists(source_fn):
        print(f'Copying {source_fn} to {dest_fn}')
        shutil.copyfile(source_fn, dest_fn)
# fix for BlazingSQL import issue
# ImportError: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.26' not found (required by /usr/local/lib/python3.7/site-packages/../../libblazingsql-engine.so)
if not os.path.exists('/usr/lib64'):
    os.makedirs('/usr/lib64')
for so_file in os.listdir('/usr/local/lib'):
    if 'libstdc' in so_file:
        shutil.copyfile('/usr/local/lib/'+so_file, '/usr/lib64/'+so_file)
        shutil.copyfile('/usr/local/lib/'+so_file, '/usr/lib/x86_64-linux-gnu/'+so_file)
I'm able to install RAPIDS successfully using the above script, but I simply can't get rid of the following error:
Exception: Version mismatch: this is the 'cffi' package version 1.14.5, located in '/usr/local/lib/python3.7/dist-packages/cffi/api.py'. When we import the top-level '_cffi_backend' extension module, we get version 1.14.3, located in '/usr/local/lib/python3.7/site-packages/_cffi_backend.cpython-37m-x86_64-linux-gnu.so'. The two versions should be equal; check your installation.
I have tried everything suggested in related answers: upgraded, downgraded, uninstalled, reinstalled, but nothing works. Any help would be greatly appreciated.
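A small diagnostic sketch (not a fix) that prints where each half of cffi is loaded from, using only the two locations named in the error itself:
import importlib.util
for name in ('cffi', '_cffi_backend'):
    spec = importlib.util.find_spec(name)
    print(name, '->', spec.origin if spec else 'not found')
If the two origins straddle dist-packages and site-packages, as the error message suggests, then aligning the versions in whichever location shadows the other is the usual direction to try.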

Launching Tensorboard: bad interpreter: No such file or directory

I am unable to run tensorboard, and get the message:
bad interpreter: No such file or directory
Steps to reproduce:
Installed TF on Ubuntu, using a virtualenv and pip, as per the install instructions
Confirmed TF was correctly installed by running the mnist example. Output was as expected
Attempted to run tensorboard using:
tensorboard --logdir=/tmp/tensorflow/mnist/logs/mnist_with_summaries/
Checked that this location does contain the summary files within the "test" and "train" directories
Command and error:
(tensorflow_1_4_0) js@pchome01:~$ tensorboard --logdir=/tmp/tensorflow/mnist/logs/mnist_with_summaries/
bash: /home/js/tensorflow_1_4_0/bin/tensorboard: /home/js/tensorflow_1_3/bin/python3: bad interpreter: No such file or directory
In my virtualenv folder for tensorflow_1_4_0, a tensorboard script exists:
#!/home/js/tensorflow_1_3/bin/python3
# -*- coding: utf-8 -*-
import re
import sys
from tensorboard.main import main
if __name__ == '__main__':
    sys.argv[0] = re.sub(r'(-script\.pyw?|\.exe)?$', '', sys.argv[0])
    sys.exit(main())
When I run the following from a Python shell, no errors are reported:
from tensorboard.main import main
Thank you
Just spotted my silly mistake and posting the resolution in case others encounter this.
The meaning of the error message is that the interpreter of the code (in this case python3) cannot be found.
The first line of the tensorboard script:
#!/home/js/tensorflow_1_3/bin/python3
This shebang line tells the OS to look for python3 at this location; however, this path is incorrect, since the virtual environment is actually called tensorflow_1_4_0.
Therefore changing this line to the following fixed the error:
#!/home/js/tensorflow_1_4_0/bin/python3
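If editing by hand is awkward, here is a one-off sketch that patches the stale shebang, with the paths exactly as in the answer above:
script = '/home/js/tensorflow_1_4_0/bin/tensorboard'
with open(script) as f:
    lines = f.readlines()
lines[0] = '#!/home/js/tensorflow_1_4_0/bin/python3\n'  # point at the venv that actually exists
with open(script, 'w') as f:
    f.writelines(lines)
Reinstalling tensorboard with pip inside the tensorflow_1_4_0 virtualenv should also regenerate the script with the correct shebang.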

python -m pip install urllib gives a syntax error while installing this module

Here is what happened: when I run the above command in cmd, I get an error while installing. The offending line in the package is:
s.connect((base64.b64decode(rip),17620)
and I get syntax error: invalid token in line 191.
It is also causing problems with some other modules.
(I ran into this myself using a Jupyter notebook.)
As you are using Python 3, you don't need to install urllib: it is part of the standard library (https://github.com/python/cpython/tree/3.6/Lib/urllib/).
Its submodules were restructured, so you need to change Python 2 code like
import urllib
...
urllib.urlopen
into
import urllib.request
...
urllib.request.urlopen
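A minimal Python 3 sketch of the renamed call (the URL is just an arbitrary example):
import urllib.request
with urllib.request.urlopen('https://www.python.org') as resp:
    print(resp.status)  # 200 on success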