How to install Bob using Conda on Google Colaboratory

I have tried installing Conda and then Bob in Google Colab, but it still fails to import Bob. Here is a notebook that demonstrates the issue: https://colab.research.google.com/drive/1ns_pjEN0fFY5dNAO0FNLAG422APFu-NI
The steps are:
%%bash
# based on https://towardsdatascience.com/conda-google-colab-75f7c867a522
MINICONDA_INSTALLER_SCRIPT=Miniconda3-4.5.4-Linux-x86_64.sh
MINICONDA_PREFIX=/usr/local
wget https://repo.continuum.io/miniconda/$MINICONDA_INSTALLER_SCRIPT
chmod +x $MINICONDA_INSTALLER_SCRIPT
./$MINICONDA_INSTALLER_SCRIPT -b -f -p $MINICONDA_PREFIX
!conda install --channel defaults conda python=3.6 --yes --quiet
!conda install --yes --quiet --override-channels \
-c https://www.idiap.ch/software/bob/conda -c defaults \
python=3.6 bob.io.image bob.bio.face # add more Bob packages here as needed
import sys
sys.path.append("/usr/lib/python3.6/site-packages")
import pkg_resources
import bob.extension, bob.io.base, bob.io.image
which produces:
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-5-8e8041c9e60e> in <module>()
1 import pkg_resources
----> 2 import bob.extension, bob.io.base, bob.io.image
ModuleNotFoundError: No module named 'bob'
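One likely cause: Miniconda is installed above with the prefix /usr/local, so its packages land under /usr/local/lib/python3.6/site-packages, while the notebook appends /usr/lib/python3.6/site-packages. A minimal sketch of the fix (the path is inferred from the install prefix, so treat it as an assumption and verify it exists first):

```python
import sys

# Miniconda was installed with -p /usr/local, so conda packages land here,
# not under /usr/lib/python3.6/site-packages as appended in the notebook:
conda_site = "/usr/local/lib/python3.6/site-packages"
if conda_site not in sys.path:
    sys.path.insert(0, conda_site)  # prepend so the conda packages are found first

print(conda_site in sys.path)  # True
```

After this, `import bob.io.image` should resolve against the conda-installed packages rather than the system Python.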

Related

Unable to run training command for LaBSE

I am trying to reproduce the fine-tuning stage of the LaBSE model (https://github.com/tensorflow/models/tree/master/official/projects/labse).
I have cloned the tensorflow/models repository and set the environment path as follows.
%env PYTHONPATH='/env/python:/content/drive/MyDrive/Colab_Notebooks/p3_sentence_similiarity/models'
!echo $PYTHONPATH
output: '/env/python:/content/drive/MyDrive/Colab_Notebooks/p3_sentence_similiarity/models'
Installed the prerequisites:
!pip install -U tensorflow
!pip install -U "tensorflow-text==2.10.*"
!pip install tf-models-official
Then I try to run the LaBSE training command from the readme.md.
python3 /content/drive/MyDrive/Colab_Notebooks/p3_sentence_similiarity/models/official/projects/labse/train.py \
--experiment=labse/train \
--config_file=/content/drive/MyDrive/Colab_Notebooks/p3_sentence_similiarity/models/official/projects/labse/experiments/labse_bert_base.yaml \
--config_file=/content/drive/MyDrive/Colab_Notebooks/p3_sentence_similiarity/models/official/projects/labse/experiments/labse_base.yaml \
--params_override=${PARAMS} \
--model_dir=/content/drive/MyDrive/Colab_Notebooks/p3_sentence_similiarity/models \
--mode=train_and_eval
Issue
I get the following error.
File "/content/drive/MyDrive/Colab_Notebooks/p3_sentence_similiarity/models/official/projects/labse/train.py", line 23, in
from official.projects.labse import config_labse
ModuleNotFoundError: No module named 'official.projects.labse'
The import statement from official.projects.labse import config_labse fails.
System information
I executed this on Colab as well as on a GPU machine; in both environments I get the same error.
I need to know why the import statement failed and what corrective action should be taken.
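One thing worth checking: %env stores the value verbatim, so the single quotes in %env PYTHONPATH='...' become a literal part of the path, and the models directory never actually ends up on the import path. A hedged sketch of a fix (the path is copied from the question):

```python
import os
import sys

models_root = ("/content/drive/MyDrive/Colab_Notebooks/"
               "p3_sentence_similiarity/models")

# %env PYTHONPATH='...' keeps the quotes literally; strip them if present
# so child processes (e.g. the !python3 train.py call) see a valid path:
raw = os.environ.get("PYTHONPATH", "")
os.environ["PYTHONPATH"] = raw.strip("'")

# PYTHONPATH only affects interpreters started afterwards; to make the
# import work in the current notebook kernel as well, extend sys.path:
if models_root not in sys.path:
    sys.path.insert(0, models_root)
```

With the quotes removed, `from official.projects.labse import config_labse` has a chance of resolving, assuming the cloned repository actually lives at that path.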

Exception: ERROR: Unrecognized fix style 'shake' is part of the RIGID package which is not enabled in this LAM

I'm trying to run a LAMMPS script in a Colab environment. However, whenever I try to use the fix command with arguments from the RIGID package, the console gives me the same error.
I tested different installation and build methods, such as the apt repository and CMake:
Method 1
!add-apt-repository -y ppa:gladky-anton/lammps
!add-apt-repository -y ppa:openkim/latest
!apt update
!apt install -y lammps-stable
Method 2
!apt install -y cmake build-essential git ccache
!rm -rf lammps
!git clone -b release https://github.com/lammps/lammps.git mylammps
!rm -rf build
!mkdir build
!apt install -y fftw3
!cd build; cmake -C ../mylammps/cmake/presets/most.cmake -D CMAKE_INSTALL_PREFIX=/usr \
-D CMAKE_CXX_COMPILER_LAUNCHER=ccache -D BUILD_SHARED_LIBS=on \
-D PKG_GPU=on -D GPU_API=cuda -D LAMMPS_EXCEPTIONS=on -D PKG_BODY=on \
-D PKG_KSPACE=on -D PKG_MOLECULE=on -D PKG_MANYBODY=on -D PKG_ASPHERE=on \
-D PKG_EXTRA-MOLECULE=on -D PKG_CG-DNA=on -D PKG_ATC=on \
-D PKG_EXTRA-FIX=on \
-D PKG_RIGID=on -D PKG_POEMS=on -D PKG_MACHDYN=on -D PKG_DPD-SMOOTH=on \
-D PKG_PYTHON=on ../mylammps/cmake
!cd build; make -j 2
!cd build; make install; make install-python
I also tested it by calling an external script using the file() method and by writing commands to LAMMPS using the command() method.
The part of my code in the Python script that returns an error is:
!pip install --user mpi4py
#!pip install lammps-cython
from mpi4py import MPI
from lammps import lammps
#from lammps import PyLammps
#from lammps import Lammps
import glob
from google.colab import files
from google.colab import drive
L = lammps()
...
L.command("fix freeze teflon setforce 0.0 0.0 0.0")
L.command("fix fSHAKE water shake 0.0001 10 0 b 1 a 1")
L.command("fix fRIGID RIG rigid molecule")
Or, running the same lines from a LAMMPS input script:
#Using an external script
!pip install --user mpi4py
#!pip install lammps-cython
from mpi4py import MPI
from lammps import lammps
#from lammps import PyLammps
#from lammps import Lammps
import glob
from google.colab import files
from google.colab import drive
L = lammps()
...
L.file(input[0])
me = MPI.COMM_WORLD.Get_rank()
nprocs = MPI.COMM_WORLD.Get_size()
print("Proc %d out of %d procs has" % (me,nprocs),L)
MPI.Finalize()
The output error is:
---------------------------------------------------------------------------
Exception Traceback (most recent call last)
<ipython-input-60-a968719e9bc6> in <module>()
1 L = lammps()
----> 2 L.file(input[0])
3 me = MPI.COMM_WORLD.Get_rank()
4 nprocs = MPI.COMM_WORLD.Get_size()
5 print("Proc %d out of %d procs has" % (me,nprocs),L)
1 frames
/usr/local/lib/python3.7/site-packages/lammps/core.py in file(self, path)
559
560 with ExceptionCheck(self):
--> 561 self.lib.lammps_file(self.lmp, path)
562
563 # -------------------------------------------------------------------------
/usr/local/lib/python3.7/site-packages/lammps/core.py in __exit__(self, exc_type, exc_value, traceback)
47 def __exit__(self, exc_type, exc_value, traceback):
48 if self.lmp.has_exceptions and self.lmp.lib.lammps_has_error(self.lmp.lmp):
---> 49 raise self.lmp._lammps_exception
50
51 # -------------------------------------------------------------------------
Exception: ERROR: Unrecognized fix style 'shake' is part of the RIGID package which is not enabled in this LAM
Any suggestions what I can do?
You need to enable the RIGID package. For better control over LAMMPS features, it is best to build it from source:
Download the latest stable version and change directory into it:
git clone -b stable https://github.com/lammps/lammps.git mylammps
cd mylammps/src
Include all the packages you need; obviously you need RIGID to use fix shake:
make yes-rigid
Now build using MPI and share its library. This may take a while; you can add the '-j N' flag to build in parallel with N cores. I use 8:
make mpi mode=shlib -j 8
Finally, install PyLammps:
make install-python
Now you should be able to use PyLammps through Colab successfully.
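To confirm the rebuilt binary really has RIGID compiled in, recent versions of the LAMMPS Python module expose an installed_packages list; a small sketch (the attribute name is an assumption for older builds):

```python
def rigid_enabled():
    """Return True/False if the check ran, or None if lammps isn't importable."""
    try:
        from lammps import lammps
    except ImportError:
        return None  # 'make install-python' has not been run (or failed)
    lmp = lammps()
    try:
        # installed_packages lists the packages this LAMMPS binary was built with
        return "RIGID" in lmp.installed_packages
    finally:
        lmp.close()
```

If this returns False, fix shake and fix rigid will keep raising the same "not enabled" error regardless of how the script is invoked.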

Can't use GPU on Google colab in conda environment (Tensorflow)

I made a short Colab notebook to simulate the situation on my local PC, where I use conda/mamba and can't get any GPU working for TensorFlow. I know Colab can do this out of the box, but that's not what I want to do. I'm also quite new to Colab and still trying to figure out how it's used :D.
So here's my Code for colab:
# Install Miniconda
!curl -O https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
!sh Miniconda3-latest-Linux-x86_64.sh -b -f -p /usr/local
# Set channels
!conda config --add channels defaults
!conda config --add channels bioconda
!conda config --add channels conda-forge
# Install Mamba
!conda install -y mamba
next cell:
# Create Environment
!mamba create -y -n stats-gpu tensorflow-gpu cudatoolkit cudnn
I also tried:
!mamba create -n stats-gpu python=3.8 tensorflow-gpu=2.4 cudnn=8.0 cudatoolkit=11.0
next cell:
# simulate "conda activate" by setting PYTHONPATH and PATH
import sys
import os
PYTHONPATH = os.environ['PYTHONPATH']
sys.path = ['',
            PYTHONPATH,
            '/usr/local/envs/stats-gpu/lib/python37.zip',
            '/usr/local/envs/stats-gpu/lib/python3.7',
            '/usr/local/envs/stats-gpu/lib/python3.7/lib-dynload',
            '/usr/local/envs/stats-gpu/lib/python3.7/site-packages']
os.environ['PATH'] = ('/usr/local/envs/stats-gpu/bin:' +
                      '/usr/local/condabin:' +
                      os.environ['PATH'])
next cell:
# Executing directly in cell will ignore python installed by conda.
# Writing Python script in .py file and executing with !python will use
# Python from conda.
filename = "tensortest.py"
with open(filename, 'w') as file:
    file.write("""import tensorflow as tf
print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU')))""")
!python tensortest.py
The output is
Num GPUs Available: 0
but it should be 1. Why is that? The GPU in colab is active and works when I just use
import tensorflow as tf
print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU')))
in an empty Notebook.
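A likely culprit is that PATH and sys.path tweaks don't cover shared libraries: TensorFlow in the conda env needs the env's CUDA/cuDNN libraries on the dynamic loader path, and without that the GPU is silently invisible. A sketch, with the env path taken from the cells above (whether this alone is sufficient is an assumption worth testing):

```python
import os

# The conda env's CUDA/cuDNN shared libraries live under the env's lib dir;
# the loader must find them when "!python tensortest.py" starts a new process:
env_lib = "/usr/local/envs/stats-gpu/lib"
os.environ["LD_LIBRARY_PATH"] = (
    env_lib + ":" + os.environ.get("LD_LIBRARY_PATH", "")
)
```

Setting this in the notebook process means the subsequent !python call inherits it, the same way PATH is inherited in the cell above.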

ImportError run python project from Google Drive in Google Colab

I mounted my Google Drive in Google Colab (following the post at https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d):
!apt-get install -y -qq software-properties-common python-software-properties module-init-tools
!add-apt-repository -y ppa:alessandro-strada/ppa 2>&1 > /dev/null
!apt-get update -qq 2>&1 > /dev/null
!apt-get -y install -qq google-drive-ocamlfuse fuse
from google.colab import auth
auth.authenticate_user()
from oauth2client.client import GoogleCredentials
creds = GoogleCredentials.get_application_default()
import getpass
!google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret} < /dev/null 2>&1 | grep URL
vcode = getpass.getpass()
!echo {vcode} | google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret}
!mkdir -p drive
!google-drive-ocamlfuse drive
Then I place the TensorFlow nmt project into drive/colab/ (drive is the Google Drive root directory).
I run this command in a Colab cell:
!python drive/colab/nmt/nmt.py\
--attention=scaled_luong \
--src=src --tgt=tgt \
--vocab_prefix=drive/colab/data/vi-vi/vocab \
--train_prefix=drive/colab/data/vi-vi/small_train \
--dev_prefix=drive/colab/data/vi-vi/val \
--test_prefix=drive/colab/data/vi-vi/test \
--out_dir=drive/colab/data/vi-vi/nmt_attention_model \
--num_train_steps=12000 \
--steps_per_stats=100 \
--num_layers=2 \
--num_units=128 \
--dropout=0.2 \
--metrics=bleu
With error
/usr/local/lib/python3.6/dist-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
Traceback (most recent call last):
File "drive/colab/nmt/nmt.py", line 31, in <module>
from . import inference
ImportError: cannot import name 'inference'
What should I do?
It's happening because you are using relative imports in nmt.py and your current directory is not the same as the one where nmt.py is located, so Python cannot find the sibling modules. You can fix it by changing your directory to nmt using
%cd drive/colab/nmt/
and then running your command.
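The failure is easy to reproduce with a toy package (names hypothetical), which also demonstrates the other standard fix: running the script as a module with -m from the package's parent directory:

```python
import os
import subprocess
import sys
import tempfile

# Build a throwaway package that mirrors nmt.py's "from . import inference":
tmp = tempfile.mkdtemp()
pkg = os.path.join(tmp, "nmtdemo")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "inference.py"), "w") as f:
    f.write("MESSAGE = 'inference imported'\n")
with open(os.path.join(pkg, "main.py"), "w") as f:
    f.write("from . import inference\nprint(inference.MESSAGE)\n")

# Running the file directly reproduces the question's ImportError:
direct = subprocess.run([sys.executable, os.path.join(pkg, "main.py")],
                        capture_output=True, text=True)

# Running it as a module from the package's parent directory works:
as_module = subprocess.run([sys.executable, "-m", "nmtdemo.main"],
                           cwd=tmp, capture_output=True, text=True)

print(direct.returncode != 0)    # True: the relative import failed
print(as_module.stdout.strip())  # inference imported
```

So an alternative to %cd is to stay in the parent directory and invoke `python -m nmt.nmt ...` instead of `python drive/colab/nmt/nmt.py ...`.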

How to install xvfb on Scrapinghub for using Selenium?

I use Python-Selenium in my spider (Scrapy); to use Selenium I need to install xvfb on Scrapinghub.
When I use apt-get to install xvfb I get this error message:
E: Could not open lock file /var/lib/dpkg/lock - open (13: Permission denied)
E: Unable to lock the administration directory (/var/lib/dpkg/), are you root?
Is there any other way for installing xvfb on Scrapinghub?
UPDATE 1
I read this and tried to use Docker; I am stuck at this stage:
shub-image init --requirements path/to/requirements.txt
I read this:
If you are getting an ImportError like this while running shub-image init:
You should make sure you have the latest version of shub installed by
running:
$ pip install shub --upgrade
but I still get this error:
Traceback (most recent call last):
File "/usr/local/bin/shub-image", line 7, in <module>
from shub_image.tool import cli
File "/usr/local/lib/python2.7/dist-packages/shub_image/tool.py", line 42, in <module>
command_module = importlib.import_module(module_path)
File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/usr/local/lib/python2.7/dist-packages/shub_image/push.py", line 4, in <module>
from shub.deploy import list_targets
ImportError: cannot import name list_targets
Did you try:
sudo apt-get install xvfb
Another way is to compile the package manually, something like:
apt-get source xvfb
./configure --prefix=$HOME/myapps
make
make install
And the third way, is download the .deb from the source web page https://pkgs.org/download/xvfb
After downloading it, you can move it to the path of the downloaded sources:
mv xvfb_1.16.4-1_amd64.deb /var/cache/apt/archives/
Then change your directory and run:
sudo dpkg -i xvfb_1.16.4-1_amd64.deb
and that's all!
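Once xvfb is present, a common way to drive it from Python for Selenium is the pyvirtualdisplay package (a separate pip install; this is a sketch, not Scrapinghub-specific advice):

```python
def start_virtual_display(width=1024, height=768):
    """Start a headless X display for Selenium; return it, or None if unavailable."""
    try:
        from pyvirtualdisplay import Display
    except ImportError:
        return None  # pip install pyvirtualdisplay (requires xvfb on the system)
    try:
        display = Display(visible=0, size=(width, height))
        display.start()
    except Exception:
        return None  # the xvfb binary is missing or failed to start
    return display
```

Call `start_virtual_display()` before creating the Selenium webdriver, and `display.stop()` after the spider finishes.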
I resolved my problem (using Selenium on Scrapinghub):
1- For xvfb in Docker I use
RUN apt-get install -qy xvfb
2- For creating the Docker image I used this
and for installing geckodriver I use this code:
#
# Geckodriver Dockerfile
#
FROM blueimp/basedriver
# Add the Firefox release channel of the Debian Mozilla team:
RUN echo 'deb http://mozilla.debian.net/ jessie-backports firefox-release' >> \
/etc/apt/sources.list \
&& curl -sL https://mozilla.debian.net/archive.asc | apt-key add -
# Install Firefox:
RUN export DEBIAN_FRONTEND=noninteractive \
&& apt-get update \
&& apt-get install --no-install-recommends --no-install-suggests -y \
firefox \
# Remove obsolete files:
&& apt-get clean \
&& rm -rf \
/tmp/* \
/usr/share/doc/* \
/var/cache/* \
/var/lib/apt/lists/* \
/var/tmp/*
# Install geckodriver:
RUN export BASE_URL=https://github.com/mozilla/geckodriver/releases/download \
&& export VERSION=$(curl -sL \
https://api.github.com/repos/mozilla/geckodriver/releases/latest | \
grep tag_name | cut -d '"' -f 4) \
&& curl -sL \
$BASE_URL/$VERSION/geckodriver-$VERSION-linux64.tar.gz | tar -xz \
&& mv geckodriver /usr/local/bin/geckodriver
USER webdriver
CMD ["geckodriver", "--host", "0.0.0.0"]
from here