ImportError when running a Python project from Google Drive in Google Colab - tensorflow

I mounted my Google Drive in Google Colab (following the post at https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d):
!apt-get install -y -qq software-properties-common python-software-properties module-init-tools
!add-apt-repository -y ppa:alessandro-strada/ppa 2>&1 > /dev/null
!apt-get update -qq 2>&1 > /dev/null
!apt-get -y install -qq google-drive-ocamlfuse fuse
from google.colab import auth
auth.authenticate_user()
from oauth2client.client import GoogleCredentials
creds = GoogleCredentials.get_application_default()
import getpass
!google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret} < /dev/null 2>&1 | grep URL
vcode = getpass.getpass()
!echo {vcode} | google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret}
!mkdir -p drive
!google-drive-ocamlfuse drive
Then I placed the TensorFlow nmt project into drive/colab/ (drive is the Google Drive root directory).
I run this command in a Colab cell:
!python drive/colab/nmt/nmt.py \
--attention=scaled_luong \
--src=src --tgt=tgt \
--vocab_prefix=drive/colab/data/vi-vi/vocab \
--train_prefix=drive/colab/data/vi-vi/small_train \
--dev_prefix=drive/colab/data/vi-vi/val \
--test_prefix=drive/colab/data/vi-vi/test \
--out_dir=drive/colab/data/vi-vi/nmt_attention_model \
--num_train_steps=12000 \
--steps_per_stats=100 \
--num_layers=2 \
--num_units=128 \
--dropout=0.2 \
--metrics=bleu
I get this error:
/usr/local/lib/python3.6/dist-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
Traceback (most recent call last):
File "drive/colab/nmt/nmt.py", line 31, in <module>
from . import inference
ImportError: cannot import name 'inference'
What should I do?

This happens because nmt.py uses a relative import (from . import inference) and your current directory is not the one where nmt.py is located, so Python cannot find the files. You can fix it by changing your directory to the nmt folder using
%cd drive/colab/nmt/
and then running your command.
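Alternatively, you can keep the working directory at the package's parent and run nmt as a module with -m, which is how relative imports are meant to resolve. A minimal sketch, assuming the nmt folder is a proper package with an __init__.py (as in the TensorFlow nmt repo):
%cd drive/colab
# Running nmt as a module (rather than as a script) lets
# "from . import inference" resolve; paths are now relative to drive/colab,
# remaining flags unchanged from the question.
!python -m nmt.nmt \
  --attention=scaled_luong \
  --src=src --tgt=tgt \
  --vocab_prefix=data/vi-vi/vocab \
  --train_prefix=data/vi-vi/small_train \
  --dev_prefix=data/vi-vi/val \
  --test_prefix=data/vi-vi/test \
  --out_dir=data/vi-vi/nmt_attention_model \
  --num_train_steps=12000 --steps_per_stats=100 \
  --num_layers=2 --num_units=128 --dropout=0.2 --metrics=bleu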

Related

Using Google Coral miniPCIe in Docker: error "Failed to load delegate from libedgetpu.so.1"

I have a problem using the Google Coral miniPCIe in Docker. My Dockerfile is:
FROM python:3.8.13-slim-bullseye
LABEL Author="S"
LABEL version="1.0"
ENV PATH /usr/local/bin:$PATH
# PYTHON 3.8 and libraries
# ffmpeg is needed to avoid: ImportError: libGL.so.1: cannot open shared object file: No such file or directory
# set timezone
RUN apt-get update && \
apt-get install -yq tzdata && \
ln -fs /usr/share/zoneinfo/Europe/Rome /etc/localtime && \
dpkg-reconfigure -f noninteractive tzdata;
RUN set -eux; \
apt-get update; \
apt-get -y install --no-install-recommends \
curl \
gnupg \
bash \
python3-dev \
apt-utils \
libglib2.0-0 \
libsm6 \
libxext6 \
libxrender-dev \
ffmpeg \
net-tools \
iputils-ping \
vim;
# CORAL PCI BOARD
# test with lspci -nn | grep 089a
RUN set -eux; \
echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | tee /etc/apt/sources.list.d/coral-edgetpu.list ;\
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - ; \
apt-get update; \
apt-get -y install --no-install-recommends \
python3-pycoral gcc \
gasket-dkms \
libedgetpu1-std \
pciutils ;
RUN set -eux; \
apt-get -y install --no-install-recommends \
libatlas-base-dev \
mosquitto-clients \
sysstat ; \
rm -rf /var/lib/apt/lists/* ;
COPY ./baseimage.requirements.txt /baseimage/
RUN pip install --no-cache-dir -r /baseimage/baseimage.requirements.txt
RUN python3 -m pip install --extra-index-url https://google-coral.github.io/py-repo/ pycoral~=2.0
CMD ["/bin/bash"]
The baseimage.requirements.txt file contains:
asyncua==0.9.94
Cython==0.29.23
filterpy==1.4.5
Flask==1.1.2
Flask_RESTful==0.3.8
geojson==2.5.0
imutils==0.5.4
ipython==8.4.0
keras_unet_collection==0.1.6
msgpack_numpy==0.4.7.1
msgpack_python==0.5.6
numba==0.54.0
numpy==1.19.5
opencv_python==4.1.2.30
Pillow==9.2.0
prometheus_client==0.14.1
pygeoj==1.0.0
pyminizip==0.2.4
PyMySQL==1.0.2
pynmea2==1.18.0
pyproj==3.3.1
pypylon==1.7.4
pyrealsense2==2.42.0.2924
pyserial==3.5
redis==3.5.3
requests==2.22.0
scikit_image==0.18.1
scikit_learn==1.1.1
scipy==1.6.2
setuptools==45.2.0
Shapely==1.8.2
# skimage==0.0
tensorflow==2.5.0
tflite_runtime==2.7.0
paho_mqtt==1.6.1
psutil==5.8.0
wheel==0.36.2
The Docker container is started with -v /dev/apex_0:/dev/apex_0 mounted, and it sees the Coral with the command lspci -nn | grep 089a.
I use Python 3.8.13.
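For reference, the container is started along these lines (a reconstruction from the -v flag above; the image tag coral-base:latest is hypothetical):
docker run -it --rm \
    -v /dev/apex_0:/dev/apex_0 \
    coral-base:latest /bin/bash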
Inside the container I run the Coral example:
git clone https://github.com/google-coral/pycoral.git
cd pycoral
bash examples/install_requirements.sh classify_image.py
python3 examples/classify_image.py \
--model test_data/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite \
--labels test_data/inat_bird_labels.txt \
--input test_data/parrot.jpg
I get this error:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/tflite_runtime/interpreter.py", line 160, in load_delegate
delegate = Delegate(library, options)
File "/usr/local/lib/python3.8/site-packages/tflite_runtime/interpreter.py", line 119, in __init__
raise ValueError(capture.message)
ValueError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "examples/classify_image.py", line 121, in <module>
main()
File "examples/classify_image.py", line 71, in main
interpreter = make_interpreter(*args.model.split('#'))
File "/usr/local/lib/python3.8/site-packages/pycoral/utils/edgetpu.py", line 87, in make_interpreter
delegates = [load_edgetpu_delegate({'device': device} if device else {})]
File "/usr/local/lib/python3.8/site-packages/pycoral/utils/edgetpu.py", line 52, in load_edgetpu_delegate
return tflite.load_delegate(_EDGETPU_SHARED_LIB, options or {})
File "/usr/local/lib/python3.8/site-packages/tflite_runtime/interpreter.py", line 162, in load_delegate
raise ValueError('Failed to load delegate from {}\n{}'.format(
ValueError: Failed to load delegate from libedgetpu.so.1
On the host machine the same example works.
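A quick way to narrow this down is to check, inside the container, that the device node and the delegate library are both visible; the paths below are the standard ones for the PCIe Edge TPU and Debian's libedgetpu package:
# check that the Edge TPU device node exists and is readable
ls -l /dev/apex_0
# check that the delegate library pycoral tries to load is on the loader path
ldconfig -p | grep edgetpu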

Exception: ERROR: Unrecognized fix style 'shake' is part of the RIGID package which is not enabled in this LAM

I'm trying to run a LAMMPS script in a Colab environment. However, whenever I use the fix command with styles from the RIGID package, the console gives me the same error.
I tested different installation and build methods, such as the apt repository and CMake:
Method 1
!add-apt-repository -y ppa:gladky-anton/lammps
!add-apt-repository -y ppa:openkim/latest
!apt update
!apt install -y lammps-stable
Method 2
!apt install -y cmake build-essential git ccache
!rm -rf lammps
!git clone -b release https://github.com/lammps/lammps.git mylammps
!rm -rf build
!mkdir build
!apt install -y fftw3
!cd build; cmake ../mylammps/cmake/presets/most.cmake -D CMAKE_INSTALL_PREFIX=/usr \
-D CMAKE_CXX_COMPILER_LAUNCHER=ccache -D BUILD_SHARED_LIBS=on \
-D PKG_GPU=on -D GPU_API=cuda -D LAMMPS_EXCEPTIONS=on -D PKG_BODY=on \
-D PKG_KSPACE=on -D PKG_MOLECULE=on -D PKG_MANYBODY=on -D PKG_ASPHERE=on \
-D PKG_EXTRA-MOLECULE=on -D PKG_KSPACE=on -D PKG_CG-DNA=on -D PKG_ATC=on \
-D PKG_EXTRA-FIX=on -D PKG_FIX=on \
-D PKG_RIGID=on -D PKG_POEMS=on -D PKG_MACHDYN=on -D PKG_DPD-SMOOTH=on \
-D PKG_PYTHON=on ../mylammps/cmake
!cd build; make -j 2
!cd build; make install; make install-python
I also tested it by calling an external script using the file() method and by writing commands to LAMMPS using the command() method.
The part of my Python script that returns the error is:
!pip install --user mpi4py
#!pip install lammps-cython
from mpi4py import MPI
from lammps import lammps
#from lammps import PyLammps
#from lammps import Lammps
import glob
from google.colab import files
from google.colab import drive
L = lammps()
...
L.command("fix freeze teflon setforce 0.0 0.0 0.0")
L.command("fix fSHAKE water shake 0.0001 10 0 b 1 a 1")
L.command("fix fRIGID RIG rigid molecule")
Or the same lines in an external LAMMPS script, called with:
#Using an external script
!pip install --user mpi4py
#!pip install lammps-cython
from mpi4py import MPI
from lammps import lammps
#from lammps import PyLammps
#from lammps import Lammps
import glob
from google.colab import files
from google.colab import drive
L = lammps()
...
L.file(input[0])
me = MPI.COMM_WORLD.Get_rank()
nprocs = MPI.COMM_WORLD.Get_size()
print("Proc %d out of %d procs has" % (me,nprocs),L)
MPI.Finalize()
The output error is:
---------------------------------------------------------------------------
Exception Traceback (most recent call last)
<ipython-input-60-a968719e9bc6> in <module>()
1 L = lammps()
----> 2 L.file(input[0])
3 me = MPI.COMM_WORLD.Get_rank()
4 nprocs = MPI.COMM_WORLD.Get_size()
5 print("Proc %d out of %d procs has" % (me,nprocs),L)
1 frames
/usr/local/lib/python3.7/site-packages/lammps/core.py in file(self, path)
559
560 with ExceptionCheck(self):
--> 561 self.lib.lammps_file(self.lmp, path)
562
563 # -------------------------------------------------------------------------
/usr/local/lib/python3.7/site-packages/lammps/core.py in __exit__(self, exc_type, exc_value, traceback)
47 def __exit__(self, exc_type, exc_value, traceback):
48 if self.lmp.has_exceptions and self.lmp.lib.lammps_has_error(self.lmp.lmp):
---> 49 raise self.lmp._lammps_exception
50
51 # -------------------------------------------------------------------------
Exception: ERROR: Unrecognized fix style 'shake' is part of the RIGID package which is not enabled in this LAM
Any suggestions on what I can do?

You need to enable the RIGID package. For better flexibility with LAMMPS features, it is best to build it from source:
Download the latest stable version and change directory into it:
git clone -b stable https://github.com/lammps/lammps.git mylammps
cd mylammps/src
Include all the packages you need; you need RIGID to use fix shake:
make yes-rigid
Now build with MPI support and a shared library. This may take a while; you can add the '-j N' flag to build in parallel with N cores (I use 8):
make mpi mode=shlib -j 8
Finally, install PyLammps:
make install-python
Now you should be able to use PyLammps in Colab successfully.
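To confirm the package actually made it into the build, you can list the enabled packages from Python. A minimal sketch, assuming a recent LAMMPS Python module where the installed_packages property is available:
from lammps import lammps

L = lammps()
# installed_packages lists the packages compiled into the shared library;
# RIGID must appear here for fix shake / fix rigid to work
print(L.installed_packages)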

How to increase Google Colab storage capacity

I am working on a 70 GB dataset with a GPU runtime.
Can someone suggest a way to make a new notebook with more than 300 GB available, or a way to go back to the previous state?
You can make a new notebook with more than 300 GB using the following approaches:
Buy extra space for the corresponding account, either in Google Drive or in GCP storage, and then mount it in the notebook.
Alternatively, you can mount multiple drive sources. The following example mounts a second Google Drive account; this is useful if you have several accounts whose combined space is large enough, since you can then use the space across different mount points.
#Code base to mount first mount point:
from google.colab import drive
drive.mount('/content/drive01')
#Follow the verification process, and enter the token
#Code base to mount second mount point:
!apt-get install -y -qq software-properties-common module-init-tools
!add-apt-repository -y ppa:alessandro-strada/ppa 2>&1 > /dev/null
!apt-get update -qq 2>&1 > /dev/null
!apt-get -y install -qq google-drive-ocamlfuse fuse
from google.colab import auth
from oauth2client.client import GoogleCredentials
import getpass
auth.authenticate_user()
creds = GoogleCredentials.get_application_default()
prompt = !google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret} < /dev/null 2>&1 | grep URL
vcode = getpass.getpass(prompt[0] + '\n\nEnter verification code: ')
!echo {vcode} | google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret}
!sudo mkdir /content/drive02
!google-drive-ocamlfuse /content/drive02
Follow the verification process and enter the token twice. Note that you need to click on the account authentication URL twice and enter the token separately each time.
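Once both mounts succeed, a quick check that they are live and how much space each reports:
# both FUSE mounts should show up with their own capacity and usage
!df -h /content/drive01 /content/drive02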
Credit (with some issues fixed): https://www.youtube.com/watch?v=qQ3CUHjbJ0w

How to install Bob using Conda on Google Colaboratory

I have tried installing Conda and then Bob in Google Colab, but it still fails to import Bob. Here is a notebook that demonstrates the issue: https://colab.research.google.com/drive/1ns_pjEN0fFY5dNAO0FNLAG422APFu-NI
The steps are:
%%bash
# based on https://towardsdatascience.com/conda-google-colab-75f7c867a522
MINICONDA_INSTALLER_SCRIPT=Miniconda3-4.5.4-Linux-x86_64.sh
MINICONDA_PREFIX=/usr/local
wget https://repo.continuum.io/miniconda/$MINICONDA_INSTALLER_SCRIPT
chmod +x $MINICONDA_INSTALLER_SCRIPT
./$MINICONDA_INSTALLER_SCRIPT -b -f -p $MINICONDA_PREFIX
!conda install --channel defaults conda python=3.6 --yes --quiet
!conda install --yes --quiet --override-channels \
-c https://www.idiap.ch/software/bob/conda -c defaults \
python=3.6 bob.io.image bob.bio.face # add more Bob packages here as needed
import sys
sys.path.append("/usr/lib/python3.6/site-packages")
import pkg_resources
import bob.extension, bob.io.base, bob.io.image
which produces:
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-5-8e8041c9e60e> in <module>()
1 import pkg_resources
----> 2 import bob.extension, bob.io.base, bob.io.image
ModuleNotFoundError: No module named 'bob'
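One thing worth checking is whether the path appended to sys.path matches where Miniconda actually put the packages: the script sets MINICONDA_PREFIX=/usr/local, so conda's site-packages lives under /usr/local, not /usr. A quick diagnostic (the path below is an assumption based on that prefix):
# bob/ should appear here if the conda install succeeded
!ls /usr/local/lib/python3.6/site-packages | head -20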

Always connected to CPU on google-colab. GPU available only with python 2

In Google Colab, the GPU seems to be available only with Python 2. With Python 3 I have pulled out all the stops, but in vain.
I have changed the runtime via Edit > Notebook settings to Python 3 and GPU.
I have changed the runtime via Runtime > Connect to runtime as well.
I have connected and reconnected to the Google client using:
!apt-get install -y -qq software-properties-common python-software-properties module-init-tools
!add-apt-repository -y ppa:alessandro-strada/ppa 2>&1 > /dev/null
!apt-get update -qq 2>&1 > /dev/null
!apt-get -y install -qq google-drive-ocamlfuse fuse
from google.colab import auth
auth.authenticate_user()
from oauth2client.client import GoogleCredentials
creds = GoogleCredentials.get_application_default()
import getpass
!google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret} < /dev/null 2>&1 | grep URL
vcode = getpass.getpass()
!echo {vcode} | google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret}
as directed by this blog.
I connected to the runtime via steps 1 and 2 above, both before and after running step 3.
I am consistently getting:
import tensorflow as tf
tf.test.gpu_device_name()
output: ''
and:
from tensorflow.python.client import device_lib
device_lib.list_local_devices()
output:
[name: "/device:CPU:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 17193406649657173379]
Any help on how to connect to the GPU runtime?

When I've used Colaboratory's GPU, it has never shown up in the device list. If you just run a TensorFlow session, it will automatically use the GPU unless told otherwise.
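One way to see whether the GPU is actually being used is to enable device placement logging. A minimal sketch using the TF 1.x API that matches the question:
import tensorflow as tf

# log_device_placement prints the device each op is assigned to; on a GPU
# runtime you should see /device:GPU:0 in the log
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    a = tf.constant([1.0, 2.0, 3.0])
    print(sess.run(a * 2))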