Unable to run training command for LaBSE - tensorflow

I am trying to reproduce the fine-tuning stage of the LaBSE model (https://github.com/tensorflow/models/tree/master/official/projects/labse).
I have cloned the tensorflow/models repository and set the environment path as follows.
%env PYTHONPATH='/env/python:/content/drive/MyDrive/Colab_Notebooks/p3_sentence_similiarity/models'
!echo $PYTHONPATH
output: '/env/python:/content/drive/MyDrive/Colab_Notebooks/p3_sentence_similiarity/models'
Installed the prerequisites:
!pip install -U tensorflow
!pip install -U "tensorflow-text==2.10.*"
!pip install tf-models-official
Then I try to run the LaBSE training command from the README:
python3 /content/drive/MyDrive/Colab_Notebooks/p3_sentence_similiarity/models/official/projects/labse/train.py \
--experiment=labse/train \
--config_file=/content/drive/MyDrive/Colab_Notebooks/p3_sentence_similiarity/models/official/projects/labse/experiments/labse_bert_base.yaml \
--config_file=/content/drive/MyDrive/Colab_Notebooks/p3_sentence_similiarity/models/official/projects/labse/experiments/labse_base.yaml \
--params_override=${PARAMS} \
--model_dir=/content/drive/MyDrive/Colab_Notebooks/p3_sentence_similiarity/models \
--mode=train_and_eval
Issue
I get the following error.
File "/content/drive/MyDrive/Colab_Notebooks/p3_sentence_similiarity/models/official/projects/labse/train.py", line 23, in
from official.projects.labse import config_labse
ModuleNotFoundError: No module named 'official.projects.labse'
The import statement from official.projects.labse import config_labse fails.
System information
I executed this on Colab as well as on a GPU machine, and in both environments I get the same error.
I need to know why the import statement failed and what corrective action should be taken.
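Note that the echoed PYTHONPATH above still contains the literal quote characters, so Python ends up searching for a directory whose name starts with a quote. A minimal sketch of the two usual fixes (the Drive path is your own from above):
# Variant 1: set PYTHONPATH without quotes so that !python3 subprocesses inherit a usable value
%env PYTHONPATH=/env/python:/content/drive/MyDrive/Colab_Notebooks/p3_sentence_similiarity/models
# Variant 2: extend sys.path inside the notebook before importing directly
import sys
sys.path.insert(0, '/content/drive/MyDrive/Colab_Notebooks/p3_sentence_similiarity/models')
from official.projects.labse import config_labse  # should now resolve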

Related

Exception: ERROR: Unrecognized fix style 'shake' is part of the RIGID package which is not enabled in this LAM

I'm trying to run a LAMMPS script in a Colab environment. However, whenever I try to use the fix command with arguments from the RIGID package, the console gives me the same error.
I tested different installation and build methods, such as via the apt repository and CMake:
Method 1
!add-apt-repository -y ppa:gladky-anton/lammps
!add-apt-repository -y ppa:openkim/latest
!apt update
!apt install -y lammps-stable
Method 2
!apt install -y cmake build-essential git ccache
!rm -rf lammps
!git clone -b release https://github.com/lammps/lammps.git mylammps
!rm -rf build
!mkdir build
!apt install -y fftw3
!cd build; cmake -C ../mylammps/cmake/presets/most.cmake -D CMAKE_INSTALL_PREFIX=/usr \
-D CMAKE_CXX_COMPILER_LAUNCHER=ccache -D BUILD_SHARED_LIBS=on \
-D PKG_GPU=on -D GPU_API=cuda -D LAMMPS_EXCEPTIONS=on -D PKG_BODY=on \
-D PKG_KSPACE=on -D PKG_MOLECULE=on -D PKG_MANYBODY=on -D PKG_ASPHERE=on \
-D PKG_EXTRA-MOLECULE=on -D PKG_CG-DNA=on -D PKG_ATC=on \
-D PKG_EXTRA-FIX=on \
-D PKG_RIGID=on -D PKG_POEMS=on -D PKG_MACHDYN=on -D PKG_DPD-SMOOTH=on \
-D PKG_PYTHON=on ../mylammps/cmake
!cd build; make -j 2
!cd build; make install; make install-python
I also tested it by calling an external script using the file() method and writing commands to LAMMPS using the command() method.
The part of my Python script that raises the error is:
!pip install --user mpi4py
#!pip install lammps-cython
from mpi4py import MPI
from lammps import lammps
#from lammps import PyLammps
#from lammps import Lammps
import glob
from google.colab import files
from google.colab import drive
L = lammps()
...
L.command("fix freeze teflon setforce 0.0 0.0 0.0")
L.command("fix fSHAKE water shake 0.0001 10 0 b 1 a 1")
L.command("fix fRIGID RIG rigid molecule")
Or, with the same lines in a LAMMPS input script called via file():
#Using an external script
!pip install --user mpi4py
#!pip install lammps-cython
from mpi4py import MPI
from lammps import lammps
#from lammps import PyLammps
#from lammps import Lammps
import glob
from google.colab import files
from google.colab import drive
L = lammps()
...
L.file(input[0])
me = MPI.COMM_WORLD.Get_rank()
nprocs = MPI.COMM_WORLD.Get_size()
print("Proc %d out of %d procs has" % (me,nprocs),L)
MPI.Finalize()
The output error is:
---------------------------------------------------------------------------
Exception Traceback (most recent call last)
<ipython-input-60-a968719e9bc6> in <module>()
1 L = lammps()
----> 2 L.file(input[0])
3 me = MPI.COMM_WORLD.Get_rank()
4 nprocs = MPI.COMM_WORLD.Get_size()
5 print("Proc %d out of %d procs has" % (me,nprocs),L)
/usr/local/lib/python3.7/site-packages/lammps/core.py in file(self, path)
559
560 with ExceptionCheck(self):
--> 561 self.lib.lammps_file(self.lmp, path)
562
563 # -------------------------------------------------------------------------
/usr/local/lib/python3.7/site-packages/lammps/core.py in __exit__(self, exc_type, exc_value, traceback)
47 def __exit__(self, exc_type, exc_value, traceback):
48 if self.lmp.has_exceptions and self.lmp.lib.lammps_has_error(self.lmp.lmp):
---> 49 raise self.lmp._lammps_exception
50
51 # -------------------------------------------------------------------------
Exception: ERROR: Unrecognized fix style 'shake' is part of the RIGID package which is not enabled in this LAM
Any suggestions on what I can do?
You need to enable the RIGID package. For more flexibility over LAMMPS features, it is better to build it from source:
Download the latest stable version and change into its source directory:
git clone -b stable https://github.com/lammps/lammps.git mylammps
cd mylammps/src
Include all the packages you need; obviously you need RIGID to use fix shake:
make yes-rigid
Now you will need to build with MPI and as a shared library. This may take a while; you can add the -j N flag to build in parallel with N cores (I do it with 8):
make mpi mode=shlib -j 8
Finally, install PyLammps:
make install-python
Now you should be able to use PyLammps through Colab successfully.
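As a quick check after rebuilding, you can ask the Python module which packages were compiled in before re-running your script (a minimal sketch; the installed_packages property is an assumption about a reasonably recent LAMMPS Python module):
from lammps import lammps
L = lammps()
# installed_packages lists the packages compiled into this LAMMPS binary
print(L.installed_packages)
assert "RIGID" in L.installed_packages, "RIGID is still missing from this build"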

Problems running ImageDataBunch in Deepnote

I'm having trouble running this line of code in Deepnote, does anyone know why?
data = ImageDataBunch.from_folder(path, train="train", valid="test", ds_tfms=get_transforms(), size=(256,256), bs=32, num_workers=4).normalize()
The error says:
NameError: name 'ImageDataBunch' is not defined
And I had already imported the fastai library beforehand, so I don't get it!
The FastAI setup in Deepnote is not that straightforward. It's best to use a custom environment where you set things up in a Dockerfile, and everything then works in the notebook afterwards. I am not sure whether ImageDataBunch (or whatever you're trying to do) works the same way in FastAI v1 and v2, but here are the details for v1.
This is a Dockerfile which sets up the FastAI environment via conda:
# This is Dockerfile
FROM deepnote/python:3.9
RUN wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda.sh
RUN bash ~/miniconda.sh -b -p $HOME/miniconda
ENV PATH $HOME/miniconda/bin:$PATH
ENV PYTHONPATH $HOME/miniconda
RUN $HOME/miniconda/bin/conda install python=3.9 ipykernel -y
RUN $HOME/miniconda/bin/conda install -c fastai -c pytorch fastai -y
RUN $HOME/miniconda/bin/python -m ipykernel install --user --name=conda
ENV DEFAULT_KERNEL_NAME "conda"
After that, you can test the fastai imports in the notebook:
import fastai
from fastai.vision import *
print(fastai.__version__)
ImageDataBunch
And if you download and unpack this sample MNIST dataset, you should be able to load the data like you suggested:
data = ImageDataBunch.from_folder(path, train="train", valid="test", ds_tfms=get_transforms(), size=(256,256), bs=32, num_workers=4).normalize()
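If you don't want to download the archive by hand, fastai v1 can also fetch a sample MNIST for you (a sketch assuming the v1 untar_data/URLs helpers; note MNIST_SAMPLE ships train/ and valid/ folders rather than train/ and test/):
from fastai.vision import *
# untar_data downloads and unpacks the dataset, returning its local path
path = untar_data(URLs.MNIST_SAMPLE)
data = ImageDataBunch.from_folder(path, train="train", valid="valid", ds_tfms=get_transforms(), size=(256,256), bs=32, num_workers=4).normalize()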
Feel free to check out or clone my Deepnote project to continue working on this.

Setting up DeepLabV3 in Colab

So I am trying to set up DeepLab in Colab.
I am running:
[1]
from google.colab import drive
drive.mount('/content/drive')
%cd /content/drive/My\ Drive/deeplab_files
[2]
%env PYTHONPATH=/content/drive/My\ Drive/deeplab_files/:/content/drive/My\ Drive/deeplab_files/slim
!echo $PYTHONPATH
[3]
!python deeplab/vis.py \
--logtostderr \
--vis_split="val" \
--model_variant="xception_65" \
--atrous_rates=6 \
--atrous_rates=12 \
--atrous_rates=18 \
--output_stride=16 \
--decoder_output_stride=4 \
--vis_crop_size=360 \
--vis_crop_size=480 \
--dataset="camvid" \
--colormap_type="pascal" \
--checkpoint_dir='/content/drive/My\ Drive/deeplab_files/deeplab/datasets/PQR/exp/train_on_trainval_set/train' \
--vis_logdir='/content/drive/My\ Drive/deeplab_files/deeplab/datasets/PQR/exp/train_on_trainval_set/vis' \
--dataset_dir='/content/drive/My\ Drive/deeplab_files/deeplab/datasets/PQR/tfrecord'
The last command, however, returns
sh: 1: export: Drive/deeplab_files/slim:/content/drive/My Drive/deeplab_files/:/content/drive/My Drive/deeplab_files/slim: bad variable name
Traceback (most recent call last):
File "deeplab/vis.py", line 28, in <module>
from deeplab import common
ModuleNotFoundError: No module named 'deeplab'
Anyone have any idea how I can set up deeplab? I have it set up on my personal machine, but it is much too slow. I uploaded the entire folder to my gdrive.
The odd thing is that I can run
from deeplab import common
from the notebook, and that imports successfully.
Here is a GitHub repo containing a Colab notebook running DeepLab.
I have not tested it, but uploading your entire directory to Google Drive is not the right way to run things on Colab.
Think of Colab as a separate machine onto which you are mounting your Google Drive. Anything on your Google Drive is not automatically available to the Colab machine. You will have to add the path of your Google Drive folder (say, '/content/drive/My Drive/<path_to_your_folder>') to sys.path on the Colab machine using sys.path.insert(0, <path_of_your_drive_folder>) to make it available to the Python environment, as shown below.
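A minimal sketch (the Drive folder here is the one from the question):
import sys
# Make the project folder and its slim subfolder importable
sys.path.insert(0, '/content/drive/My Drive/deeplab_files')
sys.path.insert(0, '/content/drive/My Drive/deeplab_files/slim')
from deeplab import common  # should now resolve in the notebook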
Solved my question. The linked repo that abggcv gave unfortunately runs into the same issue this question was citing.
You should clone the repo as normal and run everything as normal. The only change is that before you run train.py, eval.py, or vis.py, you'll need to run the following block:
%cd /root/deeplab/models/research/
import sys
sys.path.extend(['/root/deeplab/models/research/', '/root/deeplab/models/research/slim/'])
Note that /root/deeplab/ is the path to where I cloned the repo. You'll need to change this if the directory where you cloned the repo is different.
Furthermore, for some reason, you won't be able to run train.py/eval.py/vis.py successively. Even clearing the flags will give you an error about a duplicate flag. To fix this, just restart the runtime (you won't lose your files).
Happy segmenting!
The deeplab import error occurs mostly when PYTHONPATH is not set up properly. The installation instructions given do not work in the Colab environment. The following has worked for me:
%cd /content/deeplab/models/research/
!mkdir -p deeplab/datasets/pascal_voc_seg/exp/train_on_train_set/train
!mkdir -p deeplab/datasets/pascal_voc_seg/exp/train_on_train_set/eval
!mkdir -p deeplab/datasets/pascal_voc_seg/exp/train_on_train_set/vis
!echo ${PYTHONPATH}
%env PATH_TO_TRAIN_DIR=/content/deeplab/models/research/deeplab/datasets/pascal_voc_seg/exp/train_on_train_set/train
%env PATH_TO_DATASET=/content/deeplab/models/research/deeplab/datasets/pascal_voc_seg/tfrecord
%env PYTHONPATH=/content/deeplab/models/research:/content/deeplab/models/research/deeplab:/content/deeplab/models/research/slim:/env/python
!echo ${PYTHONPATH}
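A quick sanity check that the paths took effect is to retry the import that originally failed:
!python -c "from deeplab import common; print('deeplab imports OK')"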
Here is my Colab notebook for training DeepLab, which worked.

Error on "import tensorflow as tf" in Python 3.5.2

I have installed TensorFlow in a virtual environment on Ubuntu 16.04. When I enter the virtualenv using the command "source ~/tensorflow/bin/activate", it activates. But then, when I enter the command "import tensorflow as tf", it gives me the following error:
"import: not authorized 'tf' # error/constitute.c/WriteImages/1028."
How do I solve this?
Maybe you forgot to tell the system which interpreter to use. The error actually comes from the shell, not Python: outside a Python interpreter, import is ImageMagick's screen-capture tool (hence the error/constitute.c reference in the message), so the line has to be run inside Python. Two variants:
Add the shebang #!/usr/bin/env python3 at the beginning of your script
OR
Run the script like python3 my_script.py
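For example, from the activated virtualenv, start the interpreter first and import there:
source ~/tensorflow/bin/activate
python3
>>> import tensorflow as tf
>>> print(tf.__version__)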

'tflite_convert' is not recognized as an internal or external command (in Windows)

I'm trying to convert my saved_model.pb file (from the Object Detection API) to .tflite for ML Kit, but when I execute the command in cmd:
tflite_convert \
--output_file=/saved_model/maonani.tflite \
--saved_model_dir=/saved_model/saved_model
I get a response saying:
C:\Users\LENOVO-PC\tensorflow> tflite_convert \ --output_file=/saved_model/maonani.tflite \ --saved_model_dir=/saved_model/saved_model
'tflite_convert' is not recognized as an internal or external command,
operable program or batch file.
What should I do to make this work?
Below is an answer for a Linux system; I hope it gives you ideas for Windows.
1. Locate the binary:
locate tflite_convert
2. If the output is <TFLITE_CONVERT_PATH>/tflite_convert, where TFLITE_CONVERT_PATH depends on your TF installation, execute:
<TFLITE_CONVERT_PATH>/tflite_convert \
--output_file=/saved_model/maonani.tflite \
--saved_model_dir=/saved_model/saved_model
3. If the output is not found, reinstall TF; in my case it is TF v1.9.
4. If step 2 works, you might add PATH=$PATH:<TFLITE_CONVERT_PATH> to your ~/.bashrc.
If the "locate" command is not recognized, install it with:
sudo apt-get install locate
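On Windows, if the console script never lands on your PATH, an alternative is to skip the CLI and call the converter from Python instead (a sketch assuming a TF version that ships tf.lite.TFLiteConverter, roughly 1.12 or newer; the paths are the ones from the question):
import tensorflow as tf
# Point these at your own SavedModel directory and desired output file
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model/saved_model")
tflite_model = converter.convert()
with open("saved_model/maonani.tflite", "wb") as f:
    f.write(tflite_model)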