I'm training an object detection model by following the guide here https://towardsdatascience.com/creating-your-own-object-detector-ad69dda69c85
On Google Colab I am able to execute the following, and it makes use of the GPU:
python train.py --train_dir=training/ --pipeline_config_path=training/ssd_mobilenet_v1_0.75_depth_quantized_300x300_coco14_sync.config
I would now like to train using the TPU, but this obviously does not just work out of the box. Running train.py is slow and appears to be using the CPU only. How can I achieve this?
When using a TPU in Google Colab, we should first run the code below to check that the TPU devices are properly recognized in the environment:
import os
import pprint
import tensorflow as tf
if 'COLAB_TPU_ADDR' not in os.environ:
    print('ERROR: Not connected to a TPU runtime; please see the first cell in this notebook for instructions!')
else:
    tpu_address = 'grpc://' + os.environ['COLAB_TPU_ADDR']
    print('TPU address is', tpu_address)

    with tf.Session(tpu_address) as session:
        devices = session.list_devices()

    print('TPU devices:')
    pprint.pprint(devices)
This should output a list of 8 TPU devices available in our Colab environment.
In order to run a tf.keras model on a TPU, we have to convert it to a TPU model using the tf.contrib.tpu.keras_to_tpu_model function.
This can be done with the code below:
# This address identifies the TPU we'll use when configuring TensorFlow.
TPU_WORKER = 'grpc://' + os.environ['COLAB_TPU_ADDR']
tf.logging.set_verbosity(tf.logging.INFO)
resnet_model = tf.contrib.tpu.keras_to_tpu_model(
    resnet_model,
    strategy=tf.contrib.tpu.TPUDistributionStrategy(
        tf.contrib.cluster_resolver.TPUClusterResolver(TPU_WORKER)))
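Once converted, the TPU model trains like a regular Keras model. Below is a minimal sketch, assuming resnet_model was already compiled and that x_train / y_train stand in for your own data; use a batch size that is a multiple of 8 so it can be sharded across the 8 TPU cores:
# Hypothetical training call: x_train / y_train stand in for your own data.
# A batch size that is a multiple of 8 lets it be split across the 8 TPU cores.
history = resnet_model.fit(x_train, y_train, epochs=5, batch_size=128)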
For more information, please refer to this Medium link and this link.
I am training an EfficientDet v2 model on Colab, with a dataset in COCO JSON format. The model config is:
gtf.Train_Dataset(root_dir, coco_dir, img_dir, set_dir, batch_size=8, image_size=512, use_gpu=True, num_workers=2)
gtf.Model();
gtf.Set_Hyperparams(lr=0.0001, val_interval=1, es_min_delta=0.0, es_patience=0)
%%time
gtf.Train(num_epochs=10, model_output_dir="trained/");
I am facing the following issue while training:
I tried adding the code below and restarting the runtime, but I am still facing the same issue.
import os
os.environ['CUDA_LAUNCH_BLOCKING'] = '1'
Can anyone help me solve this?
I need to export a custom object detection model, fine-tuned on a custom dataset, to TensorFlow Lite, so that it can run on Android devices.
I'm using TensorFlow 2.4.1 on Ubuntu 18.04, and so far this is what I did:
1. Fine-tuned an 'ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8' model, using a dataset of new images. I used the 'model_main_tf2.py' script from the repository;
2. Exported the model using 'exporter_main_v2.py':
python exporter_main_v2.py --input_type image_tensor --pipeline_config_path .\models\custom_model\pipeline.config --trained_checkpoint_dir .\models\custom_model\ --output_directory .\exported-models\custom_model
which produced a SavedModel (.pb file);
3. Tested the exported model for inference, and everything works fine. In the detection routine, I used:
def get_model_detection_function(model):
    """Get a tf.function for detection."""

    @tf.function
    def detect_fn(image):
        """Detect objects in image."""
        image, shapes = model.preprocess(image)
        prediction_dict = model.predict(image, shapes)
        detections = model.postprocess(prediction_dict, shapes)
        return detections, prediction_dict, tf.reshape(shapes, [-1])

    return detect_fn
and the shape of the produced image object is 640x640, as expected.
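For context, this is roughly how that function is wired up in my script (just a sketch; detection_model is the model restored elsewhere from pipeline.config and the fine-tuned checkpoint, and input_tensor is the test image as a [1, H, W, 3] float tensor):
# Sketch of how detect_fn is called; detection_model and input_tensor are
# built/restored elsewhere from pipeline.config and the checkpoint.
detect_fn = get_model_detection_function(detection_model)
detections, prediction_dict, shapes = detect_fn(input_tensor)
print(shapes)  # the preprocessed image dimensions, e.g. [640, 640, 3]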
Then, I tried to convert this .pb model to tflite.
After updating to the nightly version of TensorFlow (with the stable version I got an error), I was able to produce a .tflite file using this code:
import tensorflow as tf
from tflite_support import metadata as _metadata
saved_model_dir = 'exported-models/custom_model/'
## Convert the model
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.experimental_new_converter = True
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()
# Save the model.
with open('tflite/custom_model.tflite', 'wb') as f:
    f.write(tflite_model)
I tried to use this model in Android Studio, following the instructions given here.
However, I'm getting a couple of errors:
1. something regarding 'Not a valid TensorFlow Lite model' (I have to check this further);
2. the error:
java.lang.IllegalArgumentException: Cannot copy to a TensorFlowLite tensor (serving_default_input_tensor:0) with 3 bytes from a Java Buffer with 270000 bytes.
The second error seems to indicate there's something weird with the input expected from the tflite model.
I examined the file with Netron, and this is what I got:
the input is expected to have...1x1x1x3 shape, or am I misinterpreting the graph?
Should I somehow set the tensor input size when using the tflite exporter?
Anyway, what is the right way to export my custom model so that it can run on Android?
TF ops are only supported via the Flex delegate, and I bet that is the problem. If you want to check whether that's the case, you can do the following:
1) Download the benchmark app with Flex delegate support for TF ops. You can find it here, in the section Native benchmark binary: https://www.tensorflow.org/lite/performance/measurement. For example, for Android (aarch64) it is https://storage.googleapis.com/tensorflow-nightly-public/prod/tensorflow/release/lite/tools/nightly/latest/android_aarch64_benchmark_model_plus_flex
2) Connect your phone to your computer and, from the folder where you downloaded the binary, do adb push <apk_name> /data/local/tmp
3) Push your model: adb push <tflite_model> /data/local/tmp
4) Open a shell with adb shell and go to the folder: cd /data/local/tmp. Then run the app with ./<apk_name> --graph=<tflite_model>
Info from:
https://www.tensorflow.org/lite/guide/ops_select
https://www.tensorflow.org/lite/performance/measurement
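Besides the on-device benchmark, you can also sanity-check the converted model on your desktop with the Python interpreter from the regular tensorflow pip package (which ships with support for the Select TF ops); this sketch just inspects the expected input shape and dtype, assuming the .tflite path used above:
# Sketch: inspect the converted model's input/output signatures on the desktop.
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path='tflite/custom_model.tflite')
print(interpreter.get_input_details())   # expected input shape and dtype
print(interpreter.get_output_details())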
I would like to make sure whether the following steps I executed to get the .tflite of a yolov2-tiny model are correct or not.
Step 1: Saving the graph and weights to a protobuf file
flow --model cfg/yolov2-tiny.cfg --load bin/yolov2-tiny.weights --savepb
This command created the build_graph folder with yolov2-tiny.pb and yolov2-tiny.meta.
Step 2: Converting the .pb to .tflite
I executed the piece of code below to get yolov2-tiny.tflite:
import tensorflow as tf
localpb = 'yolov2-tiny.pb'
tflite_file = 'yolov2-tiny.tflite'
print("{} -> {}".format(localpb, tflite_file))
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    localpb,
    input_arrays=['input'],
    output_arrays=['output']
)
tflite_model = converter.convert()
open(tflite_file,'wb').write(tflite_model)
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
If the above steps I followed to get this tflite are correct, then please suggest the command to run this tflite file on the Coral Edge TPU USB Accelerator.
Thank you so much :)
Unfortunately, yolo models are not supported by the edgetpu compiler as of now. I recommend using mobile_ssd models instead.
For future reference, your pipeline should be:
1) Train the model
2) Convert it to tflite (see the quantization sketch below)
3) Compile it for the EdgeTPU (the step that actually delegates the work onto the TPU)
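For step 2 in particular, note that the edgetpu compiler only accepts fully integer-quantized .tflite models, so the conversion needs post-training quantization with a representative dataset. A rough sketch under those assumptions (the representative_dataset generator, the 300x300 input size, the file paths and the input/output tensor names are all hypothetical; substitute your own):
import numpy as np
import tensorflow as tf

def representative_dataset():
    # Hypothetical generator: yield a handful of real, preprocessed input images.
    for _ in range(100):
        yield [np.random.rand(1, 300, 300, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_frozen_graph(
    'frozen_inference_graph.pb',  # hypothetical path to your frozen model
    input_arrays=['input'],       # adjust to your model's input tensor name
    output_arrays=['output'])     # adjust to your model's output tensor name
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
tflite_quant_model = converter.convert()
open('model_quant.tflite', 'wb').write(tflite_quant_model)
The quantized .tflite file from step 2 is then what gets compiled for the Edge TPU in step 3.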
I'm trying to use a module from TensorFlow Hub (a word embedding module) with tf.contrib.estimator.RNNClassifier.
My desired model
embedded_text_feature_column = hub.text_embedding_column(
    key="description",
    module_spec="https://tfhub.dev/google/nnlm-en-dim128/1")

estimator = tf.contrib.estimator.RNNClassifier(
    sequence_feature_columns=[embedded_text_feature_column],
    num_units=[32, 16])
Running that returns the following error:
ValueError: All feature_columns must be of type _SequenceDenseColumn.
You can wrap a sequence_categorical_column with an embedding_column or indicator_column.
Given (
type <class 'tensorflow_hub.feature_column._TextEmbeddingColumn'>):
_TextEmbeddingColumn(key='title_description', module_spec=<tensorflow_hub.native_module._ModuleSpec object at 0x7fb0102a5a90>, trainable=False
)
A working model
Using the TF Hub module works fine with:
estimator = tf.estimator.DNNClassifier(
    hidden_units=[32, 16],
    feature_columns=[embedded_text_feature_column])
Is it possible to use the nnlm module with RNNClassifier?
The code corresponding to your desired model seems to work without error in Google Colab with TensorFlow version 1.15.
Please find the working code below:
!pip install tensorflow==1.15
import tensorflow as tf
import tensorflow_hub as hub
embedded_text_feature_column = hub.text_embedding_column(
    key="description",
    module_spec="https://tfhub.dev/google/nnlm-en-dim128/1")

estimator = tf.contrib.estimator.RNNClassifier(
    sequence_feature_columns=[embedded_text_feature_column],
    num_units=[32, 16])
Here is the link to the GitHub Colab gist.
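If you still hit the same ValueError on other versions, the error message itself points at the alternative route: tokenize the text yourself and wrap a sequence_categorical_column in an embedding_column. A rough sketch of that idea (the feature key and the vocabulary below are hypothetical):
# Hypothetical alternative: feed pre-tokenized text through a sequence column
# instead of the hub embedding column.
tokens = tf.feature_column.sequence_categorical_column_with_vocabulary_list(
    key="description_tokens",
    vocabulary_list=["good", "bad", "great", "terrible", "movie"])
tokens_embedded = tf.feature_column.embedding_column(tokens, dimension=16)

estimator = tf.contrib.estimator.RNNClassifier(
    sequence_feature_columns=[tokens_embedded],
    num_units=[32, 16])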
Google Colab offers TPUs as a runtime accelerator. I found an example, How to use TPU in the official TensorFlow GitHub repository, but the example did not work on Google Colaboratory. It got stuck on the following line:
tf.contrib.tpu.keras_to_tpu_model(model, strategy=strategy)
When I print the available devices on Colab, it returns [] for the TPU accelerator. Does anyone know how to use the TPU on Colab?
Here's a Colab-specific TPU example:
https://colab.research.google.com/github/tensorflow/tpu/blob/master/tools/colab/shakespeare_with_tpu_and_keras.ipynb
The key lines are those to connect to the TPU itself:
# This address identifies the TPU we'll use when configuring TensorFlow.
TPU_WORKER = 'grpc://' + os.environ['COLAB_TPU_ADDR']
...
tpu_model = tf.contrib.tpu.keras_to_tpu_model(
    training_model,
    strategy=tf.contrib.tpu.TPUDistributionStrategy(
        tf.contrib.cluster_resolver.TPUClusterResolver(TPU_WORKER)))
(Unlike a GPU, use of the TPU requires an explicit connection to the TPU worker. So, you'll need to tweak your training and inference definition in order to observe a speedup.)