How to use tensor2tensor to classify text?

I want to do binary text classification using tensor2tensor, with attention only and no LSTM or CNN preprocessing layers. I think the transformer_encoder model is the best fit for me, but I can't find any suitable predefined Problem or Hparams. Can anyone give me a text-classification example using tensor2tensor, or some other advice?

I would recommend following their sentiment_imdb problem, since sentiment analysis is a text-classification problem:
https://github.com/tensorflow/tensor2tensor/blob/master/tensor2tensor/data_generators/imdb.py
They also have a brief section about training a transformer_encoder for this problem on the main page:
https://github.com/tensorflow/tensor2tensor#sentiment-analysis

Try this:
PROBLEM=sentiment_imdb
MODEL=transformer_encoder
HPARAMS=transformer_tiny

DATA_DIR=$HOME/t2t_data
TMP_DIR=/tmp/t2t_datagen
TRAIN_DIR=$HOME/t2t_train/$PROBLEM/$MODEL-$HPARAMS
mkdir -p $DATA_DIR $TMP_DIR $TRAIN_DIR

# Generate data
t2t-datagen \
  --data_dir=$DATA_DIR \
  --tmp_dir=$TMP_DIR \
  --problem=$PROBLEM

# Train
# * If you run out of memory, add --hparams='batch_size=1024'.
t2t-trainer \
  --data_dir=$DATA_DIR \
  --problem=$PROBLEM \
  --model=$MODEL \
  --hparams_set=$HPARAMS \
  --output_dir=$TRAIN_DIR
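If your data doesn't match sentiment_imdb, you can also register your own problem. Here is a minimal sketch of a binary text-classification problem built on t2t's Text2ClassProblem base class; the TSV path and the label names are placeholders for your own data:

# my_classification_problem.py -- minimal sketch; path and labels are placeholders.
from tensor2tensor.data_generators import text_problems
from tensor2tensor.utils import registry

@registry.register_problem
class MyBinaryClassification(text_problems.Text2ClassProblem):
  """Binary text classification from a `label<TAB>text` file."""

  @property
  def approx_vocab_size(self):
    return 2**13  # ~8k subword vocabulary

  @property
  def num_classes(self):
    return 2

  def class_labels(self, data_dir):
    del data_dir
    return ["negative", "positive"]

  @property
  def is_generate_per_split(self):
    return False  # let t2t create the train/eval split

  def generate_samples(self, data_dir, tmp_dir, dataset_split):
    del data_dir, dataset_split
    with open("/tmp/my_data.tsv") as f:  # placeholder input file
      for line in f:
        label, text = line.strip().split("\t", 1)
        yield {"inputs": text, "label": int(label)}

Then point t2t-datagen and t2t-trainer at it with --t2t_usr_dir=<directory containing this file> and --problem=my_binary_classification.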

Related

How to prune a model from model_main_tf2

I have trained a custom object detection model using
python model_main_tf2.py \
  --pipeline_config_path=/ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu8/pipeline.config \
  --model_dir=/content/drive/MyDrive/training_object \
  --alsologtostderr
I would like to prune the resulting model according to the official guide here. However, the guide works with a Keras-format model, while the result of model_main_tf2 is a tensorflow.python.saved_model.load.Loader._recreate_base_user_object.<locals>._UserObject and does not have metadata.
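For context, the pruning guide's workflow assumes a tf.keras model object, roughly as in the sketch below; the model, data, and schedule values here are placeholders, which is exactly why the SavedModel _UserObject from model_main_tf2 can't be fed to it directly:

# Sketch of the Keras-based pruning workflow from the TF Model Optimization
# guide; `model` and the training data are placeholders.
import tensorflow as tf
import tensorflow_model_optimization as tfmot

model = tf.keras.applications.MobileNetV2(weights=None)  # placeholder model

pruning_params = {
    "pruning_schedule": tfmot.sparsity.keras.PolynomialDecay(
        initial_sparsity=0.5, final_sparsity=0.8, begin_step=0, end_step=1000
    )
}
pruned = tfmot.sparsity.keras.prune_low_magnitude(model, **pruning_params)
pruned.compile(optimizer="adam", loss="categorical_crossentropy")

# Fine-tune with the pruning callback, then strip the pruning wrappers:
# pruned.fit(train_ds, epochs=2,
#            callbacks=[tfmot.sparsity.keras.UpdatePruningStep()])
final_model = tfmot.sparsity.keras.strip_pruning(pruned)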

How to use EfficientNet Lite models as backbone for keypoint regression?

I would like to employ the EfficientNet Lite 0 model as a backbone to perform a keypoint regression task. However, I get stuck at loading the model from either TensorFlow Hub or the official GitHub repository. Could you please explain how I can:
import such model in Tensorflow with checkpoints from ImageNet
modify the last layers of the network
modify the loss according to my task
retrain the network
I am looking forward to applying EfficientNet Lite, since I would like to convert everything to TF Lite.
TensorFlow Lite currently doesn't support EfficientNet Lite, but it does support a mobile-friendly (CPU & GPU) CenterNet. See this Colab, which demonstrates how to use the model.
Commands to convert the keypoints model:
# Get mobile-friendly CenterNet for Keypoint detection task.
# See TensorFlow 2 Detection Model Zoo for more details:
# https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md
wget http://download.tensorflow.org/models/object_detection/tf2/20210210/centernet_mobilenetv2fpn_512x512_coco17_kpts.tar.gz
tar -xf centernet_mobilenetv2fpn_512x512_coco17_kpts.tar.gz
rm centernet_mobilenetv2fpn_512x512_coco17_kpts.tar.gz*
# Export the intermediate SavedModel that outputs 10 detections & takes in an
# image of dim 320x320.
# Modify these parameters according to your needs.
python models/research/object_detection/export_tflite_graph_tf2.py \
  --pipeline_config_path=centernet_mobilenetv2_fpn_kpts/pipeline.config \
  --trained_checkpoint_dir=centernet_mobilenetv2_fpn_kpts/checkpoint \
  --output_directory=centernet_mobilenetv2_fpn_kpts/tflite \
  --centernet_include_keypoints=true \
  --keypoint_label_map_path=centernet_mobilenetv2_fpn_kpts/label_map.txt \
  --max_detections=10 \
  --config_override=" \
    model{ \
      center_net { \
        image_resizer { \
          fixed_shape_resizer { \
            height: 320 \
            width: 320 \
          } \
        } \
      } \
    }"

tflite_convert --output_file=centernet_mobilenetv2_fpn_kpts/model.tflite \
  --saved_model_dir=centernet_mobilenetv2_fpn_kpts/tflite/saved_model
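Once converted, a quick way to sanity-check the .tflite file is to run it once through the TFLite interpreter in Python. A minimal sketch, assuming the 320x320 export from the commands above (the random image is a stand-in for real preprocessed input):

# Minimal sketch: run the converted CenterNet keypoints model once.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(
    model_path="centernet_mobilenetv2_fpn_kpts/model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Build a dummy input matching the model's declared shape and dtype.
dummy = np.random.rand(*input_details[0]["shape"]).astype(
    input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()

# Print the shape of each output (boxes, scores, keypoints, ...).
for out in output_details:
    print(out["name"], interpreter.get_tensor(out["index"]).shape)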

TensorFlow Lite: High loss in accuracy after converting model to tflite

I have been trying TFLite to increase detection speed on Android, but strangely my .tflite model now almost only detects one category.
I have tested the .pb model that I got after retraining a MobileNet and the results are good, but for some reason, when I convert it to .tflite the detection is way off...
For the retraining I used the retrain.py file from TensorFlow for Poets 2.
I am using the following commands to retrain, optimize for inference, and convert the model to tflite:
python retrain.py \
  --image_dir ~/tf_files/tw/ \
  --tfhub_module https://tfhub.dev/google/imagenet/mobilenet_v1_100_224/feature_vector/1 \
  --output_graph ~/new_training_dir/retrainedGraph.pb \
  --saved_model_dir ~/new_training_dir/model/ \
  --how_many_training_steps 500

sudo toco \
  --input_file=retrainedGraph.pb \
  --output_file=optimized_retrainedGraph.pb \
  --input_format=TENSORFLOW_GRAPHDEF \
  --output_format=TENSORFLOW_GRAPHDEF \
  --input_shape=1,224,224,3 \
  --input_array=Placeholder \
  --output_array=final_result

sudo toco \
  --input_file=optimized_retrainedGraph.pb \
  --input_format=TENSORFLOW_GRAPHDEF \
  --output_format=TFLITE \
  --output_file=retrainedGraph.tflite \
  --inference_type=FLOAT \
  --inference_input_type=FLOAT \
  --input_arrays=Placeholder \
  --output_array=final_result \
  --input_shapes=1,224,224,3
Am I doing anything wrong here? Where could the loss in accuracy come from?
I faced the same issue when converting a .pb model to .lite.
In fact, my accuracy dropped from 95% to 30%!
It turned out the mistake was not in the conversion from .pb to .lite, nor in the command used to do so, but in how the image was loaded and pre-processed before being passed to the lite model and inferred with the
interpreter.invoke()
command.
The code below is what I mean by pre-processing:
test_image = cv2.imread(file_name)
test_image = cv2.resize(test_image, (299, 299), interpolation=cv2.INTER_AREA)
test_image = np.expand_dims(test_image / 255, axis=0).astype(np.float32)
interpreter.set_tensor(input_tensor_index, test_image)
interpreter.invoke()
# output_details comes from interpreter.get_output_details();
# result is the list of class label strings.
output = interpreter.get_tensor(output_details[0]['index'])
digit = np.argmax(output[0])
prediction = result[digit]
As you can see, two crucial pre-processing steps are applied to the image once it is read with imread():
i) The image must be resized to the "input_height" and "input_width" of the input image/tensor used during training. In my case (Inception v3) this was 299 for both "input_height" and "input_width". (Read the model's documentation for this value, or look for the variable in the file you used to train or retrain the model.)
ii) The next command in the above code is:
test_image = np.expand_dims(test_image / 255, axis=0).astype(np.float32)
I derived this from the general formula in the model code:
test_image = np.expand_dims((test_image - input_mean) / input_std, axis=0).astype(np.float32)
Reading the documentation revealed that for my architecture, input_mean = 0 and input_std = 255.
When I made these changes, I got the expected accuracy (~90%).
Hope this helps.
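Rather than hard-coding the input size, you can also read the expected geometry straight from the .tflite model itself, which catches mismatches like the one above early. A minimal sketch, using the retrainedGraph.tflite file from the question:

# Read the expected input shape and dtype from the model
# instead of hard-coding 299x299 (or 224x224 for MobileNet).
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="retrainedGraph.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
_, height, width, channels = inp["shape"]
print("resize images to", (height, width), "dtype", inp["dtype"])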
Please file an issue on GitHub (https://github.com/tensorflow/tensorflow/issues) and add the link here.
Also, please add more details about what you are retraining the last layer for.

How to train TensorFlow's deeplab model on Cityscapes?

Is it possible to train the current deeplab model in TensorFlow to reasonable accuracy using 4 GPUs with 11 GB each? I can fit a batch of 2 per GPU, so I am running a total batch size of 8 across 4 clones.
Following the instructions included with the model, I get a mean IoU of < 30% after 90,000 iterations.
PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim python deeplab/train.py \
  --logtostderr --training_number_of_steps=90000 \
  --train_split="train" --model_variant="xception_65" \
  --atrous_rates=6 --atrous_rates=12 --atrous_rates=18 \
  --output_stride=16 --decoder_output_stride=4 --train_crop_size=769 \
  --train_crop_size=769 --train_batch_size=8 --num_clones=4 \
  --dataset="cityscapes" \
  --tf_initial_checkpoint=deeplab/models/xception/model.ckpt \
  --train_logdir=$LOGDIR \
  --dataset_dir=deeplab/datasets/cityscapes/tfrecord
I have tried with batch norm both enabled and disabled without much difference in outcome.
Thanks!
It turned out I needed a much larger learning rate than the default: 1e-2 gives results closer to the published ones, together with a batch size of 15 and a smaller crop size.
If you check this link: https://github.com/tensorflow/models/blob/master/research/deeplab/g3doc/model_zoo.md
it has links to DeepLab checkpoints pretrained on Cityscapes, including one with a MobileNet v2 backbone. You can modify the existing shell scripts present here to train on Cityscapes.

How do I train a new model with my own custom data on TensorFlow

Please forgive me, I am quite new to TensorFlow, so please keep the response as detailed as possible. Thank you very much!
My question is that I have run this script with some parameters
https://github.com/tensorflow/models/blob/master/inception/inception/data/build_image_data.py
# python build_image_data.py --train_directory /Input --validation_directory /validation --output_directory /output --labels_file /labels_file
to build image data, and got some output such as
train-00000-of-00002, train-00001-of-00002, validation-00000-of-00002, validation-00001-of-00002
After that, how do I train a new model with the above custom data in TensorFlow?
Thank you very much!
I have resolved this question myself, thanks everyone.
http://blog.twman.org/2016/06/tensorflow.html
bazel build inception/build_image_data
bazel-bin/inception/build_image_data --train_directory="${TRAIN_DIR}" \
  --validation_directory="${VALIDATION_DIR}" \
  --output_directory="${OUTPUT_DIRECTORY}" \
  --labels_file="${LABELS_FILE}" \
  --train_shards=128 \
  --validation_shards=24 \
  --num_threads=8
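To actually train on the generated shards, you need to parse them back into images and labels. A minimal sketch in current TensorFlow, using the image/encoded and image/class/label feature keys that build_image_data.py writes (the glob pattern and image size are placeholders):

# Sketch: read the train-?????-of-????? shards written by build_image_data.py.
import tensorflow as tf

feature_spec = {
    "image/encoded": tf.io.FixedLenFeature([], tf.string),
    "image/class/label": tf.io.FixedLenFeature([], tf.int64),
}

def parse(record):
    parsed = tf.io.parse_single_example(record, feature_spec)
    image = tf.io.decode_jpeg(parsed["image/encoded"], channels=3)
    image = tf.image.resize(image, (224, 224)) / 255.0
    return image, parsed["image/class/label"]  # label as stored by the script

dataset = (
    tf.data.TFRecordDataset(tf.io.gfile.glob("/output/train-*"))
    .map(parse, num_parallel_calls=tf.data.AUTOTUNE)
    .batch(32)
    .prefetch(tf.data.AUTOTUNE)
)
# dataset can now be fed to model.fit(...) for any Keras classifier.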