I'm using the make_image_classifier Python script to retrain MobileNetV2 on a new set of images. My end goal is to make predictions with TensorFlow.js in the browser.
This is exactly what I'm doing:
Step 1: Retrain the model
make_image_classifier \
--image_dir input_data \
--tfhub_module https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4 \
--image_size 224 \
--saved_model_dir ./trained_model \
--labels_output_file class_labels.txt \
--tflite_output_file new_mobile_model.tflite
Step 2: Convert the tf saved model to a graph model using tensorflowjs_converter
tensorflowjs_converter \
--input_format=tf_saved_model \
--output_format=tfjs_graph_model \
--signature_name=serving_default \
--saved_model_tags=serve \
trained_model/ \
web_model/
Step 3: Load the new model in the browser, preprocess an image input, and ask the model to make a prediction
tf.loadGraphModel('model.json').then(function (m) {
    // Once the graph model has loaded, grab the image element,
    // preprocess it, and run the prediction.
    var img = document.getElementById("img");
    var processed = preprocessImage(img, "mobilenet");
    window.prediction = m.predict(processed);
    window.prediction.print();
});
function preprocessImage(image, modelName) {
    // Read the pixels, resize to the 224x224 input the model expects,
    // and cast to float.
    let tensor = tf.browser.fromPixels(image)
        .resizeNearestNeighbor([224, 224])
        .toFloat();
    console.log('tensor pro', tensor);
    if (modelName == undefined) {
        return tensor.expandDims();
    }
    if (modelName == "mobilenet") {
        // Scale pixel values from [0, 255] to [-1, 1].
        let offset = tf.scalar(127.5);
        console.log('offset', offset);
        return tensor.sub(offset)
            .div(offset)
            .expandDims();
    } else {
        throw new Error("Unknown Model error");
    }
}
I'm getting invalid results. I checked the predictions made by the initial model and they are correct, so I'm thinking either the conversion is not happening properly or I'm not preprocessing the image in the same manner the initial script does.
Help.
P.S.: When running the converter, I get the following message. Not sure if it's directly relevant to what I'm experiencing.
tensorflow/core/graph/graph_constructor.cc:750 Node 'StatefulPartitionedCall' has 71 outputs but the _output_shapes attribute specifies shapes for 605 outputs. Output shapes may be inaccurate.
make_image_classifier creates a SavedModel geared toward TensorFlow Lite. If you would rather convert MobileNet to TensorFlow.js, the command to use is given in this answer.
Instead of using make_image_classifier, you would need to use retrain.py, which can be downloaded as follows:
curl -LO https://github.com/tensorflow/hub/raw/master/examples/image_retraining/retrain.py
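Independently of which retraining script you use, it is worth running the SavedModel in Python with the exact preprocessing your browser code applies and comparing the outputs before blaming the converter. Below is a minimal sketch (the test image filename is hypothetical). One thing to check: TF2 TF Hub image modules conventionally take inputs scaled to [0, 1], so if the [-1, 1] scaling below does not reproduce the good predictions, try plain division by 255 and mirror whichever works in the JS.

import tensorflow as tf

# Load the retrained SavedModel and grab its serving signature.
loaded = tf.saved_model.load("./trained_model")
infer = loaded.signatures["serving_default"]
input_name = list(infer.structured_input_signature[1].keys())[0]

# Same preprocessing as the JS code: resize to 224x224 (bilinear here;
# the JS code uses nearest-neighbor, which can also matter), then scale
# pixel values from [0, 255] to [-1, 1].
img = tf.io.decode_jpeg(tf.io.read_file("test.jpg"), channels=3)
img = tf.image.resize(img, [224, 224])
img = (tf.cast(img, tf.float32) - 127.5) / 127.5
img = tf.expand_dims(img, 0)

print(infer(**{input_name: img}))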
Related
I am encountering a ValueError in my Python code when trying to fine-tune Hugging Face's distribution of the GPT-2 model. Specifically:
ValueError: Dimensions must be equal, but are 64 and 0 for
'{{node Equal_1}} = Equal[T=DT_FLOAT, incompatible_shape_error=true](Cast_18, Cast_19)'
with input shapes: [64,0,1024], [2,0,12,1024].
I have around 100 text files that I concatenate into a string variable called raw_text and then pass into the following function to create training and testing TensorFlow datasets:
def to_datasets(raw_text):
    # split the raw text in smaller sequences
    seqs = [
        raw_text[SEQ_LEN * i:SEQ_LEN * (i + 1)]
        for i in range(len(raw_text) // SEQ_LEN)
    ]

    # set up Hugging Face GPT-2 tokenizer
    tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
    tokenizer.pad_token = tokenizer.eos_token

    # tokenize the character sequences
    tokenized_seqs = [
        tokenizer(seq, padding="max_length", return_tensors="tf")["input_ids"]
        for seq in seqs
    ]

    # convert tokenized sequences into TensorFlow datasets
    trn_seqs = tf.data.Dataset \
        .from_tensor_slices(tokenized_seqs[:int(len(tokenized_seqs) * TRAIN_PERCENT)])
    tst_seqs = tf.data.Dataset \
        .from_tensor_slices(tokenized_seqs[int(len(tokenized_seqs) * TRAIN_PERCENT):])

    def input_and_target(x):
        return x[:-1], x[1:]

    # map into (input, target) tuples, shuffle order of elements, and batch
    trn_dataset = trn_seqs.map(input_and_target) \
        .shuffle(SHUFFLE_BUFFER_SIZE) \
        .batch(BATCH_SIZE, drop_remainder=True)
    tst_dataset = tst_seqs.map(input_and_target) \
        .shuffle(SHUFFLE_BUFFER_SIZE) \
        .batch(BATCH_SIZE, drop_remainder=True)

    return trn_dataset, tst_dataset
I then try to train my model, calling train_model(*to_datasets(raw_text)):
def train_model(trn_dataset, tst_dataset):
    # import Hugging Face GPT-2 model
    model = TFGPT2Model.from_pretrained("gpt2")
    model.compile(
        optimizer=tf.keras.optimizers.Adam(),
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=tf.metrics.SparseCategoricalAccuracy()
    )
    model.fit(
        trn_dataset,
        epochs=EPOCHS,
        initial_epoch=0,
        validation_data=tst_dataset
    )
The ValueError is triggered on the model.fit() call. The variables in all-caps are settings pulled in from a JSON file. Currently, they are set to:
{
"BATCH_SIZE":64,
"SHUFFLE_BUFFER_SIZE":10000,
"EPOCHS":500,
"SEQ_LEN":2048,
"TRAIN_PERCENT":0.9
}
Any information regarding what this error means or ideas on how to resolve it would be greatly appreciated. Thank you!
I'm having the same problem, but when I change the batch size to 12 (the same as the n_layer parameter in the GPT-2 config file) it works.
I don't know why it works, but you can try it...
If you manage to solve it in a different way, I'd be glad to hear it.
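Not a root-cause fix, but dimension errors like this are much easier to localize by inspecting the dataset shapes before fit() ever runs. A small sketch using the question's own to_datasets; the 0 in the reported shapes hints at an empty axis, e.g. slicing away a size-1 dimension in input_and_target:

# Print the element spec and one concrete batch to see where the empty axis appears.
trn_dataset, tst_dataset = to_datasets(raw_text)
print(trn_dataset.element_spec)
for x, y in trn_dataset.take(1):
    print("input:", x.shape, "target:", y.shape)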
When I convert a frozen PB model to a TensorFlow.js model I lose all accuracy in my predictions. Can anyone tell me why and what I am doing wrong?
I have done the following things: I retrained the ImageNet model with my own dataset as described here:
https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/#0
I get accurate results with the frozen model when I run the following command, for example:
python3 -m scripts.label_image \
--graph=tf_files/retrained_graph.pb \
--image=/mnt/c//Users/Harry/Pictures/220px-Afghane.jpg
The output it gives is spot on:
afghan hound (score=0.98313)
briard (score=0.00433)
lhasa (score=0.00401)
sussex spaniel (score=0.00346)
otterhound (score=0.00116)
I converted my frozen model to a TensorFlow.js model using the TensorFlow.js converter with the following command:
tensorflowjs_converter \
--input_format=tf_frozen_model \
--output_node_names='final_result' \
'C:/Code/tensorflow-for-poets-2/tf_files/retrained_graph.pb' \
'C:/tensorflow output 2'
When I run a prediction on the TensorFlow.js model with the same image I used with the frozen model, I get terrible results:
Loading model:
const MODEL_URL = 'assets/dog-model/tensorflowjs_model.pb';
const WEIGHTS_URL = 'assets/dog-model/weights_manifest.json';
loadFrozenModel(MODEL_URL, WEIGHTS_URL).then(
result => (this.model = result)
);
Predicting results:
const image = tf.browser
.fromPixels(this.staticImage.nativeElement)
.resizeNearestNeighbor([224, 224])
.toFloat()
.sub(meanImageNetRGB)
.expandDims();
console.log(image);
const prediction = this.model.predict(image);
Output:
yorkshire terrier: 0.2447875738143921
komondor: 0.22793063521385193
ibizan hound: 0.0579879954457283
saluki: 0.04560968279838562
maltese dog: 0.04430125281214714
The inaccuracy has to do with the input to the model.
Make sure that the operations (cropping, reshaping, normalization, ...) used to create the tensor representing the image are alike in both versions (Python and JS).
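For example, note that the JS snippet above subtracts a mean but never divides by a standard deviation, which is one place the two pipelines can diverge. A quick way to compare is to reproduce the Python-side preprocessing and print a few values to check against what the JS tensor contains. A sketch, assuming label_image.py-style (pixel - input_mean) / input_std normalization; the 128/128 values are an assumption, so check what your scripts/label_image.py actually uses:

import numpy as np
from PIL import Image

# Mirror the label_image.py preprocessing: nearest-neighbor resize
# (matching resizeNearestNeighbor in the JS), then normalize.
input_mean, input_std = 128.0, 128.0  # assumption: check scripts/label_image.py
img = Image.open("220px-Afghane.jpg").resize((224, 224), Image.NEAREST)
arr = (np.asarray(img, dtype=np.float32) - input_mean) / input_std
print(arr[0, :3])  # compare against the values the JS tensor prints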
I'm trying to send a job up to my object detection model on CloudML to get predictions. I'm following the guide at https://cloud.google.com/ml-engine/docs/online-predict but I'm getting an error when submitting the request:
RuntimeError: Prediction failed: Error processing input: Expected uint8, got '\xf6>\x00\x01\x04\xa4d\x94...(more bytes)...\x00\x10\x10\x10\x04\x80\xd9' of type 'str' instead.
This is my code:
img = base64.b64encode(open("file.jpg", "rb").read()).decode('utf-8')
json = {"b64": img}
result = predict_json(project, model, json, "v1")
My fault, I forgot to add --input_type encoded_image_string_tensor when I exported the graph.
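For anyone hitting the same error: once the graph is exported with --input_type encoded_image_string_tensor, each instance in the online-prediction request carries the base64 bytes under the "b64" key. A sketch using the Google API client (project, model, and version names are hypothetical):

import base64
from googleapiclient import discovery

# Read the image and base64-encode it, as in the question.
with open("file.jpg", "rb") as f:
    img = base64.b64encode(f.read()).decode("utf-8")

# Send the encoded bytes wrapped in an instances list.
service = discovery.build("ml", "v1")
name = "projects/my-project/models/my-model/versions/v1"
response = service.projects().predict(
    name=name,
    body={"instances": [{"b64": img}]}
).execute()
print(response["predictions"])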
I save the graph to a .pb file, but I get an error when I convert the .pb to .dlc. Does anyone know why?
My code to build the model:
import tensorflow as tf
from tensorflow.python.framework.graph_util import convert_variables_to_constants
from tensorflow.python.ops import variable_scope

X = tf.placeholder(tf.float32, shape=[None, 1], name="input")

with variable_scope.variable_scope("input"):
    a = tf.Variable([[1]], name="a", dtype=tf.float32)
    g = X * a

with variable_scope.variable_scope("output"):
    b = tf.Variable([[0]], name="b", dtype=tf.float32)
    ss = tf.add(g, b, name="output")

sess = tf.Session()
sess.run(tf.global_variables_initializer())

graph = convert_variables_to_constants(sess, sess.graph_def, ["output/output"])
tf.train.write_graph(graph, './linear/', 'graph.pb', as_text=False)
sess.close()
convert cmd:
snpe-tensorflow-to-dlc --graph graph_sc.pb -i input 1 --out_node output/output --allow_unconsumed_nodes
error message:
2017-10-26 01:55:15,919 - 390 - INFO - INFO_ALL_BUILDING_LAYER_W_NODES: Building layer (ElementWiseMul) with nodes: [u'input_1/mul']
~/snpe-sdk/snpe-1.6.0/lib/python/converters/tensorflow/layers/eltwise.py:108: RuntimeWarning: error_code=1002; error_message=Layer paramter value is invalid. Layer input_1/mul: at least two inputs required, have 1; error_component=Model Validation; line_no=732; thread_id=140514161018688
output_name)
2017-10-26 01:55:15,920 - 390 - INFO - INFO_ALL_BUILDING_LAYER_W_NODES: Building layer (ElementWiseSum) with nodes: [u'output/output']
~/snpe-sdk/snpe-1.6.0/lib/python/converters/tensorflow/layers/eltwise.py:84: RuntimeWarning: error_code=1002; error_message=Layer paramter value is invalid. Layer output/output: at least two inputs required, have 1; error_component=Model Validation; line_no=732; thread_id=140514161018688
output_name)
SNPE requires a 3D tensor as input. Try updating your command from -i input 1 to -i input 1,1,1.
The input_dim argument to snpe-tensorflow-to-dlc should describe a 3-dimensional tensor, as in the example below:
snpe-tensorflow-to-dlc --graph $SNPE_ROOT/models/inception_v3/tensorflow/inception_v3_2016_08_28_frozen.pb
--input_dim input "1,299,299,3" --out_node "InceptionV3/Predictions/Reshape_1" --dlc inception_v3.dlc
--allow_unconsumed_nodes
For a more detailed reference on converting a TensorFlow model to DLC using the Neural Processing SDK, follow the link below:
https://developer.qualcomm.com/docs/snpe/model_conv_tensorflow.html
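When a conversion fails like this, it can also help to confirm the graph's actual node names and ops before running the converter, so the --input_dim and --out_node arguments are known to be right. A minimal TF1-style sketch:

import tensorflow as tf

# Parse the frozen graph and list its nodes.
graph_def = tf.GraphDef()
with open("graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())
for node in graph_def.node:
    print(node.name, node.op)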
I use ssd_mobilenets in the Object Detection API to train my own model, and I get .ckpt files. It works well on my computer, but now I want to use the model on my phone, so I need to convert it to a .pb file. I do not know how to do this; can anyone help? By the way, the graph of ssd_mobilenets is complex, and I cannot find the output of the model. Does anyone know the name of the output?
Use export_inference_graph.py to convert the model checkpoint file into a .pb file.
python tensorflow_models/object_detection/export_inference_graph.py \
--input_type image_tensor \
--pipeline_config_path architecture_used_while_training.config \
--trained_checkpoint_prefix path_to_saved_ckpt/model.ckpt-NUMBER \
--output_directory model/
As for the output names: graphs exported this way expose detection_boxes, detection_scores, detection_classes, and num_detections as outputs.
This is the 4th code cell in object_detection_tutorial.ipynb at this link: https://github.com/tensorflow/models/blob/master/research/object_detection/object_detection_tutorial.ipynb
# What model to download.
MODEL_NAME = 'ssd_mobilenet_v1_coco_2017_11_17'
MODEL_FILE = MODEL_NAME + '.tar.gz'
DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/'
# Path to frozen detection graph. This is the actual model that is used for the object detection.
PATH_TO_CKPT = MODEL_NAME + '/frozen_inference_graph.pb'
# List of the strings that is used to add correct label for each box.
PATH_TO_LABELS = os.path.join('data', 'mscoco_label_map.pbtxt')
NUM_CLASSES = 90
Now the cell clearly gives the .pb filename, which is frozen_inference_graph.pb.
So you already have the .pb file; why do you want to convert?
Anyway, you can refer to this link for freezing the graph: https://github.com/jayshah19949596/Tensorboard-Visualization-Freezing-Graph
You need to use the tensorflow.python.tools.freeze_graph() function to convert your .ckpt file to a .pb file.
The code below shows how you do it:
freeze_graph.freeze_graph(input_graph_path,
input_saver_def_path,
input_binary,
input_checkpoint_path,
output_node_names,
restore_op_name,
filename_tensor_name,
output_graph_path,
clear_devices,
initializer_nodes)
input_graph_path: the path to the .pb file where you will write your graph; this .pb file is not frozen. You will use tf.train.write_graph() to write the graph.
input_saver_def_path: you can keep it an empty string.
input_binary: a boolean; keep it False so that the file generated is not binary and is human-readable.
input_checkpoint_path: the path to the .ckpt file.
output_graph_path: the path where you want to write your frozen .pb file.
clear_devices: a boolean; keep it False.
output_node_names: the explicit tensor node names that you want to save.
restore_op_name: a string that should be "save/restore_all".
filename_tensor_name: "save/Const:0".
initializer_nodes: "".
A minimal end-to-end sketch combining these follows.
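This is a sketch, not a drop-in recipe: the paths and the output node name are hypothetical, and it assumes a TF1-style session whose checkpoint was written with tf.train.Saver (which is what creates the save/restore_all and save/Const:0 names).

import tensorflow as tf
from tensorflow.python.tools import freeze_graph

# 1. Write the (unfrozen) graph definition to disk in text form,
#    which matches input_binary=False below.
sess = tf.Session()
# ... build or restore your model and save a checkpoint with tf.train.Saver ...
tf.train.write_graph(sess.graph_def, "./model", "unfrozen_graph.pb", as_text=True)

# 2. Freeze: merge the checkpoint's variable values into the graph
#    definition, keeping only what output_node_names needs.
freeze_graph.freeze_graph("./model/unfrozen_graph.pb",  # input_graph_path
                          "",                           # input_saver_def_path
                          False,                        # input_binary
                          "./model/model.ckpt",         # input_checkpoint_path
                          "output/output",              # output_node_names (hypothetical)
                          "save/restore_all",           # restore_op_name
                          "save/Const:0",               # filename_tensor_name
                          "./model/frozen_graph.pb",    # output_graph_path
                          False,                        # clear_devices
                          "")                           # initializer_nodes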