Using the OpenAI GPT-2 model with TensorFlow.js - tensorflow

Is it possible to generate text from OpenAI GPT-2 using TensorFlow.js?
If not, what is the limitation, e.g. the model format or something else?

I don't see any reason why not, other than perhaps some operation in GPT-2 that is not supported by TensorFlow.js.
I don't know exactly how to do it, but here's a good starting point:
install.sh
python3 -m pip install -q git+https://github.com/huggingface/transformers.git
python3 -m pip install tensorflow
save.py
from transformers import TFGPT2LMHeadModel, GPT2Tokenizer
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
# add the EOS token as PAD token to avoid warnings
model = TFGPT2LMHeadModel.from_pretrained("gpt2", pad_token_id=tokenizer.eos_token_id)
model.save("./test_gpt2")
That will give you a SavedModel. Now you can try to figure out the input and output nodes and use tensorflowjs_converter to convert it. Pointer: https://www.tensorflow.org/js/tutorials/conversion/import_saved_model.
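Before converting, it can help to sanity-check from Python that the saved model actually generates text; a minimal sketch using the same Transformers objects as above (the prompt and generation parameters are just illustrative choices):

from transformers import TFGPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = TFGPT2LMHeadModel.from_pretrained("gpt2", pad_token_id=tokenizer.eos_token_id)

# encode an example prompt and let the model continue it
input_ids = tokenizer.encode("TensorFlow.js is", return_tensors="tf")
output_ids = model.generate(input_ids, max_length=40, do_sample=True, top_k=50)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))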

Related

How can I convert a TensorFlow version 1 model to an ONNX model?

I tried python -m tf2onnx.convert --saved-model [file_name] --output [onnx_file_name], but it automatically runs with TensorFlow 2.4.4.
I want to run TensorFlow version 1 code. Does this command have an option for that?
You can install TensorFlow version 1. I was also able to use
tf.compat.v1.layers, and that worked as well. You may need to
use model.save to get the SavedModel (.pb) format and then convert it with the tool.
I used python -m tf2onnx.convert --saved-model [model file] --output [onnx file name].onnx --opset 13 and that solved it.
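As a rough sketch of the model.save step mentioned above (the tiny Keras model here is purely illustrative), exporting to the SavedModel format produces the directory that the tf2onnx command consumes:

import tensorflow as tf

# purely illustrative toy model; replace with your own
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

# writes my_saved_model/saved_model.pb and my_saved_model/variables/
model.save("my_saved_model")

# then, from the shell:
#   python -m tf2onnx.convert --saved-model my_saved_model --output model.onnx --opset 13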

Error on "import tensorflow as tf" in Python 3.5.2

I have installed TensorFlow in a virtual environment on Ubuntu 16.04. When I enter the virtualenv with the command "source ~/tensorflow/bin/activate", it activates. But after that, when I enter the command "import tensorflow as tf", it gives me the following error:
"import: not authorized 'tf' # error/constitute.c/WriteImages/1028."
How do I solve this?
Maybe you forgot to tell the shell which interpreter to use: that error comes from the shell running ImageMagick's import command instead of Python. Two variants:
Add the shebang #!/usr/bin/env python3 at the beginning of your script
OR
Run the script like python3 my_script.py
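For example, a minimal script (the file name is arbitrary) that works either way:

#!/usr/bin/env python3
# check_tf.py - run with "python3 check_tf.py", or "chmod +x check_tf.py && ./check_tf.py"
import tensorflow as tf

print(tf.__version__)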

TensorFlow Serving: No versions of servable <MODEL> found under base path

I was following this tutorial to use TensorFlow Serving with my object detection model. I am using the TensorFlow Object Detection API to generate the model, and I created a frozen model using its exporter (the generated frozen model works from a Python script).
The frozen graph directory has the following contents (nothing in the variables directory):
variables/
saved_model.pb
Now when I try to serve the model using the following command,
tensorflow_model_server --port=9000 --model_name=ssd --model_base_path=/serving/ssd_frozen/
It always shows me
...
tensorflow_serving/model_servers/server_core.cc:421] (Re-)adding model: ssd
2017-08-07 10:22:43.892834: W tensorflow_serving/sources/storage_path/file_system_storage_path_source.cc:262] No versions of servable ssd found under base path /serving/ssd_frozen/
2017-08-07 10:22:44.892901: W tensorflow_serving/sources/storage_path/file_system_storage_path_source.cc:262] No versions of servable ssd found under base path /serving/ssd_frozen/
...
I had the same problem. The reason is that the Object Detection API does not assign a version to your model when exporting it, but TensorFlow Serving requires a version number so that you can choose between different versions of a model to serve. In your case, you should put your detection model (.pb file and variables folder) under the folder
/serving/ssd_frozen/1/. This assigns your model to version 1, and TensorFlow Serving will automatically load it since you only have one version. By default, TensorFlow Serving serves the latest version (i.e., the largest version number).
Note that after you create the 1/ folder, model_base_path still needs to be set to --model_base_path=/serving/ssd_frozen/ (not to the version folder itself).
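A small sketch of that restructuring, reusing the paths from the question:

import os
import shutil

base = "/serving/ssd_frozen"           # paths taken from the question
version_dir = os.path.join(base, "1")  # TensorFlow Serving expects a numeric version folder

os.makedirs(version_dir, exist_ok=True)
shutil.move(os.path.join(base, "saved_model.pb"), version_dir)
shutil.move(os.path.join(base, "variables"), version_dir)
# then serve with --model_base_path=/serving/ssd_frozen/ (not the version folder)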
Newer versions of TF Serving no longer support the model format that used to be exported by SessionBundle; they expect models exported with SavedModelBuilder.
It's better to restore a session from your older model format and then export it with SavedModelBuilder, which also lets you specify the version of your model.
import os
import tensorflow as tf


def export_saved_model(version, path, sess=None):
    tf.app.flags.DEFINE_integer('version', version, 'version number of the model.')
    tf.app.flags.DEFINE_string('work_dir', path, 'your older model directory.')
    tf.app.flags.DEFINE_string('model_dir', '/tmp/model_name', 'saved model directory')
    FLAGS = tf.app.flags.FLAGS

    # you can pass in the session and export your model immediately after training;
    # otherwise restore it from the checkpoint files in `path`
    if not sess:
        sess = tf.Session()
        saver = tf.train.import_meta_graph(os.path.join(path, 'xxx.ckpt.meta'))
        saver.restore(sess, tf.train.latest_checkpoint(path))

    export_path = os.path.join(
        tf.compat.as_bytes(FLAGS.model_dir),
        tf.compat.as_bytes(str(FLAGS.version)))
    builder = tf.saved_model.builder.SavedModelBuilder(export_path)

    # define the signature def map here (this is where prediction_signature
    # would be built from the graph's input/output tensors)
    # ...

    legacy_init_op = tf.group(tf.tables_initializer(), name='legacy_init_op')
    builder.add_meta_graph_and_variables(
        sess, [tf.saved_model.tag_constants.SERVING],
        signature_def_map={
            'predict_xxx':
                prediction_signature
        },
        legacy_init_op=legacy_init_op
    )
    builder.save()
    print('Exported SavedModel!')
You can find the main part of the code above in the TF Serving examples.
It will generate the SavedModel in a format that can be served.
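For example (the path is a placeholder for wherever the older checkpoint files live, and the version number is arbitrary):

# hypothetical call: restore from the checkpoints in ./old_model and export as version 1
export_saved_model(version=1, path='./old_model')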
Create a version folder, e.g. serving/model_name/0000123/saved_model.pb.
The answers above already explain why it is important to keep a version number inside the model folder. Follow the link below; it contains different sets of built models that you can use as a reference.
https://github.com/tensorflow/serving/tree/master/tensorflow_serving/servables/tensorflow/testdata
I was doing this on my personal computer running Ubuntu, not Docker. Note I am in a directory called "serving". This is where I saved my folder "mobile_weight". I had to create a new folder, "0000123" inside "mobile_weight". My path looks like serving->mobile_weight->0000123->(variables folder and saved_model.pb)
The command from the TensorFlow Serving tutorial should then look like this (change model_name and the directory to match your setup):
nohup tensorflow_model_server \
--rest_api_port=8501 \
--model_name=model_weight \
--model_base_path=/home/murage/Desktop/serving/mobile_weight >server.log 2>&1
So my entire terminal screen looks like:
murage@murage-HP-Spectre-x360-Convertible:~/Desktop/serving$ nohup tensorflow_model_server --rest_api_port=8501 --model_name=model_weight --model_base_path=/home/murage/Desktop/serving/mobile_weight >server.log 2>&1
That error message can also result from issues with the --volume argument.
Ensure your --volume mount is actually correct and points to the model's directory; this is a general 'model not found' error, it just looks more complex.
On Windows, just use cmd; otherwise it's easy to accidentally use a Linux file path and Linux separators in Cygwin or Git Bash. Even with the correct file structure you can get the OP's error if you don't use the Windows absolute path.
#using cygwin
$ echo $TESTDATA
/home/username/directory/serving/tensorflow_serving/servables/tensorflow/testdata
$ docker run -t --rm -p 8501:8501 -v "$TESTDATA/saved_model_half_plus_two_cpu:/models/half_plus_two" -e MODEL_NAME=half_plus_two tensorflow/serving
2021-01-22 20:12:28.995834: W tensorflow_serving/sources/storage_path/file_system_storage_path_source.cc:267] No versions of servable half_plus_two found under base path /models/half_plus_two. Did you forget to name your leaf directory as a number (eg. '/1/')?
Then call the same command with the same unchanged file structure, but with the full Windows path using Windows file separators, and it works:
#using cygwin
$ export TESTDATA="$(cygpath -w "/home/username/directory/serving/tensorflow_serving/servables/tensorflow/testdata")"
$ echo $TESTDATA
C:\Users\username\directory\serving\tensorflow_serving\servables\tensorflow\testdata
$ docker run -t --rm -p 8501:8501 -v "$TESTDATA\\saved_model_half_plus_two_cpu:/models/half_plus_two" -e MODEL_NAME=half_plus_two tensorflow/serving
2021-01-22 21:10:49.527049: I tensorflow_serving/core/basic_manager.cc:740] Successfully reserved resources to load servable {name: half_plus_two version: 1}

How do I get the 'checkpoint' of a retrained inception-v3 model?

I'm trying to use a retrained Inception-v3 model in TensorFlow Serving, but it seems that I have to provide a 'checkpoint'. I was wondering how I get those 'checkpoints'. The retrain.py script gives me a retrained_graph.pb. I followed this tutorial (https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/#0)
Thank you!
You may want to look into the API changes for TensorFlow 1.0:
https://www.tensorflow.org/install/migration
so that you create checkpoint file(s) instead of a .pb file.
Also refer to:
https://www.tensorflow.org/tutorials/image_retraining
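For reference, in TensorFlow 1.x checkpoint files are written with tf.train.Saver; a minimal sketch (the variable and the output path are placeholders for the real retraining code):

import tensorflow as tf

# minimal TF 1.x sketch: a placeholder variable stands in for the retrained weights
w = tf.Variable(tf.zeros([1]), name='w')
saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # ... retraining steps would update the graph's variables here ...
    saver.save(sess, '/tmp/retrained_model.ckpt')  # writes .meta, .index and .data files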
Here is an example of how to download the checkpoints for the latest edition of pre-trained Inception V3 model:
$ CHECKPOINT_DIR=/tmp/checkpoints
$ mkdir ${CHECKPOINT_DIR}
$ wget http://download.tensorflow.org/models/inception_v3_2016_08_28.tar.gz
$ tar -xvf inception_v3_2016_08_28.tar.gz
$ mv inception_v3.ckpt ${CHECKPOINT_DIR}
$ rm inception_v3_2016_08_28.tar.gz

Run tensorflow through SSH

When I run TensorFlow training code through SSH, I get the following error:
QXcbConnection: Could not connect to display
This most likely happens due to the summary object saving the models. How do I fix this error?
Try X11 forwarding using the -Y flag, e.g.:
ssh -Y user@server
Did you use matplotlib in your TensorFlow training code?
If so, you can try adding the following lines to your code:
import matplotlib
matplotlib.use('Agg')
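A minimal sketch of how that fits together (the plotted values are just placeholders); note that matplotlib.use('Agg') has to run before pyplot is imported:

import matplotlib
matplotlib.use('Agg')  # select a non-interactive backend, so no display is needed
import matplotlib.pyplot as plt

plt.plot([0, 1, 2], [0.9, 0.5, 0.3])  # placeholder data, e.g. a loss curve
plt.savefig('loss_curve.png')         # write to a file instead of opening a window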