I am trying to perform text classification using spaCy v3. I am a bit confused by the CLI approach, but I am following the examples in the spaCy projects repo HERE.
I downloaded the test data HERE and prepared it in the spaCy v3 format:
import pandas as pd
from spacy.tokens import DocBin
import spacy

nlp = spacy.blank("en")

df = pd.read_json("../data/data.jsonl", lines=True)
df.head()

doc_bin = DocBin()
for text, label in zip(df['text'], df['label']):
    doc = nlp(text)
    doc.cats[label] = True
    doc_bin.add(doc)

doc_bin.to_disk('train.spacy')
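As a side note, spaCy's textcat examples usually assign a score for every label in doc.cats (1.0 for the gold label, 0.0 for the others), not just the positive one. A minimal variant of the loop above under that assumption, treating df['label'] as single string labels:

labels = df['label'].unique().tolist()
doc_bin = DocBin()
for text, label in zip(df['text'], df['label']):
    doc = nlp(text)
    # score every known label: 1.0 for the gold label, 0.0 for the rest
    doc.cats = {l: 1.0 if l == label else 0.0 for l in labels}
    doc_bin.add(doc)
doc_bin.to_disk('train.spacy')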
I created a text classification config.cfg file from the documentation page and started training with:
python -m spacy train assets/config.cfg --output training/ --paths.train assets/train.spacy --paths.dev assets/train.spacy
I get the following output:
Some weights of the model checkpoint at roberta-base were not used when initializing RobertaModel: ['lm_head.bias', 'lm_head.decoder.weight', 'lm_head.layer_norm.bias', 'lm_head.layer_norm.weight', 'lm_head.dense.bias', 'lm_head.dense.weight']
- This IS expected if you are initializing RobertaModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing RobertaModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
[2022-05-16 04:50:42,566] [INFO] Initialized pipeline components: ['transformer', 'textcat']
✔ Initialized pipeline
============================= Training pipeline =============================
ℹ Pipeline: ['transformer', 'textcat']
ℹ Initial learn rate: 0.0
E # LOSS TRANS... LOSS TEXTCAT CATS_SCORE SCORE
--- ------ ------------- ------------ ---------- ------
It seems that the pipeline has started, but nothing else happens: apparently no training takes place, and I only ever see the first, empty row of the training table.
TLDR:
Short term: Trying to quantize a specific portion of a TF model (recreated from a TFLite model). Skip to the pictures below.
Long term: Transfer learn on Yamnet and compile for the Edge TPU.
Source code to follow along is here
I've been trying to transfer learn on Yamnet and compile for a Coral Edge TPU for a few weeks now.
I started here, but quickly realized that the model wouldn't quantize and compile for the Edge TPU because of its dynamic input, and that out-of-the-box TFLite quantization doesn't work well with the audio preprocessing that runs before Yamnet's MobileNet.
After tinkering and learning for a few weeks, I found a Yamnet model compiled for the Edge TPU (sadly without source code) and figured my best shot would be to recreate it in TF, then quantize it, then convert it to TFLite, then compile it for the Edge TPU. I'll also have to figure out how to set the weights; I'm not sure whether I have to (or can) do that pre- or post-quantization. Anyway, I've effectively recreated the model, but I'm having a hard time quantizing it without a bunch of wacky behavior.
The model currently looks like this:
I want it to look like this:
For quantizing, I tried:
TFLite Model Optimization, which puts tfl.quantize ops all over the place and fails to compile for the Edge TPU.
Quantization Aware Training, which throws some annoying errors that I've been trying to work through.
If you know a better way to achieve the long-term goal than what I proposed, please (please please please) share! Otherwise, help on the specific quant ops would be great! Also, reach out if anything needs clarifying.
I've run into the same issues trying to convert the TensorFlow Yamnet model to full integer in order to compile it for the Coral Edge TPU, and I think I've found a workaround.
I've been trying to stick to the tutorials in the tflite-model-maker section and to find a solution within this API because, from experience, I've found it to be a very powerful tool.
If your goal is a model that is fully compiled for the Edge TPU (meaning all layers, including the input and output ones, converted to int8), I'm afraid this solution won't fit. But since you posted that you're trying to obtain a custom model with the same structure as:
Yamnet model compiled for the Edge TPU
then I think this workaround will help you.
When you train your custom model following the basic tutorial, it is possible to export it both in .tflite format:
model.export(models_path, tflite_filename='my_birds_model.tflite')
and as a full TensorFlow SavedModel:
model.export(models_path, export_format=[mm.ExportFormat.SAVED_MODEL, mm.ExportFormat.LABEL])
Then it is possible to convert the full TensorFlow SavedModel to the TFLite format using the following script:
import tensorflow as tf
import numpy as np
import glob
from scipy.io import wavfile
dataset_path = '/path/to/DATASET/testing/*/*.wav'
representative_data = []
saved_model_path = './saved_model'
samples = glob.glob(dataset_path)
input_size = 15600 #Yamnet model's input size
def representative_data_gen():
    for input_value in samples:
        sample_rate, audio_data = wavfile.read(input_value, 'rb')
        audio_data = np.array(audio_data)
        splitted_audio_data = tf.signal.frame(audio_data, input_size, input_size, pad_end=True, pad_value=0) / tf.int16.max  # normalization to the [-1, +1] range
        yield [np.float32(splitted_audio_data[0])]
tf.compat.v1.enable_eager_execution()
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_path)
converter.experimental_new_converter = True #if you're using tensorflow<=2.2
converter.optimizations = [tf.lite.Optimize.DEFAULT]
#converter.inference_input_type = tf.uint8  # or tf.int8
#converter.inference_output_type = tf.uint8  # or tf.int8
converter.representative_dataset = representative_data_gen
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
As you can see, the lines that tell the converter to change the input/output type are commented out. This is because the Yamnet model expects as input normalized audio sample values in the [-1, +1] range, and the numerical representation must be float32. In fact, the compiled Yamnet model you posted uses the same dtype (float32) for the input and output layers.
That being said, you will end up with a tflite model converted from the full TensorFlow model produced by tflite-model-maker. The conversion script will end with the following line:
fully_quantize: 0, inference_type: 6, input_inference_type: 0, output_inference_type: 0
and inference_type: 6 tells you that the inference operations are suitable for being compiled for the Coral Edge TPU.
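Not part of the original answer, but as a quick sanity check you can load the converted file with the TFLite interpreter and confirm the float32 input/output types discussed above (a sketch, assuming the converted_model.tflite produced by the script above):

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="converted_model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
print(input_details[0]['dtype'], output_details[0]['dtype'])  # expect float32 for both

# run one dummy inference to make sure the graph executes
dummy = np.zeros(input_details[0]['shape'], dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], dummy)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]['index']).shape)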
The last step is to compile the model. If you compile it with the standard edgetpu_compiler command line:
edgetpu_compiler -s converted_model.tflite
the final model will have only 4 operations that run on the Edge TPU:
Number of operations that will run on Edge TPU: 4
Number of operations that will run on CPU: 53
You have to add the optional flag -a, which enables multiple subgraphs (it is still experimental, though):
edgetpu_compiler -sa converted_model.tflite
After this you will have:
Number of operations that will run on Edge TPU: 44
Number of operations that will run on CPU: 13
and most of the model's operations will be mapped to the Edge TPU, namely:
Operator           Count  Status
MUL                1      Mapped to Edge TPU
DEQUANTIZE         4      Operation is working on an unsupported data type
SOFTMAX            1      Mapped to Edge TPU
GATHER             2      Operation not supported
COMPLEX_ABS        1      Operation is working on an unsupported data type
FULLY_CONNECTED    3      Mapped to Edge TPU
LOG                1      Operation is working on an unsupported data type
CONV_2D            14     Mapped to Edge TPU
RFFT2D             1      Operation is working on an unsupported data type
LOGISTIC           1      Mapped to Edge TPU
QUANTIZE           3      Operation is otherwise supported, but not mapped due to some unspecified limitation
DEPTHWISE_CONV_2D  13     Mapped to Edge TPU
MEAN               1      Mapped to Edge TPU
STRIDED_SLICE      2      Mapped to Edge TPU
PAD                2      Mapped to Edge TPU
RESHAPE            1      Operation is working on an unsupported data type
RESHAPE            6      Mapped to Edge TPU
I built a custom model in .h5 from Matterport's MaskRCNN implementation. I managed to save the full model and not the weights alone using model.keras_model.save(), and assume it worked correctly.
I need to convert this model to ONNX to inference in Unity Barracuda, and I have been hitting several errors along the way.
I tried:
T1. .h5 to ONNX using this tutorial and the keras2onnx package, and I hit an error at:
model = load_model('model.h5')
Error:
ValueError: Unknown layer: BatchNorm
T2. Defining custom layers using this GitHub code:
model = keras.models.load_model(r'model.h5',
                                custom_objects={'BatchNorm': BatchNorm,
                                                'tf': tf,
                                                'ProposalLayer': ProposalLayer,
                                                'PyramidROIAlign1': PyramidROIAlign1,
                                                'PyramidROIAlign2': PyramidROIAlign2,
                                                'DetectionLayer': DetectionLayer},
                                compile=False)
Error:
ValueError: No model found in config file.
ValueError: Unknown layer: PyramidROIAlign
T3. .h5 to .pb (frozen graph) and .pbtxt, and then from .pb to ONNX using tf2onnx after finding the input and output nodes (there seems to be only one of each?):
assert d in name_to_node, "%s is not in graph" % d
AssertionError: output0 is not in graph
T4. .h5 to SavedModel using the tf-serving code from here, and then python -m tf2onnx.convert --saved-model exported_models\coco_mrcnn\3 --opset 15 --output "model.onnx" to convert to ONNX:
ValueError: make_sure failure: variable mrcnn_detection/map/while/Enter already exists as state variable.
TLDR: Is there a way to convert my .h5 model to ONNX through any direct/indirect means? I have been stuck on this for days!
Thanks in advance.
Edit 1:
It seems that keras.models.load_model() throws the first two errors. I'm wondering whether there is a way I can work with the .pb/.pbtxt model instead, a way around using load_model(), or a way to solve the load_model() issue?
Edit 2:
Code for T1:
custom dataset modified from Matterport's MaskRCNN implementation
Code for T4
Try converting it to the SavedModel format and then to ONNX.
import numpy as np
import tensorflow as tf
from tensorflow import keras
def get_model():
    # Create a simple model.
    inputs = keras.Input(shape=(32,))
    outputs = keras.layers.Dense(1)(inputs)
    model = keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mean_squared_error")
    return model
model = get_model()
# Train the model.
test_input = np.random.random((128, 32))
test_target = np.random.random((128, 1))
model.fit(test_input, test_target)
# Calling `save('my_h5_model.h5')` creates an h5 file `my_h5_model.h5`.
model.save("my_h5_model.h5")
# It can be used to reconstruct the model identically.
model = keras.models.load_model("my_h5_model.h5")
tf.saved_model.save(model, "tmp_model")
Then convert it using tf2onnx.
python3 -m tf2onnx.convert --saved-model tmp_model --output "model.onnx"
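If you prefer to stay in Python, newer tf2onnx versions also expose a Keras converter directly; a sketch under the assumption that tf2onnx.convert.from_keras is available, reusing the toy model saved above:

import tensorflow as tf
import tf2onnx

model = tf.keras.models.load_model("my_h5_model.h5")
# input signature assumed from the toy model above (batches of 32-dim float vectors)
spec = (tf.TensorSpec((None, 32), tf.float32, name="input"),)
onnx_model, _ = tf2onnx.convert.from_keras(model, input_signature=spec, output_path="model.onnx")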
This works for me.
Via an Anaconda PowerShell console (run as admin):
pip install tf2onnx
pip install onnxmltools
And in a notebook (for example):
from tensorflow.python.keras.models import load_model
import os
os.environ['TF_KERAS'] = '1'
import onnxmltools
model = load_model('[h5 path]')
onnx_model = onnxmltools.convert_keras(model)
onnxmltools.utils.save_model(onnx_model, '[onnx path]')
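Not in the original answer, but a quick way to sanity-check the exported file before bringing it into Barracuda is to open it with onnxruntime (a sketch, assuming onnxruntime is installed; the '[onnx path]' placeholder is the same one used above):

import onnxruntime as ort

sess = ort.InferenceSession('[onnx path]')
for inp in sess.get_inputs():
    print(inp.name, inp.shape, inp.type)  # confirms the graph loads and shows its expected inputs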
I want to further train the Greek spaCy model el_core_news_lg for the NER task with the training configuration file introduced in spaCy v3.
The spacy train command throws the error:
[E923] It looks like there is no proper sample data to initialize the Model of component 'tok2vec'. To check your input data paths and annotation, run: python -m spacy debug data config.cfg
When I run spacy debug data as the error suggests, I get the error:
[E930] Received invalid get_examples callback in Tok2Vec.initialize. Expected function that returns an iterable of Example objects but got: []
Any ideas?
I am following the TensorFlow Serving documentation to convert my trained model into a format that can be served in a Docker container. As I'm new to TensorFlow, I am struggling to convert this trained model into a form that is suitable for serving.
The model is already trained, and I have the checkpoint files and the .meta file. So I need to get the .pb file and the variables folder (shown below) from those two files. Can anyone suggest an approach for getting this done so the model can be served?
.
|-- tensorflow model
|   |-- 1
|       |-- saved_model.pb
|       |-- variables
|           |-- variables.data-00000-of-00001
|           |-- variables.index
There are multiple ways of doing this, and other methods may be required for more complex models.
I am currently using the method described here, which works great for tf.keras.models.Model and tf.keras.Sequential models (I'm not sure about TensorFlow subclassing).
Below is a minimal working example, including creating a model in Python (judging by your folder structure, it seems you have already completed this and can ignore the first step).
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
import tensorflow.keras.backend as K
inputs = Input(shape=(2,))
x = Dense(128, activation='relu')(inputs)
x = Dense(32, activation='relu')(x)
outputs = Dense(1)(x)
model = Model(inputs=inputs, outputs=outputs)
model.compile(optimizer='adam', loss='mse')
# loading existing weights, model architectural must be the same as the existing model
#model.load_weights(".//MODEL_WEIGHT_PATH//WEIGHT.h5")
export_path = 'SAVE_PATH//tensorflow_model//1'
with K.get_session() as sess:
    tf.saved_model.simple_save(
        sess,
        export_path,
        inputs={'inputs': model.input},  # for a single input
        # inputs={t.name[:-5]: t for t in model.input},  # for multiple inputs
        outputs={'outputs': model.output})
I suggest you use the folder name "tensorflow_model" instead of "tensorflow model" to avoid possible problems with spaces.
Then we can run the Docker container from a terminal (on Windows, use ^ instead of \ for line breaks, and //C/ instead of C:\ in paths):
docker run -p 8501:8501 --name tfserving_test \
--mount type=bind,source="SAVE_PATH/tensorflow_model",target=/models/tensorflow_model \
-e MODEL_NAME=tensorflow_model -t tensorflow/serving
Now the container should be up and running, and we can test the serving from Python:
import requests
import json
#import numpy as np
payload = {
    "instances": [{'inputs': [1., 1.]}]
}
r = requests.post('http://localhost:8501/v1/models/tensorflow_model:predict', json=payload)
print(json.loads(r.content))
# {'predictions': [[0.121025]]}
The container is working with our model, giving the prediction 0.121025 for the input [1., 1.]
I hope this helps:
import tensorflow as tf
from tensorflow.contrib.keras import backend as K
from tensorflow.python.client import device_lib
K.set_learning_phase(0)
model = tf.keras.models.load_model('my_model.h5')
export_path = './'
with K.get_session() as sess:
    tf.saved_model.simple_save(
        sess,
        export_path,
        inputs={'input_image': model.input},
        outputs={t.name: t for t in model.outputs}
    )
print('Converted to SavedModel!!!')
From your question, do you mean that you no longer have access to the model and only have the checkpoint files and the .meta file?
If that is the case, you can refer to the links below, which have code for converting those files into a .pb file:
Tensorflow: How to convert .meta, .data and .index model files into one graph.pb file
https://github.com/petewarden/tensorflow_makefile/blob/master/tensorflow/python/tools/freeze_graph.py
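For reference, here is a minimal sketch of what those links do, assuming TF 1.x-style APIs and that you know your model's output node names (the file name and node name below are placeholders):

import tensorflow as tf

meta_path = 'model.ckpt.meta'          # placeholder: your .meta file
output_node_names = ['output_node']    # placeholder: your model's output op names

with tf.Session() as sess:
    # rebuild the graph from the .meta file and restore the checkpoint weights
    saver = tf.train.import_meta_graph(meta_path)
    saver.restore(sess, tf.train.latest_checkpoint('.'))

    # bake the variables into constants and write a single frozen .pb
    frozen_graph_def = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, output_node_names)
    with open('frozen_graph.pb', 'wb') as f:
        f.write(frozen_graph_def.SerializeToString())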
If you have access to the trained model, then I guess you are currently saving it using tf.train.Saver. Instead of that, you can save and export the model using any of the three commonly used functions mentioned below:
tf.saved_model.simple_save => In this case, only the Predict API is supported during serving. An example of this is given in KrisR89's answer.
tf.saved_model.builder.SavedModelBuilder => In this case, you can define the SignatureDefs, i.e. the APIs which you want to expose during serving.
You can find an example of how to use it at the link below:
https://github.com/tensorflow/serving/blob/master/tensorflow_serving/example/mnist_saved_model.py
The third way is shown below:
classifier = tf.estimator.DNNClassifier(config=training_config, feature_columns=feature_columns,
                                        hidden_units=[256, 32], optimizer=tf.train.AdamOptimizer(1e-4),
                                        n_classes=NUM_CLASSES, dropout=0.1, model_dir=FLAGS.model_dir)
classifier.export_savedmodel(FLAGS.saved_dir,
                             serving_input_receiver_fn=serving_input_receiver_fn)
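The serving_input_receiver_fn referenced above is not defined in this snippet; a minimal sketch of one, assuming TF 1.x and a single float feature named 'x' whose length NUM_FEATURES is a placeholder:

import tensorflow as tf

def serving_input_receiver_fn():
    # receive batches of raw float vectors at serving time
    inputs = {'x': tf.placeholder(tf.float32, shape=[None, NUM_FEATURES], name='x')}
    return tf.estimator.export.ServingInputReceiver(inputs, inputs)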
An example of how to save a model using Estimators can be found at the link below. This supports the Predict and Classification APIs.
https://github.com/yu-iskw/tensorflow-serving-example/blob/master/python/train/mnist_premodeled_estimator.py
Let me know if this information helps or if you need any further help.
I am looking to use Google Cloud ML to host my Keras models so that I can call the API and make some predictions. I am running into some issues from the Keras side of things.
So far I have been able to build a model using TensorFlow and deploy it on CloudML. In order for this to work I had to make some changes to my basic TF code. The changes are documented here: https://cloud.google.com/ml/docs/how-tos/preparing-models#code_changes
I have also been able to train a similar model using Keras. I can even save the model in the same export and export.meta format as I would get with TF.
from keras import backend as K
saver = tf.train.Saver()
session = K.get_session()
saver.save(session, 'export')
The part I am missing is: how do I add the placeholders for input and output to the graph I build in Keras?
After training your model on Google Cloud ML Engine (check out this awesome tutorial), I named the input and output of my graph with:
signature = predict_signature_def(inputs={'NAME_YOUR_INPUT': new_Model.input},
outputs={'NAME_YOUR_OUTPUT': new_Model.output})
You can see the full export example for an already trained Keras model 'model.h5' below.
import keras.backend as K
import tensorflow as tf
from keras.models import load_model, Sequential
from tensorflow.python.saved_model import builder as saved_model_builder
from tensorflow.python.saved_model import tag_constants, signature_constants
from tensorflow.python.saved_model.signature_def_utils_impl import predict_signature_def
# reset session
K.clear_session()
sess = tf.Session()
K.set_session(sess)
# disable loading of learning nodes
K.set_learning_phase(0)
# load model
model = load_model('model.h5')
config = model.get_config()
weights = model.get_weights()
new_Model = Sequential.from_config(config)
new_Model.set_weights(weights)
# export saved model
export_path = 'YOUR_EXPORT_PATH' + '/export'
builder = saved_model_builder.SavedModelBuilder(export_path)
signature = predict_signature_def(inputs={'NAME_YOUR_INPUT': new_Model.input},
outputs={'NAME_YOUR_OUTPUT': new_Model.output})
with K.get_session() as sess:
    builder.add_meta_graph_and_variables(
        sess=sess,
        tags=[tag_constants.SERVING],
        signature_def_map={
            signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature})
    builder.save()
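Not part of the original answer, but here is a quick sketch of how you could load the export back and confirm the signature was registered, reusing export_path from above and assuming TF 1.x:

import tensorflow as tf
from tensorflow.python.saved_model import tag_constants

with tf.Session(graph=tf.Graph()) as sess:
    meta_graph = tf.saved_model.loader.load(sess, [tag_constants.SERVING], export_path)
    print(list(meta_graph.signature_def.keys()))  # should include the default serving signature key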
You can also see my full implementation.
edit: And if my answer solved your problem, just leave me an uptick here :)
I found out that in order to use Keras on Google Cloud, one has to install it with a setup.py script and put it in the same folder where you run the gcloud command:
├── setup.py
└── trainer
    ├── __init__.py
    ├── cloudml-gpu.yaml
    ├── example5-keras.py
And in the setup.py you put content such as:
from setuptools import setup, find_packages

setup(name='example5',
      version='0.1',
      packages=find_packages(),
      description='example to run keras on gcloud ml-engine',
      author='Fuyang Liu',
      author_email='fuyang.liu@example.com',
      license='MIT',
      install_requires=[
          'keras',
          'h5py'
      ],
      zip_safe=False)
Then you can start your job running on gcloud like this:
export BUCKET_NAME=tf-learn-simple-sentiment
export JOB_NAME="example_5_train_$(date +%Y%m%d_%H%M%S)"
export JOB_DIR=gs://$BUCKET_NAME/$JOB_NAME
export REGION=europe-west1
gcloud ml-engine jobs submit training $JOB_NAME \
--job-dir gs://$BUCKET_NAME/$JOB_NAME \
--runtime-version 1.0 \
--module-name trainer.example5-keras \
--package-path ./trainer \
--region $REGION \
--config=trainer/cloudml-gpu.yaml \
-- \
--train-file gs://tf-learn-simple-sentiment/sentiment_set.pickle
To use a GPU, add a file such as cloudml-gpu.yaml to your module with the following content:
trainingInput:
  scaleTier: CUSTOM
  # standard_gpu provides 1 GPU. Change to complex_model_m_gpu for 4 GPUs.
  masterType: standard_gpu
  runtimeVersion: "1.0"
I don't know much about Keras. I consulted with some experts, and the following should work:
import json
import tensorflow as tf
from keras import backend as K

# Build the model first
model = ...

# Declare the inputs and outputs for CloudML
inputs = dict(zip((layer.name for layer in model.input_layers),
                  (t.name for t in model.inputs)))
tf.add_to_collection('inputs', json.dumps(inputs))

outputs = dict(zip((layer.name for layer in model.output_layers),
                   (t.name for t in model.outputs)))
tf.add_to_collection('outputs', json.dumps(outputs))

# Fit/train the model
model.fit(...)

# Export the model
saver = tf.train.Saver()
session = K.get_session()
saver.save(session, 'export')
Some important points:
You have to call tf.add_to_collection after you create the model but before you ever call K.get_session(), fit, etc.
You should be sure to set the names of the input and output layers when you add them to the graph, because you'll need to refer to them when you send prediction requests (see the small sketch below).
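For example, a minimal sketch of naming the layers so the inputs/outputs collections above get stable keys (the shapes and names here are placeholders):

from keras.layers import Input, Dense
from keras.models import Model

inp = Input(shape=(32,), name='my_input')                            # placeholder shape
out = Dense(1, name='my_output')(Dense(64, activation='relu')(inp))
model = Model(inp, out)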
Here's another answer that may help. Assuming you already have a Keras model, you should be able to append this to the end of your script and get an ML Engine-compatible version of the model (a protocol buffer). Note that you need to upload the saved_model.pb file and the sibling variables directory to ML Engine for it to work. Note also that the .pb file must be named saved_model.pb or saved_model.pbtxt.
Assuming your model is named model:
from tensorflow import saved_model
import keras.backend as K  # assumes Keras is already in use for the model above

model_builder = saved_model.builder.SavedModelBuilder("exported_model")

inputs = {
    'input': saved_model.utils.build_tensor_info(model.input)
}
outputs = {
    'earnings': saved_model.utils.build_tensor_info(model.output)
}

signature_def = saved_model.signature_def_utils.build_signature_def(
    inputs=inputs,
    outputs=outputs,
    method_name=saved_model.signature_constants.PREDICT_METHOD_NAME
)

model_builder.add_meta_graph_and_variables(
    K.get_session(),
    tags=[saved_model.tag_constants.SERVING],
    signature_def_map={
        saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature_def
    })

model_builder.save()
This will export the model to the exported_model directory.