Run prediction from saved model in TensorFlow 2.0

I have a saved model (a directory with saved_model.pb and a variables folder) and want to run predictions on a pandas data frame.
I've unsuccessfully tried a few ways to do this:
Attempt 1: Restore the estimator from the saved model
estimator = tf.estimator.LinearClassifier(
    feature_columns=create_feature_cols(),
    model_dir=path,
    warm_start_from=path)
where path is the directory that contains saved_model.pb and the variables folder. I got an error:
ValueError: Tensor linear/linear_model/dummy_feature1/weights is not found in
gs://bucket/Trainer/output/2013/20191008T170504.583379-63adee0eaee0/serving_model_dir/export/1570554483/variables/variables
checkpoint {'linear/linear_model/dummy_feature1/weights': [1, 1], 'linear/linear_model/dummy_feature2/weights': [1, 1]
}
Attempt 2: Run prediction directly from the saved model by running
imported = tf.saved_model.load(path)  # path is the directory that has `saved_model.pb` and the variables folder
imported.signatures["predict"](example)
But I have not been able to pass the argument successfully - it looks like the function expects a tf.Example, and I am not sure how to convert a data frame to tf.Example.
My attempt at the conversion is below, but I got an error that df[f] is not a tensor:
for f in features:
    example.features.feature[f].float_list.value.extend(df[f])
I've seen solutions on Stack Overflow, but they are all for TensorFlow 1.14. I would greatly appreciate it if someone could help with TensorFlow 2.0.

Assuming you have your saved model laid out like this:
my_model
assets saved_model.pb variables
You can load your saved model using:
new_model = tf.keras.models.load_model('saved_model/my_model')
# Check its architecture
new_model.summary()
To perform prediction on a DataFrame you need to:
Wrap scalars into a list so as to have a batch dimension (models only process batches of data, not single samples)
Call convert_to_tensor on each feature
Example 1:
If you have values for the first test row as
sample = {
    'Type': 'Cat',
    'Age': 3,
    'Breed1': 'Tabby',
    'Gender': 'Male',
    'Color1': 'Black',
    'Color2': 'White',
    'MaturitySize': 'Small',
    'FurLength': 'Short',
    'Vaccinated': 'No',
    'Sterilized': 'No',
    'Health': 'Healthy',
    'Fee': 100,
    'PhotoAmt': 2,
}
input_dict = {name: tf.convert_to_tensor([value]) for name, value in sample.items()}
predictions = new_model.predict(input_dict)
prob = tf.nn.sigmoid(predictions[0])
print(
    "This particular pet had a %.1f percent probability "
    "of getting adopted." % (100 * prob)
)
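Since the original question is about a pandas DataFrame rather than a single dict, the same two steps extend to a whole frame. This is only a minimal sketch, assuming every column of df is a model input with a matching name (string columns may need an explicit astype(str) so convert_to_tensor infers tf.string):
import tensorflow as tf

# df is the pandas DataFrame you want predictions for
input_dict = {name: tf.convert_to_tensor(df[name].to_numpy()) for name in df.columns}
predictions = new_model.predict(input_dict)
probs = tf.nn.sigmoid(predictions)  # one probability per row, as in the single-sample example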
Example 2:
Or if you have multiple rows present in the same order as the train data
predict_dataset = tf.convert_to_tensor([
    [5.1, 3.3, 1.7, 0.5,],
    [5.9, 3.0, 4.2, 1.5,],
    [6.9, 3.1, 5.4, 2.1]
])
# training=False is needed only if there are layers with different
# behavior during training versus inference (e.g. Dropout).
predictions = new_model(predict_dataset, training=False)
for i, logits in enumerate(predictions):
    class_idx = tf.argmax(logits).numpy()
    p = tf.nn.softmax(logits)[class_idx]
    name = class_names[class_idx]
    print("Example {} prediction: {} ({:4.1f}%)".format(i, name, 100*p))

Related

TensorFlow TabTransformer model get weights before training

I am working with the TensorFlow TabTransformer custom model, and I am trying to get the initial weights right after declaring the model using model.get_weights(), but I get the following error:
from tabtransformertf.models.tabtransformer import TabTransformer
from tabtransformertf.utils.preprocessing import df_to_dataset, build_categorical_prep

numeric_features = ['TotPkts', 'TotBytes', 'Seq', 'Dur', 'Mean', 'StdDev', 'Sum', 'Min',
                    'Max', 'SrcBytes', 'DstBytes', 'Rate', 'SrcRate',
                    'DstRate']
cat_features = ['SrcPkts', 'DstPkts']
TARGET_FEATURE_NAME = "Category"

category_prep_layers = build_categorical_prep(tab_df, cat_features)  # tab_df is the dataframe I am working on

model = TabTransformer(
    numerical_features=numeric_features,
    categorical_features=cat_features,
    categorical_lookup=category_prep_layers,
    numerical_discretisers=None,  # simply passing the numeric features
    embedding_dim=32,
    out_dim=1,
    out_activation='sigmoid',
    depth=6,
    heads=5,
    attn_dropout=0.087687,
    ff_dropout=0.429539,
    mlp_hidden_factors=[1, 1],
    use_column_embedding=False,
)

model.get_weights()
ValueError: Weights for model sequential have not yet been created. Weights are created when the Model is first called on inputs or `build()` is called with an `input_shape`.
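The error message itself points at the fix: a subclassed Keras model only creates its weights when it is first called on inputs (or when build() is invoked). A minimal sketch, reusing the names from the question (tab_df, TARGET_FEATURE_NAME, df_to_dataset; the exact df_to_dataset arguments are an assumption and may differ between library versions):
# Weights are created lazily, so run one batch through the model first.
train_ds = df_to_dataset(tab_df, TARGET_FEATURE_NAME, shuffle=False)  # assumed signature
sample_batch, _ = next(iter(train_ds))

_ = model(sample_batch)        # the first call builds the variables
weights = model.get_weights()  # now returns the freshly initialised weights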

Performing inference with a BERT (TF 1.x) saved model

I'm stuck on one line of code and have been stalled on a project all weekend as a result.
I am working on a project that uses BERT for sentence classification. I have successfully trained the model, and I can test the results using the example code from run_classifier.py.
I can export the model using this example code (which has been reposted repeatedly, so I believe that it's right for this model):
def export(self):
    def serving_input_fn():
        label_ids = tf.placeholder(tf.int32, [None], name='label_ids')
        input_ids = tf.placeholder(tf.int32, [None, self.max_seq_length], name='input_ids')
        input_mask = tf.placeholder(tf.int32, [None, self.max_seq_length], name='input_mask')
        segment_ids = tf.placeholder(tf.int32, [None, self.max_seq_length], name='segment_ids')
        input_fn = tf.estimator.export.build_raw_serving_input_receiver_fn({
            'label_ids': label_ids, 'input_ids': input_ids,
            'input_mask': input_mask, 'segment_ids': segment_ids})()
        return input_fn

    self.estimator._export_to_tpu = False
    self.estimator.export_savedmodel(self.output_dir, serving_input_fn)
I can also load the exported estimator (where the export function saves the exported model into a subdirectory labeled with a timestamp):
predict_fn = predictor.from_saved_model(self.output_dir + timestamp_number)
However, for the life of me, I cannot figure out what to provide to predict_fn as input for inference. Here is my best code at the moment:
def predict(self):
    input = 'Test input'
    guid = 'predict-0'
    text_a = tokenization.convert_to_unicode(input)
    label = self.label_list[0]
    examples = [InputExample(guid=guid, text_a=text_a, text_b=None, label=label)]
    features = convert_examples_to_features(examples, self.label_list,
                                            self.max_seq_length, self.tokenizer)
    predict_input_fn = input_fn_builder(features, self.max_seq_length, False)
    predict_fn = predictor.from_saved_model(self.output_dir + timestamp_number)
    result = predict_fn(predict_input_fn)  # this generates an error
    print(result)
It doesn't seem to matter what I provide to predict_fn: the examples array, the features array, the predict_input_fn function. Clearly, predict_fn wants a dictionary of some type - but every single thing that I've tried generates an exception due to a tensor mismatch or other errors that generally mean: bad input.
I presumed that the from_saved_model function wants the same sort of input as the model test function - apparently, that's not the case.
It seems that lots of people have asked this very question - "how do I use an exported BERT TensorFlow model for inference?" - and have gotten no answers:
Thread #1
Thread #2
Thread #3
Thread #4
Any help? Thanks in advance.
Thank you for this post. Your serving_input_fn was the piece I was missing! Your predict function needs to be changed to feed the features dict directly, rather than use the predict_input_fn:
def predict(sentences):
    labels = [0, 1]
    input_examples = [
        run_classifier.InputExample(
            guid="",
            text_a=x,
            text_b=None,
            label=0
        ) for x in sentences]  # here, "" is just a dummy label

    input_features = run_classifier.convert_examples_to_features(
        input_examples, labels, MAX_SEQ_LEN, tokenizer
    )

    # this is where pred_input_fn is replaced
    all_input_ids = []
    all_input_mask = []
    all_segment_ids = []
    all_label_ids = []

    for feature in input_features:
        all_input_ids.append(feature.input_ids)
        all_input_mask.append(feature.input_mask)
        all_segment_ids.append(feature.segment_ids)
        all_label_ids.append(feature.label_id)

    pred_dict = {
        'input_ids': all_input_ids,
        'input_mask': all_input_mask,
        'segment_ids': all_segment_ids,
        'label_ids': all_label_ids
    }

    predict_fn = predictor.from_saved_model('../testing/1589418540')
    result = predict_fn(pred_dict)
    print(result)
pred_sentences = [
    "That movie was absolutely awful",
    "The acting was a bit lacking",
    "The film was creative and surprising",
    "Absolutely fantastic!",
]

predict(pred_sentences)
{'probabilities': array([[-0.3579178 , -1.2010787 ],
[-0.36648935, -1.1814401 ],
[-0.30407643, -1.3386648 ],
[-0.45970002, -0.9982413 ],
[-0.36113673, -1.1936386 ],
[-0.36672896, -1.1808994 ]], dtype=float32), 'labels': array([0, 0, 0, 0, 0, 0])}
However, the probabilities returned for the sentences in pred_sentences do not match the probabilities I get using estimator.predict(predict_input_fn), where estimator is the fine-tuned model being used within the same (Python) session. For example, [-0.27276006, -1.4324446 ] using estimator vs [-0.26713806, -1.4505868 ] using predictor.

Using tf.py_func with pickle files in Dataset API

I am trying to use the Dataset API with my dataset, which consists of pickle files. These files contain my data, which is a vector of floats, and the labels, which are a one-hot vector.
I have tried using tf.py_func to load the features, but I am unable to do it because of mismatched shapes. Since these pickle files include the label as well, I cannot pass them directly as a tuple as in the example here. So I am a bit lost on how to continue.
This is my code so far
path = "my_dir_to_pkl_files"
pkl_files = glob.glob((path+"*.pkl"))
dataset = tf.data.Dataset.from_tensor_slices((pkl_files))
dataset = dataset.map(
lambda filename: tuple(tf.py_func(
load_features, [filename], [tf.float32])))
And here is my python function to read the features.
def load_features(name):
    decoded = name.decode("UTF-8")
    if os.path.exists(decoded):
        with open(decoded, 'rb') as f:
            file = pickle.load(f)
        return file['features']
        # I have commented the line below but this should return
        # the features and the label in a one hot vector
        # return file['features'], file['targets']
    else:
        print("Something went wrong!")
        exit(-1)
I would expect the Dataset API to return a tuple with N features and a one-hot vector for each sample in my batch. Instead I'm getting:
InvalidArgumentError: pyfunc_0 returns 30 values, but expects to see 1 values.
Any suggestions? Thanks.
Edit:
Here is what my pickle file looks like. The features vector has a shape of [30, 100]. I attach the same file here as well.
{'features': array([[0.64864044, 0.71419346, 0.35874235, ..., 0.66058507, 0.89013242,
0.67564707],
[0.15958826, 0.38115951, 0.46636267, ..., 0.49682084, 0.08863887,
0.17142761],
[0.26925915, 0.27901399, 0.91624607, ..., 0.30269212, 0.47494327,
0.43265325],
...,
[0.50405357, 0.7441127 , 0.04308265, ..., 0.06766902, 0.87449393,
0.31018099],
[0.44777562, 0.30836258, 0.48148097, ..., 0.74899213, 0.97264324,
0.43391464],
[0.50583501, 0.56803691, 0.61290449, ..., 0.8350931 , 0.52897295,
0.23731264]]), 'targets': array([0, 0, 1, 0])}
The error appears after I try to get an element from the dataset:
iterator = dataset.make_one_shot_iterator()
next_element = iterator.get_next()
print(sess.run(next_element))
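For reference, a minimal sketch of how the map call could declare both return values so tf.py_func does not complain about the number of outputs. This assumes the pickle layout shown above ([30, 100] float features and a length-4 one-hot target); py_func also loses static shape information, so it is set back explicitly:
import pickle
import tensorflow as tf

def load_features_and_target(name):
    # name arrives as bytes inside py_func
    with open(name.decode("UTF-8"), 'rb') as f:
        data = pickle.load(f)
    # dtypes must match the Tout list declared below
    return data['features'].astype('float32'), data['targets'].astype('int32')

dataset = tf.data.Dataset.from_tensor_slices(pkl_files)
dataset = dataset.map(
    lambda filename: tuple(tf.py_func(
        load_features_and_target, [filename], [tf.float32, tf.int32])))
# restore the shapes that py_func drops
dataset = dataset.map(
    lambda feats, target: (tf.reshape(feats, [30, 100]), tf.reshape(target, [4])))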

Saving and running wide_deep.py model

I've been playing around with the TensorFlow Wide and Deep tutorial using the census dataset.
The linear/wide tutorial states:
We will train a logistic regression model, and given an individual's information our model will output a number between 0 and 1
At the moment, I can't work out how to predict the output of an individual input (copied from the unit test):
TEST_INPUT_VALUES = {
    'age': 18,
    'education_num': 12,
    'capital_gain': 34,
    'capital_loss': 56,
    'hours_per_week': 78,
    'education': 'Bachelors',
    'marital_status': 'Married-civ-spouse',
    'relationship': 'Husband',
    'workclass': 'Self-emp-not-inc',
    'occupation': 'abc',
}
How can we predict and output whether this person is likely to earn <50k (0) or >=50k (1)?
The function to use is predict, but I didn't figure out how to feed one example directly (I tried numpy_input_fn and a dict of tensors).
Instead, by writing the data to a temporary CSV file and then reading it back with the input function in wide_deep.py, the predict function can be used:
TEST_INPUT = ('18,Self-emp-not-inc,987,Bachelors,12,Married-civ-spouse,abc,'
              'Husband,zyx,wvu,34,56,78,tsr,<=50K')

# Create temporary CSV file
input_csv = '/tmp/census_model/test.csv'
with tf.gfile.Open(input_csv, 'w') as temp_csv:
    temp_csv.write(TEST_INPUT)

# restore model trained by wide_deep.py with same model_dir and model_type
model = wide_deep.build_estimator(FLAGS.model_dir, FLAGS.model_type)
pred_iter = model.predict(input_fn=lambda: wide_deep.input_fn(input_csv, 1, False, 1))
for pred in pred_iter:
    # print(pred)
    print(pred['classes'])
There are other attributes like probabilities, logits, etc. in pred.
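For example, a small variation of the loop above (only a sketch; canned estimator heads usually expose 'classes' as byte strings and 'probabilities' as a per-class array) could print the predicted class together with its probability:
pred_iter = model.predict(input_fn=lambda: wide_deep.input_fn(input_csv, 1, False, 1))
for pred in pred_iter:
    class_id = pred['classes'][0].decode()        # e.g. '0' or '1'
    prob = pred['probabilities'][int(class_id)]   # probability of the predicted class
    print('Predicted class %s with probability %.3f' % (class_id, prob))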
Okay, I can answer this now. If you want to evaluate the test set accuracy, you can follow the accepted answer, but if you want to make your own predictions, here are the steps.
First, construct a new input_fn; notice that you need to alter the columns and the default column values, since the label column won't be there.
def parse_csv(value):
    print('Parsing', data_file)
    columns = tf.decode_csv(value, record_defaults=_PREDICT_COLUMNS_DEFAULTS)
    features = dict(zip(_PREDICT_COLUMNS, columns))
    return features

def predict_input_fn(data_file):
    assert tf.gfile.Exists(data_file), ('%s not found. Please make sure the path is correct.' % data_file)

    dataset = tf.data.TextLineDataset(data_file)
    dataset = dataset.map(parse_csv, num_parallel_calls=5)
    dataset = dataset.batch(1)  # => This is very important to get the rank correct
    iterator = dataset.make_one_shot_iterator()
    features = iterator.get_next()
    return features
Then you can simply call it with:
results = model.predict(
    input_fn=lambda: predict_input_fn(data_file='test.csv')
)
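results is a generator, so you still have to iterate over it to get the actual predictions; a short sketch using the same prediction keys as in the first answer:
for pred in results:
    print(pred['classes'])         # predicted class as a byte string, e.g. [b'0']
    print(pred['probabilities'])   # per-class probabilities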

"Output 0 of type double does not match declared output type string" while running the iris sample program in TensorFlow Serving

I am running the sample iris program in TensorFlow Serving. Since it is a TF.Learn model, I am exporting it using classifier.export(export_dir=model_dir, signature_fn=my_classification_signature_fn), and the signature_fn is defined as shown below:
def my_classification_signature_fn(examples, unused_features, predictions):
    """Creates classification signature from given examples and predictions.

    Args:
        examples: `Tensor`.
        unused_features: `dict` of `Tensor`s.
        predictions: `Tensor` or dict of tensors that contains the classes tensor
            as in {'classes': `Tensor`}.

    Returns:
        Tuple of default classification signature and empty named signatures.

    Raises:
        ValueError: If examples is `None`.
    """
    if examples is None:
        raise ValueError('examples cannot be None when using this signature fn.')

    if isinstance(predictions, dict):
        default_signature = exporter.classification_signature(
            examples, classes_tensor=predictions['classes'])
    else:
        default_signature = exporter.classification_signature(
            examples, classes_tensor=predictions)

    named_graph_signatures = {
        'inputs': exporter.generic_signature({'x_values': examples}),
        'outputs': exporter.generic_signature({'preds': predictions})}
    return default_signature, named_graph_signatures
The model gets successfully exported using the following piece of code.
I have created a client which makes real-time predictions using TensorFlow Serving.
The following is the code for the client:
flags.DEFINE_string("model_dir", "/tmp/iris_model_dir", "Base directory for output models.")
tf.app.flags.DEFINE_integer('concurrency', 1,
                            'maximum number of concurrent inference requests')
tf.app.flags.DEFINE_string('server', '', 'PredictionService host:port')

# connection
host, port = FLAGS.server.split(':')
channel = implementations.insecure_channel(host, int(port))
stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)

# Classify two new flower samples.
new_samples = np.array([5.8, 3.1, 5.0, 1.7], dtype=float)

request = predict_pb2.PredictRequest()
request.model_spec.name = 'iris'
request.inputs["x_values"].CopyFrom(
    tf.contrib.util.make_tensor_proto(new_samples))
result = stub.Predict(request, 10.0)  # 10 secs timeout
However, on making the predictions, the following error is displayed:
grpc.framework.interfaces.face.face.AbortionError: AbortionError(code=StatusCode.INTERNAL, details="Output 0 of type double does not match declared output type string for node _recv_input_example_tensor_0 = _Recv[client_terminated=true, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/cpu:0", send_device_incarnation=2016246895612781641, tensor_name="input_example_tensor:0", tensor_type=DT_STRING, _device="/job:localhost/replica:0/task:0/cpu:0"]()")
The entire stack trace was attached as a screenshot in the original post.
The iris model is defined in the following manner:
# Specify that all features have real-value data
feature_columns = [tf.contrib.layers.real_valued_column("", dimension=4)]

# Build 3 layer DNN with 10, 20, 10 units respectively.
classifier = tf.contrib.learn.DNNClassifier(feature_columns=feature_columns,
                                            hidden_units=[10, 20, 10],
                                            n_classes=3, model_dir=model_dir)

# Fit model.
classifier.fit(x=training_set.data,
               y=training_set.target,
               steps=2000)
Kindly guide me toward a solution for this error.
I think the problem is that your signature_fn is going down the else branch and passing predictions as the output to the classification signature, which expects a string output rather than a double output. Either use a regression signature function or add something to the graph to convert the output to a string.
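As a rough sketch of the second option (this is only an illustration: it assumes predictions holds the numeric class tensor, and uses tf.as_string to turn it into the DT_STRING tensor the classification signature expects):
if isinstance(predictions, dict):
    classes = predictions['classes']
else:
    classes = predictions

# Convert the numeric classes to strings so the classification signature accepts them.
default_signature = exporter.classification_signature(
    examples, classes_tensor=tf.as_string(classes))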