Model testing on AWS SageMaker: "could not convert string to float"

The XGBoost model was trained on AWS SageMaker and deployed successfully, but I keep getting the following error: ModelError: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (415) from model with message "could not convert string to float: ".
Any thoughts?
The test data is as follows:
        size       mean
269   5600.0  17.499633
103   1754.0   9.270272
160   4968.0  14.080601
40       4.0  17.500000
266  36308.0  11.421855
test_data_array = test_data.drop(['mean'], axis=1).as_matrix()
test_data_array = np.array([np.float32(x) for x in test_data_array])

xgb_predictor.content_type = 'text/csv'
xgb_predictor.serializer = csv_serializer

def predict(data, rows=32):
    split_array = np.array_split(data, int(data.shape[0] / float(rows) + 1))
    # print(split_array)
    predictions = ''
    for array in split_array:
        print(array[0], type(array[0]))
        predictions = ','.join([predictions, xgb_predictor.predict(array[0]).decode('utf-8')])
    return np.fromstring(predictions[1:], sep=',')

predictions = predict(test_data_array)

SageMaker XGBoost cannot handle CSV input with a header. Please make sure the header row is removed before sending the data to the endpoint.
Also, for CSV prediction, SageMaker XGBoost assumes that the CSV input does not include the label column, so please remove the label column from the input data as well.
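For example, a minimal sketch of that suggestion, assuming the test_data DataFrame and the xgb_predictor (already configured with content_type 'text/csv' and csv_serializer) from the question:
# Drop the label column ('mean') so only the feature columns remain,
# and convert to a NumPy array, which carries no header row.
features = test_data.drop(['mean'], axis=1).values.astype('float32')

# Each row is then serialized as a comma-separated line with no header
# and no label before being sent to the endpoint.
raw = xgb_predictor.predict(features).decode('utf-8')
predictions = np.fromstring(raw, sep=',')
For a large test set you would still split the array into chunks, as in the predict helper above, to stay under the endpoint's payload limit.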

Related

Cannot deploy YAMNet model to SageMaker

I followed this tutorial and fine-tuned the model.
The model-saving part of the serving model looks like this:
saved_model_path = 'dogs_and_cats_yamnet/yamnet-model/00000001'

input_segment = tf.keras.layers.Input(shape=(), dtype=tf.float32, name='audio')
embedding_extraction_layer = hub.KerasLayer(yamnet_model_handle,
                                            trainable=False, name='yamnet')
_, embeddings_output, _ = embedding_extraction_layer(input_segment)
serving_outputs = my_model(embeddings_output)
serving_outputs = ReduceMeanLayer(axis=0, name='classifier')(serving_outputs)
serving_model = tf.keras.Model(input_segment, serving_outputs)
serving_model.save(saved_model_path, include_optimizer=False)
Then I followed this page, uploaded the model to S3, and deployed it.
!tar -C "$PWD" -czf dogs_and_cats_yamnet.tar.gz dogs_and_cats_yamnet/
model_data = Session().upload_data(path="dogs_and_cats_yamnet.tar.gz", key_prefix="model")
model = TensorFlowModel(model_data=model_data, role=sagemaker_role, framework_version="2.3")
predictor = model.deploy(initial_instance_count=1, instance_type="ml.c5.xlarge")
Deployment seems successful, but when I try to run inference,
waveform = np.zeros((3*48000), dtype=np.float32)
result = predictor.predict(waveform)
the following error occurs.
ModelError: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (400) from primary with message "{
"error": "The first dimension of paddings must be the rank of inputs[1,2] [1,144000]\n\t [[{{node yamnet_frames/tf_op_layer_Pad/Pad}}]]"
I have no idea why this happens. I have been struggling with it for hours without finding a clue.
YAMNet works fine when I pull the model from TF Hub directly and run inference with it.
This is kind of a minor question, I guess, but I would appreciate any helpful answers.
Thank you in advance.
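One hedged thing worth checking (a sketch, assuming the saved_model_path and predictor above): the error suggests YAMNet's internal padding op received a rank-2 tensor of shape [1, 144000] where it expects a rank-1 waveform, so it may help to inspect the exported signature and send the samples without the extra leading dimension.
# Inspect what the exported serving signature actually expects (run in a shell):
#   saved_model_cli show --dir dogs_and_cats_yamnet/yamnet-model/00000001 \
#       --tag_set serve --signature_def serving_default
#
# If the signature expects a flat float32 waveform, try sending the samples
# as a plain list in TF Serving's "instances" format, so no extra batch axis
# is added (hypothetical; adjust to whatever saved_model_cli reports):
waveform = np.zeros((3 * 48000,), dtype=np.float32)
result = predictor.predict({"instances": waveform.tolist()})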

TensorFlow.js JSON model issue

I am trying to get a custom TensorFlow.js model to work in JS but am getting an error with the shape. Note that the model is already trained and converted to model.json, but I get the error below:
Error: Error: Error in concat4D: Shape of tensors[1] (1,30,40,256) does not match the shape of the rest (1,15,20,832) along the non-concatenated axis 1.
Below is the code:
const RGB = await imageToRgbaMatrix(imageUrl);
const model = await tf.loadLayersModel(modelUrl);
const ImageData = await tf.tensor([RGB]);
const predictions = model.predict(ImageData);
In brief, I am trying to run the tfjs model (model.json, together with its binary weight files), and it gives:
Error in concat4D: Shape of tensors[1] (1,30,40,256) does not match the shape of the rest (1,15,20,832).

How to read a UTF-8 encoded binary string in TensorFlow?

I am trying to convert an encoded byte string back into the original array in the TensorFlow graph (using TensorFlow operations) in order to make a prediction with a TensorFlow model. The array-to-byte conversion is based on this answer, and it is the suggested input for TensorFlow model prediction on Google Cloud's ML Engine.
def array_request_example(input_array):
    input_array = input_array.astype(np.float32)
    byte_string = input_array.tostring()
    string_encoded_contents = base64.b64encode(byte_string)
    return string_encoded_contents.decode('utf-8')
TensorFlow code
byte_string = tf.placeholder(dtype=tf.string)
audio_samples = tf.decode_raw(byte_string, tf.float32)

audio_array = np.array([1, 2, 3, 4])
bstring = array_request_example(audio_array)
fdict = {byte_string: bstring}
with tf.Session() as sess:
    [tf_samples] = sess.run([audio_samples], feed_dict=fdict)
I have tried using decode_raw and decode_base64, but neither returns the original values.
I have tried setting the out_type of decode_raw to the different possible datatypes and tried altering what data type I convert the original array to.
So, how would I read the byte array in TensorFlow? Thanks :)
Extra Info
The aim behind this is to create the serving input function for a custom Estimator to make predictions using gcloud ml-engine local predict (for testing) and using the REST API for the model stored on the cloud.
The serving input function for the Estimator is
def serving_input_fn():
    feature_placeholders = {'b64': tf.placeholder(dtype=tf.string,
                                                  shape=[None],
                                                  name='source')}
    audio_samples = tf.decode_raw(feature_placeholders['b64'], tf.float32)
    # Dummy function to save space
    power_spectrogram = create_spectrogram_from_audio(audio_samples)
    inputs = {'spectrogram': power_spectrogram}
    return tf.estimator.export.ServingInputReceiver(inputs, feature_placeholders)
JSON request
I use .decode('utf-8') because when attempting to JSON-dump the base64-encoded byte strings I receive this error:
raise TypeError(repr(o) + " is not JSON serializable")
TypeError: b'longbytestring'
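A tiny illustration of why that decode is needed: base64.b64encode returns bytes, and Python 3's json.dumps refuses bytes, so the value has to be turned into a str first (hypothetical payload value):
import base64
import json

payload = base64.b64encode(b'\x00\x01\x02\x03')       # bytes
# json.dumps({'b64': payload}) -> TypeError: ... is not JSON serializable
body = json.dumps({'b64': payload.decode('utf-8')})   # works: the value is a str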
Prediction Errors
When passing the JSON request {'audio_bytes': {'b64': bytestring}} with gcloud local predict I get the error
PredictionError: Invalid inputs: Expected tensor name: b64, got tensor name: [u'audio_bytes']
So perhaps gcloud local predict does not automatically handle the audio bytes and base64 conversion? Or, more likely, something is wrong with my Estimator setup.
And the request {'instances': [{'audio_bytes': {'b64': bytestring}}]} to the REST API gives
{'error': 'Prediction failed: Error during model execution: AbortionError(code=StatusCode.INVALID_ARGUMENT, details="Input to DecodeRaw has length 793713 that is not a multiple of 4, the size of float\n\t [[Node: DecodeRaw = DecodeRaw[_output_shapes=[[?,?]], little_endian=true, out_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_source_0_0)]]")'}
which confuses me as I explicitly define the request to be a float and do the same in the serving input receiver.
Removing audio_bytes from the request and utf-8 encoding the byte strings allows me to get predictions, though in testing the decoding locally, I think the audio is being incorrectly converted from the byte string.
The answer that you referenced is written assuming you are running the model on CloudML Engine's service. The service actually takes care of the JSON (including UTF-8) and base64 encoding.
To get your code working locally or in another environment, you'll need the following changes:
def array_request_example(input_array):
    input_array = input_array.astype(np.float32)
    return input_array.tostring()

byte_string = tf.placeholder(dtype=tf.string)
audio_samples = tf.decode_raw(byte_string, tf.float32)

audio_array = np.array([1, 2, 3, 4])
bstring = array_request_example(audio_array)
fdict = {byte_string: bstring}
with tf.Session() as sess:
    tf_samples = sess.run([audio_samples], feed_dict=fdict)
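Running that locally should simply round-trip the values, i.e. tf_samples should come back as [array([1., 2., 3., 4.], dtype=float32)].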
That said, based on your code, I suspect you are looking to send data as JSON; you can use gcloud local predict to simulate CloudML Engine's service. Or, if you prefer to write your own code, perhaps something like this:
def array_request_examples(input_arrays):
    """input_arrays is a list (batch) of np_arrays."""
    input_arrays = (a.astype(np.float32) for a in input_arrays)
    # Convert each array to a byte string
    bytes_strings = (a.tostring() for a in input_arrays)
    # Base64 encode the data; decode to str so json.dumps can serialize it
    encoded = (base64.b64encode(b).decode('utf-8') for b in bytes_strings)
    # Create a list of instances suitable to send to the service as JSON:
    instances = [{'audio_bytes': {'b64': e}} for e in encoded]
    # Create a JSON request
    return json.dumps({'instances': instances})

def parse_request(request):
    # non-TF code to simulate the CloudML service, which does not expect
    # this to be in the submitted graphs.
    instances = json.loads(request)['instances']
    return [base64.b64decode(i['audio_bytes']['b64']) for i in instances]

byte_strings = tf.placeholder(dtype=tf.string, shape=[None])
decode = lambda raw_byte_str: tf.decode_raw(raw_byte_str, tf.float32)
audio_samples = tf.map_fn(decode, byte_strings, dtype=tf.float32)

audio_array = np.array([1, 2, 3, 4])
request = array_request_examples([audio_array])
fdict = {byte_strings: parse_request(request)}
with tf.Session() as sess:
    tf_samples = sess.run([audio_samples], feed_dict=fdict)

Correct format for input data on CloudML

I'm trying to send a job up to my object detection model on CloudML to get predictions. I'm following the guide at https://cloud.google.com/ml-engine/docs/online-predict but I'm getting an error when submitting the request:
RuntimeError: Prediction failed: Error processing input: Expected uint8, got '\xf6>\x00\x01\x04\xa4d\x94...(more bytes)...\x00\x10\x10\x10\x04\x80\xd9' of type 'str' instead.
This is my code:
img = base64.b64encode(open("file.jpg", "rb").read()).decode('utf-8')
json = {"b64": img}
result = predict_json(project, model, json, "v1")
My fault, I forgot to add --input_type encoded_image_string_tensor when I exported the graph.
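For reference, a hedged command-line sketch of that re-export step (assuming the TensorFlow Object Detection API's export_inference_graph.py script; all paths are placeholders):
# Re-export the graph with an encoded-image-string serving input, so the
# service can accept {"b64": ...} JSON payloads.
python object_detection/export_inference_graph.py \
    --input_type encoded_image_string_tensor \
    --pipeline_config_path path/to/pipeline.config \
    --trained_checkpoint_prefix path/to/model.ckpt-NNNN \
    --output_directory path/to/exported_model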

"Output 0 of type double does not match declared output type string" while running the iris sample program in TensorFlow Serving

I am running the sample iris program in TensorFlow Serving. Since it is a TF.Learn model, I am exporting it using classifier.export(export_dir=model_dir, signature_fn=my_classification_signature_fn), and the signature_fn is defined as shown below:
def my_classification_signature_fn(examples, unused_features, predictions):
    """Creates classification signature from given examples and predictions.

    Args:
      examples: `Tensor`.
      unused_features: `dict` of `Tensor`s.
      predictions: `Tensor` or dict of tensors that contains the classes tensor
        as in {'classes': `Tensor`}.

    Returns:
      Tuple of default classification signature and empty named signatures.

    Raises:
      ValueError: If examples is `None`.
    """
    if examples is None:
        raise ValueError('examples cannot be None when using this signature fn.')

    if isinstance(predictions, dict):
        default_signature = exporter.classification_signature(
            examples, classes_tensor=predictions['classes'])
    else:
        default_signature = exporter.classification_signature(
            examples, classes_tensor=predictions)

    named_graph_signatures = {
        'inputs': exporter.generic_signature({'x_values': examples}),
        'outputs': exporter.generic_signature({'preds': predictions})}
    return default_signature, named_graph_signatures
The model gets exported successfully using that piece of code.
I have created a client which makes real-time predictions using TensorFlow Serving.
The following is the code for the client:
flags.DEFINE_string("model_dir", "/tmp/iris_model_dir", "Base directory for output models.")
tf.app.flags.DEFINE_integer('concurrency', 1,
                            'maximum number of concurrent inference requests')
tf.app.flags.DEFINE_string('server', '', 'PredictionService host:port')

# connection
host, port = FLAGS.server.split(':')
channel = implementations.insecure_channel(host, int(port))
stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)

# Classify two new flower samples.
new_samples = np.array([5.8, 3.1, 5.0, 1.7], dtype=float)
request = predict_pb2.PredictRequest()
request.model_spec.name = 'iris'
request.inputs["x_values"].CopyFrom(
    tf.contrib.util.make_tensor_proto(new_samples))
result = stub.Predict(request, 10.0)  # 10 secs timeout
However, on making the predictions, the following error is displayed:
grpc.framework.interfaces.face.face.AbortionError: AbortionError(code=StatusCode.INTERNAL, details="Output 0 of type double does not match declared output type string for node _recv_input_example_tensor_0 = _Recv[client_terminated=true, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/cpu:0", send_device_incarnation=2016246895612781641, tensor_name="input_example_tensor:0", tensor_type=DT_STRING, _device="/job:localhost/replica:0/task:0/cpu:0"]()")
The entire stack trace is attached as an image in the original post.
The iris model is defined in the following manner:
# Specify that all features have real-value data
feature_columns = [tf.contrib.layers.real_valued_column("", dimension=4)]

# Build a 3-layer DNN with 10, 20, 10 units respectively.
classifier = tf.contrib.learn.DNNClassifier(feature_columns=feature_columns,
                                            hidden_units=[10, 20, 10],
                                            n_classes=3, model_dir=model_dir)

# Fit model.
classifier.fit(x=training_set.data,
               y=training_set.target,
               steps=2000)
Please suggest a solution for this error.
I think the problem is that your signature_fn is going down the else branch and passing predictions as the output to the classification signature, which expects a string output rather than a double output. Either use a regression signature function or add something to the graph to convert the output to a string.
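A minimal sketch of that second option (assuming the same tf.contrib-era exporter used above): cast the class tensor to strings with tf.as_string before building the classification signature, so the declared DT_STRING output matches the tensor that feeds it.
def my_classification_signature_fn(examples, unused_features, predictions):
    if examples is None:
        raise ValueError('examples cannot be None when using this signature fn.')

    # Cast the predicted classes to strings so they match the string output
    # type that the classification signature declares.
    classes = predictions['classes'] if isinstance(predictions, dict) else predictions
    default_signature = exporter.classification_signature(
        examples, classes_tensor=tf.as_string(classes))

    named_graph_signatures = {
        'inputs': exporter.generic_signature({'x_values': examples}),
        'outputs': exporter.generic_signature({'preds': predictions})}
    return default_signature, named_graph_signatures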