What does mean_squared_error translate to in Keras / TensorFlow?

While looking at TensorFlow examples online, I'm seeing this:
xs = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], dtype=float)
ys = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5], dtype=float)
model = tf.keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])])
model.compile(optimizer='sgd', loss='mean_squared_error')
I was trying to rewrite this line so that it uses objects rather than string literals:
model.compile(optimizer='sgd', loss='mean_squared_error')
So far I've been able to do:
model.compile(optimizer=keras.optimizers.SGD(), loss='mean_squared_error')
For mean_squared_error there is keras.losses.mean_squared_error(y_true, y_pred), but I don't understand what y_true and y_pred are or what values need to be provided for them in the example above.
In summary, from the example above, what is the equivalent of
loss='mean_squared_error'

You simply need to pass the loss object itself, without any arguments:
model.compile(optimizer=tf.keras.optimizers.SGD(), loss=tf.keras.losses.MeanSquaredError())
y_true and y_pred are handled automatically by Keras: during training, y_true is the batch of labels taken from ys and y_pred is the model's output for the corresponding batch of xs.
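For illustration, the loss object can also be called directly; this is essentially what Keras does with every batch during model.fit (a minimal sketch with made-up numbers):
import tensorflow as tf

mse = tf.keras.losses.MeanSquaredError()

y_true = tf.constant([1.0, 1.5, 2.0])   # labels taken from ys
y_pred = tf.constant([0.9, 1.6, 2.2])   # model outputs for the matching xs

print(mse(y_true, y_pred).numpy())      # mean of the squared differences, 0.02 here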

Related

TFLite: set_tensor() takes 3 positional arguments but 4 were given

I've written a simple program to calculate a quadratic equation with TensorFlow. Now I'd like to adapt the code to run on the Coral Dev Board using TensorFlow Lite.
The following code shows the generation of the tflite file:
# Define and compile the neural network
model = tf.keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])])
model.compile(optimizer='sgd', loss='mean_squared_error')
# Provide the data
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float)
# Generation TFLite Model
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
# Save the TFLite-Model
with open('mobilenet_v2_1.0_224.tflite', 'wb') as f:
    f.write(tflite_model)
This code runs on the Coral Dev Board:
# Load TFLite model and allocate tensors.
interpreter = tflite.Interpreter(model_path="mobilenet_v2_1.0_224.tflite")
interpreter.allocate_tensors()
# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
# Test model on random input data.
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=np.float32)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], xs, ys)
...
The last code line raises an error:
TypeError: set_tensor() takes 3 positional arguments but 4 were given
The output of input_details[0]:
{'name': 'serving_default_dense_input:0',
'index': 0,
'shape': array([1, 1], dtype=int32),
'shape_signature': array([-1, 1], dtype=int32),
'dtype': <class 'numpy.float32'>,
'quantization': (0.0, 0),
'quantization_parameters':
{'scales': array([], dtype=float32),
'zero_points': array([], dtype=int32),
'quantized_dimension': 0},
'sparsity_parameters': {}
}
I don't understand the cause of the error. Does anyone have an idea?
The error comes from the call itself: Interpreter.set_tensor() expects exactly two arguments, the index of the tensor to set and a single value for it. You are passing three (input_details[0]['index'], xs and ys), which together with the implicit self makes the four positional arguments the TypeError complains about.
To fix it, keep the tensor index as the first argument (input_details[0]['index'] resolves to 0, as the output above shows) and pass only one array as the value. set_tensor sets one tensor at a time, not an input/target pair, so drop either xs or ys.
So rewrite the line like this:
interpreter.set_tensor(input_details[0]['index'], xs)
It is usually also worth taking a look at the documentation, so you understand what each method expects: https://www.tensorflow.org/api_docs/python/tf/lite/Interpreter#set_tensor
My approach was wrong: xs holds the X values and ys holds the Y values (the results) of the equation. I was not aware that you cannot do training in TFLite. But thanks for the effort anyway.
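For completeness, inference (as opposed to training) with the converted model would look roughly like the sketch below. It assumes the Keras model was trained with model.fit before conversion, and it feeds one x value at a time because the input tensor has shape [1, 1]:
import numpy as np
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(model_path="mobilenet_v2_1.0_224.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=np.float32)

for x in xs:
    # One sample per invocation, shaped [1, 1] to match the input tensor.
    interpreter.set_tensor(input_details[0]['index'], np.array([[x]], dtype=np.float32))
    interpreter.invoke()
    y_pred = interpreter.get_tensor(output_details[0]['index'])
    print(x, y_pred[0][0])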

Tensorflow Certification Exam

Sorry if this has already been asked, but I searched for it without success. I am thinking about applying for the TensorFlow certification exam. My first question is whether custom activation functions are allowed during the exam.
For example, imagine a question about a regression where the data is:
features = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0], dtype=float)
targets = np.array([0.0, 1.0, 4.0, 9.0, 16.0, 25.0, 36.0], dtype=float)
This is clearly an x^2 problem. Could I do something like this?
tf_lpow = lambda x: tf.math.pow(x, 2)
model = tf.keras.models.Sequential([
tf.keras.layers.Dense(units=1, activation=tf_lpow, input_shape=(1,)),
])
Considering that this might not be allowed, I was thinking about another solution:
lr_scheduler = ReduceLROnPlateau(monitor='loss', factor=0.75, patience=50, min_lr=3e-80)
callbacks = [lr_scheduler]
model = tf.keras.models.Sequential([
tf.keras.layers.Dense(units=6, activation='sigmoid', kernel_regularizer=tf.keras.regularizers.l2(0.01), kernel_initializer=tf.keras.initializers.RandomNormal(stddev=0.01), input_shape=(1,)),
tf.keras.layers.Dense(units=1, activation='linear')
])
But even though the loss is decreasing, the accuracy is stuck at 0.2857 and never reaches the goal. In this case, what could I do?
Thanks in advance.
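For reference, a self-contained version of the first approach might look like the sketch below (the optimizer, learning rate and epoch count are made-up choices, not part of the original question):
import numpy as np
import tensorflow as tf

features = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0], dtype=float)
targets = np.array([0.0, 1.0, 4.0, 9.0, 16.0, 25.0, 36.0], dtype=float)

# Custom activation: square the layer's linear output, so the model learns (w*x + b)^2.
tf_lpow = lambda x: tf.math.pow(x, 2)

model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(units=1, activation=tf_lpow, input_shape=(1,)),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01), loss='mse')
model.fit(features, targets, epochs=500, verbose=0)

print(model.predict(np.array([7.0])))  # ideally close to 49
Note also that accuracy is a classification metric; for a regression problem like this, tracking the loss (or a metric such as mean absolute error) is what actually tells you whether the fit is improving.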

In Google Colaboratory, what is the code for telling the AI/ML to show/print on the screen the model that it predicted?

See the code below for a basic ML test. I ran it, but I wanted to see the mathematical formula the AI predicted from the given xs and ys arrays and the input x value.
import tensorflow as tf
import numpy as np
from tensorflow import keras
model = tf.keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])])
model.compile(optimizer='sgd', loss = 'mean_squared_error')
xs = np.array([-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0], dtype=float)
ys = np.array([-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0], dtype=float)
model.fit(xs,ys,epochs=10)
print(model.predict([-2.5]))
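Since the model is a single Dense unit, the "formula" it has learned is just a weight and a bias, which you can read back after training; a small sketch (the exact numbers will vary from run to run):
# The Dense layer learns y ≈ w * x + b; inspect w and b after model.fit().
w, b = model.layers[0].get_weights()
print(f"learned formula: y = {w[0][0]:.4f} * x + {b[0]:.4f}")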

type numpy.ndarray doesn't define __round__ method in tensorflow model.predict

model = tf.keras.Sequential([
tf.keras.layers.Dense(units=1, input_shape=[1])
])
model.compile(optimizer='sgd', loss='mean_squared_error')
xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0], dtype=float)
ys = np.array([.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5], dtype=float)
model.fit(xs, ys, epochs=1000)
return (model.predict(y_new))
This code gives the error:
type numpy.ndarray doesn't define __round__ method in model.predict()
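The snippet is not self-contained (y_new is never defined and return sits outside a function), so the exact origin is hard to pin down, but this error typically appears when the NumPy array returned by model.predict is passed to Python's built-in round(). A minimal sketch (y_new here is a hypothetical new input):
import numpy as np

y_new = np.array([7.0])                    # hypothetical new input; predict expects an array-like
prediction = model.predict(y_new)          # returns a NumPy array of shape (1, 1)
print(round(float(prediction[0][0]), 2))   # round the scalar value, not the array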

Linear regression and Estimator returning wrong loses. Tensorflow

I implemented this model using Keras and the result was as expected. Now I'm trying with TensorFlow Estimators and I just can't get it right.
As you can see below, my loss is just not right.
What am I doing wrong here?
PS: I prefer to use estimators instead of multiplying tensors by hand, etc.
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split

X = np.array([1.1, 1.3, 1.5, 2.0, 2.2, 2.9, 3.0, 3.2, 3.2, 3.7, 3.9, 4.0, 4.0, 4.1, 4.5, 4.9, 5.1, 5.3, 5.9, 6.0, 6.8, 7.1, 7.9, 8.2, 8.7, 9.0, 9.5, 9.6, 10.3, 10.5])
y = np.array([39.343, 46.205, 37.731, 43.525, 39.891, 56.642, 60.15, 54.445, 64.445, 57.189, 63.218, 55.794, 56.957, 57.081, 61.111, 67.938, 66.029, 83.088, 81.363, 93.94, 91.738, 98.273, 101.302, 113.812, 109.431, 105.582, 116.969, 112.635, 122.391, 121.872])
#reduce salaries to unit of thousands
#Split 70% training, 30% test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)
#Create estimator
feat_cols = [ tf.feature_column.numeric_column('X', shape=[1]) ]
estimator = tf.estimator.LinearRegressor(feature_columns=feat_cols)
#input functions
train_input_func = tf.estimator.inputs.numpy_input_fn({'X': X_train}, y_train, shuffle=False)
test_input_func = tf.estimator.inputs.numpy_input_fn({'X': X_test}, y_test, shuffle=False)
#Train and test
estimator.train(input_fn=train_input_func)
train_metrics = estimator.evaluate(input_fn=train_input_func)
test_metrics = estimator.evaluate(input_fn=test_input_func)
#Predict salary for arbitrary years of experience
X_single_data = np.array([4.6])
pred_input_func = tf.estimator.inputs.numpy_input_fn({'X': X_single_data}, shuffle=False)
single_pred = estimator.predict(pred_input_func)
print('--Train metrics--')
print(train_metrics)
print(' ')
print('--Test metrics--')
print(test_metrics)
--Train metrics--
{'average_loss': 5795.477, 'label/mean': 72.32367, 'loss': 121705.016, 'prediction/mean': 1.2057142, 'global_step': 1}
--Test metrics--
{'average_loss': 7422.221, 'label/mean': 84.588104, 'loss': 66799.99, 'prediction/mean': 1.3955557, 'global_step': 1}
FYI:
This is what I got with keras:
Link to image
You are missing the number of epochs to train for, together with a batch size. Add these parameters to your definitions of the input functions; for the training input of a linear regression it could look like this:
train_input_func = tf.estimator.inputs.numpy_input_fn(
    {'X': X_train},
    y_train,
    num_epochs=500,
    batch_size=1,
    shuffle=False)
After that it should start training properly and the loss will go down.
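A sketch of how the corrected input functions might fit together, keeping the TF 1.x tf.estimator.inputs.numpy_input_fn API used in the question (num_epochs and batch_size are tuning choices, not fixed values):
# Training input: iterate over the training data many times, one example per batch.
train_input_func = tf.estimator.inputs.numpy_input_fn(
    {'X': X_train}, y_train, num_epochs=500, batch_size=1, shuffle=False)

# Evaluation inputs: a single pass over the data is enough.
eval_train_func = tf.estimator.inputs.numpy_input_fn(
    {'X': X_train}, y_train, num_epochs=1, shuffle=False)
eval_test_func = tf.estimator.inputs.numpy_input_fn(
    {'X': X_test}, y_test, num_epochs=1, shuffle=False)

estimator.train(input_fn=train_input_func)
print('--Train metrics--')
print(estimator.evaluate(input_fn=eval_train_func))
print('--Test metrics--')
print(estimator.evaluate(input_fn=eval_test_func))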