Tensorflow Certification Exam - tensorflow

Sorry if this has already been asked, but I searched for it without success. I am thinking about applying for the TensorFlow certificate exam. My first question is whether custom activation functions are allowed during the exam.
For example, imagine a question about a regression where the data is:
features = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0], dtype=float)
targets = np.array([0.0, 1.0, 4.0, 9.0, 16.0, 25.0, 36.0], dtype=float)
This is clearly an x^2 problem. Could I do something like this?
tf_lpow = lambda x: tf.math.pow(x, 2)
model = tf.keras.models.Sequential([
tf.keras.layers.Dense(units=1, activation=tf_lpow, input_shape=(1,)),
])
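(For reference, a minimal end-to-end sketch of that idea; the imports, compile settings, epoch count, and the final prediction check are assumptions, not part of the original question.)
import numpy as np
import tensorflow as tf

features = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0], dtype=float)
targets = np.array([0.0, 1.0, 4.0, 9.0, 16.0, 25.0, 36.0], dtype=float)

# custom activation that squares the layer's pre-activation output
tf_lpow = lambda x: tf.math.pow(x, 2)

model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(units=1, activation=tf_lpow, input_shape=(1,)),
])
model.compile(optimizer='adam', loss='mse')   # assumed settings
model.fit(features, targets, epochs=1000, verbose=0)
print(model.predict(np.array([7.0])))         # should be close to 49.0 if training converges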
Considering that this might not be allowed, I was thinking about another solution:
lr_scheduler = tf.keras.callbacks.ReduceLROnPlateau(monitor='loss', factor=0.75, patience=50, min_lr=3e-80)
callbacks = [lr_scheduler]
model = tf.keras.models.Sequential([
tf.keras.layers.Dense(units=6, activation='sigmoid', kernel_regularizer=tf.keras.regularizers.l2(0.01), kernel_initializer=tf.keras.initializers.RandomNormal(stddev=0.01), input_shape=(1,)),
tf.keras.layers.Dense(units=1, activation='linear')
])
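(For context, the model was presumably compiled and trained along these lines; the optimizer, loss, and epoch count below are assumptions, not part of the original post.)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
              loss='mse',
              metrics=['accuracy'])  # 'accuracy' matches the metric reported below, though it is of limited use for regression
model.fit(features, targets, epochs=1000, callbacks=callbacks, verbose=0)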
But even though the loss is decreasing, the accuracy is stuck at 0.2857 and never reaches the goal. In this case, what could I do?
Thanks in advance.

Related

TFlite: set_tensor() takes 3 positional arguments but 4 were given

I've written a simple program to calculate a quadratic equation with TensorFlow. Now I'd like to adapt the code to run on the Coral Dev Board using TensorFlow Lite.
The following code shows the generation of tflite-file:
# Define and compile the neural network
model = tf.keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])])
model.compile(optimizer='sgd', loss='mean_squared_error')
# Provide the data
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float)
# Generation TFLite Model
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
# Save the TFLite-Model
with open('mobilenet_v2_1.0_224.tflite', 'wb') as f:
    f.write(tflite_model)
This code runs on the Coral Dev Board:
# Load TFLite model and allocate tensors.
interpreter = tflite.Interpreter(model_path="mobilenet_v2_1.0_224.tflite")
interpreter.allocate_tensors()
# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
# Test model on random input data.
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=np.float32)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], xs, ys)
...
The last line of code raises an error:
TypeError: set_tensor() takes 3 positional arguments but 4 were given
The output of 'input_details[0]['index']':
{'name': 'serving_default_dense_input:0',
'index': 0,
'shape': array([1, 1], dtype=int32),
'shape_signature': array([-1, 1], dtype=int32),
'dtype': <class 'numpy.float32'>,
'quantization': (0.0, 0),
'quantization_parameters':
{'scales': array([], dtype=float32),
'zero_points': array([], dtype=int32),
'quantized_dimension': 0},
'sparsity_parameters': {}
}
I don't understand the cause of the error. Does anyone have an idea?
Your error is the following: you are passing both xs and ys to the set_tensor method.
When Python reads that line it counts four positional arguments (self, the tensor index, xs and ys), while set_tensor only accepts three. That is why you get the TypeError.
Now, to fix your code: the set_tensor method expects the index of a tensor followed by a single value to copy into it. The index you pass via input_details[0]['index'] is, as the output of interpreter.get_input_details() shows, 0.
You are only supposed to set one array per call, not both at the same time, so eliminate either xs or ys from the call.
So just rewrite the line like this:
interpreter.set_tensor(0, ys)
I hope this sets it right. It is usually good to also take a look at the documentation, so you understand what each method expects: https://www.tensorflow.org/api_docs/python/tf/lite/Interpreter#set_tensor
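For what it's worth, a minimal inference-only sketch (assuming the Keras model was trained with model.fit before conversion, since TFLite itself only runs inference; the tflite_runtime import mirrors the snippet above):
import numpy as np
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(model_path="mobilenet_v2_1.0_224.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# the input tensor has shape [1, 1], so feed one x value at a time
x = np.array([[4.0]], dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], x)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]['index']))  # predicted y for x = 4.0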
My approach was wrong. xs holds the X values and ys holds the Y values (the results) of the quadratic equation. I was not aware that you cannot do training in TFLite. But thanks for the effort anyway.

How to combine two different trained ML models as one?

I have trained two ML models on two different datasets and saved them as model1.pkl and model2.pkl. There is a user input (not input data for the models) like x=0 or x=1: if x=0 I have to use model1.pkl for prediction, otherwise model2.pkl. I can do this with an if condition, but my problem is whether there is any way to save this back as a single model.pkl that includes the condition. If I can combine them and save them as one model, it will be easy to load in other environments.
You can create a class that mimics the minimal interface of a model, like this:
# create the test setup
import lightgbm as lgb
import pandas as pd
import pickle as pkl
from sklearn.linear_model import LinearRegression
data= {
'x': [1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0],
'q': [0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 2.0, 2.0, 2.0, 3.0, 3.0, 3.0],
'b': [1.0, 2.0, 3.0, 1.0, 2.0, 3.0, 1.0, 2.0, 3.0, 1.0, 2.0, 3.0],
'target': [0.0, 2.0, 1.5, 0.0, 5.1, 4.0, 0.0, 1.0, 2.0, 0.0, 2.1, 1.5]
}
df= pd.DataFrame(data)
X, y=df.iloc[:, :-1], df.iloc[:, -1]
X= X.astype('float32')
# create two models
model1= LinearRegression()
model2 = lgb.LGBMRegressor(n_estimators=5, num_leaves=10, min_child_samples=1)
ser_model1= X['x']==0.0
model1.fit(X[ser_model1], y[ser_model1])
model2.fit(X[~ser_model1], y[~ser_model1])
# define a class that mocks the model interface
class CombinedModel:
    def __init__(self, model1, model2):
        self.model1 = model1
        self.model2 = model2

    def predict(self, X, **kwargs):
        # route rows with x == 0.0 to model1 and the rest to model2,
        # then reassemble the predictions in the original row order
        ser_model1 = X['x'] == 0.0
        return pd.concat([
            pd.Series(self.model1.predict(X[ser_model1]), index=X.index[ser_model1]),
            pd.Series(self.model2.predict(X[~ser_model1]), index=X.index[~ser_model1])
        ]).sort_index()
# create a model from the two trained sub-models
# and pickle it
model = CombinedModel(model1, model2)
model.predict(X)
with open('model.pkl', 'wb') as fp:
    pkl.dump(model, fp)
model = model1 = model2 = None
# test load it
with open('model.pkl', 'rb') as fp:
    model = pkl.load(fp)
model.predict(X)
If you want, you can of course also implement a fit method in the class above, which just fits the two models (a sketch follows below). If you implement the necessary methods, you could even use this class in an sklearn pipeline.
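Such a fit method could look roughly like this (a sketch only; it would go inside CombinedModel and splits the data the same way predict does):
    def fit(self, X, y, **kwargs):
        # send each subset of rows to the model that will later predict it
        ser_model1 = X['x'] == 0.0
        self.model1.fit(X[ser_model1], y[ser_model1], **kwargs)
        self.model2.fit(X[~ser_model1], y[~ser_model1], **kwargs)
        return self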
You can use an ensemble voting model, which considers the outputs from both models and gives an appropriate combined output (see the sketch after the link).
Link: https://machinelearningmastery.com/voting-ensembles-with-python/
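As an illustration of that idea (a sketch only; it uses sklearn's VotingRegressor, the regression counterpart of the voting classifier, since the example above has continuous targets, and note that voting averages both models instead of routing by x):
from sklearn.ensemble import VotingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

# X, y as built in the answer above
ensemble = VotingRegressor(estimators=[
    ('lr', LinearRegression()),
    ('tree', DecisionTreeRegressor(max_depth=3)),
])
ensemble.fit(X, y)           # fits both models on the same data
print(ensemble.predict(X))   # averaged predictions of the two models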

What does mean_squared_error translate to in keras

While looking at TensorFlow examples online, I'm seeing this:
xs = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], dtype=float)
ys = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5], dtype=float)
model = tf.keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])])
model.compile(optimizer='sgd', loss='mean_squared_error')
I was trying to rewrite this line so that it uses objects rather than string literals:
model.compile(optimizer='sgd', loss='mean_squared_error')
So far I'm able to write:
model.compile(optimizer=keras.optimizers.SGD(), loss='mean_squared_error')
For mean_squared_error we have keras.losses.mean_squared_error(y_true, y_pred)
I'm unable to understand y_true and y_pred and what values need to be provided for them in the example above.
In summary, for the example above, what is the equivalent of
loss='mean_squared_error'
You simply need to pass
model.compile(optimizer=tf.keras.optimizers.SGD(), loss=tf.keras.losses.MeanSquaredError())
y_true and y_pred are handled automatically by Keras: they are the labels and the model's predictions for each batch, as shown in the sketch below.
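For illustration, a sketch of the object form next to a direct call of the function form; the toy numbers at the end are made up and only show what y_true and y_pred mean:
import tensorflow as tf

# object form, equivalent to loss='mean_squared_error'
# (model as defined in the question above)
model.compile(optimizer=tf.keras.optimizers.SGD(),
              loss=tf.keras.losses.MeanSquaredError())

# the function form can also be called directly; during training,
# Keras passes the labels (y_true) and the model outputs (y_pred)
# to it for every batch
y_true = tf.constant([1.0, 1.5, 2.0])
y_pred = tf.constant([1.1, 1.4, 2.2])
print(tf.keras.losses.mean_squared_error(y_true, y_pred))  # mean of the squared differences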

type numpy.ndarray doesn't define __round__ method in tensorflow model.predict

model = tf.keras.Sequential([
tf.keras.layers.Dense(units=1, input_shape=[1])
])
model.compile(optimizer='sgd', loss='mean_squared_error')
xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0], dtype=float)
ys = np.array([.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5], dtype=float)
model.fit(xs, ys, epochs=1000)
return (model.predict(y_new))
This code gives the error:
type numpy.ndarray doesn't define __round__ method in model.predict()

Linear regression and Estimator returning wrong loses. Tensorflow

I implemented this model using Keras and the result was as expected. Now I'm trying with TensorFlow Estimators and I just can't get it right.
As you can see below, my loss is just not right.
What am I doing wrong here?
PS: I prefer to use Estimators instead of multiplying tensors manually, etc.
X = numpy.array([ 1.1, 1.3, 1.5, 2.0, 2.2, 2.9, 3.0, 3.2, 3.2, 3.7, 3.9, 4.0, 4.0, 4.1, 4.5, 4.9, 5.1, 5.3, 5.9, 6.0, 6.8, 7.1, 7.9, 8.2, 8.7, 9.0, 9.5, 9.6, 10.3, 10.5])
y = numpy.array([ 39.343, 46.205, 37.731, 43.525, 39.891, 56.642, 60.15, 54.445, 64.445, 57.189, 63.218, 55.794, 56.957, 57.081, 61.111, 67.938, 66.029, 83.088, 81.363, 93.94, 91.738, 98.273, 101.302, 113.812, 109.431, 105.582, 116.969, 112.635, 122.391, 121.872])
#reduce salaries to unit of thousands
#Split 70% training, 30% test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)
#Create estimator
feat_cols = [ tf.feature_column.numeric_column('X', shape=[1]) ]
estimator = tf.estimator.LinearRegressor(feature_columns=feat_cols)
#input functions
train_input_func = tf.estimator.inputs.numpy_input_fn({'X': X_train}, y_train, shuffle=False)
test_input_func = tf.estimator.inputs.numpy_input_fn({'X': X_test}, y_test, shuffle=False)
#Train and test
estimator.train(input_fn=train_input_func)
train_metrics = estimator.evaluate(input_fn=train_input_func)
test_metrics = estimator.evaluate(input_fn=test_input_func)
#Predict salary for arbitrary years of experience
X_single_data = np.array([4.6])
pred_input_func = tf.estimator.inputs.numpy_input_fn({'X': X_single_data}, shuffle=False)
single_pred = estimator.predict(pred_input_func)
print('--Train metrics--')
print(train_metrics)
print(' ')
print('--Test metrics--')
print(test_metrics)
--Train metrics--
{'average_loss': 5795.477, 'label/mean': 72.32367, 'loss': 121705.016, 'prediction/mean': 1.2057142, 'global_step': 1}
--Test metrics--
{'average_loss': 7422.221, 'label/mean': 84.588104, 'loss': 66799.99, 'prediction/mean': 1.3955557, 'global_step': 1}
FYI:
This is what I got with keras:
Link to image
You are missing the number of epochs to train for, together with a batch size. Add these parameters to your definition of the input functions; e.g., for the training input of this linear regression it may look like this:
train_input_func = tf.estimator.inputs.numpy_input_fn(
    {'X': X_train},
    y_train,
    num_epochs=500,
    batch_size=1,
    shuffle=False)
After that it should start training properly and the loss will go down.