How to hide warnings from the xgboost library in Jupyter?

Not working:
import warnings
warnings.filterwarnings('ignore')
The warning I get:
[14:24:45] WARNING: C:/Jenkins/workspace/xgboost-win64_release_0.90/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.
It clutters the output of my cell.

Set the verbosity parameter to verbosity=0 in the model definition. The values it can take are: 0 (silent), 1 (warning), 2 (info), 3 (debug). warnings.filterwarnings doesn't help here because these messages are printed by XGBoost's native C++ core rather than raised through Python's warnings module.
xgboost = xgb.XGBRegressor(objective ='reg:linear', verbosity = 0, random_state=42)
XGBoost Documentation

Just add silent=True to the model's definition:
xgboost = xgb.XGBRegressor(random_state=42,silent=True)

I had the same issue, and adding both verbosity=0 and silent=True helped me:
xgboost = xgb.XGBRegressor(objective ='reg:linear', verbosity = 0, silent=True, random_state=42)
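Note that the warning itself points at a fix at the source: reg:linear was renamed. A minimal sketch (assuming xgboost >= 0.90, where reg:squarederror is the replacement named in the deprecation message) that avoids the warning entirely, with no verbosity flags needed:
import xgboost as xgb

# reg:squarederror is the documented replacement for reg:linear,
# so the deprecation warning is never emitted
model = xgb.XGBRegressor(objective='reg:squarederror', random_state=42)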

Related

How to use mlflow to deploy model that requires tensorflow_text for bert on local machine?

I recently used mlflow 1.29.0 to track my model training. I use BERT for text embedding, which needs tensorflow_text to be imported to register its ops before training. Here is an example:
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_text as text  # registers the custom text ops (e.g. CaseFoldUTF8)
import mlflow

def create_model():
    text_input = tf.keras.layers.Input(shape=(), dtype=tf.string, name='text_input')
    preprocessed_text = preprocess_model(text_input)
    encoder_text = encoder_model(preprocessed_text)['pooled_output']
    text_output = tf.keras.layers.Dropout(0.1, name='dropout1')(encoder_text)
    text_output = tf.keras.layers.Dense(units=400, activation=tf.keras.activations.sigmoid, name='text_dense1')(text_output)
    text_output = tf.keras.layers.Dropout(0.1, name='dropout2')(text_output)
    final_output = tf.keras.layers.Dense(units=1, activation=tf.keras.activations.sigmoid, name='output')(text_output)
    model = tf.keras.Model(inputs=[text_input], outputs=[final_output])
    return model

if __name__ == '__main__':
    preprocess_path = 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3'
    encoder_path = 'https://tfhub.dev/tensorflow/bert_en_uncased_L-24_H-1024_A-16/4'
    preprocess_model = hub.KerasLayer(preprocess_path)
    encoder_model = hub.KerasLayer(encoder_path)
    with mlflow.start_run() as run:
        model = create_model()
        model.fit(...)
        mlflow.keras.log_model(keras_model=model, ...)
    mlflow.end_run()
The code runs successfully, and the mlflow UI shows everything. However, when I try to deploy the model on my local machine with the following commands
mlflow sagemaker build-and-push-container
mlflow sagemaker run-local -m runs:/XXXXX/XXXX -p 4999
it showed the following error:
FileNotFoundError: Op type not registered 'CaseFoldUTF8' in binary running on mighty. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) tf.contrib.resampler should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
I think it's because tensorflow_text needs to be imported to register its ops before running the model (per mlflow, the conda.yaml contains tensorflow-text==2.3.0).
I've met this error several times when training the model; putting import tensorflow_text as text at the top fixed it.
However, I'm not quite sure how to do that when I deploy the model locally. Can anyone help me with that? Thank you!
I tried other commands like mlflow models serve -m runs:/XXXXX/XXXX -p 4999, and the error is still there.
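One workaround that is sometimes suggested for this class of problem (a sketch, not verified against this exact setup) is to log the model through mlflow.pyfunc with a wrapper whose load_context performs the tensorflow_text import inside the serving process, before the graph is loaded. The wrapper class name, the 'keras_model' artifact key, and the path are all hypothetical:
import mlflow.pyfunc

class BertModelWrapper(mlflow.pyfunc.PythonModel):
    def load_context(self, context):
        # importing tensorflow_text here registers CaseFoldUTF8 and friends
        # in the serving process, before the SavedModel graph is loaded
        import tensorflow_text  # noqa: F401
        import tensorflow as tf
        self.model = tf.keras.models.load_model(context.artifacts['keras_model'])

    def predict(self, context, model_input):
        return self.model.predict(model_input)

# 'keras_model_dir' is a hypothetical path to the exported Keras model
mlflow.pyfunc.log_model(
    artifact_path='model',
    python_model=BertModelWrapper(),
    artifacts={'keras_model': 'keras_model_dir'},
)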

Issue with tf.ParseExampleV2 when converting to Tensorflow Lite : "op is neither a custom op nor a flex op"

Excuse my English.
I've been trying to work with the Estimator API of TensorFlow (v2.x), but when I try to convert a model from tf.estimator to TFLite with this code:
import tensorflow as tf
import numpy as np
feature_name = "features"
feature_columns = [tf.feature_column.numeric_column(feature_name, shape=[2])]
classifier = tf.estimator.LinearClassifier(
    feature_columns=feature_columns,
    n_classes=2,
    model_dir="Z:\\tests\\iris")
feature_spec = {'features': tf.io.FixedLenFeature(shape=[2], dtype=np.float32)}
serving_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec)
classifier.export_saved_model(export_dir_base='Z:\\tests\\iris\\', serving_input_receiver_fn=serving_fn)
saved_model_obj = tf.saved_model.load("Z:\\tests\\iris\\1613055608")
concrete_func = saved_model_obj.signatures['serving_default']
converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func])
print(saved_model_obj.signatures.keys())
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.experimental_new_converter = True
tflite_model = converter.convert()
with open('Z:\\tests\\model.tflite_estimators', 'wb') as f:
    f.write(tflite_model)
I got the following error:
ConverterError: C:\Users\\.....\tensorflow\python\saved_model\load.py:909:0: error: 'tf.ParseExampleV2' op is neither a custom op nor a flex op
C:\Users\\.....\tensorflow\python\saved_model\load.py:859:0: note: called from
P:\\.....\sanstitre3.py:19:0: note: called from
C:\Users\\.....\AppData\Local\Continuum\anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py:465:0: note: called from
C:\Users\\.....\AppData\Local\Continuum\anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py:578:0: note: called from
<ipython-input-115-f30bf3b642d5>:1:0: note: called from
C:\Users\\.....\AppData\Local\Continuum\anaconda3\lib\site-packages\IPython\core\interactiveshell.py:3343:0: note: called from
C:\Users\\.....\AppData\Local\Continuum\anaconda3\lib\site-packages\IPython\core\interactiveshell.py:3263:0: note: called from
C:\Users\\.....\AppData\Local\Continuum\anaconda3\lib\site-packages\IPython\core\interactiveshell.py:3072:0: note: called from
C:\Users\\.....\AppData\Local\Continuum\anaconda3\lib\site-packages\IPython\core\async_helpers.py:68:0: note: called from
<unknown>:0: error: failed while converting: 'main': Ops that can be supported by the flex runtime (enabled via setting the -emit-select-tf-ops flag):
tf.ParseExampleV2 {dense_shapes = [#tf.shape<2>], device = "", num_sparse = 0 : i64, result_segment_sizes = dense<[0, 0, 0, 1, 0, 0]> : vector<6xi32>}
Someone on the internet already proposed adding these two lines under converter.experimental_new_converter = True:
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                                       tf.lite.OpsSet.SELECT_TF_OPS]
It compiles without errors, just warnings, but when I put the TFLite model on my STM32, it gives me the error TOOL ERROR/ Unknown layer type FlexParseExampleV2, stopping.
Can someone help me on this ?
Have a nice day
TensorFlow Lite Micro doesn't support the Flex delegate, so Select TF ops can't be run on MCUs. You can try restructuring your model with (for example) the Keras Sequential API instead, so that it converts using only built-in TFLite ops; see the sketch after the context link below.
context: https://github.com/tensorflow/tensorflow/issues/34350#issuecomment-579027135
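A minimal sketch of that restructuring (assuming the same two-feature binary classifier; the layer choice and the mock training data are illustrative, not taken from the question):
import tensorflow as tf
import numpy as np

# A linear binary classifier equivalent in spirit to the LinearClassifier above.
# Feeding raw float tensors avoids the tf.Example parsing path, which is what
# introduced the unsupported ParseExampleV2 op in the first place.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(2,)),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='sgd', loss='binary_crossentropy')
model.fit(np.random.rand(100, 2), np.random.randint(0, 2, size=(100, 1)), epochs=1)

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()  # only builtin TFLite ops, no Flex delegate needed
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)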

why keras layers initialization doesn't work

When I run my small Keras model I get this error:
FailedPreconditionError: Attempting to use uninitialized value bn6/beta
[[{{node bn6/beta/read}} = Identity[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]]]
code:
"input layer"
command_input = keras.layers.Input(shape=(1,1))
image_measurements_features = keras.layers.Input(shape=(1, 640))
"command module"
command_module_layer1=keras.layers.Dense(128,activation='relu')(command_input)
command_module_layer2=keras.layers.Dense(128,activation='relu')(command_module_layer1)
"concatenation layer"
j=keras.layers.concatenate([command_module_layer2,image_measurements_features])
"desicion module"
desicion_module_layer1=keras.layers.Dense(512,activation='relu')(j)
desicion_module_layer2=keras.layers.Dense(256,activation='relu')(desicion_module_layer1)
desicion_module_layer3=keras.layers.Dense(128,activation='relu')(desicion_module_layer2)
desicion_module_layer4=keras.layers.Dense(3,activation='relu')(desicion_module_layer3)
initt = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(initt)
big_hero_4=keras.models.Model(inputs=[command_input, image_measurements_features], outputs=desicion_module_layer4)
big_hero_4.compile(optimizer='adam',loss='mean_squared_error',metrics=['accuracy'])
"train the model"
historyy=big_hero_4.fit([x, y],z,batch_size=None, epochs=1,steps_per_epoch=1000)
Do you have any solutions for this error? Why doesn't Keras initialize the layers automatically, without the global variables initializer? (The error exists both before and after adding the global initializer.)
You initialize your model and then build and compile it. That's the wrong order: first define your model, compile it, and only then initialize. Same code, just a different order.
I got this to work. Forget about the session when using keras, it only complicates things.
import keras
import tensorflow as tf
import numpy as np
command_input = keras.layers.Input(shape=(1, 1))
image_measurements_features = keras.layers.Input(shape=(1, 640))
command_module_layer1 = keras.layers.Dense(128, activation='relu')(command_input)
command_module_layer2 = keras.layers.Dense(128, activation='relu')(command_module_layer1)
j = keras.layers.concatenate([command_module_layer2, image_measurements_features])
desicion_module_layer1 = keras.layers.Dense(512, activation='relu')(j)
desicion_module_layer2 = keras.layers.Dense(256, activation='relu')(desicion_module_layer1)
desicion_module_layer3 = keras.layers.Dense(128, activation='relu')(desicion_module_layer2)
desicion_module_layer4 = keras.layers.Dense(3, activation='relu')(desicion_module_layer3)
big_hero_4 = keras.models.Model(inputs=[command_input, image_measurements_features], outputs=desicion_module_layer4)
big_hero_4.compile(optimizer='adam', loss='mean_squared_error', metrics=['accuracy'])
# Mock data
x = np.zeros((1, 1, 1))
y = np.zeros((1, 1, 640))
z = np.zeros((1, 1, 3))
historyy = big_hero_4.fit([x, y], z, batch_size=None, epochs=1, steps_per_epoch=1000)
This code should start training with no issues. If you still get the same error, it might be caused by another part of your code, if there is more.

how to fix tf.constant unexpected argument error

In the original code, flags were set like tf.app.flags.DEFINE_string('master', '', 'The address of the TensorFlow master to use.'). Then I changed tf.app.flags to tf.flags; similarly, FLAGS = tf.app.flags.FLAGS became FLAGS = tf.flags.FLAGS.
But the error in tf.constant has been there in both cases. How do I fix it?
I feel like this error has something to do with the Python version, but I can't figure it out. The failing line:
replica_id=tf.constant(FLAGS.task, dtype=tf.int32, shape=()),
Try this, for me it works just fine:
import tensorflow as tf

FLAGS = tf.flags.FLAGS
tf.flags.DEFINE_integer('task', 10, "my value for the constant")
# now define your constant
replica_id = tf.constant(value=FLAGS.task, dtype=tf.float32)
# see if it works:
with tf.Session() as sess:
    print(sess.run(replica_id))

How to control frequency of loss logging messages when using tf.Estimator

I'm using TF 1.4.
My question is about tf.estimator.Estimator.
I'd like to control the frequency of the "loss and step" Info messages, like:
INFO:tensorflow:loss = 0.00896569, step = 14901 (14.937 sec)
I'm passing a tf.estimator.RunConfig to the Estimator's constructor. But I don't think there is a parameter to control the "loss and step" messages.
I think the parameter is hard-coded in estimator.py, in the _train_model method:
worker_hooks.extend([
    training.NanTensorHook(estimator_spec.loss),
    training.LoggingTensorHook(
        {
            'loss': estimator_spec.loss,
            'step': global_step_tensor
        },
        every_n_iter=100)
])
log_step_count_steps is supported in TensorFlow v1.8: https://www.tensorflow.org/api_docs/python/tf/estimator/RunConfig
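A short sketch of that approach (assuming TF >= 1.8; model_fn and the model_dir value are placeholders):
import tensorflow as tf

# emit the "loss and step" INFO message every 500 steps instead of every 100
config = tf.estimator.RunConfig(log_step_count_steps=500)
estimator = tf.estimator.Estimator(model_fn=model_fn, model_dir='/tmp/model', config=config)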
Try returning the logging hook via the training_hooks param of the EstimatorSpec returned for mode == 'train'.
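A minimal sketch of that pattern (assuming TF 1.x; loss and train_op come from the rest of your model_fn):
import tensorflow as tf

def model_fn(features, labels, mode):
    # ... build the graph, producing `loss` and `train_op` ...
    logging_hook = tf.train.LoggingTensorHook({'loss': loss}, every_n_iter=50)
    return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op,
                                      training_hooks=[logging_hook])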
Printing extra training metrics with Tensorflow Estimator
https://github.com/tensorflow/tensorflow/pull/619/commits/48603b7faed85753ab905f177cbf4e0c8d1dcb64
https://www.tensorflow.org/install/install_sources#clone_the_tensorflow_repository
src: https://stackoverflow.com/a/38097276/2218905