I keep getting a TypeError when using gbt_regression_prediction().compute with XGBoost and daal4py

I have a pre-trained XGBoost model that I want to optimize with daal4py, but I'm getting the following error:
TypeError: Argument 'model' has incorrect type (expected daal4py._daal4py.gbt_regression_model, got XGBRegressor)
Here is the line that is throwing the error:
y_pred = d4p.gbt_regression_prediction().compute(x_test, xgb_model).prediction.reshape(-1)

If you pass the XGBoost object directly to d4p.gbt_regression_prediction().compute(), you will continue to get this error.
You must first convert the model to the daal4py format before passing it to the prediction method. Please see the example below.
daal_model = d4p.get_gbt_model_from_xgboost(xgb_model.get_booster())
y_pred = d4p.gbt_regression_prediction().compute(x_test, daal_model).prediction.reshape(-1)

Related

How to solve tensors having different ranks?

I have a tensorflow.js model and I have created dummy input for my model which is:
a=tf.tensor2d([1,2,3,4,5,6,7,8],[8,1],dtype="int32")
And I have input it into my model using the following line:
model.then(m => m.executeAsync({"input_ids":a,"attention_mask":a,"token_type_ids":a}))
All three inputs have the same value, but I am getting the following error message:
Uncaught (in promise) Error: Error in matMul: inputs must have the same rank of at least 2, got ranks 3 and 2
Does anyone know what I did wrong that caused rank-3 tensors to appear in my input? Thank you.
Things you can do:
Check the input dimensions your model expects
Reshape the input tensor to those expected dimensions
Then run inference

How to let XGBoost output as int, not objects?

I am using XGBoost to classify fake vs. true news. I vectorized the text with TF-IDF, trained my XGBClassifier, and ran predict on X_test successfully. The error appears when calculating accuracy, precision, recall and F-measure:
TypeError: '<' not supported between instances of 'str' and 'int'.
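One likely cause (an assumption, since the training code isn't shown) is that y_test contains string labels such as "fake"/"true" while the predictions are ints, and the metric functions fail when sorting the mixed label set. Encoding the labels as integers before training avoids this; a minimal sketch with scikit-learn's LabelEncoder and hypothetical labels:

```python
from sklearn.preprocessing import LabelEncoder

y = ["fake", "true", "true", "fake"]  # hypothetical string labels

le = LabelEncoder()
y_encoded = le.fit_transform(y)  # integer codes, e.g. [0, 1, 1, 0]
print(y_encoded.dtype.kind)      # 'i' (integer dtype)

# Train and predict with y_encoded; map codes back to names for display:
print(list(le.inverse_transform(y_encoded)))
```

With both y_test and the predictions as ints, accuracy_score, precision_score, recall_score and f1_score compare values of a single type.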

Calculate gradient of trained network

Assume I have a trained model; I am trying to calculate its Jacobian (to understand some of its mathematical properties after training). I am trying to use autograd as follows:
from autograd import jacobian
jacobian_pred=jacobian(model.predict)
jacobian_pred(x)
where x is from my training set. It raises an error:
TypeError: object of type 'numpy.float32' has no len()
What should I do?
Thanks!
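autograd can only differentiate functions composed of its own autograd.numpy operations; model.predict from a training framework is a black box to it, which likely explains the TypeError. If you only need the numeric Jacobian at a point, a framework-agnostic fallback is central finite differences. A sketch with a made-up two-output function standing in for the model:

```python
import numpy as np

def numeric_jacobian(f, x, eps=1e-6):
    """Central-difference Jacobian of f at x: J[i, j] = df_i / dx_j."""
    x = np.asarray(x, dtype=float)
    f0 = np.atleast_1d(f(x))
    J = np.zeros((f0.size, x.size))
    for j in range(x.size):
        step = np.zeros_like(x)
        step[j] = eps
        J[:, j] = (np.atleast_1d(f(x + step)) - np.atleast_1d(f(x - step))) / (2 * eps)
    return J

# Hypothetical "model": f(x) = [x0 * x1, x0 + x1]
f = lambda x: np.array([x[0] * x[1], x[0] + x[1]])
J = numeric_jacobian(f, np.array([2.0, 3.0]))
print(np.round(J, 4))  # analytically [[3, 2], [1, 1]]
```

This is slower and less accurate than true autodiff, but it works with any callable, including a framework's predict method.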

Problem in Keras with 'merge' - TypeError: 'module' object is not callable

I tried to merge layer3, layer4 and layer5 with following line of code:
layer = merge([layer3,layer4,layer5],mode='sum')
But it throws this error:
TypeError: 'module' object is not callable
Why is my code not working?
I assume you're trying to run source code written for an older Keras version; in recent versions merge is a module, not a callable function, hence the error. mode='sum' just adds your layers element-wise. You could also use TensorFlow to do the same:
layer = tf.add(layer3, layer4)
layer = tf.add(layer, layer5)
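To see concretely what mode='sum' computes (element-wise addition of equal-shaped tensors), here is a plain-NumPy sketch with made-up layer outputs; no Keras required:

```python
import numpy as np

# Hypothetical layer outputs, all with identical shapes
layer3 = np.array([[1.0, 2.0], [3.0, 4.0]])
layer4 = np.array([[10.0, 20.0], [30.0, 40.0]])
layer5 = np.array([[100.0, 200.0], [300.0, 400.0]])

# mode='sum' reduces the list of layers by element-wise addition
merged = layer3 + layer4 + layer5
print(merged)  # [[111. 222.] [333. 444.]]
```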

TensorFlow attention_decoder with RNNCell (state_is_tuple=True)

I want to build a seq2seq model with an attention_decoder, and to use MultiRNNCell with LSTMCell as the encoder. Because the TensorFlow code suggests that "This default behaviour (state_is_tuple=False) will soon be deprecated.", I set the state_is_tuple=True for the encoder.
The problem is that, when I pass the state of encoder to attention_decoder, it reports an error:
*** AttributeError: 'LSTMStateTuple' object has no attribute 'get_shape'
This problem seems to be related to the attention() function in seq2seq.py and the _linear() function in rnn_cell.py, in which the code calls the 'get_shape()' function of the 'LSTMStateTuple' object from the initial_state generated by the encoder.
Although the error disappears when I set state_is_tuple=False for the encoder, the program gives the following warning:
WARNING:tensorflow:<tensorflow.python.ops.rnn_cell.LSTMCell object at 0x11763dc50>: Using a concatenated state is slower and will soon be deprecated. Use state_is_tuple=True.
I would really appreciate if someone can give any instruction about building seq2seq with RNNCell (state_is_tuple=True).
I ran into this issue as well; the LSTM states need to be concatenated or else _linear will complain. The shape of LSTMStateTuple depends on the kind of cell you're using. With an LSTM cell, you can concatenate the states like this:
query = tf.concat(1,[state[0], state[1]])
If you're using a MultiRNNCell, concatenate the states for each layer first:
concat_layers = [tf.concat(1,[c,h]) for c,h in state]
query = tf.concat(1, concat_layers)
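As a shape sanity-check, the concatenation above can be sketched in plain NumPy (batch size and unit count are made up): the cell state c and hidden state h, each of shape (batch, units), are joined along the feature axis into a single (batch, 2*units) tensor.

```python
import numpy as np

batch, units = 4, 8  # hypothetical batch size and LSTM unit count
c = np.zeros((batch, units))  # cell state
h = np.ones((batch, units))   # hidden state

# Equivalent of tf.concat(1, [c, h]): join along axis 1 (features)
query = np.concatenate([c, h], axis=1)
print(query.shape)  # (4, 16)
```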