How to fix having tensors of different ranks? - tensorflow

I have a tensorflow.js model, and I have created a dummy input for it:
const a = tf.tensor2d([1, 2, 3, 4, 5, 6, 7, 8], [8, 1], 'int32');
And I have fed it into my model using the following line:
model.then(m => m.executeAsync({"input_ids":a,"attention_mask":a,"token_type_ids":a}))
All three inputs have the same value, but I am getting the following error message:
Uncaught (in promise) Error: Error in matMul: inputs must have the same rank of at least 2, got ranks 3 and 2
Does anyone know what I did wrong that caused rank-3 tensors to appear in my input? Thank you.

Things you can do:
Check the input dimensions your model expects.
Reshape the input tensor to that expected shape.
Then run inference (see the sketch below).
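A minimal sketch of those three steps, written against the Python TensorFlow API purely for illustration (the toy model, its shapes, and the variable names are assumptions, not taken from the question). In tf.js, the analogous tools are model.inputs for inspecting the expected shapes and tensor.reshape() for the reshape step.
import tensorflow as tf

# Toy stand-in for the real model (an assumption): it expects rank-2
# input of shape (batch, 8) rather than (8, 1).
inp = tf.keras.Input(shape=(8,), dtype="int32", name="input_ids")
out = tf.keras.layers.Embedding(input_dim=100, output_dim=4)(inp)
model = tf.keras.Model(inp, out)

# Step 1: check the input dimensions the model expects.
print([tuple(t.shape) for t in model.inputs])  # [(None, 8)]

# Step 2: reshape the dummy tensor to that shape, e.g. (1, 8) instead of (8, 1).
a = tf.constant([1, 2, 3, 4, 5, 6, 7, 8], dtype=tf.int32)
a = tf.reshape(a, (1, 8))

# Step 3: run inference.
print(model(a).shape)  # (1, 8, 4)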

Related

In TensorFlow 1 / Keras, how to see the value of a Tensor during training?

On my Keras model, I need to see the output of a hidden layer during training.
Here is what I have done:
net = Model(x, [y, hidden_layer])
Then I constructed a custom callback:
class CustomCallback(keras.callbacks.Callback):
    def on_batch_end(self, batch, logs=None):
        print(self.model.output[1])
But, when I run the training with:
net.train_on_batch(train_data)
I get:
ValueError: Error when checking model target: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 2 array(s), but instead got the following list of 1 arrays:
Any idea?
Thanks
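For reference, the error itself points at the mismatch: Model(x, [y, hidden_layer]) has two outputs, so train_on_batch expects one target array per output. A hypothetical sketch of the call structure, written with tf.keras (every shape and name below is a placeholder, not from the original post; the hidden-layer output gets a zero loss weight so it does not influence training):
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Tiny stand-in model with two outputs, mirroring Model(x, [y, hidden_layer]).
x = layers.Input(shape=(10,))
hidden_layer = layers.Dense(64, activation="relu")(x)
y = layers.Dense(1)(hidden_layer)
net = keras.Model(x, [y, hidden_layer])

# Zero loss weight on the second output: it is only exposed, not trained on.
net.compile(optimizer="adam", loss=["mse", "mse"], loss_weights=[1.0, 0.0])

# With two outputs, train_on_batch needs two target arrays.
x_batch = np.random.rand(32, 10).astype("float32")
y_batch = np.random.rand(32, 1).astype("float32")
hidden_dummy = np.zeros((32, 64), dtype="float32")
net.train_on_batch(x_batch, [y_batch, hidden_dummy])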

I keep getting a TypeError when using gbt_regression_prediction().compute with XGBoost and daal4py

I have a pre-trained XGBoost model that I want to optimize with daal4py, but I'm getting the following error:
TypeError: Argument 'model' has incorrect type (expected daal4py._daal4py.gbt_regression_model, got XGBRegressor)
Here is the line that is throwing the error:
y_pred = d4p.gbt_regression_prediction().compute(x_test, xgb_model).prediction.reshape(-1)
If you pass the XGBoost object (xgb_model) directly to d4p.gbt_regression_prediction().compute(), you will continue to get this error.
You must first convert the model to daal4py format before passing it to the prediction method. Please see the example below.
daal_model = d4p.get_gbt_model_from_xgboost(xgb_model.get_booster())
y_pred = d4p.gbt_regression_prediction().compute(x_test, daal_model).prediction.reshape(-1)
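For completeness, a minimal end-to-end sketch of the same flow (the synthetic data and hyperparameters here are placeholders, not from the original post):
import numpy as np
import daal4py as d4p
import xgboost as xgb

# Synthetic regression data, for illustration only.
rng = np.random.default_rng(0)
x_train, y_train = rng.random((200, 5)), rng.random(200)
x_test = rng.random((50, 5))

xgb_model = xgb.XGBRegressor(n_estimators=20)
xgb_model.fit(x_train, y_train)

# Convert the trained booster to daal4py's gbt_regression_model format first...
daal_model = d4p.get_gbt_model_from_xgboost(xgb_model.get_booster())

# ...then pass the converted model to the prediction step.
y_pred = d4p.gbt_regression_prediction().compute(x_test, daal_model).prediction.reshape(-1)
print(y_pred.shape)  # (50,)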

Error incompatible shapes in function model.fit()

I am new to Keras and want to try U-Net. I followed this tutorial from TensorFlow: https://github.com/tensorflow/models/blob/master/samples/outreach/blogs/segmentation_blogpost/image_segmentation.ipynb.
I used its U-Net construction code with my own dataset. The tutorial uses 256x256x3 images, and I made my images the same shape.
Now, I got error:
InvalidArgumentError: Incompatible shapes: [1376256] vs. [458752]
[[{{node training/Adam/gradients/loss/conv2d_23_loss/mul_grad/BroadcastGradientArgs}}]]
It happens in model.fit(). I have 130 training examples and a batch size of 5 (I know those numbers are small...).
Does anybody know what could cause this error in model.fit()?
Thank you for your help.
1376256 is exactly 3 x 458752. I suspect you are not correctly accounting for your channels somewhere. As this appears to be on your output layer, it may be that you're trying to predict 3 classes when there is only 1.
In the future, or if this doesn't help, please provide more information, including the code for your model and the number of classes you're trying to predict, so people can better help.
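A hedged sketch of the check implied above, using toy data (the shapes, the single-channel mask format, and the minimal model are assumptions for illustration, not the asker's code): the filter count of the final Conv2D has to match the channel count of the target masks.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# Toy stand-ins: 130 RGB images and single-channel segmentation masks.
images = np.zeros((130, 256, 256, 3), dtype="float32")
masks = np.zeros((130, 256, 256, 1), dtype="float32")

# Minimal model; the final Conv2D must output as many channels as the masks have.
inputs = layers.Input(shape=(256, 256, 3))
x = layers.Conv2D(8, 3, padding="same", activation="relu")(inputs)
outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)  # 1 output channel, not 3
model = tf.keras.Model(inputs, outputs)

print(model.output_shape)  # (None, 256, 256, 1)
print(masks.shape)         # (130, 256, 256, 1) -- channel counts agree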

Why is "step" argument necessary when predicting using data tensors? what does this error mean?

I am trying to predict() the output for a single data point d, using my trained Keras model loaded from a file. But I get a ValueError: If predicting from data tensors, you should specify the 'step' argument. What does that mean?
I tried setting step=1, but then I get a different error: ValueError: Cannot feed value of shape () for Tensor u'input_1:0', which has shape '(?, 600)'.
Here is my code:
d = np.concatenate((hidden[p[i]], hidden[x[i]])).resize((1,600))
hidden[p[i]] = autoencoder.predict(d, steps=1)
The model is expecting (?,600) as input. I have concatenated two numpy arrays of shape (300,) each to get (600,), which is resized to (1,600). This (1,600) is my input to predict().
In my case, the input to predict was None (because I had a bug in another part of the code).
In the official docs, steps refers to the total number of steps (batches of samples) before stopping. So steps=1 means making predictions on one batch, not on one record (a single data point).
https://keras.io/models/sequential/
-> Define the value of the steps argument:
d = np.concatenate((hidden[p[i]], hidden[x[i]])).reshape((1, 600))
hidden[p[i]] = autoencoder.predict(d, steps=1)
If you are using a test data generator, it is good practice to define the steps, as mentioned in the documentation.
If you are predicting a single instance, there is no need to define steps. Just make sure the argument (i.e. the instance d) is not None, otherwise that error will show up. Some reshaping may also be necessary.
In my case I got the same error; I just reshaped the data to predict with NumPy's reshape() to the shape of the data originally used to train the model.
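Putting the answers above together, a minimal sketch (the zero-filled vectors stand in for hidden[p[i]] and hidden[x[i]]): note that ndarray.resize() works in place and returns None, which is exactly the kind of None input mentioned earlier, so reshape() is the safer call.
import numpy as np

# Stand-ins for the two (300,) hidden vectors from the question.
vec_a = np.zeros(300, dtype="float32")
vec_b = np.zeros(300, dtype="float32")

# ndarray.resize() modifies the array in place and returns None, so this makes d None:
# d = np.concatenate((vec_a, vec_b)).resize((1, 600))

# reshape() returns the (1, 600) array the model expects.
d = np.concatenate((vec_a, vec_b)).reshape((1, 600))
print(d.shape)  # (1, 600)

# prediction = autoencoder.predict(d, steps=1)  # assuming the loaded Keras model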

How to correct shape of Keras input into a 3D array

I've got a Keras model that fails with this error when I fit it:
> kerasInput = Input(shape=(None, 47))
> LSTM(..)(kerasInput)
...
> model.fit(realInput, ...)
ValueError: Error when checking input: expected input_1 to have 3 dimensions, but got array with shape (10842, 1)
When looking at my input I found it has a shape of (10842, 1), but each row is actually a list of lists. I can verify this with:
> pd.DataFrame(realInput[0]).shape
(260, 47)
How could I correct my input shape?
When trying with keras Reshape layer, the creation of the model fails with:
Model inputs must come from `keras.layers.Input` (thus holding past layer metadata), they cannot be the output of a previous non-Input layer. Here, a tensor specified as input to your model was not an Input tensor, it was generated by layer reshape_8.
Note that input tensors are instantiated via `tensor = keras.layers.Input(shape)`.
The tensor that caused the issue was: reshape_8/Reshape:0
You can use the numpy.expand_dims method to convert the array to 3D.
import numpy as np
np.expand_dims(realInput,axis=0)
Keras Reshape layer:
https://keras.io/layers/core/#reshape
Or reshape with 1 as the third dimension:
# Something Similar to this
X_train = np.reshape(X_train,(X_train.shape[0],X_train.shape[1],1))
Edit: Added np.reshape method
Refer this repository: https://github.com/NilanshBansal/Stock_Price_Prediction/blob/master/Stock_Price_Prediction_20_days_later_4_LSTM.ipynb
As I said before in the comments, you will need to make sure you reshape your data to match what the LSTM expects to receive, and also make sure input_shape is correctly set.
I found this post quite helpful when I struggled with inputs to an LSTM layer. I hope it helps you too: Reshape input for LSTM
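One more hedged sketch of what the answers above are getting at, assuming every row of realInput holds a (260, 47) list of lists of equal length (np.stack is a suggestion added here, not something mentioned in the answers; ragged sequences would need padding instead):
import numpy as np

# Toy stand-in: an object array shaped like the (10842, 1) input described
# in the question, with 3 rows instead of 10842.
realInput = np.empty((3, 1), dtype=object)
for i in range(3):
    realInput[i, 0] = [[0.0] * 47 for _ in range(260)]

# Stack each row's nested list into one 3D array: (samples, timesteps, features).
stacked = np.stack([np.asarray(row) for row in realInput[:, 0]])
print(stacked.shape)  # (3, 260, 47) -- matches Input(shape=(None, 47))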