Onnx load error: The input tensor cannot be reshaped to the requested shape - tensorflow

I want to convert a TensorFlow model to an ONNX file. The conversion succeeds and the file is saved, but when I run the ONNX model with onnxruntime, it throws an error:
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Loop node. Name:'generic_loop_Loop__518' Status Message: Non-zero status code returned while running Reshape node. Name:'NmtModel/while/Reshape_1' Status Message: /onnxruntime_src/onnxruntime/core/providers/cpu/tensor/reshape_helper.h:37 onnxruntime::ReshapeHelper::ReshapeHelper(const onnxruntime::TensorShape&, std::vector&) size != 0 && (input_shape.Size() % size) == 0 was false. The input tensor cannot be reshaped to the requested shape. Input shape:{1,4,0,512}, requested shape:{1,-1,0,512}
It seems the Reshape operation cannot handle the actual tensor shape {1,4,0,512}. How can I solve this?
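The failure is arithmetic rather than ONNX-specific: one dimension of the input is 0, so the tensor holds zero elements, and the inferred -1 dimension becomes ambiguous (any value would satisfy 1 * n * 0 * 512 == 0). A minimal NumPy sketch of the same failure:

```python
import numpy as np

# Same shapes as in the error message: the tensor has a zero-sized
# dimension, so it contains 0 elements, and the -1 (inferred) dimension
# cannot be determined -- any value n satisfies 1 * n * 0 * 512 == 0.
x = np.zeros((1, 4, 0, 512), dtype=np.float32)
try:
    x.reshape(1, -1, 0, 512)
    reshape_failed = False
except ValueError:
    reshape_failed = True
```

In the ONNX graph this usually means an op upstream of the Reshape, inside the Loop, produced an empty tensor (for example a zero-length sequence slice); the place to start is finding why that dimension becomes 0, or exporting with a concrete dimension instead of -1 so the reshape is no longer ambiguous.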

Related

ValueError: Failed to convert a NumPy array to a Tensor (Unsupported object type int). How to rectify this type of error in Python?

My training data shape is (80000, 77, 1) and my testing data shape is (8000, 77, 1). When I try to fit the data to a CNN model during training, I get the error:
ValueError: Failed to convert a NumPy array to a Tensor (Unsupported object type int).
I tried converting the arrays with these commands:
x_train = np.asarray(x_train).astype(np.float32)
y_train = np.asarray(y_train).astype(np.float32)
but then a different error occurs:
ValueError: could not convert string to float: '10.152.152.11-50.23.134.226-41711-80-6'
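The second error shows the real problem: the data still contains a string identifier (a network-flow ID here), which no cast to float32 can fix. A minimal sketch, assuming the identifier sits in the first column of each row (hypothetical data), is to drop or separately encode the non-numeric columns before converting:

```python
import numpy as np

# Hypothetical rows: a string flow ID followed by numeric features.
rows = [
    ["10.152.152.11-50.23.134.226-41711-80-6", 0.5, 3.0],
    ["10.152.152.12-50.23.134.227-41712-80-6", 1.5, 4.0],
]

# Casting the whole array fails because of the string column:
try:
    np.asarray(rows).astype(np.float32)
    cast_ok = True
except ValueError:
    cast_ok = False

# Drop the identifier column first, then the cast succeeds.
x_train = np.asarray([r[1:] for r in rows], dtype=np.float32)
```

If the string column carries information you need (rather than being a row ID), encode it numerically instead of dropping it, e.g. with a label or one-hot encoding.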

I keep getting a TypeError when using gbt_regression_prediction().compute with XGBoost and daal4py

I have a pre-trained XGBoost model that I want to optimize with daal4py, but I'm getting the following error:
TypeError: Argument 'model' has incorrect type (expected daal4py._daal4py.gbt_regression_model, got XGBRegressor)
Here is the line that is throwing the error:
y_pred = d4p.gbt_regression_prediction().compute(x_test, xgb_model).prediction.reshape(-1)
If you pass the raw XGBRegressor object to d4p.gbt_regression_prediction().compute(), you will keep getting this error. You must first convert the model to daal4py format before passing it to the prediction method. Please see the example below.
daal_model = d4p.get_gbt_model_from_xgboost(xgb_model.get_booster())
y_pred = d4p.gbt_regression_prediction().compute(x_test, daal_model).prediction.reshape(-1)

Using patch from larger image as input dim to Keras CNN gives error 'Tensor' object has no attribute '_keras_history'

I am trying to create a CNN with keras to process 20x20 patches from a larger image of 600x600.
When I attempt to run the code below, I receive the error AttributeError: 'Tensor' object has no attribute '_keras_history'
The code is only intended to look at the first 20x20 patch out of a total of 900; I am trying to get this working before looping through the entire input image.
I don't understand why it returns this error, since each layer is generated with a Keras layer and I haven't applied any other operations to the tensor.
I am using tensorflow 1.3 and keras 2.0.6.
nb_filters=16
input_image=Input(shape=(600,600,3))
Input_1R=Reshape((900,20,20,3))(input_image)
conv1=Convolution2D(nb_filters,(5,5),activation='relu',padding='valid')(Input_1R[:,0])
conv4=Convolution2D(1,(6,6),activation='hard_sigmoid',padding='same')(conv1)
dense6=Dense(1)(conv4)
output_dense=dense6
model = Model(inputs=input_image, outputs=output_dense)
The error occurs because the slicing operation Input_1R[:,0] is not performed in a Keras layer.
You can wrap it into a Lambda layer:
sliced = Lambda(lambda x: x[:, 0])(Input_1R)
conv1 = Convolution2D(nb_filters, (5,5), activation='relu', padding='valid')(sliced)
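One caveat, separate from the _keras_history error: a plain Reshape((900, 20, 20, 3)) does not cut the 600x600 image into spatial 20x20 tiles; it only regroups the flat buffer, so each "patch" is a run of raveled pixels rather than a square block of the image. A NumPy sketch of the reshape/transpose pattern that extracts true non-overlapping patches (the same index gymnastics would apply inside a Lambda layer):

```python
import numpy as np

img = np.arange(600 * 600 * 3).reshape(600, 600, 3)

# Naive reshape: "patch" 0 is just the first 1200 raveled values,
# i.e. img[0, :400, :], not a 20x20 tile of the image.
naive = img.reshape(900, 20, 20, 3)

# Correct tiling: split rows and columns into 30 blocks of 20 each,
# then bring the two block axes together before flattening them.
patches = (img.reshape(30, 20, 30, 20, 3)
              .transpose(0, 2, 1, 3, 4)
              .reshape(900, 20, 20, 3))

assert np.array_equal(patches[0], img[:20, :20])    # true top-left tile
assert not np.array_equal(naive[0], img[:20, :20])  # naive reshape scrambles
```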

TensorFlow: Unable to feed the same placeholder twice in the same session

I defined a placeholder X of shape (None, 100).
Within one TensorFlow session:
I fed the placeholder an input of shape (64, 100) and trained my model.
Now, when I try to feed a (3, 100) matrix to the same placeholder, I get the following error:
ERROR : Cannot feed value of shape (3, 100) for Tensor
u'RNN-LM/zeros:0', which has shape '(64, 100)'
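The error message is the clue: the tensor being fed is u'RNN-LM/zeros:0' with fixed shape (64, 100), i.e. some zeros tensor in the graph, not the (None, 100) placeholder X. A placeholder whose first dimension is None accepts both batch sizes, as this sketch shows (written with the tf.compat.v1 API; the original tf.placeholder behaves the same way in TF1):

```python
import numpy as np
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# A placeholder with a None batch dimension accepts any batch size.
X = tf.compat.v1.placeholder(tf.float32, shape=(None, 100), name="X")
row_sums = tf.reduce_sum(X, axis=1)

with tf.compat.v1.Session() as sess:
    a = sess.run(row_sums, feed_dict={X: np.ones((64, 100), np.float32)})
    b = sess.run(row_sums, feed_dict={X: np.ones((3, 100), np.float32)})
```

If your code fetches the feed target by name, make sure the name resolves to the placeholder itself (e.g. 'X:0'), not to an intermediate tensor such as a zeros initializer with a hard-coded shape.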

In Tensorflow, how do I generate a scalar summary?

Does anyone have a minimal example of using a SummaryWriter with a scalar_summary in order to see (say) a cross entropy result during a training run?
The example given in the documentation:
merged_summary_op = tf.merge_all_summaries()
summary_writer = tf.train.SummaryWriter('/tmp/mnist_logs', sess.graph_def)
total_step = 0
while training:
    total_step += 1
    session.run(training_op)
    if total_step % 100 == 0:
        summary_str = session.run(merged_summary_op)
        summary_writer.add_summary(summary_str, total_step)
returns the following error when I run it:
TypeError: Fetch argument None of None has invalid type, must be a string or Tensor. (Can not convert a NoneType into a Tensor or Operation.)
If I add a:
tf.scalar_summary('cross entropy', cross_entropy)
operation after my cross entropy calculation, then instead I get the error:
InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder_2' with dtype float
Which suggests that I need to add a feed_dict to the
summary_str = session.run(merged_summary_op)
call, but I am not clear what that feed_dict should contain....
The feed_dict should contain the same values that you use for running the training_op. It basically specifies the input values to your network for which you want to calculate the summaries.
The error is probably coming from:
session.run(training_op)
Did you paste the example code into a version of the mnist code that requires a feed_dict for feeding in training examples? Check the backtrace it gave you (and include it above if that doesn't solve the problem).
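A minimal sketch of the pattern described above, written with the tf.compat.v1 names for the since-renamed TF1 APIs (tf.scalar_summary became summary.scalar, tf.merge_all_summaries became summary.merge_all, tf.train.SummaryWriter became summary.FileWriter); the loss here is a stand-in, and the point is that the merged summary op depends on the same placeholders as the training fetches, so it needs the same feed_dict:

```python
import tempfile
import numpy as np
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

x = tf.compat.v1.placeholder(tf.float32, shape=(None, 2), name="x")
cross_entropy = tf.reduce_mean(tf.square(x))  # stand-in for the real loss

tf.compat.v1.summary.scalar("cross_entropy", cross_entropy)
merged_summary_op = tf.compat.v1.summary.merge_all()

with tf.compat.v1.Session() as sess:
    writer = tf.compat.v1.summary.FileWriter(tempfile.mkdtemp(), sess.graph)
    batch = np.array([[1.0, 2.0]], dtype=np.float32)
    # The summary op reads the placeholder, so it needs the feed_dict too.
    summary_str, loss = sess.run([merged_summary_op, cross_entropy],
                                 feed_dict={x: batch})
    writer.add_summary(summary_str, 1)
```

Running the summary op in the same sess.run call as the loss (or training op) guarantees both see the same batch, which also keeps the logged values consistent with training.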