Getting "argument data's type Tensor[3,850,850] is incompatible with the type Tensor[3,370,1224] of the passed Variable" error on CNTK

I am getting an "argument data's type Tensor[3,850,850] is incompatible with the type Tensor[3,370,1224] of the passed Variable" error when running CNTK's Fast R-CNN sample code.
Sometimes the numbers in Tensor[] change, without any modification to the config files.
Thanks in advance.

It was a clash between the image size stored in the network model and the image size in the new training configuration. After I erased the model I had trained earlier, it worked!
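In case it helps others, a minimal sketch for inspecting (and then removing) the leftover model before retraining; the model path is hypothetical and depends on your config:

import os
import cntk as C

model_path = "Output/frcn_py.model"  # hypothetical; use the path from your config

# A model left over from an earlier run keeps the image size it was trained
# with, which can clash with a new config that uses different dimensions.
if os.path.exists(model_path):
    old_model = C.load_model(model_path)
    print([arg.shape for arg in old_model.arguments])  # e.g. (3, 850, 850)
    os.remove(model_path)  # delete it so training starts fresh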

Related

Problem when saving a Keras machine learning model

I followed this tutorial on Keras:
https://keras.io/examples/nlp/semantic_similarity_with_bert/
I wanted to save the model with this command:
model.save("saved_model/my_model")
I got these warnings when I saved the model:
[screenshot of the warnings, not included]
Then, when I want to load the model to use it, with this command:
tf.keras.models.load_model('saved_model/my_model')
I got this error:
[screenshot of the error, not included]
Is this the right way to save the model?
Your first structure is inside a dict. You must extract the item from the dict to get rid of the error. Try checking this out.
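As a general note on this save/load round trip: if the model contains non-standard layers (the BERT tutorial's model does), tf.keras.models.load_model usually needs them passed back via custom_objects. A minimal runnable sketch with a stand-in model:

import tensorflow as tf

# Build a tiny stand-in model just to demonstrate the save/load round trip.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1),
])

# Save in the TensorFlow SavedModel format, as in the question.
model.save("saved_model/my_model")

# Load it back. For a model with custom layers you would pass them in, e.g.
# tf.keras.models.load_model(path, custom_objects={"MyLayer": MyLayer}),
# where MyLayer stands for whatever custom class your model uses.
loaded = tf.keras.models.load_model("saved_model/my_model")
loaded.summary()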

GluonCV inference with finetuned model - “Please make sure source and target networks have the same prefix” error

I used GluonCV to finetune an object detection model in order to recognize some custom classes, mostly following the related tutorial.
I tried using both “ssd_512_resnet50_v1_coco” and “ssd_512_mobilenet1.0_coco” as base models, and the training process ended successfully (the accuracy on the validation dataset is reasonably high).
The problem is, I tried running inference with the newly trained model, by using for example:
classes = ["CML_mug", "person"]
net = gcv.model_zoo.get_model('ssd_512_mobilenet1.0_custom',
                              classes=classes,
                              pretrained_base=False,
                              ctx=ctx)
net.load_params("saved_weights/-0070.params", ctx=ctx)
but I get the error:
AssertionError: Parameter 'mobilenet0_conv0_weight' is missing in file: saved_weights/CML_mobilenet_00/-0000.params, which contains parameters: 'ssd0_ssd0_mobilenet0_conv0_weight', 'ssd0_ssd0_mobilenet0_batchnorm0_gamma', 'ssd0_ssd0_mobilenet0_batchnorm0_beta', ..., 'ssd0_ssd0_ssdanchorgenerator2_anchor_2', 'ssd0_ssd0_ssdanchorgenerator3_anchor_3', 'ssd0_ssd0_ssdanchorgenerator4_anchor_4', 'ssd0_ssd0_ssdanchorgenerator5_anchor_5'. Please make sure source and target networks have the same prefix.
So, it seems the network parameters are named differently in the .params file and in the model I'm using for inference. Specifically, in the .params file, the names of the network weights are prefixed by the string “ssd0_ssd0_”, which leads to the error when invoking net.load_parameters.
I did this whole procedure a few times in the past without any problems; did anything change? I'm running it on Ubuntu 18.04, with mxnet-mkl (1.6.0) and gluoncv (0.7.0).
I tried loading the .params file by:
from mxnet import nd
model = nd.load("saved_weights/-0070.params")
and I wanted to modify it and remove the “ssd0_ssd0_” string that is causing the problem.
I'm trying to navigate the dictionary, but among the keys I only found:
ssd0_resnetv10_conv0_weight
so, slightly different from what is indicated in the error.
Anyway, this way of fixing the issue would be a little cumbersome; I'd prefer a more direct way.
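For what it's worth, the cumbersome route is only a few lines: load the raw parameter dict, strip the prefix from each key, and save the result. A sketch, assuming nd.load returns a flat name-to-NDArray dict and the prefix really is “ssd0_ssd0_” as the error message says:

from mxnet import nd

params = nd.load("saved_weights/-0070.params")

# Drop the extra prefix from every parameter name and save the cleaned
# file, so the names match what net.load_parameters expects.
prefix = "ssd0_ssd0_"
cleaned = {k[len(prefix):] if k.startswith(prefix) else k: v
           for k, v in params.items()}
nd.save("saved_weights/-0070-cleaned.params", cleaned)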
Ok, fixed it. Basically, during training I was saving the .params file by using:
net.export(param_file)
and, as I said, loading them during inference with:
net.load_parameters(param_file)
However, it doesn't work this way; it does work if, instead of export, I use:
net.save_parameters(param_file)
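The underlying rule, as far as I understand it, is that these APIs come in matched pairs: save_parameters pairs with load_parameters, while export (which also writes the symbol JSON) pairs with SymbolBlock.imports. A sketch of both pairs (paths and shapes are illustrative):

import mxnet as mx
import gluoncv as gcv

ctx = mx.cpu()
classes = ["CML_mug", "person"]
net = gcv.model_zoo.get_model('ssd_512_mobilenet1.0_custom',
                              classes=classes, pretrained_base=False, ctx=ctx)
net.initialize(ctx=ctx)

# Pair 1: Gluon-native weights only.
net.save_parameters("weights.params")
net.load_parameters("weights.params", ctx=ctx)

# Pair 2: exported symbol + params, loaded back as a SymbolBlock.
net.hybridize()
net(mx.nd.zeros((1, 3, 512, 512), ctx=ctx))  # one forward pass before export
net.export("exported", epoch=0)              # writes exported-symbol.json / exported-0000.params
deployed = mx.gluon.SymbolBlock.imports("exported-symbol.json", ["data"],
                                        "exported-0000.params", ctx=ctx)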

Unsigned int overflow error when converting images to MNIST format

I'm a newbie to deep learning with TensorFlow.
I want to build my own model to predict my custom images, which are grayscale.
But the only thing I know is the MNIST example in TensorFlow.
So I used a converting module from this repo, but the unsigned int overflow error from the title occurred.
The images to convert consist of 80,680 training images and 20,170 test images.
I really don't know why this error occurred.
Please help me.
The script you're referring to doesn't set up the headers for the MNIST IDX format correctly: it hard-codes the two high bytes of the item count to zero and writes only two low bytes, so with more than 65,535 images (you have 80,680) the slice hexval[2:][2:] spans more than two hex digits and yields a value larger than 255, which overflows the unsigned-byte array. It was addressed in a previous GitHub issue that has since been deleted, but my modification, from:
header = array('B')
header.extend([0,0,8,1,0,0])                # magic number, then the two high count bytes hard-coded to zero
header.append(int('0x'+hexval[2:][:2],16))
header.append(int('0x'+hexval[2:][2:],16))  # hexval[2:][2:] exceeds one byte once the count tops 65,535
to
header = array('B')
header.extend([0,0,8,1])                    # 4-byte magic number for a label file
header.append(int('0x'+hexval[2:][:2],16))  # all four bytes of the
header.append(int('0x'+hexval[4:][:2],16))  # big-endian item count,
header.append(int('0x'+hexval[6:][:2],16))  # one byte at a time
header.append(int('0x'+hexval[8:][:2],16))
should get it working (this assumes hexval is the item count formatted as a zero-padded eight-digit hex string, e.g. '%#010x' % num_items). Hope this helps!
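For reference, the same header can be built more directly by letting struct handle the byte layout; a minimal sketch, where 0x00000801 is the IDX magic number for label files:

import struct
from array import array

num_items = 80680  # e.g. the asker's training-set size

# Magic number and item count, both as 4-byte big-endian unsigned ints.
header = array('B', struct.pack('>II', 0x00000801, num_items))
print(list(header))  # [0, 0, 8, 1, 0, 1, 59, 40]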

Tensorflow Lite: error on AllocateTensors() when I make .tflite using flatc

I am trying to test my model using TensorFlow Lite on an x86_64 PC.
I coded a C++ test program and succeeded in interpreting the given MobileNet model and executing inference.
I wanted to change some operations in the model to my custom operation.
Before doing that, I checked whether I can convert .tflite to JSON correctly.
What I did was convert mobilenet.lite to mobilenet.json using flatc and TensorFlow Lite's schema (schema.fbs), and then convert mobilenet.json back to mobilenet_new.lite.
However, when I tested mobilenet_new.lite, an error occurred, like below:
tensorflow/contrib/lite/kernels/kernel_util.cc:35 std::abs(input_product_scale - bias_scale) <= 1e-6 * std::min(input_product_scale, bias_scale) was not true.
When I converted mobilenet_new.lite back to mobilenet_new.json, the two JSON files were identical. Why does this error happen? If the parameter values are the same, how is this possible?
If you have any knowledge about this, please help.
Thanks
I have resolved this.
When I debugged the problem, it turned out to be a FlatBuffers issue:
flatc converts floats to strings when generating the JSON file,
so each float becomes a fixed-point value with six digits of precision, which rounds the original value.
So, when I converted tflite -> json -> tflite, there were small changes between the two tflite files, even though the two JSON files looked identical.
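A rough illustration of the effect the answer describes (the scale value below is made up): a float printed with six digits of precision and parsed back can shift by far more than the 1e-6 relative tolerance the kernel check allows.

# Hypothetical quantization scale, similar in magnitude to real ones.
input_product_scale = 0.0007324442267417908

# Round-trip through a 6-digit decimal string, as a text serializer might do.
bias_scale = float('%.6f' % input_product_scale)  # 0.000732

# The check from kernel_util.cc:35, in Python form: it fails after rounding.
ok = abs(input_product_scale - bias_scale) <= 1e-6 * min(input_product_scale, bias_scale)
print(bias_scale, ok)  # 0.000732 False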

Tensorflow Warning while saving SparseTensor

I'm working on a TensorFlow project where 'targets' is defined as:
targets = tf.sparse_placeholder(tf.int32, name='targets')
Now saving my model with saver.save(sess, model_path, meta_graph_suffix='meta', write_meta_graph=True) gives me the following warning:
WARNING:tensorflow:Error encountered when serializing targets.
Type is unsupported, or the types of the items don't match field type in CollectionDef.
'SparseTensor' object has no attribute 'name'
I believe the warning is printed in the following lines of code: https://github.com/tensorflow/tensorflow/blob/f974e8d0c2420c6f7e2a2791febb4781a266823f/tensorflow/python/training/saver.py#L1452
Reloading the model with saver.restore(session, save_path) seems to work though.
Has anyone seen this issue before? Why would serializing a SparseTensor give that warning? Is there any way to avoid this warning?
I'm using TensorFlow version 0.10.0rc0, Python 2.7, GPU version. I can't provide a minimal example; it doesn't happen all the time, only in certain configurations. And I can't share the model I currently have this issue with.
The component placeholders (for indices, values, and possibly shape) somehow get added to some collections. If you trace through the code in saver.py, you can see ops.get_all_collection_keys() being used.
This should be a benign warning. I will forward to the team to see if something can be done to improve this handling.
The warning means that a SparseTensor type of operation has been added to a collection whose to_proto() implementation expects a "name" field.
I'd consider this a bug if you intend to restore the complete graph from the meta_graph, including all the Python objects, and you should find out which operation added that SparseTensor into a collection.
If you never intend to restore from the meta_graph, then you can ignore this warning.
Hope that helps.
Sherry
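To see concretely what the serializer trips on, here is a sketch against the old TF 0.x/1.x API used in the question: the SparseTensor bundle has no name of its own, while its component placeholders do.

import tensorflow as tf  # old TF 0.x/1.x API, as in the question

targets = tf.sparse_placeholder(tf.int32, name='targets')

# The bundle itself has no .name attribute, which is exactly what the
# "'SparseTensor' object has no attribute 'name'" warning complains about...
print(hasattr(targets, 'name'))  # False

# ...but each component placeholder does, and those are what end up in the
# graph collections. (On very old versions the third component is
# targets.shape rather than targets.dense_shape.)
print(targets.indices.name, targets.values.name)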