TypeError: dnn_Model.setInputMean() takes at most 1 argument (3 given)

Please help me resolve a TypeError while executing an object detection project.
I have tried to implement the object detection program, but the TypeError shown in the title appears.
I hope to get a resolution within 24 hours.

You are supposed to pass only one argument: the three channel means wrapped in a single list, i.e.,
model.setInputMean([127.5, 127.5, 127.5])
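For context, here is a minimal sketch of where that call sits in a detection pipeline. The cv2.dnn_DetectionModel file names below are hypothetical placeholders, not taken from the question:

import cv2

# Hypothetical weights/config files; substitute your own model paths.
model = cv2.dnn_DetectionModel("frozen_inference_graph.pb", "ssd_mobilenet.pbtxt")

model.setInputSize(320, 320)
model.setInputScale(1.0 / 127.5)
# Wrong: model.setInputMean(127.5, 127.5, 127.5)
#   -> TypeError: setInputMean() takes at most 1 argument (3 given)
model.setInputMean([127.5, 127.5, 127.5])  # one argument: the per-channel means
model.setInputSwapRB(True)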

Related

Tensorflow Conv2d function - Type error

I'm working on a neural net problem, and in the conv2d function I'm getting a type mismatch issue.
Here's the code snippet:
conv_layer1 = tf.nn.conv2d(inputs, w_layer1, strides=strides, padding='VALID') + b_layer1
I'm getting this error
TypeError: Expected binary or unicode string, got <bound method Kernel.raw_input of <ipykernel.ipkernel.IPythonKernel object at 0x000001C0A75CB470>>
I tried having [1,1,1,1] inline as well as in a variable, but no luck.
The complete error trace is here (search for "In [46]:"):
https://github.com/mymachinelearnings/CarND-Traffic-Sign-Classifier-Project/blob/attempt1/Traffic_Sign_Classifier.ipynb
Looks like a typo. In your notebook, you're feeding input into your network, which is the built-in Python function for reading input from e.g. a keyboard. Obviously this doesn't make much sense as input to a convolutional network. Chances are you meant to type inputs, as in your question?
Note that the syntax highlighting in the notebook shows this quite clearly: input is displayed in green (at least in my browser), signifying that it has a special meaning.
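To make the fix concrete, here is a minimal TF 1.x sketch; the placeholder and weight shapes are assumptions for illustration, not values from the notebook:

import tensorflow as tf

# In a Jupyter notebook, the name `input` (no s) is bound to the kernel's
# raw_input method, which is why conv2d reports
# "Expected binary or unicode string, got <bound method ...>".
# Define and feed an actual tensor, presumably named `inputs`:
inputs = tf.placeholder(tf.float32, shape=(None, 32, 32, 3))  # assumed shape
w_layer1 = tf.Variable(tf.truncated_normal([5, 5, 3, 6], stddev=0.1))
b_layer1 = tf.Variable(tf.zeros(6))
strides = [1, 1, 1, 1]

conv_layer1 = tf.nn.conv2d(inputs, w_layer1, strides=strides, padding='VALID') + b_layer1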

Tensorflow: slicing PartitionedVariable

I have created a TensorFlow PartitionedVariable object. Unfortunately I need to slice it at some other point of my program (not according to how the variable is partitioned). When I try the obvious (X[count:]), I get the error TypeError: 'PartitionedVariable' object has no attribute '__getitem__'. Is it a bug, or is there a workaround for slicing a PartitionedVariable?
I'm afraid you'll have to use tf.slice().
tf.slice(X, [count], [-1])
in your case.
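A minimal sketch of that workaround; the variable shape and partitioner below are assumptions for illustration:

import tensorflow as tf

# A 1-D variable split across two shards, standing in for X from the question.
X = tf.get_variable("X", shape=[10],
                    partitioner=tf.fixed_size_partitioner(num_shards=2))

count = 3
# X[count:] raises the TypeError above; tf.slice converts the
# PartitionedVariable to a tensor first. A size of -1 means
# "everything to the end" along that axis.
tail = tf.slice(X, [count], [-1])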

Tensorflow fails with the error: InvalidArgumentError: Node 'initial_state': Unknown input node 'initial_state/input'

The source code with the entire exception trace can be found here. This is very weird because it occurs in the tf.global_variables_initializer() call in the "Train" section of the code. Why would this call fail trying to get the "initial_state" variable? The line right above it shows that this variable is clearly present in the global scope.
The code is part of a MOOC, and strangely enough nobody else in the class (>5000 students) seems to have experienced this problem.
Hmm... what could I be missing?
I was getting a similar error:
tensorflow.python.framework.errors_impl.InvalidArgumentError: Node 'init_3/NoOp': Unknown input node '^is_training/Assign'
It turns out I had a with tf.device('/gpu:0'): block when I was in fact running TensorFlow on CPU.
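A minimal sketch of the fix, assuming TF 1.x: either remove the hard device pin, or enable soft placement so ops pinned to '/gpu:0' fall back to an available device:

import tensorflow as tf

# allow_soft_placement lets operations pinned to a nonexistent device
# (e.g. '/gpu:0' on a CPU-only machine) run on whatever device is available.
config = tf.ConfigProto(allow_soft_placement=True)
with tf.Session(config=config) as sess:
    sess.run(tf.global_variables_initializer())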

MXNET build model error on r

When I try to use mxnet to build a feedforward model, the following error appears:
Error in mx.io.internal.arrayiter(as.array(data), as.array(label), unif.rnds, :
basic_string::_M_replace_aux
I followed the R regression example on the mxnet website, but I changed the data to my own, which contains 109 examples and 1876 variables. The first several steps ran without error until the model-building step. I just can't understand what the error message means. I wonder whether it is caused by my dataset or by the way I deal with the data.
Can you provide the code snippet you are using? That would give more detail on the issue. Also, any stack trace would be useful.
You get this error message mainly due to invalid column/row access or a shape (dimension) mismatch. Can you verify that you are using the correct index values when creating the matrix? Let me know if this fixes the issue.
That said, MXNet could be better at printing error details in the stack trace. I have created an issue to follow up on this: https://github.com/dmlc/mxnet/issues/4206

Tensorflow Warning while saving SparseTensor

I'm working on a TensorFlow project where 'targets' is defined as:
targets = tf.sparse_placeholder(tf.int32, name='targets')
Now saving my model with saver.save(sess, model_path, meta_graph_suffix='meta', write_meta_graph=True) gives me the following warning:
WARNING:tensorflow:Error encountered when serializing targets.
Type is unsupported, or the types of the items don't match field type in CollectionDef.
'SparseTensor' object has no attribute 'name'
I believe the warning is printed in the following lines of code: https://github.com/tensorflow/tensorflow/blob/f974e8d0c2420c6f7e2a2791febb4781a266823f/tensorflow/python/training/saver.py#L1452
Reloading the model with saver.restore(session, save_path) seems to work though.
Has anyone seen this issue before? Why would serializing a SparseTensor give that warning? Is there any way to avoid this warning?
I'm using TensorFlow version 0.10.0rc0 with Python 2.7, GPU version. I can't provide a minimal example; it doesn't happen all the time, only in certain configurations, and I can't share the model I currently have this issue with.
The component placeholders (for indices, values, and possibly shape) somehow get added to some collections. If you trace through the code in saver.py, you can see ops.get_all_collection_keys() being used.
This should be a benign warning. I will forward to the team to see if something can be done to improve this handling.
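As a minimal sketch of what this answer describes (the collection name below is hypothetical), adding a SparseTensor itself to a collection is enough to produce this class of warning at save time, and enumerating the collection keys helps locate the culprit:

import tensorflow as tf

targets = tf.sparse_placeholder(tf.int32, name='targets')
# Adding the SparseTensor object itself (rather than its component
# indices/values/shape tensors) puts something without a .name attribute
# into a collection, which the CollectionDef serializer cannot handle.
tf.add_to_collection('my_targets', targets)  # hypothetical collection name

# Enumerate collections to find where the SparseTensor was added:
for key in tf.get_default_graph().get_all_collection_keys():
    print(key, tf.get_collection(key))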
The warning means that a SparseTensor type of operation has been added to a collection whose to_proto() implementation expects a "name" field.
I'd consider this a bug if you intend to restore the complete graph from meta_graph, including all the Python objects, and you should find out which operation added that SparseTensor into a collection.
If you never intend to restore from meta_graph, then you can ignore this warning.
Hope that helps.
Sherry