tensorflow add new op: could attr accept scalar tensor?

I can't find detailed info about this in the official docs.
Could anyone give more details?

TensorFlow uses attrs as "compile-time constants" that determine the behavior and type (number of inputs and outputs) of an op.
You can define an op that has a TensorProto as one of its attrs. For example, the tf.constant() op takes its value as an attr, which is defined in the corresponding op registration.
There are a few limitations to this feature:
It is not currently possible to constrain the shape of the tensor statically. You would need to validate this in the constructor for the op (where GetAttr is typically called).
Similarly, it is not currently possible to constrain the element type of the tensor statically, so you will need to check this at runtime as well.
In the Python wrapper for your op, you will need to pass the attr's value as a TensorProto, e.g. by calling tf.contrib.util.make_tensor_proto() to do the conversion.
In general, you may find it much easier to use a simple int, float, bool, or string attr instead of a scalar TensorProto, but the TensorProto option is available if you need to encode a less common type.
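For illustration, here is a minimal sketch of the Python side, assuming a generated wrapper my_module.my_op for an op registered with an attr of type tensor (the module and op names are hypothetical):

import tensorflow as tf

# Convert a Python value to a TensorProto; shape=[] asks for a scalar, but
# remember this is only validated at runtime, not statically.
proto = tf.contrib.util.make_tensor_proto(3.14, dtype=tf.float32, shape=[])

# Hypothetical generated wrapper; the attr value must be a TensorProto.
# result = my_module.my_op(value=proto)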

Related

normal cdf and pdf functions in Pyomo

I am working on a mathematical model in Pyomo. There are parameters that are based on a normal distribution. The input to these distributions is not a plain number; it is another parameter defined in Pyomo.
I imported the Statistics package to use normal distribution but I get this error:
Cannot convert non-constant Pyomo expression (0 < s) to bool.
This error is usually caused by using a Var, unit, or mutable Param in a
Boolean context such as an "if" statement, or when checking container
membership or equality.
I found the answer. I share it here for others who have the same question.
I think the easiest approach is to work in plain Python (NumPy/SciPy), generate whatever values you need, and then assign them to Pyomo objects. I tried this and it worked very well.
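A minimal sketch of this workaround, assuming SciPy is available (the names mu, sigma, and x are illustrative, not from the original model):

from pyomo.environ import ConcreteModel, Param
from scipy.stats import norm

mu, sigma, x = 0.0, 1.0, 0.5  # illustrative inputs, computed outside Pyomo

model = ConcreteModel()
# Evaluate the normal pdf/cdf with SciPy first, then hand Pyomo plain floats.
model.cdf_val = Param(initialize=float(norm.cdf(x, loc=mu, scale=sigma)), mutable=True)
model.pdf_val = Param(initialize=float(norm.pdf(x, loc=mu, scale=sigma)), mutable=True)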

TensorFlow placeholder for InputList

Some raw operations use InputLists, not (only) simple Inputs. I want to add a Placeholder to my Graph and supply the actual array of tensors during TF_SessionRun. I have two problems with this:
The TF_SessionRun documentation does not mention InputLists; it only knows about Inputs. I assume (correct me if I am wrong) that from a TF_Session point of view, an InputList is just an Input (giving the first element of the array).
I cannot work out how to put such a Placeholder in the Graph. Defining a Placeholder requires giving its data type, but in an InputList every Tensor can have its own data type.
I am looking either for a data type "DT_List" or similar, indicating that the given Placeholder is a list of different tensors, or for another raw operation, called "ListPlaceholder" or similar, that caters for this purpose.
How should it be done?
P.S. Consider the raw operation Save. Its third parameter is an InputList of Tensors to save. I made a Graph that works well for a single Tensor, but I cannot work out how to save multiple tensors in one go.
After a lot of checking, it seems I incorrectly guessed that there is (or should be) such a thing as an InputList input. The inputs to Session.Run are always single Tensors, so no "placeholder for a list" exists. In the mentioned Save raw operation, the "data" parameter, as guessed, has to be added using TF_AddInputList, but the list of TF_Output elements in its parameter list has to be assembled from individual TF_Output values; it cannot be retrieved as one TF_OutputList from a "Placeholder"-like node.
If my conclusion is wrong, please correct me.
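For what it is worth, the same conclusion can be illustrated with the Python API, assuming a TensorFlow build that exposes both tf.raw_ops and the TF 1.x graph API via tf.compat.v1 (the file name and tensor names here are made up):

import tensorflow as tf

tf1 = tf.compat.v1
tf1.disable_eager_execution()

# Each tensor comes from its own node; there is no single "list placeholder".
p1 = tf1.placeholder(tf.float32, shape=[2])
p2 = tf1.placeholder(tf.int32, shape=[3])

# `data` is the InputList: a plain Python list assembled from separate
# tensors, each with its own dtype.
save_op = tf.raw_ops.Save(
    filename=tf.constant("/tmp/ckpt"),
    tensor_names=tf.constant(["a", "b"]),
    data=[p1, p2])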

What exactly qualifies as a 'Tensor' in TensorFlow?

I am new to TensorFlow. I just went through the eager execution tutorial and came across the tf.decode_csv function. Since I did not know it, I read the documentation: https://www.tensorflow.org/api_docs/python/tf/decode_csv
I don't really understand it.
The documentation says 'records: A Tensor of type string.'
So, my question is: What qualifies as a 'Tensor'?
I tried the following code:
import tensorflow as tf

# Decode one CSV record into three float columns.
dec_res = tf.decode_csv('0.1,0.2,0.3', [[0.0], [0.0], [0.0]])
print(dec_res, type(dec_res))

# Pass a plain Python list to tf.reshape.
l = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
r = tf.reshape(l, [9, -1])
print(l, type(l))
print(r, type(r))
So the list dec_res contains tf.Tensor objects. That seems reasonable to me. But is an ordinary string also a 'Tensor' according to the documentation?
Then I tried something else with the tf.reshape function. In the documentation https://www.tensorflow.org/api_docs/python/tf/reshape it says that 'tensor: A Tensor.' So l is supposed to be a tensor, but it is not of type tf.Tensor; it is simply a Python list. This is confusing.
Then the documentation says
Returns:
A Tensor. Has the same type as tensor.
But the type of l is list, while the type of r is tensorflow.python.framework.ops.Tensor. So the types are not the same.
Then I thought that TensorFlow is very generous with things being a tensor. So I tried:
class car(object):
    def __init__(self, color):
        self.color = color

red_car = car('red')
#test_reshape = tf.reshape(red_car, [1, -1])
print(red_car.color)  # to check that red_car exists
Now, the commented-out line results in an error when uncommented.
So, can anyone help me to find out, what qualifies as a 'Tensor'?
P.S.: I tried to read the source code of tf.reshape as given in the documentation
Defined in tensorflow/python/ops/gen_array_ops.py.
But this file does not exist in the Github repo. Does anyone know how to read it?
https://www.tensorflow.org/programmers_guide/tensors
TensorFlow, as the name indicates, is a framework to define and run computations involving tensors. A tensor is a generalization of vectors and matrices to potentially higher dimensions. Internally, TensorFlow represents tensors as n-dimensional arrays of base datatypes.
What you are observing comes from the fact that TensorFlow operations (like tf.reshape) can be built from various Python types using the function tf.convert_to_tensor:
https://www.tensorflow.org/api_docs/python/tf/convert_to_tensor
All standard Python op constructors apply this function to each of their Tensor-valued inputs, which allows those ops to accept numpy arrays, Python lists, and scalars in addition to Tensor objects.
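A quick sketch of that conversion in action, tying it back to the examples in the question:

import numpy as np
import tensorflow as tf

# Lists, NumPy arrays, and scalars are all convertible to tf.Tensor.
print(type(tf.convert_to_tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])))
print(type(tf.convert_to_tensor(np.zeros((2, 2)))))
print(type(tf.convert_to_tensor(3.0)))

# An arbitrary object like `car` has no registered conversion, which is
# why tf.reshape(red_car, [1, -1]) raises an error.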

TensorFlow shape checker

Unlike most programming languages, TensorFlow does not regard the shape of an array as part of the type. The downside of this is that if you make a mistake and accidentally provide data of the wrong shape, it may silently give a wrong answer (see e.g. "Slightly different shape converges to wrong number - why?"), which makes debugging difficult.
Does there exist a shape checker for TF? That is, a function or program that can analyze a graph (with sample feed_dict if need be) and raise the alarm if there is a shape mismatch?
TensorFlow does offer a shape-checking mechanism: the shape argument you can specify when declaring placeholders. By default (shape=None), a placeholder places no constraint on the shape of the data fed to it. But if you do specify the shape when declaring your placeholders, a shape error will be raised whenever you feed data of an incorrect or conflicting shape. For example,
say I declare a placeholder named X and specify its shape argument:
X = tf.placeholder(dtype=tf.float32, shape=[None, 256])
This means that the number of rows of X can vary, but the number of features will always be 256. Now, if I mistakenly feed data with, say, 1000 rows and 20 features, a shape error will be raised.
Also, check this link: https://www.tensorflow.org/api_docs/python/tf/placeholder
Use:
print(str(tf.shape(test_tensor)))  # where test_tensor is whatever your tensor's name is
Documentation available here: https://www.tensorflow.org/api_docs/python/tf/shape

Find all variables that a tensorflow op depends upon

Is there a way to find all variables that a given operation (usually a loss) depends upon?
I would like to use this to then pass this collection into optimizer.minimize() or tf.gradients() using various set().intersection() combinations.
So far I have found op.op.inputs and tried a simple BFS on that, but I never come across the Variable objects returned by tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES) or slim.get_variables().
There does seem to be a correspondence between the corresponding `Tensor.op._id` and `Variables.op._id` fields, but I'm not sure that's something I should rely upon.
Or maybe I shouldn't want to do this in the first place?
I could of course construct my disjoint sets of variables meticulously while building my graph, but then it would be easy to miss something if I change the model.
The documentation for tf.Variable.op is not particularly clear, but it does refer to the crucial tf.Operation used in the implementation of a tf.Variable: any op that depends on a tf.Variable will be on a path from that operation. Since tf.Operation objects are hashable, you can use them as keys of a dict that maps each tf.Operation to the corresponding tf.Variable, and then perform the BFS as before:
import collections

import tensorflow as tf

# Map each variable's underlying operation to the variable itself.
op_to_var = {var.op: var for var in tf.trainable_variables()}

starting_op = ...
dependent_vars = []
queue = collections.deque()
queue.append(starting_op)
visited = set([starting_op])

while queue:
    op = queue.popleft()
    try:
        dependent_vars.append(op_to_var[op])
    except KeyError:
        # `op` is not a variable, so search its inputs (if any).
        for op_input in op.inputs:
            if op_input.op not in visited:
                queue.append(op_input.op)
                visited.add(op_input.op)
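A hypothetical follow-up, connecting this back to the question (the loss tensor and optimizer choice are illustrative, not prescribed by the answer):

loss = ...  # the tensor whose producing op was used as `starting_op`
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1)
# Restrict minimization to exactly the variables the loss depends on.
train_op = optimizer.minimize(loss, var_list=dependent_vars)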