How could I limit the range of a variable in TensorFlow?

I want to train a model using TensorFlow. I have the following variable, which I want the model to learn:
Mj = tf.get_variable('Mj_', dtype=tf.float32, shape=[500, 4], initializer=tf.random_uniform_initializer(maxval=1, minval=0))
I want the resulting value of Mj to be between 0 and 1. How can I add this constraint?

The proper way to do this would be to pass the clipping function tf.clip_by_value as the constraint argument to the tf.Variable constructor:
Mj = tf.get_variable('Mj_',
                     dtype=tf.float32,
                     shape=[500, 4],
                     initializer=tf.random_uniform_initializer(maxval=1, minval=0),
                     constraint=lambda t: tf.clip_by_value(t, 0, 1))
From the docs of tf.Variable:
constraint: An optional projection function to be applied to the
variable after being updated by an Optimizer (e.g. used to implement
norm constraints or value constraints for layer weights). The function
must take as input the unprojected Tensor representing the value of
the variable and return the Tensor for the projected value (which must
have the same shape). Constraints are not safe to use when doing
asynchronous distributed training.
Or you might want to consider simply adding a tf.sigmoid nonlinearity on top of your variable:
Mj=tf.get_variable('Mj_',dtype=tf.float32, shape=[500,4])
Mj_out=tf.sigmoid(Mj)
This will transform your variable to lie in the range (0, 1).

I think the function you're looking for is tf.clip_by_value.
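For instance, a minimal TF 1.x sketch of applying the clip manually after each optimizer step (the toy loss here is an assumption for illustration, not from the question):
import tensorflow as tf

Mj = tf.get_variable('Mj_', dtype=tf.float32, shape=[500, 4],
                     initializer=tf.random_uniform_initializer(maxval=1, minval=0))
loss = tf.reduce_sum(tf.square(Mj - 2.0))  # toy loss that pushes Mj above 1
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
clip_op = tf.assign(Mj, tf.clip_by_value(Mj, 0, 1))  # project back into [0, 1]

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(10):
        sess.run(train_op)
        sess.run(clip_op)  # run the projection after every update
    print(sess.run(tf.reduce_max(Mj)))  # stays <= 1.0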

Related

How to implement the tensor product of two layers in Keras/Tf

I'm trying to set up a DNN for classification and at one point I want to take the tensor product of a vector with itself. I'm using the Keras functional API at the moment but it isn't immediately clear that there is a layer that does this already.
I've been attempting to use a Lambda layer with NumPy to do this, but it isn't working.
Doing a bit of googling reveals
tf.linalg.LinearOperatorKronecker, which does not seem to work either.
Here's what I've tried:
I have a layer called part_layer whose output is a single vector (rank one tensor).
keras.layers.Lambda(lambda x_array: np.outer(x_array, x_array))(part_layer)
Ideally I would want this to take a vector of the form [1,2] and give me [[1,2],[2,4]].
But the error I'm getting suggests that the np.outer function is not recognizing its arguments:
AttributeError: 'numpy.ndarray' object has no attribute '_keras_history'
Any ideas on what to try next, or if there is a simple function to use?
You can use one of two operations:
If you want to take the batch size into account, you can use the Dot layer class.
Otherwise, you can use the dot function.
In both cases the code should look like this:
dot_lambda = lambda x_array: tf.keras.layers.dot([x_array, x_array], axes=1)
# dot_lambda = lambda x_array: tf.keras.layers.Dot(axes=1)([x_array, x_array])
keras.layers.Lambda(dot_lambda)(part_layer)
Hope this helps.
Use tf.tensordot(x_array, x_array, axes=0) to achieve what you want. For example, the expression print(tf.tensordot([1,2], [1,2], axes=0)) gives the desired result: [[1,2],[2,4]].
Keras/TensorFlow needs to keep a history of the operations applied to tensors in order to perform the optimization. NumPy has no notion of history, so using it in the middle of a layer is not allowed. tf.tensordot performs the same operation but keeps the history.
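As a minimal sketch of doing this inside a Lambda layer: when part_layer carries a batch dimension (shape [batch, d]), tf.tensordot with axes=0 would also cross the batch axes, so this sketch uses tf.einsum for a per-sample outer product instead (the input shape and test values are made up for illustration):
import numpy as np
import tensorflow as tf
from tensorflow import keras

inp = keras.layers.Input(shape=(2,))
# Per-sample outer product: for each row v, compute v[:, None] * v[None, :].
outer = keras.layers.Lambda(lambda v: tf.einsum('bi,bj->bij', v, v))(inp)
model = keras.models.Model(inp, outer)

print(model.predict(np.array([[1.0, 2.0]], dtype=np.float32)))
# [[[1. 2.]
#   [2. 4.]]]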

Kernel's hyper-parameters; initialization and setting bounds

I think many other people like me might be interested in how they can use GPflow for their particular problems. The key is how customizable GPflow is, and a good example would be very helpful.
In my case, I read and tried lots of comments on raised issues without any real success. Setting kernel model parameters is not straightforward (creating them with default values and then changing them via the delete-object method). The transform method is vague.
It would be really helpful if you could add an example showing how one can initialize and set bounds of an anisotropic kernel (length-scale values and bounds, variances, ...), and especially how to add observation error (as an array-like alpha parameter).
If you just want to set a value, then you can do
model = gpflow.models.GPR(np.zeros((1, 1)),
                          np.zeros((1, 1)),
                          gpflow.kernels.RBF(1, lengthscales=0.2))
Alternatively
model = gpflow.models.GPR(np.zeros((1, 1)),
                          np.zeros((1, 1)),
                          gpflow.kernels.RBF(1))
model.kern.lengthscales = 0.2
If you want to change the transform, you either need to subclass the kernel, or you can also do
with gpflow.defer_build():
    model = gpflow.models.GPR(np.zeros((1, 1)),
                              np.zeros((1, 1)),
                              gpflow.kernels.RBF(1))
    transform = gpflow.transforms.Logistic(0.1, 1.0)
    model.kern.lengthscales = gpflow.params.Parameter(0.3, transform=transform)
model.compile()
You need defer_build to stop the graph from being compiled before you've changed the transform: with this approach, compilation of the TensorFlow graph is delayed until the explicit model.compile(), so the graph is built with the intended bounding transform.
Using an array parameter for the likelihood variance is outside the scope of GPflow. For what it's worth (and because it has been asked about before), that particular model is especially problematic, as it is not clear how test points would be defined.
Setting kernel parameters can be done using the .assign() function, or through direct assignment. See the notebook https://github.com/GPflow/GPflow/blob/develop/doc/source/notebooks/understanding/tf_graphs_and_sessions.ipynb. You do not need to delete a parameter to assign a new value to it.
If you want to have per-datapoint noise, you will need to implement your own custom likelihood, which you can do by taking Gaussian likelihood in likelihoods.py as an example.
If by "bounds" you mean limiting the optimisation range for a parameter, you can use the Logistic transform. If you want to pass in a custom transformation for a parameter, you can pass a constructed Parameter object into constructors with a custom transform. Alternatively you can assign a newly created Parameter with a new transform to the model.
Here is more information on how to access and change GPflow parameters: see the documentation on viewing, getting and setting parameters.
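For instance, a minimal sketch of the assignment route against the GPflow 1.x API used in this thread (the kernel choice and values are made up for illustration):
import numpy as np
import gpflow

model = gpflow.models.GPR(np.zeros((1, 1)), np.zeros((1, 1)),
                          gpflow.kernels.RBF(1))

# Direct assignment and .assign() both set a new value in place;
# nothing needs to be deleted first.
model.kern.lengthscales = 0.5
model.kern.variance.assign(2.0)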
An extra bit for user1018464's answer about replacing the transform of an existing parameter: changing the transform is a bit tricky, since you can't change it once the model has been compiled in TensorFlow.
E.g.
likelihood = gpflow.likelihoods.Gaussian()
likelihood.variance.transform = gpflow.transforms.Logistic(1., 10.)
----
GPflowError: Parameter "Gaussian/variance" has already been compiled.
Instead you have to reset the GPflow object:
likelihood = gpflow.likelihoods.Gaussian() # All tensors compiled
likelihood.clear()
likelihood.variance.transform = gpflow.transforms.Logistic(2, 5)
likelihood.variance = 2.5
likelihood.compile()

What's the meaning of grad_ys of tf.gradients? [duplicate]

I want to understand the grad_ys parameter in tf.gradients. I've seen it used like a multiplier of the true gradient, but it's not clear from the definition. Mathematically, how would the whole expression look?
The ys are summed up to make a single scalar y, and then tf.gradients computes dy/dx for each x in xs.
grad_ys represents the "starting" backprop value. It defaults to a vector of 1s, but a different value can be used when you want to chain several tf.gradients calls together: you can pass the output of a previous tf.gradients call into grad_ys to continue the backprop flow.
For formal definition, look at the chained expression in Reverse Accumulation here: https://en.wikipedia.org/wiki/Automatic_differentiation#Reverse_accumulation
The term corresponding to dy/dw3 * dw3/dw2 in TensorFlow is a vector of 1s (think of it as TensorFlow wrapping the cost with a dummy identity op). When you specify grad_ys, this term is replaced with grad_ys instead of the vector of 1s.
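As a concrete illustration, a minimal TF 1.x sketch (the toy values are made up): tf.gradients(ys, xs, grad_ys) computes the vector-Jacobian product, i.e. the sum over i of grad_ys[i] * d ys[i]/dx.
import tensorflow as tf

x = tf.constant([1.0, 2.0])
y = x * x  # y = [1, 4]; the elementwise derivative dy/dx is [2, 4]

# grad_ys defaults to a vector of ones -> [2., 4.]
g_default = tf.gradients(y, x)[0]
# A custom grad_ys scales each component of the backprop seed -> [3*2, 1*4] = [6., 4.]
g_seeded = tf.gradients(y, x, grad_ys=[tf.constant([3.0, 1.0])])[0]

with tf.Session() as sess:
    print(sess.run([g_default, g_seeded]))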

Taking the gradient in TensorFlow with tf.gradients

I am using this TensorFlow function to get the Jacobian of my function. I came across several problems:
1. The TensorFlow documentation contradicts itself in the following two paragraphs, if I am not mistaken:
gradients() adds ops to the graph to output the partial derivatives of ys with respect to xs. It returns a list of Tensor of length len(xs) where each tensor is the sum(dy/dx) for y in ys.
Returns:
A list of sum(dy/dx) for each x in xs.
According to my test, it does in fact return a vector of length len(ys), which is the sum(dy/dx) for each x in xs.
2. I do not understand why they designed it in a way that the return is the sum of the columns (or rows, depending on how you define your Jacobian).
3. How can I really get the Jacobian?
4. In the loss I need the partial derivative of my function with respect to the input (x), but when I am optimizing with respect to the network weights, I define x as a placeholder whose value is fed later, while the weights are variables. In this case, can I still define the symbolic derivative of the function with respect to the input (x) and put it in the loss? (Optimizing that loss with respect to the weights will later bring in a second-order derivative of the function.)
1. I think you are right and there is a typo there; it was probably meant to be "of length len(ys)".
2. For efficiency. I can't explain exactly the reasoning, but this seems to be a pretty fundamental characteristic of how TensorFlow handles automatic differentiation. See issue #675.
3. There is no straightforward way to get the Jacobian matrix in TensorFlow. Take a look at this answer and again issue #675. Basically, you need one call to tf.gradients per column/row; see the sketch after this list.
4. Yes, of course. You can compute whatever gradients you want; there is no real difference between a placeholder and any other operation. There are a few operations that do not have a gradient because it is not well defined or not implemented (in which case it will generally return 0), but that's all.
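To make point 3 concrete, a minimal TF 1.x sketch of the one-call-per-row approach (the toy function here is made up for illustration):
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[3])
y = tf.stack([x[0] * x[1], x[1] * x[2]])  # toy function with len(ys) = 2

# One tf.gradients call per component of y; each call yields one Jacobian row.
jacobian = tf.stack([tf.gradients(y[i], x)[0] for i in range(2)])

with tf.Session() as sess:
    print(sess.run(jacobian, feed_dict={x: [1.0, 2.0, 3.0]}))
    # [[2. 1. 0.]
    #  [0. 3. 2.]]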

Weights/bias initialization with constants in CNTK

Is there a way to initialize weights/bias with constant matrices? E.g., instead of Dense(hidden_layers_dim_1, init=he_normal()), can I do Dense(hidden_layers_dim_1, init=W), where W is a float matrix?
Update-1:
Dense layers now seem to accept a NumPy array and constant values as initial weights (as of cntk-2.0rc3), as per the parameter documentation.
Layers cannot take initial weight values yet. However, you can pass in an initial bias value using the init_bias named argument in any appropriate layer. If you must use an initial weight value, I guess you have to create the parameters and define your own network, as you have done, i.e.:
features = <your_input_var>
W = cntk.Parameter((<proper_shape>), init=<initial_value>)
B = cntk.Parameter((<proper_shape>), init=<initial_value>)
output = cntk.times(features, W) + B
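Following up on the update above, a minimal sketch of passing a constant NumPy array as initial weights to Dense (assuming cntk-2.0rc3 or later; the shapes and values are made up for illustration):
import numpy as np
import cntk as C

W0 = np.ones((20, 10), dtype=np.float32) * 0.5  # made-up constant weight matrix
features = C.input_variable(20)

# Per the update, Dense reportedly accepts a NumPy array (or a scalar
# constant) as init; init_bias sets the initial bias value.
layer = C.layers.Dense(10, init=W0, init_bias=0.1)(features)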