tf.function property in pytorch

I'm a beginner in PyTorch, and I have some functions that I need to implement in a network.
My question is: is there any equivalent of tf.function, or should I use a class deriving from nn.Module with variables?
For example, let X be a 10x2 matrix. In pseudo-code:
a = Variable(1.0)
b = Variable(1.0)
Y = a*X[:,0]**2 + b*X[:,1]

In PyTorch you don't need anything like tf.function; you just write normal Python code, because the graph is built dynamically.
Please give a more detailed example (with code) of what you're trying to do if the above doesn't answer your question.
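For instance, here is a minimal sketch of the pseudo-code above using nn.Parameter for the learnable scalars (the module name and random input are just illustrative):
import torch
import torch.nn as nn

class Poly(nn.Module):
    def __init__(self):
        super().__init__()
        # Learnable scalars; nn.Parameter registers them with the module.
        self.a = nn.Parameter(torch.tensor(1.0))
        self.b = nn.Parameter(torch.tensor(1.0))

    def forward(self, X):
        # Ordinary tensor code; autograd records it dynamically.
        return self.a * X[:, 0] ** 2 + self.b * X[:, 1]

Y = Poly()(torch.randn(10, 2))  # shape (10,)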

I have the code below which I want to translate into PyTorch. I'm looking for a PyTorch equivalent of np.vectorize in this case.

I need to translate this code to PyTorch. The code given below uses np.vectorize, and I am looking for a PyTorch equivalent.
class SimplexPotentialProjection(object):
    def __init__(self, potential, inversePotential, strong_convexity_const, precision=1e-10):
        self.inversePotential = inversePotential
        self.gradPsi = np.vectorize(potential)
        self.gradPsiInverse = np.vectorize(inversePotential)
        self.precision = precision
        self.strong_convexity_const = strong_convexity_const
The doc for numpy.vectorize clearly states that:
The vectorize function is provided primarily for convenience, not for performance. The implementation is essentially a for loop.
Therefore, in order to convert your numpy code to pytorch you'll simply need to apply potential and inversePotential in a loop over their tensor arguments.
However, that might be very inefficient. You would be better off re-implementing your functions to act "natively", in a vectorized manner, on tensors.
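For example, a minimal sketch with a purely hypothetical potential (the entropy-style functions below are just for illustration):
import torch

# Hypothetical pair: the gradient of psi(x) = x*log(x) and its inverse.
# Written with torch ops, they already apply elementwise to whole
# tensors, so no np.vectorize-style wrapper is needed.
def potential(x):
    return torch.log(x) + 1

def inversePotential(y):
    return torch.exp(y - 1)

x = torch.rand(5)
grad_psi = potential(x)              # vectorized over the tensor
x_back = inversePotential(grad_psi)  # recovers x up to float error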

How to use tensorflow's FFT?

I am having some trouble reconciling my FFT results from MATLAB and TF. The results are actually very different. Here is what I have done:
1). I would attach my data file here but didn't find a way to do so. Anyway, my data is stored in a .mat file, and the variable we will work with is called 'TD'. In MATLAB, I first subtract the mean of the data, and then perform fft:
f_hat = TD-mean(TD);
x = fft(f_hat);
2). In TF, I use tf.math.reduce_mean to calculate the mean, and it differs from MATLAB's mean only on the order of 10^-8. So in TF I have:
mean_TD = tf.reduce_mean(TD)
f_hat_int = TD - mean_TD
f_hat_tf = tf.dtypes.cast(f_hat_int,tf.complex64)
x_tf = tf.signal.fft(f_hat_tf)
So up until 'f_hat' and 'f_hat_tf', the difference is very slight and is caused only by the difference in the mean. However, x and x_tf are very different. Did I use TF's FFT incorrectly?
Thanks!
[Picture showing the difference]
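One thing worth checking, as a hedged guess rather than a confirmed fix: tf.complex64 stores single-precision components, while MATLAB's fft runs in double precision by default, so the comparison is fairer in complex128. A minimal sketch (the random TD below just stands in for the .mat data):
import numpy as np
import tensorflow as tf

# Stand-in for the 'TD' variable from the .mat file.
TD = tf.constant(np.random.randn(1024), dtype=tf.float64)

f_hat = TD - tf.reduce_mean(TD)
# complex128 keeps double precision end to end, matching MATLAB;
# complex64 would truncate to float32 before the FFT.
x_tf = tf.signal.fft(tf.cast(f_hat, tf.complex128))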

Learning to rank: how to save the model

I successfully managed to implement learning to rank by following the tutorial TF-Ranking for sparse features using the ANTIQUE question answering dataset.
Now my goal is to save the learned model to disk so that I can easily load it later without training again. According to the Tensorflow docs, the estimator.export_saved_model() method seems to be the way to go. But I can't wrap my head around how to tell Tensorflow what my feature structure looks like. According to the docs here, the easiest way seems to be calling tf.estimator.export.build_parsing_serving_input_receiver_fn(), which returns the required input receiver function that I have to pass to the export_saved_model function. But how do I tell Tensorflow what the features of my learning-to-rank model look like?
From my current understanding, the model has context feature specs and example feature specs, so I guess I somehow have to combine those two specs into one feature description, which I can then pass to the build_parsing_serving_input_receiver_fn function?
So I think you are on the right track.
You can get a build_ranking_serving_input_receiver_fn like this (substitute context_feature_columns(...) and example_feature_columns(...) with the definitions you probably already have for creating the context and example structures of your training data):
def example_serving_input_fn():
    context_feature_spec = tf.feature_column.make_parse_example_spec(
        context_feature_columns(_VOCAB_PATHS).values())
    example_feature_spec = tf.feature_column.make_parse_example_spec(
        list(example_feature_columns(_VOCAB_PATHS).values()))
    servingInputReceiver = tfr.data.build_ranking_serving_input_receiver_fn(
        data_format=tfr.data.ELWC,
        context_feature_spec=context_feature_spec,
        example_feature_spec=example_feature_spec,
        list_size=_LIST_SIZE,
        receiver_name="input_ranking_data",
        default_batch_size=None)
    return servingInputReceiver
And then pass this to export_saved_model like this:
ranker.export_saved_model('path_to_save_model', example_serving_input_fn())
(ranker here is a tf.estimator.Estimator, maybe you called this 'estimator' in your code)
ranker = tf.estimator.Estimator(
    model_fn=model_fn,
    model_dir=_MODEL_DIR,
    config=run_config)
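Once exported, loading the model back for inference should look roughly like this; a hedged sketch, since the timestamped subdirectory name below is made up (export_saved_model creates one per export):
import tensorflow as tf

# export_saved_model writes into a timestamped subdirectory.
loaded = tf.saved_model.load('path_to_save_model/1600000000')
# Estimator exports expose a 'serving_default' signature; with the
# parsing receiver above it expects serialized ELWC protos under
# the key given as receiver_name ("input_ranking_data").
infer = loaded.signatures['serving_default']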

How to implement the tensor product of two layers in Keras/Tf

I'm trying to set up a DNN for classification, and at one point I want to take the tensor product of a vector with itself. I'm using the Keras functional API at the moment, but it isn't immediately clear that there is a layer that already does this.
I've been attempting to use a Lambda layer and numpy to do this, but it's not working.
A bit of googling reveals tf.linalg.LinearOperatorKronecker, which does not seem to work either.
Here's what I've tried:
I have a layer called part_layer whose output is a single vector (rank one tensor).
keras.layers.Lambda(lambda x_array: np.outer(x_array, x_array))(part_layer)
Ideally I would want this to take a vector of the form [1,2] and give me [[1,2],[2,4]].
But the error I'm getting suggests that the np.outer function is not recognizing its arguments:
AttributeError: 'numpy.ndarray' object has no attribute '_keras_history'
Any ideas on what to try next, or if there is a simple function to use?
You can use two operations:
If you want to consider the batch size, you can use the Dot layer.
Otherwise, you can use the dot function.
In both cases the code should look like this:
dot_lambda = lambda x_array: tf.keras.layers.dot([x_array, x_array], axes=1)
# dot_lambda = lambda x_array: tf.keras.layers.Dot(axes=1)([x_array, x_array])
keras.layers.Lambda(dot_lambda)(part_layer)
Hope this helps.
Use tf.tensordot(x_array, x_array, axes=0) to achieve what you want. For example, the expression print(tf.tensordot([1,2], [1,2], axes=0)) gives the desired result: [[1,2],[2,4]].
Keras/Tensorflow needs to keep a history of the operations applied to tensors in order to perform the optimization. Numpy has no notion of history, so using it in the middle of a layer is not allowed. tf.tensordot performs the same operation but keeps the history.
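To make that concrete, here is a minimal sketch of the Lambda layer, using tf.einsum rather than tf.tensordot so the outer product is taken per sample across the batch (the input width of 2 is just illustrative):
import tensorflow as tf
from tensorflow import keras

inp = keras.Input(shape=(2,))
# Per-sample outer product: for each row x in the batch, x_i * x_j.
outer = keras.layers.Lambda(lambda x: tf.einsum('bi,bj->bij', x, x))(inp)
model = keras.Model(inp, outer)

print(model(tf.constant([[1., 2.]])))  # [[[1. 2.] [2. 4.]]]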

optimize.root with a matrix equation

I am trying to solve the following linear system using optimize.root
AX = b
with the following code:
import numpy as np
import scipy.optimize

A = [[0, 1, 0],
     [2, 1, 0],
     [1, 4, 1]]

def foo(X):
    b = np.matrix([2, 1, 1])  # note: np.matrix keeps a 2-D (1, 3) shape
    out = np.dot(A, X) - b
    return out.tolist()

sol = scipy.optimize.root(foo, [0, 0, 0])
I know that I can simply use numpy.linalg.solve to do this easily. But I am actually trying to solve a nonlinear system that is in matrix form (see my question here), so I need to find a way to make this method work. To do that, I am trying to solve the problem in this simple case first. But I get the error
TypeError: fsolve: there is a mismatch between the input and output shape of the 'func' argument 'foo'.Shape should be (3,) but it is (1, 3).
From what I have read in other similar stackoverflow questions, this happens because the output of the foo function is not compatible with the shape of the initial guess [0,0,0].
Surely there is a way to solve this equation using scipy.optimize.root. Can anyone please help?
(I'm assuming the capital B in your .dot is a typo for A.)
Try using np.array for b. np.matrix creates a "row vector", i.e. shape (1, 3), whereas your initial guess has shape (3,).
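A minimal sketch of that fix, with the same A and b (now a plain np.array) and the residual returned directly:
import numpy as np
from scipy import optimize

A = np.array([[0, 1, 0],
              [2, 1, 0],
              [1, 4, 1]])
b = np.array([2, 1, 1])  # shape (3,), matching the initial guess

def foo(X):
    return A.dot(X) - b  # residual, shape (3,)

sol = optimize.root(foo, [0, 0, 0])
print(sol.x)  # same result as np.linalg.solve(A, b)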