tensorflow how to average several IndexedSlicesValue?

I defined an RNN model in TensorFlow, and one of the gradients returned by compute_gradients is of type IndexedSlices while the others are plain Tensors. After I call session.run(compute_gradients ...), the returned value for the IndexedSlices gradient is of type IndexedSlicesValue, and I have two questions:
How could I average several IndexedSlicesValue values?
How can I serialize an IndexedSlicesValue and send it to another machine through a socket?
Thank you very much!

IndexedSlices is really an encoding of a sparse tensor, using a pair of dense tensors. It probably comes from the gradient of a tf.gather operation. There is some API documentation about IndexedSlices here that may help: https://www.tensorflow.org/api_docs/python/tf/IndexedSlices
I don't know of much code that works with IndexedSlices directly; they are typically an internal detail of the gradient code. Depending on the data sizes, the easiest way to work with them might be to convert them into a dense Tensor and process/send that.
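Following that densify-first suggestion, here is a minimal sketch of how one might average several IndexedSlicesValue results on the client side and serialize one for a socket. The helper names are hypothetical, and it assumes each value exposes the indices, values and dense_shape numpy arrays that session.run() returns:

import pickle
import numpy as np

def average_indexed_slices_values(slices_list):
    # Accumulate every sparse gradient into one dense array.
    dense_shape = slices_list[0].dense_shape
    acc = np.zeros(dense_shape, dtype=slices_list[0].values.dtype)
    for s in slices_list:
        # np.add.at accumulates correctly even when an index repeats.
        np.add.at(acc, s.indices, s.values)
    return acc / len(slices_list)

def serialize(s):
    # The three fields are plain numpy arrays, so a tuple of them
    # can be pickled and sent through a socket.
    return pickle.dumps((s.indices, s.values, s.dense_shape))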

Related

Dealing with missing values in tensorflow

I need some guidance on the approach to imputation in TensorFlow/deep learning. I am familiar with how scikit-learn handles imputation, and when I map it onto the TensorFlow ecosystem I would expect to use preprocessing layers in Keras or functions in TensorFlow Transform to do the imputation. However, to my knowledge, these functions do not exist. So I have a few questions:
Is there a reason, tied to how deep learning works, that these functions do not exist (for example, dense sampling needs to be as accurate as possible, and you have a large amount of data, so imputation is never required)?
If it is not #1, how should one handle imputation in TensorFlow? For example, during serving, your input could be missing data, and there's nothing you can do about that. I would think integrating it into preprocessing_fn would be the way to do it.
Is it possible to have the graph do different things during training and serving? For example, train on data with no missing values, and if you encounter missing values during serving, do something like ignore the value or set it to a specified default.
Thank you!
Please refer to Mean imputation for missing data to impute missing values in your data with the mean.
In the example below, x is a feature, represented as a tf.SparseTensor in the preprocessing_fn. In order to convert it to a dense tensor, we compute its mean, and set the mean to be the default value when it is missing from an instance.
Answering your third question, TensorFlow Transform builds transformations into the TensorFlow graph for your model so the same transformations are performed at training and inference time.
For your use case, the example below would work, because the default_value parameter sets values for indices that are not specified; if default_value is not set, it defaults to zero.
Example Code:
def preprocessing_fn(inputs):
  return {
      'x_out': tft.sparse_tensor_to_dense_with_shape(
          inputs['x'], default_value=tft.mean(inputs['x']), shape=[None, 1])
  }

Implementing backprop in numpy

I am trying to implement backprop in numpy by defining a function that performs some kind of operation given an input, a weight matrix and a bias, and returns the output along with a backward function, which can be used to update the weights.
Currently this is my code; however, I think there are some bugs in the derivation, as the gradients for the W1 matrix are too large. Here is a PyTorch implementation of the same thing as a reference.
Any help is appreciated.
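Since the question's code is not reproduced here, the following is only an illustrative sketch of the pattern being described: a layer function that returns its output together with a backward closure. All names and shapes are hypothetical, not the asker's:

import numpy as np

def linear(x, W, b):
    # Forward pass of a dense layer: x is (batch, n_in), W is (n_in, n_out),
    # b is (n_out,). Returns the output and a function for the backward pass.
    out = x @ W + b

    def backward(grad_out):
        # grad_out has the same shape as out: (batch, n_out).
        grad_W = x.T @ grad_out          # (n_in, n_out)
        grad_b = grad_out.sum(axis=0)    # (n_out,)
        grad_x = grad_out @ W.T          # (batch, n_in)
        return grad_x, grad_W, grad_b

    return out, backward

One common cause of gradients that look too large is averaging the loss over the batch in the forward pass but forgetting to divide the upstream gradient by the batch size in the backward pass.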

How does TensorFlow calculate the gradients of an FFT layer?

If I insert the function, e.g., tf.fft(input, name=None), into a neural network, how does TensorFlow calculate the gradients in backpropagation?
I didn't find any documentation about this.
I am using TensorFlow 1.0.
If you're just inserting the tf.fft(...) function in the middle of a model, I'm not certain TensorFlow will even be able to handle a forward pass. If you read the docs on tf.signal.fft (https://www.tensorflow.org/api_docs/python/tf/signal/fft), or even just the tf.fft function header, both require inputs with dtype=tf.complex64 or dtype=tf.complex128.
Perhaps TensorFlow will cast float32 inputs to complex and back again, allowing you to complete a forward pass; I'm not sure. But from what I can gather from the TensorFlow gradient documentation, casting values causes a disconnect between the error gradient and the model parameters, meaning a backward pass won't work.
You could try implementing a custom FFT function which doesn't cast values and see if that works? It's not so easy, though.
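As a rough illustration of the "avoid the cast" idea, here is a minimal sketch written against the modern tf.signal/GradientTape API (rather than the TF 1.0 tf.fft the question uses). It builds the complex input explicitly with tf.complex, so the differentiable path from the real input through the FFT is preserved:

import tensorflow as tf

x = tf.Variable(tf.random.normal([4, 128]))

with tf.GradientTape() as tape:
    # Construct a complex64 tensor instead of casting; tf.complex is
    # differentiable with respect to its real and imaginary parts.
    z = tf.complex(x, tf.zeros_like(x))
    spectrum = tf.signal.fft(z)
    # Reduce back to a real-valued loss so the gradient is real.
    loss = tf.reduce_mean(tf.abs(spectrum) ** 2)

grad = tape.gradient(loss, x)  # defined: the FFT op has a registered gradient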

how to convert pytorch adaptive_avg_pool2d method to keras or tensorflow

I don't know how to convert the PyTorch method adaptive_avg_pool2d to Keras or TensorFlow. Anyone can help?
The PyTorch method is
adaptive_avg_pool2d(14,[14])
I tried to use average pooling, then reshape the tensor in Keras, but got the error:
ValueError: total size of new array must be unchanged
I'm not sure if I understood your question, but in PyTorch, you pass the spatial dimensions to AdaptiveAvgPool2d. For instance, if you want to have an output sized 5x7, you can use nn.AdaptiveAvgPool2d((5,7)).
If you want a global average pooling layer, you can use nn.AdaptiveAvgPool2d(1). In Keras you can just use GlobalAveragePooling2D.
For other output sizes in Keras, you need to use AveragePooling2D, but you can't specify the output shape directly. You need to calculate/define the pool_size, stride, and padding parameters depending on the output shape you want. If you need help with the calculations, check this page of the CS231n course.
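To make that calculation concrete, here is a rough sketch of a Keras/TF equivalent. The helper name is made up, and it only matches PyTorch's adaptive pooling exactly when each input dimension is divisible by the corresponding output dimension:

import tensorflow as tf

def adaptive_avg_pool2d(x, output_size):
    # x is assumed channels-last: (batch, height, width, channels).
    h_in, w_in = x.shape[1], x.shape[2]
    h_out, w_out = output_size
    stride = (h_in // h_out, w_in // w_out)
    kernel = (h_in - (h_out - 1) * stride[0],
              w_in - (w_out - 1) * stride[1])
    return tf.keras.layers.AveragePooling2D(pool_size=kernel, strides=stride)(x)

# e.g. pooling a 28x28 feature map down to 14x14:
# y = adaptive_avg_pool2d(tf.zeros([1, 28, 28, 3]), (14, 14))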

How should I structure my labels for TensorFlow?

I'm trying to use TensorFlow to train output servo commands given an input image.
I plan on using a file as @mrry suggested in this question, with the images like so:
../some/path/some_img.JPG *some_label*
My question is, what are the label formats I can provide to TensorFlow and what structures are suggested?
My data is basically n servo commands from 0-10 seconds. A vector would work great:
[0,2,4,3]
or similarly:
[0,.25,.4,.3]
I couldn't find much about labels in the docs. Can anyone shed any light on TensorFlow labels?
And a very related question is what is the best way to structure these for TensorFlow to properly learn from them?
In TensorFlow, labels are just generic tensors. You can use any kind of tensor to store your labels; in your case, a 1-D tensor with shape (4,) seems to be what you want.
Labels differ from the rest of the data only in how they are used in the computational graph: (usually) labels are consumed only inside the loss function, while the other data is propagated through the whole network. For your problem, a regression model with a 4-dimensional output should work.
Also, look at my newest comment on the (old) question. Using slice_input_producer seems preferable in your case.
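As a concrete illustration of labels entering only through the loss, here is a minimal sketch of a 4-output regression setup. It uses the modern tf.keras API, which postdates the question, and all shapes and data are placeholders:

import numpy as np
import tensorflow as tf

# Each label is a length-4 vector of servo commands, e.g. [0, .25, .4, .3].
images = np.random.rand(100, 64, 64, 3).astype("float32")  # placeholder data
labels = np.random.rand(100, 4).astype("float32")          # placeholder labels

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(4),  # 4 regression outputs, no activation
])

# The labels are only consumed by the loss function (mean squared error);
# everything else flows through the network.
model.compile(optimizer="adam", loss="mse")
model.fit(images, labels, epochs=1)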