I need some guidance on the approach to imputation in tensorflow/deep learning. I am familiar with how scikit-learn handles imputation, and when I map it to the tensorflow ecosystem, I would expect to use preprocessing layers in keras or functions in tensorflow transform to do the imputation. However, at least to my knowledge, these functions do not exist. So I have a few questions:
Is there a reason tied to how deep learning works that these functions do not exist (for example, dense sampling needs to be as accurate as possible, and you have a large amount of data, so imputation is never required)?
If it is not #1, how should one handle imputation in tensorflow? For example, during serving, your input could be missing data, and there's nothing you can do about that. I would think integrating it into preprocessing_fn would be the thing to do.
Is it possible to have the graph do different things during training and serving? For example, train on data with no missing values, and if you encounter missing values during serving, do something like ignore the value or set it to a specified default.
Thank you!
Please refer to Mean imputation for missing data to impute the missing values in your data with the mean.
In the example below, x is a feature, represented as a tf.SparseTensor in the preprocessing_fn. In order to convert it to a dense tensor, we compute its mean, and set the mean to be the default value when it is missing from an instance.
Answering your third question, TensorFlow Transform builds transformations into the TensorFlow graph for your model so the same transformations are performed at training and inference time.
For the use case you mention, the imputation example below would work, because the default_value param supplies the value for any indices that are not specified. If the default_value param is not set, it defaults to zero.
Example Code:
import tensorflow_transform as tft

def preprocessing_fn(inputs):
    # Densify the sparse feature, filling missing entries with its mean.
    return {
        'x_out': tft.sparse_tensor_to_dense_with_shape(
            inputs['x'], default_value=tft.mean(inputs['x']), shape=[None, 1])
    }
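If you are not using TensorFlow Transform, a similar effect can be achieved directly in a Keras graph, so the same imputation runs at training and serving time. Below is a minimal sketch, assuming missing values arrive as NaN and that the feature mean has already been computed over the training set (the constant and layer shapes are just illustrative):

import tensorflow as tf

FEATURE_MEAN = 3.2  # hypothetical value, precomputed over the training data

def impute_with_mean(x):
    # Replace NaNs with the precomputed mean; because this is part of the
    # graph, it behaves identically at training and serving time.
    return tf.where(tf.math.is_nan(x), tf.fill(tf.shape(x), FEATURE_MEAN), x)

inputs = tf.keras.Input(shape=(1,))
imputed = tf.keras.layers.Lambda(impute_with_mean)(inputs)
outputs = tf.keras.layers.Dense(1)(imputed)
model = tf.keras.Model(inputs, outputs)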
I have 2 different models (Model A is a Keras .h5 and Model B is a Torch .pth). They need to be served with TFServing, so I converted both models to TensorFlow (.pb) format for serving.
I managed to serve them and get outputs, but when I compared the serving results with the original models' outputs (from the Keras and Torch models), I found that the results were wrong. The prediction score for the same image on the server side is less reliable than the original model's output. I can't tell whether this comes from a fault in the model conversion or something else.
How could I fix it?
The different results are due to different default parameters in the layers and optimizers. For example, in PyTorch the decay rate of batch norm is 0.9, whereas in Keras it is 0.99. There may be other variations in the default parameters as well.
I would also recommend checking the weight initializations, as they might differ between the two frameworks. Thank you!
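For the batch-norm point specifically, here is a small sketch of how one might align the defaults between the two frameworks (the channel count and epsilon below are just placeholders; note that PyTorch's momentum is defined as 1 minus the decay rate):

import torch.nn as nn
from tensorflow.keras.layers import BatchNormalization

# PyTorch: running_stat = (1 - momentum) * running_stat + momentum * batch_stat,
# so the default momentum=0.1 corresponds to a decay rate of 0.9.
bn_torch = nn.BatchNorm2d(64, momentum=0.1, eps=1e-5)

# Keras: moving_stat = momentum * moving_stat + (1 - momentum) * batch_stat,
# so momentum=0.9 and epsilon=1e-5 mirror the PyTorch layer above
# (the Keras defaults are 0.99 and 1e-3).
bn_keras = BatchNormalization(momentum=0.9, epsilon=1e-5)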
If I insert the function, e.g., tf.fft(input, name=None), into a neural network, how does TensorFlow calculate the gradients in backpropagation?
I didn't find any documentation about this.
I am using TensorFlow 1.0.
If you're just inserting tf.fft(...) in the middle of a model, I'm not certain TensorFlow will even be able to handle a forward pass. If you read the docs on tf.signal.fft (https://www.tensorflow.org/api_docs/python/tf/signal/fft), or even just the tf.fft function header, both require inputs with dtype=tf.complex64 or dtype=tf.complex128. Perhaps TensorFlow will cast float32 inputs to complex and back again, allowing you to complete a forward pass; I'm not sure. But from what I can gather from the TensorFlow gradient documentation, casting values causes a disconnect between the error gradient and the model parameters, meaning a backward pass won't work. You could try implementing a custom FFT function which doesn't cast values and see if that works? It's not so easy though.
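As a quick way to probe this yourself (in a current, eager TensorFlow rather than 1.0), you could check whether a gradient actually flows back through the cast and the FFT; a None result would indicate a disconnect. A rough sketch:

import tensorflow as tf

x = tf.Variable(tf.random.normal([8]))          # real-valued parameters
with tf.GradientTape() as tape:
    z = tf.cast(x, tf.complex64)                # real -> complex cast
    spectrum = tf.signal.fft(z)                 # forward FFT
    loss = tf.reduce_sum(tf.abs(spectrum))      # real-valued loss
grads = tape.gradient(loss, x)
print(grads)  # None here would mean the backward pass is disconnected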
I am looking for a proper or best way to get variable importance in a Neural Network created with Keras. The way I currently do it is I just take the weights (not the biases) of the variables in the first layer with the assumption that more important variables will have higher weights in the first layer. Is there another/better way of doing it?
Since everything gets mixed up along the network, the first layer alone can't tell you about the importance of each variable. The following layers can also increase or decrease a variable's importance, and even make one variable affect the importance of another. Every single neuron in the first layer gives each variable a different importance too, so it's not that straightforward.
I suggest you call model.predict(inputs) with inputs that are arrays of zeros, setting only the variable you want to study to 1, as sketched below.
That way, you see the result for each variable alone. Even so, this will still not help you with cases where one variable increases the importance of another variable.
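A rough sketch of that probing idea, assuming a trained Keras model named model (the feature count is a placeholder):

import numpy as np

n_features = 10                   # however many input variables your model has
probes = np.eye(n_features)       # one row per variable: all zeros except that variable set to 1
scores = model.predict(probes)    # the network's response to each variable in isolation
for i, score in enumerate(scores):
    print(f'variable {i}: {score}')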
*Edited to include relevant code to implement permutation importance.
I answered a similar question at Feature Importance Chart in neural network using Keras in Python. It implements what Teque5 mentioned above, namely shuffling the variable within your sample, i.e. permutation importance, using the ELI5 package.
from keras.wrappers.scikit_learn import KerasClassifier, KerasRegressor
import eli5
from eli5.sklearn import PermutationImportance
def base_model():
    model = Sequential()
    ...
    return model

X = ...
y = ...

my_model = KerasRegressor(build_fn=base_model, **sk_params)
my_model.fit(X, y)

perm = PermutationImportance(my_model, random_state=1).fit(X, y)
eli5.show_weights(perm, feature_names=X.columns.tolist())
It is not that simple. For example, in later layers the variable's contribution could be reduced to 0.
I'd have a look at LIME (Local Interpretable Model-Agnostic Explanations). The basic idea is to set some inputs to zero, pass it through the model and see if the result is similar. If yes, then that variable might not be that important. But there is more about it and if you want to know it, then you should read the paper.
See marcotcr/lime on GitHub.
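For a tabular Keras model, a minimal LIME sketch might look like the following (X_train, feature_names, model and the 10-feature limit are assumptions; the ravel() is there because LIME's regression mode expects a 1-D prediction array):

from lime.lime_tabular import LimeTabularExplainer

explainer = LimeTabularExplainer(X_train, feature_names=feature_names, mode='regression')
# Explain a single prediction; LIME perturbs the inputs around this instance.
exp = explainer.explain_instance(X_train[0], lambda x: model.predict(x).ravel(), num_features=10)
print(exp.as_list())  # per-feature contributions for this one prediction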
This is a relatively old post with relatively old answers, so I would like to offer another suggestion of using SHAP to determine feature importance for your Keras models. SHAP also allows you to process Keras models using layers requiring 3d input like LSTM and GRU while eli5 cannot.
To avoid double-posting, I would like to point you to my answer to a similar question on Stack Overflow about using SHAP.
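For completeness, a minimal SHAP sketch for a Keras model might look like this (the background and sample sizes, and the variable names X_train, X_test, feature_names, model, are assumptions):

import shap

background = X_train[:100]                      # small background sample for the expectations
explainer = shap.DeepExplainer(model, background)
shap_values = explainer.shap_values(X_test[:100])
shap.summary_plot(shap_values, X_test[:100], feature_names=feature_names)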
I'm trying to use TensorFlow to train output servo commands given an input image.
I plan on using a file as @mrry suggested in this question, with the images like so:
../some/path/some_img.JPG *some_label*
My question is, what are the label formats I can provide to TensorFlow and what structures are suggested?
My data is basically n servo commands from 0-10 seconds. A vector would work great:
[0,2,4,3]
or similarly:
[0,.25,.4,.3]
I couldn't find much about labels in the docs. Can anyone shed any light on TensorFlow labels?
And a very related question is what is the best way to structure these for TensorFlow to properly learn from them?
In TensorFlow, labels are just generic tensors. You can use any kind of tensor to store your labels. In your case, a 1-D tensor with shape (4,) seems to be what you want.
Labels differ from the rest of the data only in how they are used in the computational graph. (Usually) labels should only be used inside the loss function, while you propagate the other data through the whole network. For your problem, a regression output with 4 values should work.
Also, look at my newest comment on the (old) question. Using the slice_input_producer seems to be preferable in your case.
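As a concrete illustration of a 4-value regression output (using the Keras API, which is the easier route today; the image size and layer sizes below are just placeholders):

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation='relu', input_shape=(128, 128, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(4)          # one output per servo command
])
model.compile(optimizer='adam', loss='mse')
# labels: a float array of shape (num_examples, 4), e.g. [[0, .25, .4, .3], ...]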
Is there a canonical way to reuse computations from a previously-supplied placeholder in TensorFlow? My specific use case:
supply many inputs (using one placeholder) simultaneously, all of which are fed through a network to obtain smaller representations
define a loss based on various combinations of these smaller representations
train on one batch at a time, where each batch uses some subset of the inputs, without recomputing the smaller representations
Here is the goal in code; it is defective because the same computations are carried out again and again:
X_in = some_fixed_data
combinations_in = large_set_of_combination_indices
for combination_batch_in in batches(combinations_in, batch_size=128):
    session.run(train_op, feed_dict={X: X_in, combinations: combination_batch_in})
Thanks.
The canonical way to share computed values across sess.run() calls is to use a Variable. In this case, you could set up your graph so that when the placeholders are fed, they compute a new value of the representation that is saved into a Variable. A separate portion of the graph reads those Variables to compute the loss. This will not work if you need to compute gradients through the part of the graph that computes the representation; computing those gradients will require recomputing every op in the encoder.
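A rough sketch of that Variable-caching setup in TF1-style graph code. The sizes and the dense layers standing in for the encoder and the loss are hypothetical; note that gradients stop at the cached Variable, so the encoder itself is not trained this way:

import tensorflow as tf

num_inputs, input_dim, rep_dim = 1000, 32, 8            # placeholder sizes

X = tf.placeholder(tf.float32, [num_inputs, input_dim])
reps = tf.layers.dense(X, rep_dim)                      # stand-in for the expensive encoder

# Non-trainable cache holding one representation per input.
rep_cache = tf.Variable(tf.zeros([num_inputs, rep_dim]), trainable=False)
cache_op = tf.assign(rep_cache, reps)                   # run once per new X_in

combinations = tf.placeholder(tf.int32, [None, 2])      # indices into the cached representations
pair_reps = tf.gather(rep_cache, combinations)          # read cached values, no recomputation
scores = tf.layers.dense(tf.reshape(pair_reps, [-1, 2 * rep_dim]), 1)
loss = tf.reduce_mean(tf.square(scores))                # stand-in loss over the combinations
train_op = tf.train.AdamOptimizer().minimize(loss)      # no gradients reach the encoder

# sess.run(cache_op, feed_dict={X: X_in})               # fill the cache once per new X_in
# for combination_batch_in in batches(combinations_in, batch_size=128):
#     sess.run(train_op, feed_dict={combinations: combination_batch_in})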
This is the kind of thing that should be solved automatically with CSE (common subexpression elimination). I'm not sure what the support in TensorFlow is right now; it might be kind of spotty, but there's an optimizer_do_cse flag in the Graph options which defaults to false, and you can set it to true using GraphConstructorOptions (there's a C++ example of using GraphConstructorOptions; sorry, I couldn't find a Python one).
If that doesn't work, you could do "manual CSE", i.e., figure out which part is being needlessly recomputed, factor it out into a separate Tensor, and reference that tensor in all the calculations.