How to change the activation function in DNNClassifier in TensorFlow r0.9? - tensorflow

I couldn't find a way to change the activation function in DNNClassifier. The documentation is not well written. I want to do something like:
classifier = learn.DNNClassifier(hidden_units=[8, 16, 8], n_classes=2, activation_fn=tf.nn.relu)
But there is no activation_fn argument in the constructor, so I can hardly change it.
Can anyone help? Thanks.

So there are a bunch of different activation functions out there. The dictionary below gives you the more common ones; you can find all of the activation functions here: https://www.tensorflow.org/versions/r0.11/api_docs/python/nn.html
# Common activation names mapped to the corresponding tf.nn functions.
activation = {'relu': tf.nn.relu,
              'tanh': tf.nn.tanh,
              'sigmoid': tf.nn.sigmoid,
              'elu': tf.nn.elu,
              'softplus': tf.nn.softplus,
              'softsign': tf.nn.softsign,
              'relu6': tf.nn.relu6,
              'dropout': tf.nn.dropout}  # strictly regularization, not an activation
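For what it's worth, later releases do expose this knob: in TF 1.x, tf.contrib.learn.DNNClassifier (and later tf.estimator.DNNClassifier) accept an activation_fn argument that defaults to tf.nn.relu. A minimal sketch, assuming the 1.x contrib.learn API and a hypothetical numeric feature "x":

import tensorflow as tf
from tensorflow.contrib import learn

# Hypothetical feature column; replace with your real features.
feature_columns = [tf.contrib.layers.real_valued_column("x", dimension=4)]

classifier = learn.DNNClassifier(
    hidden_units=[8, 16, 8],
    feature_columns=feature_columns,
    n_classes=2,
    activation_fn=tf.nn.tanh)  # any entry from the dictionary above works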

Related

How to get activation of a hidden layer in tensorflow.js?

In TensorFlow.js, I have a very simple tf.Sequential model created like this:
let model = tf.sequential();
model.add(tf.layers.dense({inputShape: [784], units: 128, activation: 'relu'}));
model.add(tf.layers.dense({units: 10}));
model.add(tf.layers.softmax());
During prediction time, how can I get the activation of the second tf.layers.dense layer?
Can I just delete model.layers[2] and use model.predict() as normal?
(I know I can do this in advance by defining two model outputs with the functional API, but let's say I have a pre-made tf.Sequential model that I want to inspect the logits of.)
For more complex models, there's an easier way. If model is the original model, you can create a truncated copy with tf.model({inputs: model.inputs, outputs: model.layers[1].output}), so you only need to name the inputs and the layer whose output you want (here the second dense layer, since model.layers[2] is the softmax and its output is no longer the logits).
I figured out how to do this.
Deleting model.layers[2] doesn't work, since apparently model.predict() doesn't depend on that property.
One way to do this is to create a duplicate tf.Sequential model, copying over all the layers (except the last) from the original.
let m2 = tf.sequential();
m2.add(model.layers[0]);
m2.add(model.layers[1]);
Then m2.predict() will output the logits.
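For comparison, the same trick carries over to Python Keras. A sketch, where the model is a hypothetical stand-in for the tf.Sequential above: build a second model that reuses the trained layers but stops at the logits.

import tensorflow as tf

# Hypothetical Python twin of the tf.js model above.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation='relu', input_shape=(784,)),
    tf.keras.layers.Dense(10),
    tf.keras.layers.Softmax(),
])

# Functional-API view that shares the trained weights but stops at the logits.
logits_model = tf.keras.Model(inputs=model.inputs,
                              outputs=model.layers[1].output)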

What is the default activation function of CudnnLSTM in TensorFlow?

What's the default activation function of CudnnLSTM in TensorFlow? How can I set an activation function such as relu? Or is it just linear? I read the documentation, but I did not find it.
For example, the code is below:
lstmcell = tf.contrib.cudnn_rnn.CudnnLSTM(1, encoder_size, direction="bidirectional")
hq, _ = lstmcell(query)
I read the TensorFlow documentation from this link. The constructor is below:
__init__(
    num_layers,
    num_units,
    input_mode=CUDNN_INPUT_LINEAR_MODE,
    direction=CUDNN_RNN_UNIDIRECTION,
    dropout=0.0,
    seed=None,
    dtype=tf.float32,
    kernel_initializer=None,
    bias_initializer=None,
    name=None
)
There is no keyword to set a parameter such as activation="tanh", as in tf.nn.rnn_cell.LSTMCell. So what's the default activation function of CudnnLSTM in TensorFlow, and how can I change it to leaky_relu?
tf.contrib.cudnn_rnn.CudnnLSTM() uses tanh.
This is stated in the Keras GitHub issue here:
https://github.com/keras-team/keras/issues/8510#issuecomment-429255318
and in the NVIDIA documentation:
https://devblogs.nvidia.com/optimizing-recurrent-neural-networks-cudnn-5/
To answer the OP's second question, which was edited in later: there is currently no way to set a custom activation function for CudnnLSTM or CudnnGRU.
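If you need a non-tanh activation such as leaky_relu, the usual workaround is to give up the cuDNN kernel and use a standard cell, which does take an activation argument. A sketch, assuming the TF 1.x API (tf.nn.leaky_relu needs TF >= 1.4); encoder_size and query here are hypothetical:

import tensorflow as tf

encoder_size = 128  # hypothetical hidden size
# The standard LSTMCell accepts an activation; you trade cuDNN speed for flexibility.
cell = tf.nn.rnn_cell.LSTMCell(encoder_size, activation=tf.nn.leaky_relu)

# `query` is assumed to be a [batch, time, features] float tensor.
query = tf.placeholder(tf.float32, [None, None, 64])
hq, _ = tf.nn.dynamic_rnn(cell, query, dtype=tf.float32)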

Does anyone know how we can change the loss function in the DNNClassifier TensorFlow premade estimator?

I want to use a different loss function in the DNNClassifier since my data is highly imbalanced. I want to use
tf.nn.weighted_cross_entropy_with_logits as the loss function, but I guess I need to build a new estimator for it?
Is it possible to change the loss function in the existing pre-baked DNNClassifier via the TensorFlow Estimator API?
You can set the classifier's optimizer and the activation function in the hidden layers, but I don't think you can define a custom loss function.
Since your input data is "highly imbalanced," you can set custom weights by assigning your weights to the constructor's weight_column argument. The documentation is here.
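A sketch of what that looks like, assuming tf.estimator.DNNClassifier from TF 1.x; the feature names and weight values here are hypothetical:

import tensorflow as tf

# Hypothetical numeric feature plus a per-example weight column.
feature_columns = [tf.feature_column.numeric_column('x', shape=[4])]

classifier = tf.estimator.DNNClassifier(
    hidden_units=[8, 16, 8],
    feature_columns=feature_columns,
    n_classes=2,
    weight_column='weight')  # name of the weight feature served by input_fn

def input_fn():
    # Minority-class examples would get a larger weight here, e.g. 10.0 vs 1.0.
    features = {'x': tf.zeros([32, 4]),
                'weight': tf.ones([32, 1])}
    labels = tf.zeros([32], dtype=tf.int32)
    return features, labels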

What activation function is used in the nce_loss?

I am stuck with the nce_loss activation function in the word2vec model. I want to figure out what activation function it uses among all these listed here:
These include smooth nonlinearities (sigmoid, tanh, elu, softplus,
and softsign), continuous but not everywhere differentiable functions
(relu, relu6, crelu and relu_x), and random regularization (dropout).
I have searched for it in the implementation of this function and elsewhere but failed to get any answers.
I suppose it is the relu* series. Any hints please?
None of those. Internally it applies sigmoid cross-entropy to the sampled logits, so there is no activation function to choose.
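You can see this from a typical word2vec-style call; a minimal sketch with hypothetical sizes (TF 1.x API). Note that nce_loss takes weights, biases, labels, and inputs, and there is no activation argument to pass:

import tensorflow as tf

vocab_size, embed_dim, num_sampled = 10000, 128, 64  # hypothetical sizes

embeddings = tf.Variable(tf.random_uniform([vocab_size, embed_dim], -1.0, 1.0))
nce_weights = tf.Variable(tf.truncated_normal([vocab_size, embed_dim], stddev=0.05))
nce_biases = tf.Variable(tf.zeros([vocab_size]))

train_inputs = tf.placeholder(tf.int32, [None])
train_labels = tf.placeholder(tf.int32, [None, 1])
embed = tf.nn.embedding_lookup(embeddings, train_inputs)

# Sampled logits and sigmoid cross-entropy both happen inside the op.
loss = tf.reduce_mean(
    tf.nn.nce_loss(weights=nce_weights, biases=nce_biases,
                   labels=train_labels, inputs=embed,
                   num_sampled=num_sampled, num_classes=vocab_size))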

How to change the activation function in LSTM cell in Tensorflow

I am trying to change the activation function in the LSTM cell in the new 1.0 release of TensorFlow but am having difficulty.
There is tf.contrib.rnn.LSTMCell, which the API states should allow for changing activation functions, but it does not seem to be implemented yet for this cell.
Furthermore, tf.contrib.rnn.BasicLSTMCell, which should also allow for different activation functions, doesn't seem to exist anymore.
Do I just need to wait or is there another solution?
When you instantiate either tf.contrib.rnn.LSTMCell or tf.contrib.rnn.BasicLSTMCell, you can pass the activation function as the activation parameter. If you look at the linked documentation, you'll see, for example, that the constructor's signature for BasicLSTMCell is
__init__(num_units, forget_bias=1.0, input_size=None, state_is_tuple=True, activation=tf.tanh)
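A minimal sketch, assuming the TF 1.x contrib API:

import tensorflow as tf

# Both cells take an activation argument (TF 1.x contrib API).
relu_cell = tf.contrib.rnn.BasicLSTMCell(num_units=128, activation=tf.nn.relu)
elu_cell = tf.contrib.rnn.LSTMCell(num_units=128, activation=tf.nn.elu)

# Use like any other RNN cell; `inputs` is assumed to be a
# [batch, time, features] float tensor.
inputs = tf.placeholder(tf.float32, [None, None, 32])
outputs, state = tf.nn.dynamic_rnn(relu_cell, inputs, dtype=tf.float32)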