TensorFlow seq2seq with multiple outputs

I started working with TensorFlow not long ago. I'm working on the seq2seq model, using the seq2seq example code.
I want to modify the seq2seq model code to get the top-k outputs (k is 5 or 10) for a reinforcement learning model, instead of only the top-1 output.
First, I think I should modify the decoder part of the seq2seq model somehow, but I don't know which part to change.
Are there any references or code examples for this problem?

Check out https://github.com/tensorflow/tensorflow/issues/654. There is some discussion on this, but no worked example yet.

tf.contrib.seq2seq.BeamSearchDecoder would do the magic for you.
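To see what beam search buys you over greedy (top-1) decoding, here is a minimal pure-Python sketch of the idea, independent of TensorFlow: at each step it keeps the k highest-scoring partial sequences instead of only the single best one. The `toy_step` scoring function is a made-up stand-in for a real decoder's per-step log-probabilities.

```python
import math

def beam_search(step_log_probs, vocab, k, max_len):
    """Keep the k highest-scoring partial sequences at every step,
    instead of only the single best one (greedy decoding)."""
    beams = [([], 0.0)]  # (token sequence, cumulative log-probability)
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            log_p = step_log_probs(seq)  # log P(next token | seq)
            for tok in vocab:
                candidates.append((seq + [tok], score + log_p[tok]))
        # prune to the k best hypotheses
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:k]
    return beams

# Toy stand-in for a decoder step: prefers to repeat the previous token.
def toy_step(seq):
    prev = seq[-1] if seq else "a"
    return {t: math.log(0.8 if t == prev else 0.2) for t in ("a", "b")}

top_k = beam_search(toy_step, ("a", "b"), k=3, max_len=2)
```

`BeamSearchDecoder` does the same pruning inside the graph, with `beam_width` playing the role of k, and returns the surviving hypotheses so you can take all k of them rather than just the best.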

Related

Tensorflow's quantization for RNN and LSTM

In the guide for Quantization Aware Training, I noticed that RNN and LSTM were listed on the roadmap for "future support". Does anyone know if they are supported now?
Is it also possible to quantize RNNs and LSTMs with post-training quantization? I don't see much information or discussion about it, so I wonder whether it is possible now or still in development.
Thank you.
I am currently trying to implement a speech-enhancement model in 8-bit integer arithmetic based on DTLN (https://github.com/breizhn/DTLN). However, when I run inference with the quantized model on no audio (an empty array), it adds a weird waveform on top of the result: a constant signal every 125 Hz. I have checked other places in the code and there is no problem; it boils down to the quantization of the RNN/LSTM.

Need to understand the ML model deployment through MicroMutableOpResolver

I am new to TensorFlow Lite, and I have noticed that many examples use static tflite::MicroMutableOpResolver<> micro_op_resolver;
So my question is: how many layers can we add here when deploying the model?
Or should the layers be exactly the same as those described in the machine learning model?
What if some layers repeat?
As mentioned here: https://github.com/tensorflow/tflite-micro/blob/main/tensorflow/lite/micro/micro_mutable_op_resolver.h
the Add* functions in the link above register the various built-in operators with the MicroMutableOpResolver object.
How can I add Conv1D and MaxPool1D to the main_functions.cc code? This is still not clear to me. What if my model contains a layer structure like this?
Conv1D
Conv1D
Dropout
Dense
Dropout
Flatten
Dense
Could you please explain that to me?
Also, would you mind sharing the detailed link or document for MicroMutableOpResolver?
Regards,
Divya
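One way to reason about this: the resolver registers TFLite built-in *operators*, not Keras layers, and each distinct operator is registered once no matter how many layers map to it. The sketch below (plain Python, not TFLite code) works that out for the layer stack in the question. The layer-to-operator mapping is an assumption about how the TFLite converter typically lowers these Keras layers (for instance, 1-D conv and pooling are usually expressed via the 2-D operators, and the exact reshape-style helpers vary by converter version); verify against your actual converted .tflite file, e.g. with Netron.

```python
# ASSUMED mapping of Keras layers to TFLite built-in operators; the
# real lowering depends on the converter version, so check your model.
KERAS_TO_TFLITE = {
    "Conv1D": ["EXPAND_DIMS", "CONV_2D", "SQUEEZE"],   # no 1-D conv builtin
    "MaxPool1D": ["EXPAND_DIMS", "MAX_POOL_2D", "SQUEEZE"],
    "Dense": ["FULLY_CONNECTED"],
    "Flatten": ["RESHAPE"],
    "Dropout": [],  # inference no-op: removed during conversion
}

# The layer structure from the question.
model = ["Conv1D", "Conv1D", "Dropout", "Dense", "Dropout", "Flatten", "Dense"]

# Each distinct operator appears once, even though Conv1D, Dropout and
# Dense each occur twice in the model.
needed_ops = sorted({op for layer in model for op in KERAS_TO_TFLITE[layer]})
print(needed_ops)
```

Under that assumed mapping, main_functions.cc would declare `MicroMutableOpResolver` with a count equal to the number of distinct operators (here 5, not 7 layers) and call the matching Add* methods (AddConv2D, AddFullyConnected, AddReshape, and so on) once each.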

Predicting using adanet for binary classification problem

I followed the tutorial of adanet:
https://github.com/tensorflow/adanet/tree/master/adanet/examples/tutorials
and was able to apply adanet to my own binary classification problem.
But how can I make predictions using the trained model? I have very little knowledge of TensorFlow. Any help would be really appreciated.
You can immediately call estimator.predict on a new unlabeled example and get a prediction.
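Concretely, Estimator-style predict() returns a lazy generator of per-example dicts, which you iterate over to pull out the fields you want. The sketch below is pure Python: `fake_predict` is a hypothetical stand-in for a trained adanet Estimator's predict() (which you would call as `estimator.predict(input_fn=...)` with an input_fn that yields features without labels); with a binary classification head the dicts typically carry keys such as "logits", "probabilities", and "classes", though the exact keys depend on the head.

```python
import math

def fake_predict():
    """Hypothetical stand-in for estimator.predict(input_fn=...):
    yields one dict per input example, lazily."""
    for logit in (2.0, -1.0):  # pretend per-example model outputs
        p = 1.0 / (1.0 + math.exp(-logit))  # sigmoid
        yield {
            "logits": logit,
            "probabilities": (1.0 - p, p),
            "classes": int(p > 0.5),
        }

# Consume the generator and extract the predicted class labels.
labels = [pred["classes"] for pred in fake_predict()]
```

Note that because the result is a generator, nothing runs until you iterate; materialize it with a list comprehension (as above) or a loop.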

Keras always output the same thing

I trained a Keras model inspired by the Inception model on the TensorFlow backend.
The problem is, the output is always the same for the different images I tested.
However, model.evaluate gives me a high accuracy percentage, so the model seems to work.
Do you have any idea? Thanks!
Finally found the answer.
I just forgot to preprocess my input for the prediction.
All is clear now!
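The failure mode above is worth understanding, since identical predictions for every input are a classic symptom of skipped preprocessing. A toy illustration in plain Python (not Keras; the weights are made up): a sigmoid whose weights were fit on inputs scaled to [0, 1] saturates when fed raw [0, 255] pixel values, so every image lands in the flat tail of the activation and gets numerically the same output.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy weights, assumed to have been fit on inputs scaled to [0, 1].
w, b = 4.0, -2.0

pixels = (10, 128, 250)
scaled = [sigmoid(w * (x / 255.0) + b) for x in pixels]  # preprocessed
raw    = [sigmoid(w * x + b) for x in pixels]            # preprocessing skipped

# On scaled inputs the outputs differ; on raw pixel values the sigmoid
# saturates and every "image" gets (numerically) the same prediction.
```

The same logic applies to a deep network: whatever normalization the training pipeline used (e.g. the model's preprocess_input) must be applied to inference inputs too.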

Looking for resnet implementation in tensorflow

Are there any ResNet implementations in TensorFlow? I came across a few (e.g. https://github.com/ry/tensorflow-resnet, https://github.com/xuyuwei/resnet-tf), but these implementations have some bugs (e.g. see the Issues section on the respective GitHub pages). I am looking to train ImageNet using ResNet and am looking for TensorFlow implementations.
There are some (50/101/152) in tensorflow:models/slim.
The example notebook shows how to get a pre-trained inception running, res-net is probably no different.
I implemented a CIFAR-10 version of ResNet with TensorFlow. The validation errors of ResNet-32, ResNet-56 and ResNet-110 are 6.7%, 6.5% and 6.2%, respectively. (You can easily modify the number of layers as a hyper-parameter.)
I tried to be friendly to newcomers to ResNet and wrote everything in a straightforward way. You can run the cifar10_train.py file directly without downloading anything.
https://github.com/wenxinxu/resnet_in_tensorflow
I implemented ResNet using ronnie.ai and Keras. Both tools are great, though ronnie is easier for building from scratch.
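Whichever implementation you pick, they all build on the same residual block, y = ReLU(x + F(x)). A minimal plain-Python sketch of that idea (with a hypothetical toy transform standing in for the conv/batch-norm stack F):

```python
def relu(v):
    return [max(0.0, x) for x in v]

def residual_block(x, transform):
    """y = ReLU(x + F(x)): the skip connection adds the input back in,
    so the block only has to learn a residual correction F."""
    fx = transform(x)
    return relu([a + b for a, b in zip(x, fx)])

# Toy stand-in for the conv/batch-norm stack F (real ResNets use two
# or three conv layers here, plus a projection when shapes differ).
toy_f = lambda v: [0.5 * x for x in v]

out = residual_block([1.0, -2.0, 3.0], toy_f)
```

Stacking such blocks (and changing the number stacked) is what distinguishes ResNet-32 from ResNet-56 or ResNet-110 in the CIFAR-10 implementation linked above.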