Why is there no mention of contrib.layers.linear in the TensorFlow documentation?

I'm trying to understand someone else's simple TensorFlow model, and they make use of contrib.layers.linear.
However, I cannot find any information on this anywhere, and it's not mentioned in the TensorFlow documentation.

The tf.contrib.layers module has API documentation here. As you observed in your answer, the contrib APIs in TensorFlow are especially subject to change. The tf.contrib.layers.linear() function appears to have been removed, but you can use tf.contrib.layers.fully_connected(…, activation_fn=None) to achieve the same effect.
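For context, a "linear" (fully connected, no activation) layer just computes an affine map y = xW + b. Here is a minimal pure-Python sketch of that computation — a toy stand-in for illustration, not the real TF op, which also handles variable creation, batching, and initializers:

```python
def linear_layer(inputs, weights, biases):
    """Toy affine map: outputs[j] = sum_i inputs[i] * weights[i][j] + biases[j].

    This mimics what tf.contrib.layers.fully_connected(..., activation_fn=None)
    computes for a single example, with weights given explicitly.
    """
    return [
        sum(x * row[j] for x, row in zip(inputs, weights)) + biases[j]
        for j in range(len(biases))
    ]

# 2 inputs -> 2 outputs with an identity weight matrix:
print(linear_layer([1.0, 2.0], [[1.0, 0.0], [0.0, 1.0]], [0.5, -0.5]))
# -> [1.5, 1.5]
```

Passing an activation function on top of this is exactly what distinguishes fully_connected from linear.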

I managed to find the answer and felt it was still worth posting this to save others wasting their time.
"In general, tf.contrib contains contributed code. It is meant to contain features and contributions that eventually should get merged into core TensorFlow, but whose interfaces may still change, or which require some testing to see whether they can find broader acceptance.
Code in tf.contrib isn't supported by the TensorFlow team. It is included in the hope that it is helpful, but it might change or be removed at any time; there are no guarantees." source

According to what I can see in the Master branch, the function linear still exists in contrib.layers. It actually is a "simple alias which removes the activation_fn parameter":
linear = functools.partial(fully_connected, activation_fn=None)
Here is a link from the 1.0 branch (to increase link persistence).
That said, if the documentation still lists it, the link to contrib.layers.linear does indeed appear to be broken.
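The aliasing trick itself is plain functools.partial. A self-contained sketch, using a toy stand-in for fully_connected (the real one builds a TF graph; this fake one just applies a made-up affine map):

```python
import functools

def fully_connected(x, activation_fn=None):
    """Toy stand-in for tf.contrib.layers.fully_connected:
    a pretend affine layer followed by an optional activation."""
    y = 2 * x + 1  # pretend this is x @ W + b
    return activation_fn(y) if activation_fn is not None else y

# This mirrors how contrib.layers defines linear:
linear = functools.partial(fully_connected, activation_fn=None)

print(linear(3))                               # -> 7, no activation applied
print(fully_connected(-4, activation_fn=abs))  # -> 7, i.e. |2*(-4) + 1|
```

So calling linear(...) is identical to calling fully_connected(..., activation_fn=None); the partial just pins that keyword argument.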

Related

Is tensorflow_transform a going concern for tf 2.0?

For example, will it eventually work? Does it work? What are the goals and plans? Where can we read about it?
Is tensorflow_transform a going concern for tf 2.0?
Absolutely! Development is ongoing. Issues are being actively discussed, PRs are being worked on and there have been several changes to the master branch this week.
will it eventually work? Does it work?
Yes, it works now (in general, at least). If you are encountering a specific issue, perhaps ask a new question describing what, specifically, isn't working for you.
What are the goals and plans? Where can we read about it?
The TensorFlow team is really good at communicating plans via RFCs and doing development in the open. I am less familiar with the work on tf-transform, but all the signs are that it is developed with the same culture. Check out:
the github repo
the official site

Which model should I use for TensorFlow (contrib or models)?

For example, if I want to use resnet_v2, there are two model files in TensorFlow:
one is here, another is here. Many TensorFlow models exist in both models/research and tensorflow/contrib.
I am very confused: which version is better? Which one should I use?
In general, tf.contrib contains code contributed mostly by the community. It is meant to contain features and contributions that eventually should get merged into core TensorFlow, but whose interfaces may still change, or which require some testing to see whether they can find broader acceptance.
The code in tf.contrib isn't supported by the TensorFlow team. It is included in the hope that it is helpful, but it might change or be removed at any time; there are no guarantees.
The models/research folder contains machine learning models implemented by researchers in TensorFlow. These models are maintained by their respective authors and have a lower chance of being deprecated than contrib code.
On the other hand, models in the main models repository are officially supported by the TensorFlow team and are generally preferred, as they have a lower chance of being deprecated in future releases. If a model is implemented in both places, you should generally avoid the contrib version with future compatibility in mind. That said, the community does some awesome work there, so you might find models not present in the main repository that would be helpful to use directly from contrib.
Also note the phrase "generally avoid", since the choice is somewhat application-dependent.
Hope that answers your question, comment with your doubts.
With TensorFlow 2.0 (coming soon), tf.contrib will be removed.
Therefore, you should start using models/research if you want your project to stay up to date and keep working in the coming months.

Can .tflite capture tf.hub.text_embedding_column() processes?

Just a general question here, no reproducible example, but I thought this might be the right place anyway since it's very software-specific.
I am building a model which I want to convert to .tflite. It relies on tf.hub.text_embedding_column() for feature generation. When I convert to .tflite, will this be captured such that the resulting model takes raw text as input rather than a sparse vector representation?
It would be good to know before I invest too much time in this approach. Thanks in advance!
Currently I don't imagine this would work, as we do not support enough string ops to implement it. One approach would be to do this handling in a custom op, but implementing that custom op would require domain knowledge and would negate the ease-of-use advantage of using TF Hub in the first place.
There is some interest in defining a set of hub operators that are verified to work well with tflite, but this is not yet ready.

What Tensorflow API to use for Seq2Seq

This year Google produced 5 different packages for seq2seq:
seq2seq (claimed to be general-purpose, but inactive)
nmt (active, but probably focused just on NMT)
legacy_seq2seq (clearly legacy)
contrib/seq2seq (probably not complete)
tensor2tensor (similar purpose, also under active development)
Which package is actually worth using for an implementation? They all seem to take different approaches, but none of them seems stable enough.
I too have had a headache over this issue: which framework to choose? I want to implement OCR using an encoder-decoder with attention. I tried implementing it with legacy_seq2seq (it was the main library at the time), but it was hard to understand the whole process, and it certainly should not be used any more.
https://github.com/google/seq2seq: to me it looks like an attempt at a command-line training tool that requires no code of your own. If you want to train a translation model it should work, but otherwise it may not (as for my OCR), because there is not enough documentation and too few users.
https://github.com/tensorflow/tensor2tensor: this is very similar to the above, but it is maintained and you can add more of your own code, e.g. for reading your own dataset. The basic use case is again translation, but it also enables tasks like image captioning, which is nice. So if you want a ready-to-use library and your problem is txt→txt or image→txt, you could try this. It should also work for OCR. I'm just not sure there is enough documentation for every case (like using a CNN as the feature extractor).
https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/seq2seq: unlike the above, this is a pure library, useful when you want to build a seq2seq model yourself in TF. It has functions to add attention, sequence loss, etc. In my case I chose this option, since it gives much more freedom in choosing each part of the framework: the CNN architecture, the RNN cell type, bi- or unidirectional RNN, the type of decoder, and so on. But then you will need to spend some time getting familiar with the ideas behind it.
https://github.com/tensorflow/nmt : another translation framework, based on tf.contrib.seq2seq library
From my perspective you have two options:
If you want to test an idea very fast and be sure you are using very efficient code, use the tensor2tensor library. It should help you get early results or even a very good final model.
If you want to do research, are not sure how exactly the pipeline should look, or want to learn the ideas behind seq2seq, use the library from tf.contrib.seq2seq.
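For intuition about what these libraries wrap, here is a toy, non-learning sketch of the encoder-decoder control flow (greedy decoding, no attention, no TF; every function body here is made up purely for illustration):

```python
def encode(tokens):
    """Toy encoder: fold the input sequence into a single state.
    A real encoder would run an RNN/Transformer over the tokens."""
    state = 0
    for t in tokens:
        state = (31 * state + t) % 97  # pretend recurrent state update
    return state

def greedy_decode(state, start_token=0, end_token=3, max_steps=10):
    """Toy decoder: emit one token per step until end_token or max_steps.

    Libraries like tf.contrib.seq2seq or tensor2tensor run a learned cell
    (plus attention over encoder outputs) at each step; the surrounding
    loop structure is the same.
    """
    outputs, token = [], start_token
    for _ in range(max_steps):
        token = (state + 5 * token + 1) % 7  # pretend decoder step
        if token == end_token:
            break
        outputs.append(token)
    return outputs

print(greedy_decode(encode([1, 2, 3])))
```

The value of a library like tf.contrib.seq2seq is that it supplies tested building blocks for the inner step (cells, attention wrappers, beam search, sequence loss) while leaving this outer structure under your control.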

Exporting Tensorflow Models for Eigen Only Environments

Has anyone seen any work done on this? I'd think this would be a reasonably common use case: train a model in Python, export the graph, and map it to a sequence of Eigen instructions?
I don't believe anything like this is available, but it is definitely something that would be useful. There are some obstacles to overcome though:
Not all operations are implemented by Eigen.
We'd need to know how to generate code for all operations we want to support.
The glue code to allocate buffers and schedule work can get pretty gnarly.
It's still a good idea though, and it might get more attention if posted as a feature request on https://github.com/tensorflow/tensorflow/issues/