I'm following the RNN tutorial on the TensorFlow site. However, I couldn't find the RNN file named ptb_word_lm.py.
In my ptb folder there is only reader.py. Where can I find ptb_word_lm.py?
Many thanks.
I believe this is a bug, which we're tracking at https://github.com/tensorflow/tensorflow/issues/6196
The short answer is that the code is now here: https://github.com/tensorflow/models/tree/master/tutorials/rnn/ptb
I found it here by searching for "ptb_word_lm.py" on Google.
It seems it was recently removed from the official repository; one can still find it in the 0.12 release branch here.
I have already spent some time trying to figure out how to get Mask R-CNN working properly.
I cloned the original Matterport implementation and a fork of it which has been modified to use TF 2.
The Matterport implementation seems to be somewhat outdated with respect to its dependencies, and I could not make it work. I saw that some people got it working by using different versions of the required libraries or with some code changes here and there, so I decided to continue with the TF 2 compatible version. A code change is needed there as well to make it work with the examples provided with Mask R-CNN. I hope that this is sufficient and that I did not miss something else.
For example, I ran train_shapes.ipynb in the samples folder, which trains on generated shapes on top of pretrained COCO weights. So far so good.
The notebook generates a sample image with shapes and processes it. This is the result:
What could be the reason that so many shapes are detected that are not in the source image?
I was having the same issue. It is because model.detect does not work with TensorFlow 2.6 and above. When I downgraded to TensorFlow 2.4, everything worked. Check out this thread: https://github.com/matterport/Mask_RCNN/issues/2670
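If it helps, in a plain pip environment the downgrade is a one-liner (adjust for conda or GPU-specific builds):

pip install tensorflow==2.4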
I have been playing with AWS instances and trying to deploy some locally trained Keras models, but I can find no documentation on that. Has anyone already been able to do it?
I tried a similar approach to https://aws.amazon.com/pt/blogs/machine-learning/bring-your-own-pre-trained-mxnet-or-tensorflow-models-into-amazon-sagemaker/, but had no success. I also found some examples for training Keras models in the cloud, but I was not able to get the entry_point + artifacts right.
Thanks for your time!
Yes, it is possible, and yes, the official documentation is not much help.
I wrote an article on exactly that, and I hope it will help you.
Let me know if you need more details. Cheers!
AWS recently released a tutorial on this exact topic, which I find easier and quicker than the Docker image route. Hope this helps.
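For reference, here is a minimal sketch of the SDK route using the SageMaker Python SDK. The S3 path, framework version, instance type, and input shape are placeholders you will need to adapt, and the Keras model is assumed to have been exported as a TensorFlow SavedModel and packed into model.tar.gz beforehand:

import sagemaker
from sagemaker.tensorflow import TensorFlowModel

# get_execution_role() works inside SageMaker notebooks; elsewhere,
# pass your IAM role ARN explicitly.
model = TensorFlowModel(
    model_data='s3://my-bucket/keras-model/model.tar.gz',  # placeholder path
    role=sagemaker.get_execution_role(),
    framework_version='2.4',  # match the TF version the model was saved with
)

# deploy() provisions a real endpoint and returns a predictor for inference.
predictor = model.deploy(initial_instance_count=1, instance_type='ml.m5.large')
print(predictor.predict({'instances': [[0.0] * 10]}))  # input shape depends on your model

Remember to tear the endpoint down with predictor.delete_endpoint() when you are done, since it is billed while running.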
If you visit http://projector.tensorflow.org/, you can use it with your own dataset (i.e. a TSV file). I am playing with N-D data and have found it useful to look at these visualisations after PCA reduction.
I am wondering how I can run my own version of the Projector on my machine.
Looking at the docs, it seems to be released only as a TensorBoard plugin for viewing embedding results...
Thanks
According to replies on https://github.com/tensorflow/tensorflow/issues/7562 that are more recent than the other answer here, you can get the standalone version at https://github.com/tensorflow/embedding-projector-standalone/ and edit oss_demo_projector_config.json to point to your datasets.
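An entry in that config might look like the following sketch; the tensor name, shape, and file names here are placeholders, so check the bundled demo entries for the exact fields:

{
  "embeddings": [
    {
      "tensorName": "My embeddings",
      "tensorShape": [10000, 200],
      "tensorPath": "my_tensors.bytes",
      "metadataPath": "my_metadata.tsv"
    }
  ]
}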
The demo files are binary files ending in .bytes, which can be generated from a numpy array with .tofile:
import numpy

vectors = numpy.zeros((10000, 200), dtype=numpy.float32)  # (points, dimensions)
vectors.tofile('my_tensors.bytes')
It has only been released as a TensorBoard plugin.
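If the plugin route works for you, here is a minimal TF1-era sketch, assuming a contrib build of TensorFlow; the variable name, shape, and log directory are placeholders:

import tensorflow as tf
from tensorflow.contrib.tensorboard.plugins import projector

# Save a checkpoint with the embedding variable plus a projector config,
# so that `tensorboard --logdir logs` shows it in the Embeddings tab.
embedding_var = tf.Variable(tf.random_normal([10000, 200]), name='my_embedding')

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    tf.train.Saver([embedding_var]).save(sess, 'logs/model.ckpt')

config = projector.ProjectorConfig()
embedding = config.embeddings.add()
embedding.tensor_name = embedding_var.name
projector.visualize_embeddings(tf.summary.FileWriter('logs'), config)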
Are there instructions or documentation somewhere, or could somebody describe, how to deploy the models available as "Parsey's Cousins" (see https://github.com/tensorflow/models/blob/master/syntaxnet/universal.md) with SyntaxNet under TensorFlow Serving? Even deploying just Parsey is a rather complex undertaking that is not really documented anywhere, so how does one do this for the additional 40 languages?
This pull request partially addresses your request, but it still has some issues: https://github.com/tensorflow/models/pull/250.
We do have some tentative plans to provide easier integration between SyntaxNet and TensorFlow Serving, but no precise timeline.
Just for the benefit of anyone else who finds this question, after some digging around on GitHub, one can find the following issue started by Johann Petrak:
https://github.com/dsindex/syntaxnet/issues/7
"a model from parsey's cousin is not able to export by that patch due to version mismatch"
So whilst some people have been able to modify SyntaxNet so that it works with TensorFlow Serving, this seems to come at the cost of using a version that is not compatible with Parsey's Cousins.
Currently the only way to get TensorFlow Serving working with languages other than English is to use something like dsindex's code and train your own models.
Does anyone know the original source code of tensorflow_inception_graph.pb?
I really want to know the operations in the example project.
From the README of tensorflow/examples/android ("TensorFlow Android Camera Demo"):
https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/examples/android/
https://storage.googleapis.com/download.tensorflow.org/models/inception5h.zip
Thanks
This code has not yet been released. As the release announcement for the latest ImageNet model mentions, the team is working on releasing the complete Python training framework for this type of model, including the code for building the graph.
I think you can get the .pb and Inception labels assets directly from here.
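If the goal is just to see which operations the graph contains, a minimal TF1-era sketch like the following can list them, assuming the zip above has been extracted into the working directory:

import tensorflow as tf

# Load the frozen GraphDef and print every operation it contains.
with tf.gfile.GFile('tensorflow_inception_graph.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name='')
    for op in graph.get_operations():
        print(op.name, op.type)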