I was wondering if it is possible to deploy Tensorflow custom ops or a custom reader written in C++ inside cloud-ml.
It looks like cloud-ml does not accept running native code in its standard mode (I'm not really interested in using a virtualized environment); at least for Python packages, they only accept pure Python with no C dependencies.
Likely the easiest way to do this is to include, as an extra package, a build of the entire custom Tensorflow wheel that contains the op. For specifying extra packages, see: https://cloud.google.com/ml-engine/docs/how-tos/packaging-trainer#to_include_custom_dependencies_with_your_package
For building a TF wheel from source see: https://www.tensorflow.org/install/install_sources#build_the_pip_package
You could also try to download/install just the .so file for the new op, but that would require downloading it either inside the setup.py of your training package or inside the training Python code itself.
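If you go the .so route, loading the op at training time is a one-liner. Here is a minimal sketch; the file and op names are hypothetical, and the tf.load_op_library call is shown commented out since it only runs where TensorFlow and the compiled op are actually present:

```python
import os

def custom_op_path(package_dir, so_name="my_custom_op.so"):
    # Hypothetical layout: the prebuilt .so is shipped inside the trainer
    # package, next to the module that loads it.
    return os.path.join(package_dir, so_name)

# At training time (requires TensorFlow and the compiled op; not run here):
# import tensorflow as tf
# op_module = tf.load_op_library(custom_op_path(os.path.dirname(__file__)))
# outputs = op_module.my_custom_op(inputs)
```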
Note that you can currently only upload custom packages during Training, and not during Batch or Online Prediction, so a model trained using a custom TF version may not work with the prediction service.
Related
I want to use Tensorflow 1.4 for my ML modeling needs. My use case requires:
Training the model on GPU <--- I know how to do this with TF
Deploying the trained model on an ordinary box - as an .exe on CPU running Windows (for inference) <--- I don't know how to do this.
Can somebody tell me if TF 1.4 supports this and, if so, point me to a guide or explain how it's done?
This is a little late, but this video on YouTube covers it pretty well.
He uses pyinstaller, which grabs everything needed and bundles it either into a single standalone executable, or into a folder containing the exe alongside its dependencies.
I've tried this myself and it works pretty well, although pyinstaller smashes everything needed into one folder, which gets really huge: it includes the entire tensorflow library and the Python interpreter, and if you use tensorflow-gpu it also bundles the cudnn files, which are around 600 MB, effectively leaving you with over 1 GB worth of files in the end.
That can be reduced by excluding modules that you don't need; I recommend creating a virtual environment and starting from a clean installation of Python.
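For the excludes, pyinstaller reads them from its .spec file. Here is a sketch of only the relevant part (the module names are just examples of things a TF-only app may not need; this is a pyinstaller config fragment, not standalone Python):

```python
# myapp.spec -- only the relevant part shown; generate the full file with
# `pyi-makespec your_script.py` first. Analysis is provided by PyInstaller
# when it executes the spec.
a = Analysis(
    ["your_script.py"],
    excludes=["matplotlib", "tkinter", "IPython"],  # example unused modules
)
# ... rest of the generated spec (PYZ, EXE, COLLECT) unchanged ...
```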
Hope this helps.
I cloned the tensorflow repository to my PC. How should I set up my environment for development? I have no idea about the available files.
If you want to start simple (not build tensorflow from source yourself), you can follow this link to install it.
Then you can go through this tutorial to get familiar with how tensorflow works.
I believe the best documents for tensorflow are all on its official site (as you can see, the two links above are both from tensorflow's official site).
Is it possible to build the Tensorflow Python wheel with a different name than tensorflow?
I would like to build Tensorflow with SIMD instructions like SSE, AVX and FMA and distribute that internally in our repository. I've managed to build it, but the package name is tensorflow. To keep the package separate from the official package, I would like to call it tensorflow-optimized or something similar.
Is this possible with the bazel build system?
Or is there a way I could edit the wheel?
This is not part of the bazel build system; it is part of the tensorflow project's build script. I think the relevant line is https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tools/pip_package/setup.py#L44. So you should be able to pass --project_name to the build_pip_package script to override it.
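For illustration, here is a simplified, stand-alone sketch of the mechanism that setup.py uses; the real script is more involved, but the idea is to pull the name out of the command line before setuptools sees it:

```python
import sys

DEFAULT_PROJECT_NAME = "tensorflow"

def resolve_project_name(argv):
    # Simplified version of what the pip_package setup.py does: if
    # "--project_name <name>" appears on the command line, use it and
    # strip it from argv so setuptools never sees the extra flag.
    if "--project_name" in argv:
        idx = argv.index("--project_name")
        name = argv[idx + 1]
        del argv[idx:idx + 2]
        return name
    return DEFAULT_PROJECT_NAME

# setup(name=resolve_project_name(sys.argv), ...)
```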
I have some custom Tensorflow code I have in the contrib/ subdirectory of the project (all other parts of the code are standard Tensorflow from the official distribution).
I would like to be able to distribute this code as an external dependency on Tensorflow, such that I can distribute the library via pip and depend on the binary packages available for Tensorflow in pip as well.
My main goal is that I don't want to have users of my code have to compile the full Tensorflow tree (with my custom code only in contrib/) just to get my custom code / module.
Is this possible to do, and if so how?
I have built syntaxnet and tensorflow-serving using bazel. Both embed their own (partial?) copy of tensorflow itself. I already have the problem that I'd like to "import" some parts of tensorflow-serving in a script that "lives" in the syntaxnet tree, which I can't figure out how to do (without doing some VERY ugly things).
Now I'd like "tensorboard", but that apparently doesn't get built as part of the embedded tensorflow inside of syntaxnet or tensorflow-serving.
So now I'm sure "I'm doing it wrong". How am I supposed to be combining the artifacts built by various separate bazel workspaces?
In particular, how can I build tensorflow (with tensorboard) AND syntaxnet AND tensorflow-serving and have them "installed" for use so I can start writing my own scripts in a completely separate directory/repository?
Is "./bazel-bin/blah" really the end-game with bazel? There is no "make install" equivalent?
You're right, currently Tensorboard targets are only exposed in the Tensorflow repo, and not the other two that use it. That means that to actually bring up Tensorboard, you'll need to check out Tensorflow on its own and compile/run Tensorboard there (pointing it at the generated logdir).
Actually generating the training summary data in a log directory is done during training, in your case in the tensorflow/models repo. It looks like SummaryWriter is used in inception_train.py, so perhaps you can add something similar to syntaxnet. If that doesn't work and you're not able to link Tensorboard, I'd recommend filing an issue in tensorflow/models to add support for Tensorboard there. You shouldn't need Tensorboard in Tensorflow Serving.
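The summary-writing side is small; a sketch using the TF 1.x-era API (SummaryWriter was later renamed tf.summary.FileWriter) might look like the following. The directory names and the per-run helper are hypothetical, and the TF calls are shown commented out since they need a TensorFlow install:

```python
import os

def make_run_logdir(base_dir, run_name):
    # Hypothetical helper: one subdirectory per run lets Tensorboard
    # compare runs side by side when pointed at base_dir.
    return os.path.join(base_dir, run_name)

# With TensorFlow installed (TF 1.x-era summary API):
# import tensorflow as tf
# writer = tf.summary.FileWriter(make_run_logdir("/tmp/logs", "run1"),
#                                graph=tf.get_default_graph())
# for step in range(num_steps):
#     summary = sess.run(merged_summaries)
#     writer.add_summary(summary, global_step=step)
# writer.close()
```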
Importing parts of Tensorflow Serving in syntaxnet would require you to add this new dependency as a submodule (as is done with tensorflow) or possibly as a git_repository in the WORKSPACE file, if that works. We've never tried this, so it's possible that something is broken for this untested use case. Please file issues if you encounter a problem with this.
As for just installing and running, Tensorflow Serving doesn't support that right now. It's a set of libraries that you link directly into your server binary and compile (the repo offers some example servers and clients), but right now there is no simple "installed server". Tensorflow along with Tensorboard, however, can be installed and linked from anywhere.