Deploying Jupyter notebooks as a component in AI Platform pipelines - tensorflow

I have a Jupyter Notebook that handles model creation, deployment to AI Platform, and version creation. I am able to get predictions from my model. Now I am trying to build a CI/CD pipeline that automates the entire process. Is there a way to pass my entire Jupyter notebook in as a component in AI Platform Pipelines?

You can use Papermill to create a parameterized notebook, which can then be executed from your CI/CD pipeline. This article explains the approach in more detail in its 'Reproducible Notebooks' section.
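As a minimal sketch of what that looks like, here is how a parameterized notebook could be run from a CI/CD step with Papermill; the notebook filenames and the parameter names are placeholders, not part of the original question.

```python
# Minimal sketch: executing a parameterized notebook with Papermill.
# The notebook filenames and parameters below are hypothetical placeholders.
import papermill as pm

pm.execute_notebook(
    "train_and_deploy.ipynb",         # input notebook containing a "parameters" cell
    "train_and_deploy_output.ipynb",  # executed copy, useful as a build artifact
    parameters={
        "project_id": "my-gcp-project",
        "model_name": "my_model",
        "learning_rate": 0.01,
    },
)
```

The executed output notebook can then be archived by the CI system, so each pipeline run leaves an inspectable record of what was trained and deployed.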

Related

Best practice to run a TensorFlow app on GCP?

I wish to run a Python app that uses TensorFlow to run simulations and writes the results to a CSV file.
I want to run it on GCP. My plan was to run it on a Dataproc cluster using TonY.
It seems there are many GCP ways of doing ML work, such as using AI Platform, and I wondered whether there are easier/better ways of achieving my aim.
I would suggest using Google Cloud AI Platform to achieve your goal: if you have no dependency on the Hadoop ecosystem, there is no need to use TonY on Dataproc, and AI Platform should be much easier to use for your use case.
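As a rough sketch of what the AI Platform route can look like from Python (assuming your simulation code is packaged as a trainer module; the project ID, bucket, package, and module names below are placeholders):

```python
# Rough sketch: submitting a training job to AI Platform with the Google API
# client library. Project, bucket, package, and module names are placeholders.
from googleapiclient import discovery

project_id = "my-gcp-project"
job_spec = {
    "jobId": "tf_simulation_job_001",
    "trainingInput": {
        "scaleTier": "BASIC",
        "packageUris": ["gs://my-bucket/packages/trainer-0.1.tar.gz"],
        "pythonModule": "trainer.task",
        "region": "us-central1",
        "runtimeVersion": "2.1",
        "pythonVersion": "3.7",
    },
}

ml = discovery.build("ml", "v1")
ml.projects().jobs().create(
    parent="projects/{}".format(project_id), body=job_spec
).execute()
```

The trainer itself can write its CSV output to a Cloud Storage bucket, which avoids managing any cluster yourself.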

How to set up an environment for TensorFlow development

I cloned the TensorFlow repository to my PC. How should I set up my environment for development? I have no idea about the available files.
If you want to start simple (rather than building TensorFlow from source yourself), you can follow this link to install it.
Then you can go through this tutorial to get familiar with how TensorFlow works.
I believe the best documentation for TensorFlow is on its official site (as you can see, the two links above are both from the official TensorFlow site).
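For example, once TensorFlow is installed per the official instructions (e.g. via `pip install tensorflow`), a quick sanity check from Python confirms the environment works; the exact version printed will depend on what you installed.

```python
# Quick sanity check after installing TensorFlow.
import tensorflow as tf

print(tf.__version__)                      # confirms the installed version
print(tf.constant("Hello, TensorFlow!"))   # runs a trivial op to verify the install
```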

TensorFlow + cloud-ml : deploy custom native op / reader

I was wondering if it is possible to deploy TensorFlow custom ops or a custom reader written in C++ on Cloud ML.
It looks like Cloud ML does not accept native code in its standard mode (I'm not really interested in using a virtualized environment); at least for Python packages, it only accepts pure Python with no C dependencies.
Likely the easiest way to do this is to include, as an extra package, a build of the entire custom TensorFlow wheel that includes the op. For specifying extra packages see: https://cloud.google.com/ml-engine/docs/how-tos/packaging-trainer#to_include_custom_dependencies_with_your_package
For building a TF wheel from source see: https://www.tensorflow.org/install/install_sources#build_the_pip_package
You could also try to download/install just the .so file for the new op, but that would require downloading it either inside the setup.py of your training package or inside the training Python code itself.
Note that you can currently only upload custom packages during training, not during batch or online prediction, so a model trained using a custom TF version may not work with the prediction service.
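If you go the .so route, loading the op from your training code might look like the sketch below; the library filename and the op name are hypothetical and depend entirely on how you built and registered the op.

```python
# Sketch: loading a custom C++ op from a shared library inside training code.
# The .so path and the op name are hypothetical placeholders; they depend on how
# the op was built and where your setup.py placed (or downloaded) the file.
import os
import tensorflow as tf

_lib_path = os.path.join(os.path.dirname(__file__), "my_custom_op.so")
_custom_module = tf.load_op_library(_lib_path)

# The op is exposed under the snake_case form of the name it was registered
# with in C++, e.g. an op registered as "ZeroOut" would be called like this:
# outputs = _custom_module.zero_out([[1, 2], [3, 4]])
```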

CNTK deployment for real time predictions

TensorFlow has a separate project for its production usage, as noted here, called TensorFlow Serving.
How should I use CNTK in a production environment, and how should I handle its deployment? Hopefully one could deploy trained models to a server/cluster that serves the model with an RPC or HTTP REST API.
If no such tool exists, what should be the first steps to develop it and a good architecture to roll out on my own?
We do support serving CNTK models in a production environment. You can find information about model evaluation/inference here: https://github.com/Microsoft/CNTK/wiki/CNTK-Evaluation-Overview. For deployment, you need to deploy the DLLs specified here. A tutorial for deploying CNTK in Azure is available here.
No such tool exists from the CNTK team, but the new APIs in C++ or Python should make the task pretty easy once you have a trained model.
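As a hedged sketch of what the Python side of this can look like (the model path and input handling below are placeholders; a real service would wrap this in your HTTP/RPC framework of choice):

```python
# Sketch: loading a trained CNTK model and evaluating it behind a simple function.
# The model path and input data are placeholders for illustration.
import numpy as np
import cntk as C

model = C.load_model("trained_model.cntk")

def predict(features):
    # features: a numpy array shaped like the model's input variable
    input_var = model.arguments[0]
    return model.eval({input_var: np.asarray(features, dtype=np.float32)})

# A web framework (Flask, ASP.NET Core, etc.) would call predict() per request.
```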

Create a script using Blender and LuxRender Python API?

I am working on Ubuntu 12.04.
Is it possible to create a script using Blender and LuxRender Python API? Can I use both APIs in the same script?
What should I install to start programming a script using their API if it is possible?
Thanks.
Blender has Python integrated into it very well and makes extensive use of it; it includes a Python console and text editor for writing and executing Python scripts within Blender. Python access to Blender from outside of Blender is limited/experimental at best.
The LuxRender project provides a Blender addon called luxblend25, which is what you will want.
So you want to install Blender, LuxRender and luxblend25, and do your scripting within Blender. You also have the option of using an external text editor of your choice and opening the script in Blender to run it. The luxblend25 scripts are written in Python and are your best examples of accessing the LuxRender engine from within Blender.
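As a small illustration of scripting inside Blender: the snippet below switches the active render engine and renders a frame. The engine identifier string is an assumption about how the luxblend25 addon registers itself; check the addon's own source for the exact name.

```python
# Run inside Blender's text editor or Python console.
# Assumes the luxblend25 addon is installed and enabled; the engine ID string
# 'LUXRENDER_RENDER' is an assumption (check the addon source for the exact name).
import bpy

scene = bpy.context.scene
scene.render.engine = 'LUXRENDER_RENDER'   # switch the active render engine to LuxRender
scene.render.filepath = "//renders/test"   # output path relative to the .blend file

bpy.ops.render.render(write_still=True)    # render the current frame and save it
```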
Blender now has its own Stack Exchange site; you may find it useful for Blender-specific help.