I want to use Tensorflow 1.4 for my ML modeling needs. My use case requires:
Training the model on GPU <--- I know how to do this with TF
Deploying the trained model on an ordinary box, as an .exe running on CPU under Windows (for inference) <--- I don't know how to do this.
Can somebody tell me whether TF 1.4 supports this and, if so, point me to a guide or explain how it's done?
This is a little late, but this video on YouTube covers it pretty well.
He uses PyInstaller, which gathers everything needed and bundles it either into a single self-contained executable, or into a folder containing the .exe alongside its dependencies.
I've tried this myself and it works pretty well, although the output gets really large: PyInstaller bundles the entire TensorFlow library and the Python interpreter, and if you use tensorflow-gpu it also pulls in the cuDNN files, which are around 600 MB on their own, so you can easily end up with over 1 GB of files in the end.
That can be reduced by excluding modules you don't need; I recommend creating a virtual environment and starting from a clean Python installation.
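As a rough sketch, the command looks something like this (the script name and the excluded modules are just examples, not anything specific to TensorFlow):
# one self-contained executable, leaving out packages the script never imports
pyinstaller --onefile --exclude-module matplotlib --exclude-module scipy inference_script.py
Dropping --onefile keeps the one-folder layout instead, which takes more space on disk but starts faster, since the single-file build has to unpack itself to a temporary directory first.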
Hope this helps in any way.
Trying to build ArrayFire examples, everything goes well until I get to the CUDA ones. They are supposed to be skipped, since I have an AMD processor/GPU. However, during the build process, the CUDA section is built anyway, failing for obvious reasons, interrupting the rest of the process.
I could manually change the CMakeLists.txt files. However, is there a higher-level way to let the build system (CMake) know that I do not have a CUDA-compatible GPU?
It looks like the ArrayFire_CUDA_FOUND and CUDA_FOUND macros are erroneously defined on my system.
The ArrayFire CMake build provides a flag to disable the CUDA backend: set AF_BUILD_CUDA to NO by passing -DAF_BUILD_CUDA=NO on the command line.
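For example, assuming a fresh build directory inside the ArrayFire checkout and the default Makefile generator:
# configure without the CUDA backend, then build
mkdir build && cd build
cmake .. -DAF_BUILD_CUDA=NO
make -j4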
Say I have two saved models, one from tensorflow 1.8 and the other from tensorflow 2.2. Serving both of those could run into compatibility issues.
Would it be possible to serve both of them with the same tensorflow/serving binary?
My intuition says no, one cannot, at least not easily.
I am not an expert in bazel files, but I presume compiling tensorflow/serving needs to build and link the tensorflow core library, and I am not sure whether one could link two different versions of that library together.
I guess one could compile the tensorflow/serving binary at two different release points, 1.8.0 and 2.2.0, and deploy both binaries separately in your infrastructure. Then the model-discovery and request-routing layers have to manage which model is loaded in which tensorflow/serving binary, and which predict request should talk to which tensorflow/serving endpoint.
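A rough sketch of what that deployment could look like (the binary names, model names and paths are made up; --port, --model_name and --model_base_path are the usual tensorflow_model_server flags):
# one model server binary per TensorFlow release, each listening on its own port
./tensorflow_model_server-1.8 --port=8500 --model_name=model_tf18 --model_base_path=/models/model_tf18 &
./tensorflow_model_server-2.2 --port=8501 --model_name=model_tf22 --model_base_path=/models/model_tf22 &
The routing layer then sends each predict request to the endpoint that actually hosts the matching model.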
I'm definitely not an expert on the deep inner workings of TensorFlow, so take this with a grain of salt. But I think what you want to do may actually be pretty easy.
My very approximate (and possibly completely incorrect) understanding is that the TensorFlow APIs are a sort of wrapper that creates a graph representing whatever computation you'd like to do, and that the compiled graph is cross-compatible between at least some versions, even if the APIs used to create and manipulate it aren't.
Empirically, I've been able to take models built with TensorFlow 1.15.x and put them into TensorFlow Serving on 2.3.0 with absolutely no problems at all.
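If you want a quick sanity check before pointing Serving at a model, the saved_model_cli tool that ships with the TensorFlow pip package can dump what the SavedModel exports (the path is a placeholder):
# list the MetaGraphs and signatures contained in the SavedModel
saved_model_cli show --dir /path/to/saved_model --all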
I am trying to integrate the TensorFlow library, with its C++ interface, into my C++ application. The problem is that a straightforward build with
bazel build (some options) //tensorflow:libtensorflow.so
produces a libtensorflow.so file that is 168 MB. That's way too much for my app. I've found some guides on reducing the library size for Android, but can't find any for general desktop build targets.
I assume that libtensorflow.so has all the bells and whistles of TF, but what I really need is an inference engine with the basic Conv ops, etc.
Any suggestions?
Thanks!
You might want to experiment with the CMake build. It has two interesting build options for your case:
setting tensorflow_BUILD_CONTRIB_KERNELS=OFF skips building the kernels from tf.contrib.
setting tensorflow_BUILD_ALL_KERNELS=OFF builds only a small set of the most common kernels.
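A rough sketch of the configure step, assuming the CMake build that lives under tensorflow/contrib/cmake in the 1.x source tree (paths are placeholders):
# configure a trimmed-down build with only the most common kernels
cmake /path/to/tensorflow/tensorflow/contrib/cmake \
  -Dtensorflow_BUILD_ALL_KERNELS=OFF \
  -Dtensorflow_BUILD_CONTRIB_KERNELS=OFF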
I have built syntaxnet and tensorflow-serving using bazel. Both embed their own (partial?) copy of tensorflow itself. I already have the problem that I'd like to "import" some parts of tensorflow-serving in a script that "lives" in the syntaxnet tree, which I can't figure out how to do (without doing some VERY ugly things).
Now I'd like "tensorboard", but that apparently doesn't get built as part of the embedded tensorflow inside of syntaxnet or tensorflow-serving.
So now I'm sure "I'm doing it wrong". How am I supposed to be combining the artifacts built by various separate bazel workspaces?
In particular, how can I build tensorflow (with tensorboard) AND syntaxnet AND tensorflow-serving and have them "installed" for use so I can start writing my own scripts in a completely separate directory/repository?
Is "./bazel-bin/blah" really the end-game with bazel? There is no "make install" equivalent?
You're right, currently Tensorboard targets are only exposed in the Tensorflow repo, and not the other two that use it. That means that to actually bring up Tensorboard, you'll need to checkout Tensorflow on its own and compile/run Tensorboard there (pointing it to the generated logdir).
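Roughly, that looks like the following (the exact Bazel target name has moved between releases, so treat this as a sketch):
# from a standalone TensorFlow checkout
bazel run //tensorflow/tensorboard -- --logdir=/path/to/logdir
or, if you have TensorFlow installed via pip, simply:
tensorboard --logdir=/path/to/logdir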
Actually generating the training summary data in a log directory is done during training, in your case in the tensorflow/models repo. It looks like SummaryWriter is used in inception_train.py, so perhaps you can add something similar to syntaxnet. If that doesn't work and you're not able to link Tensorboard, I'd recommend filing an issue in tensorflow/models to add support for Tensorboard there. You shouldn't need Tensorboard in Tensorflow Serving.
Importing parts of Tensorflow Serving in syntaxnet would require you to add this new dependency as a submodule (as is done with tensorflow), or possibly as a git_repository in the WORKSPACE file if that works. We've never tried this, so it's possible that something is broken for this untested use case. Please file issues if you encounter a problem with this.
As for just installing and running, Tensorflow Serving doesn't support that right now. It's a set of libraries that you link directly into your server binary and compile (the repo offers some example servers and clients), but right now there is no simple "installed server". Tensorflow, along with Tensorboard, however, can be installed and linked from anywhere.
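For instance, the example model server in the Serving repo is built and started roughly like this (target path as of the older releases, adjust if it has moved; the model path is a placeholder):
bazel build //tensorflow_serving/model_servers:tensorflow_model_server
./bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --port=8500 --model_base_path=/path/to/exported/model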
I am new to writing kernel modules, so I am facing a few non-technical problems.
Building a kernel module for a specific kernel version (say 3.0.0-10, where 10 is the patch number) requires the kernel headers for that same version, so the straightforward approach is to install those kernel headers and develop against them.
But the kernel headers for that patched kernel version are not available.
I have a guest kernel vmlinuz-3.0.0-10 running in the machine, and when I try to download the matching kernel headers they are reported as not found.
Another approach is to get the source for that specific kernel, but the problem is the same: the source for the patched kernel is not available (it is not necessarily possible to get the sources of linux-kernel-3.0.0-10, or even linux-kernel-3.0.0 plus the 10th patch). In some situations it is possible to get the source of the running kernel, but not always.
Yet another approach is to build a kernel other than the running one and install that built kernel on the machine. But that requires building all the modules of that kernel, which is a time-consuming and space-consuming process.
So the intention of asking this is to find out what kernel driver developers prefer. Are there other alternatives?
Is it possible to compile a kernel module against one version and run it on another (it is going to give an error, but is there any workaround for this)?
So, building a new kernel is not a good option, as it will require:
building the kernel
building the modules and firmware
building the headers
moving all of the above to the appropriate locations (if the target machine is not the same one on which you are developing the module)
So if you have the kernel headers for the running system, you don't need to download the source code for any kernel version; when building the module, use
make -C /lib/modules/x.y.z/build M=`pwd` modules
(where x.y.z is the version of the running kernel, i.e. the output of uname -r) and your module will be ready.
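As a minimal sketch, with hello.c as a hypothetical single-file module (note that the heredoc overwrites any Makefile already in the directory):
# kbuild makefile: build hello.c into hello.ko
cat > Makefile <<'EOF'
obj-m += hello.o
EOF
# build against the headers of the currently running kernel, then load the module
make -C /lib/modules/$(uname -r)/build M=$(pwd) modules
sudo insmod hello.ko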
If there are better answers, I will not hesitate to accept one of them.
I know it's been a long time since this question was asked. I am new to kernel development and I encountered the same error, but now I am able to load my module into a different kernel than the one I built it against. Here is the solution:
Download the kernel-devel package related to the image you are running. Its version should be as close as possible to that of the running kernel.
Check that the functions you use in the module are covered by the header files in that kernel-devel tree.
Change the UTS_RELEASE value in the include/generated/utsrelease.h file to the version of the kernel image running on your hardware.
Compile the module against this kernel tree.
Now you can insert your module into the kernel.
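A rough sketch of the last three steps, assuming the devel tree is installed at /usr/src/kernels/3.0.0-9 while the running kernel is 3.0.0-10 (the versions, paths and module name are examples only):
# point UTS_RELEASE at the running kernel's version string
sed -i 's/"3.0.0-9"/"3.0.0-10"/' /usr/src/kernels/3.0.0-9/include/generated/utsrelease.h
# build against the devel tree, check the vermagic, then load
make -C /usr/src/kernels/3.0.0-9 M=$(pwd) modules
modinfo mymodule.ko | grep vermagic
sudo insmod mymodule.ko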
Note: it may cause some unwanted behaviour, as Shahbaz mentioned. But if you are doing this just for experiments, I think it's good to go. :)
There is a way to build a module on one kernel and insert it in another. It is by turning off a certain configuration. I am not telling you which configuration it is because this is ABSOLUTELY DANGEROUS. The reason is that there may be changes between the kernels that could cause your module to behave differently, often resulting in a total freeze.
What you should do is to build the module against an already-built kernel (or at least a configured one). If you have a patched kernel, the best thing you can do is to build that kernel and boot your OS with that.
I know this is time-consuming. I have done it many, many times and I know how boring it can get, but once you do it right, it makes your life much easier. Kernel compilation takes about 2 hours or so, but you can parallelize it if you have a multi-core CPU. Also, you can always start the compile before you leave the office (or, if at home, before going to bed) and let it run overnight.
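For example, from the configured kernel source tree the build can be parallelized roughly like this (installation details vary by distribution):
# use all available cores for the compile, then install modules and the kernel image
make -j"$(nproc)"
sudo make modules_install
sudo make install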
In short, I strongly recommend that you build the kernel you are interested in yourself.