CNTK: Python vs C# API for model consumption

We have trained a model using CNTK. We are building a service that is going to load this model and respond to requests to classify sentences. What is the best API to use regarding performance? We would prefer to build a C# service, as in https://github.com/Microsoft/CNTK/tree/master/Examples/Evaluation/CSEvalClient, but alternatively we are considering building a Python service that loads the model in Python.
Do you have any recommendations towards one or the other approach (regarding which API is faster, which is more actively maintained, or other parameters you can think of)? The next step would be to set up an experiment measuring the performance of both API calls, but we were wondering if there is some prior knowledge here that could help us decide.
Thank you

Both APIs are well developed/maintained. For text data I would go with the C# API.
In C# the main focus is fast and easy evaluation, and loading text data is straightforward.
The Python API is good for development/training of models, and at this time not much attention has been paid to evaluation. Furthermore, because of its wealth of packages, loading data in exotic formats is easier in Python than in C#.

The new C# Eval API based on CNTKLibrary will be available very soon (the first beta is probably next week). This API has functional parity with the C++ and Python APIs regarding evaluation.
This API supports using multiple threads to serve multiple evaluation requests in parallel, and even better, the parameters of the same loaded model are shared between these threads, which significantly reduces memory usage in a service environment.
We also have a tutorial about how to use the Eval API in an ASP.NET environment. It still refers to EvalDLL evaluation, but it applies to the new C# API too. The document will be updated after the new C# API is released.
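For the Python-service alternative mentioned in the question, a minimal sketch of loading and evaluating a trained model with the CNTK Python API might look like this (the model filename and the dummy input are assumptions, not taken from your setup):

import numpy as np
import cntk as C

# Load the trained model once at service start-up (the path is an assumption).
model = C.load_model("sentence_classifier.dnn")

# Build a feature vector for one incoming sentence; in a real service this
# would come from your text featurizer and must match the model's input shape.
features = np.zeros(model.arguments[0].shape, dtype=np.float32)

# Run the forward pass and pick the highest-scoring class.
scores = model.eval({model.arguments[0]: [features]})
predicted_class = int(np.argmax(scores))

Whichever API you choose, the pattern is the same: load the model once at start-up and call evaluation per request.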

Related

TinkerPop: Adding Vertex Graph API v/s Traversal API

Background:
In one of the SO posts it is recommended to use the Traversal API rather than the Graph API for mutations. So I tried out some tests and found the Graph API seemed to be faster. I do totally believe the advice, but I am trying to understand how it is better.
I did try googling but did not find a similar post.
Testing:
Query 1: Executed in 0.19734525680541992 seconds
g.addV('Test').property('title1', 'abc').property('title2', 'abc')
Query 2: Executed in 0.13838958740234375 seconds
graph.addVertex(label, "Test", "title1", "abc", "title2", "abc")
Question:
Which one is better and why?
If both are same then why the performance difference?
The Graph API is meant for graph providers and the Traversal API (which is really the Gremlin language) is meant for users. You definitely reduce the portability of your code by using the Graph API. The "server graphs" out there, like Amazon Neptune, DSE Graph, CosmosDB, etc., present environments that don't give you access to the Graph API and therefore you would never be able to switch to those if you felt like doing so. You also start to build your application around two APIs thus creating a non-unified approach to your development (i.e. in some cases you will pass around a Graph object for the Graph API and in some cases GraphTraversalSource for the Traversal API).
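For reference, the same mutation written purely against the Traversal API with gremlin-python might look like this (the server URL and the remote traversal source name are assumptions for this sketch):

from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

# Connect to a Gremlin Server; the URL and the traversal source name "g"
# are assumptions for this sketch.
connection = DriverRemoteConnection('ws://localhost:8182/gremlin', 'g')
g = traversal().withRemote(connection)

# The same mutation as Query 1, expressed only through the Traversal API.
# iterate() forces the traversal to execute.
g.addV('Test').property('title1', 'abc').property('title2', 'abc').iterate()

connection.close()

Code written this way works unchanged against any graph that speaks Gremlin, which is the portability point above.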
I don't know how you executed your tests, but it doesn't surprise me that much that you see a small difference in performance in micro-benchmarks. There is some cost to the Traversal API, but TinkerPop continues to improve in that area - consider the recently closed TINKERPOP-1950 as an example of something recent. I don't know for sure that this will help for your specific benchmark, as benchmarks are tricky things, but the point is that we haven't stopped trying to optimize in that area.
Finally, if discussions in the TinkerPop community continue in the direction they have been going for the past year, I would fully expect to see the Graph API disappear in TinkerPop 4.x. There is no timeline for this release and it is only in the discussion phase, but I would imagine that if you intend for your application to live for many years to come, this information might be of interest to you.

What Tensorflow API to use for Seq2Seq

This year Google produced 5 different packages for seq2seq:
seq2seq (claimed to be general purpose, but inactive)
nmt (active, but probably just about NMT)
legacy_seq2seq (clearly legacy)
contrib/seq2seq (probably not complete)
tensor2tensor (similar purpose, also under active development)
Which package is actually worth using for an implementation? It seems they are all different approaches, but none of them is stable enough.
I've had the same headache about this issue: which framework to choose? I want to implement OCR using an encoder-decoder with attention. I tried to implement it using legacy_seq2seq (it was the main library at the time), but it was hard to understand the whole process, and it certainly should not be used any more.
https://github.com/google/seq2seq: to me it looks like an attempt at a command-line training tool where you don't have to write your own code. If you want to train a translation model it should work, but in other cases (like my OCR) it may not, because there is not enough documentation and too few users.
https://github.com/tensorflow/tensor2tensor: this is very similar to the above, but it is maintained and you can add more of your own code, e.g. for reading your own dataset. The basic use case is again translation, but it also enables tasks like image captioning, which is nice. So if you want a ready-to-use library and your problem is txt->txt or image->txt, you could try this; it should also work for OCR. I'm just not sure there is enough documentation for every case (like using a CNN as the feature extractor).
https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/seq2seq: unlike the above, this is just a pure library, which is useful when you want to build a seq2seq model yourself in TF. It has functions for adding attention, sequence loss, etc. In my case I chose this option, since it gives much more freedom in choosing each piece of the pipeline: the CNN architecture, RNN cell type, bi- or uni-directional RNN, type of decoder, and so on (see the sketch at the end of this answer). But then you will need to spend some time getting familiar with the ideas behind it.
https://github.com/tensorflow/nmt: another translation framework, based on the tf.contrib.seq2seq library.
From my perspective, you have two options:
If you want to test the idea very quickly and be sure that you are using very efficient code, use the tensor2tensor library. It should help you get early results or even a very good final model.
If you want to do research, are not sure exactly what the pipeline should look like, or want to learn about the ideas behind seq2seq, use the library from tf.contrib.seq2seq.
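To make the tf.contrib.seq2seq option more concrete, here is a minimal TF 1.x sketch of an attention decoder built from that library; all sizes and placeholders, as well as the encoder itself, are assumptions rather than a complete OCR model:

import tensorflow as tf

# Hypothetical sizes; a real model derives these from the data.
num_units, vocab_size, embedding_dim = 256, 1000, 64

# Encoder outputs (e.g. CNN/RNN features) and decoder inputs are assumed given.
encoder_outputs = tf.placeholder(tf.float32, [None, None, num_units])
decoder_inputs = tf.placeholder(tf.float32, [None, None, embedding_dim])
decoder_lengths = tf.placeholder(tf.int32, [None])

# Attention over the encoder outputs, wrapped around an LSTM decoder cell.
attention = tf.contrib.seq2seq.BahdanauAttention(num_units, encoder_outputs)
cell = tf.contrib.seq2seq.AttentionWrapper(
    tf.nn.rnn_cell.LSTMCell(num_units), attention)

# Teacher forcing during training; at inference time you would swap in
# GreedyEmbeddingHelper or a beam search decoder.
helper = tf.contrib.seq2seq.TrainingHelper(decoder_inputs, decoder_lengths)
decoder = tf.contrib.seq2seq.BasicDecoder(
    cell, helper,
    initial_state=cell.zero_state(tf.shape(encoder_outputs)[0], tf.float32),
    output_layer=tf.layers.Dense(vocab_size))

outputs, _, _ = tf.contrib.seq2seq.dynamic_decode(decoder)
logits = outputs.rnn_output  # feed into tf.contrib.seq2seq.sequence_loss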

Converting a deep learning model from GPU powered framework, such as Theano, to a common, easily handled one, such as Numpy

I have been playing around with building some deep learning models in Python and now I have a couple of outcomes I would like to be able to show friends and family.
Unfortunately(?), most of my friends and family aren't really up to the task of installing any of the advanced frameworks that are more or less necessary to have when creating these networks, so I can't just send them my scripts in the present state and hope to have them run.
But then again, I have already created the nets, and just using the finished product is considerably less demanding than making it. We don't need advanced graph compilers or GPU compute powers for the show and tell. We just need the ability to make a few matrix multiplications.
"Just" being a weasel word, regrettably. What I would like to do is convert the the whole model (connectivity,functions and parameters) to a model expressed in e.g. regular Numpy (which, though not part of standard library, is both much easier to install and easier to bundle reliably with a script)
I fail to find any ready solutions to do this. (I find it difficult to pick specific keywords on it for a search engine). But it seems to me that I can't be the first guy who wants to use a ready-made deep learning model on a lower-spec machine operated by people who aren't necessarily inclined to spend months learning how to set the parameters in an artificial neural network.
Are there established ways of transferring a model from e.g. Theano to Numpy?
I'm not necessarily requesting those specific libraries. The main point is I want to go from a GPU-capable framework in the creation phase to one that is trivial to install or bundle in the usage phase, to alleviate or eliminate the threshold the dependencies create for users without extensive technical experience.
An interesting option for you would be to deploy your project to Heroku, as explained on this page:
https://github.com/sugyan/tensorflow-mnist
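If bundling or hosting a framework is not an option at all, the more manual route the question hints at is to export the learned parameters once and then re-implement the forward pass with plain Numpy. A minimal sketch, where the layer names, sizes, and activations are assumptions for a small two-layer classifier:

import numpy as np

# On the training machine, dump the parameters once, e.g.
#   np.savez("model_params.npz", W1=..., b1=..., W2=..., b2=...)
# For Theano shared variables, the arrays come from W.get_value().
params = np.load("model_params.npz")

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def predict(x):
    """Forward pass using nothing but Numpy matrix multiplications."""
    h = relu(x @ params["W1"] + params["b1"])
    return softmax(h @ params["W2"] + params["b2"])

The export step runs once on the machine that has the deep learning framework installed; the script your friends and family run only needs Numpy.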

What's the difference between INuiFusionColorReconstruction::IntegrateFrame and ProcessFrame?

I'm learning KinectFusion and hope to use it to build a reconstruction application for 3D printing.
Currently I'm confused by the IntegrateFrame and ProcessFrame methods of INuiFusionColorReconstruction.
ProcessFrame has one extra parameter named maxAlignIterationCount. Does that mean ProcessFrame integrates multiple times while IntegrateFrame integrates only once? Since ProcessFrame also takes only one reference frame, what would be the benefit of integrating multiple times?
From Microsoft's website:
"...the ProcessFrame function in the INuiFusionReconstruction interface... encapsulates the camera tracking (AlignDepthFloatToReconstruction) and the data integration step (IntegrateFrame) in one function call to be easier to call and more efficient as all processing takes place on the GPU without upload and readback for the individual steps as would occur when calling separately."
Kinect for Windows 1.7, 1.8

What is the relationship between Sesame & Alibaba?

I am a beginner in this and I am having a hard time understanding it.
What are AliBaba and Sesame?
Of the two, which one does the query optimization and which one handles creating repositories?
Any kind of input will be fine. Thanks.
"AliBaba is a RESTful subject-oriented client/server library for distributed persistence of files and data using RDF metadata. AliBaba is the beta version of the next generation of the Elmo codebase. It is a collection of modules that provide simplified RDF store abstractions to accelerate development and facilitate application maintenance."
http://www.openrdf.org/alibaba.jsp
"Sesame is a de-facto standard framework for processing RDF data. This includes parsing, storing, inferencing and querying of/over such data. It offers an easy-to-use API that can be connected to all leading RDF storage solutions."
http://www.openrdf.org/about.jsp
I imagine the query engine, query optimization and storage are part of Sesame, not Alibaba. Alibaba is application code which sits on top of Sesame.
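To make that division of labour concrete: once a repository exists on a Sesame server, you can query it directly over Sesame's HTTP SPARQL endpoint, with AliBaba being an optional layer on top. A small sketch from Python using the requests library; the server URL and repository id are assumptions:

import requests

# Sesame exposes each repository at .../openrdf-sesame/repositories/<id>;
# the host, port, and repository id ("myrepo") here are assumptions.
endpoint = "http://localhost:8080/openrdf-sesame/repositories/myrepo"
query = "SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10"

response = requests.get(
    endpoint,
    params={"query": query},
    headers={"Accept": "application/sparql-results+json"},
)
for binding in response.json()["results"]["bindings"]:
    print(binding["s"]["value"], binding["p"]["value"], binding["o"]["value"])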
There are also alternatives in Java, such as Apache Jena:
http://incubator.apache.org/jena/
Guess what I use? ;-)