Microsoft Azure Translation API

I am building a machine translator using the Microsoft Translator API and the Translator Hub. I am trying to build a Japanese to English translator, so I trained the system from Japanese to English on some documents. However, I am confused about the MT and Reference outputs.
My question is: why is the API translation from Microsoft different from both the MT and the Reference?
I thought the API provided by Microsoft equals the MT, which becomes the Reference after training? Could you give me some idea?

The Ref: text that you see is part of the training/test data you provided. The MT text that you see is the translation of the same source text produced by the trained translation model.
This is what you see immediately after you have trained a system.
Once you have deployed a trained system, the MT text that you see will be the result of a Translate API call.
The current API translation that you see may differ because a differently trained model is deployed there.
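To illustrate that last point with a concrete call: which model answers a Translate request depends on what is deployed, and the category parameter is what routes a request to a custom-trained system. Below is a minimal Python sketch of a Translator Text API v3.0 request; the key, region, and category ID are placeholders you must supply, and without the category parameter the stock model answers, which is one reason the API output can differ from your MT column.

import requests

# Placeholder values - substitute your own key, region, and category ID.
SUBSCRIPTION_KEY = "<your-subscription-key>"
REGION = "<your-region>"
CATEGORY_ID = "<your-category-id>"  # routes the request to your custom-trained system

url = "https://api.cognitive.microsofttranslator.com/translate"
params = {
    "api-version": "3.0",
    "from": "ja",
    "to": "en",
    "category": CATEGORY_ID,  # omit this to use the general (stock) model
}
headers = {
    "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
    "Ocp-Apim-Subscription-Region": REGION,
    "Content-Type": "application/json",
}
body = [{"text": "こんにちは世界"}]

response = requests.post(url, params=params, headers=headers, json=body)
print(response.json()[0]["translations"][0]["text"])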

Related

How am I supposed to use the TF model garden beta API?

The TF garden library provides vision-related beta features in https://github.com/tensorflow/models/tree/master/official/vision/beta.
I am using it because, as far as I know, this library is the only place where the ResNet-RS model can be trained. However, the beta API seems to have a very different interface and internal mechanics from the original API (image classification in particular). In particular, the beta features are not documented, and the code seems to be updated almost every day. The README.md file contains a single sentence: "This directory contains the new design of TF model garden vision framework."
Are users supposed to use the beta API? Or is it a work in progress, meaning I need to write a custom implementation? Is there documentation somewhere else?

Is this the correct way of using YOLO for image classification in a custom project?

I'm a beginner in computer vision. Could anyone tell me whether what I'm planning to do is correct? I want to detect a certain kind of cyst in teeth, so my dataset consists of crops of dental x-rays that contain that cyst. I train my model on these pictures. The one with the colored area contains a cyst (infected teeth), and the one below it shows uninfected teeth.
Image with cyst
Uninfected teeth
After training my model, I want to use it on a full dental x-ray, and determine if this picture has the cyst or not. A full dental x-ray is shown below.
Full dental X-Ray
Will this work? Or am I completely wrong?
Instead of treating this as an object detection problem, you would get far better results if you treated it as a classification problem.
There are already various architectures for such classification tasks, and TensorFlow provides several to get you started.
Take a look at this. If you have enough data, you can train them from scratch instead of using pre-trained weights; a sketch follows below.
Note - the architectures provided in TensorFlow will almost always give you better results than architectures you create yourself.
Object detection is suitable for cases where you have well-defined objects. If you take a look at recently published research papers, you will see that these kinds of problems are treated as classification problems rather than object detection problems.
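As a hedged illustration of that advice, here is a minimal two-class (cyst / no cyst) classifier built on a stock TensorFlow architecture. The directory layout, image size, and the choice of EfficientNetB0 are all assumptions; pre-trained weights are optional, as noted above.

import tensorflow as tf

# Assumed layout: data/train/cyst/*.png and data/train/no_cyst/*.png
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=(224, 224), batch_size=32, label_mode="binary")

# A stock architecture from tf.keras.applications; pass weights=None instead
# of "imagenet" to train from scratch if you have enough data.
base = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet",
    input_shape=(224, 224, 3), pooling="avg")
base.trainable = False  # start by training only the new classification head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(1, activation="sigmoid"),  # cyst vs. no cyst
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=10)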

(Microsoft Azure custom vision service: Object Detection) How to find bounding box info of training data?

I am using the Microsoft Custom Vision service for object detection to extract the objects I want, and I would like to run a regression test to compare the results. However, I cannot find a way to export the training pictures together with the bounding boxes that the user defined in the GUI.
The model training is done within the Custom Vision platform provided by Microsoft (https://www.customvision.ai/). Within this platform we can add the images and then tag the objects. I have tried to export the model, but I am not sure where to find the info about the training pictures along with their tag(s) and bounding box(es).
I expected that in this platform the user could export not only the trained model but also the training data (images with tags and bounding boxes), but I was not able to find it.
All the data that you are looking for is available through the Custom Vision Training API. Currently the latest API is v3.0; its portal is here.
In more detail, the GetTaggedImages method will give you the associations between images and region bounding boxes.
A sample result of this method from one of my demos shows, for each image, its tags and the boundingBox of each region.
With these details, you will be able to fetch each image and place the boundingBox that was used for training. A sketch of calling this from Python follows.
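Here is a minimal sketch using the azure-cognitiveservices-vision-customvision Python package; the training key, endpoint, and project ID are placeholders you must supply.

from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient
from msrest.authentication import ApiKeyCredentials

# Placeholder credentials - substitute your own training key, endpoint, and project ID.
credentials = ApiKeyCredentials(in_headers={"Training-key": "<training-key>"})
trainer = CustomVisionTrainingClient("<endpoint>", credentials)
project_id = "<project-id>"

# GetTaggedImages is paged; take returns up to 256 images per call.
for image in trainer.get_tagged_images(project_id, take=256):
    for region in image.regions:
        # left/top/width/height are normalized to [0, 1] relative to the image size.
        print(image.id, region.tag_name,
              region.left, region.top, region.width, region.height)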
Please see the following link for exporting your model. Custom Vision Service exports compact domains; the models generated by compact domains are optimized for the constraints of real-time classification on mobile devices. If you want to export your training data from Custom Vision, please see the following link.

Tensorflow Stored Learning

I haven't tried Tensorflow yet but I'm still curious: how, and in what form (data type, file type), does it store the acquired learning of a machine-learning program for later use?
For example, Tensorflow was used to sort cucumbers in Japan. The computer took a long time to learn, from the example images it was given, what good cucumbers look like. In what form was that learning saved for future use?
I ask because it would be inefficient if the program had to re-learn from the images every time it needs to sort cucumbers.
Ultimately, a high-level way to think about a machine learning model is as three components: the code for the model, the data for that model, and the metadata needed to make the model run.
In Tensorflow, the code for the model is written in Python and is saved in what is known as a GraphDef. This uses a serialization format created at Google called Protobuf. (Other libraries commonly use formats such as Python's native Pickle.)
The main reason you write this code is to "learn" from some training data, which ultimately produces a large set of matrices full of numbers. These are the "weights" of the model, and they too are stored using Protobuf, although other formats like HDF5 exist.
Tensorflow also stores metadata associated with the model: for instance, what the input should look like (e.g. an image? some text?) and the output (e.g. a class label such as cucumber grade 1 or 2? with scores or without?). This too is stored in Protobuf.
At prediction time, your code loads up the graph, the weights, and the metadata, and takes some input data to produce an output. More information here.
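A minimal sketch of that save-then-reload cycle with the tf.keras API under TensorFlow 2.x; the model, path, and data here are made up for illustration.

import numpy as np
import tensorflow as tf

# Train a tiny toy model on random data.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="adam", loss="mse")
model.fit(np.random.rand(100, 4), np.random.rand(100, 1), epochs=2, verbose=0)

# Persist graph + weights + metadata to disk as a SavedModel:
# a saved_model.pb protobuf plus a variables/ folder.
model.save("cucumber_model")

# Later - possibly in a different process - reload and predict without retraining.
restored = tf.keras.models.load_model("cucumber_model")
print(restored.predict(np.random.rand(1, 4)))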
Are you talking about the symbolic math library, or the idea of tensor flow in general? Please be more specific here.
Here are some resources that discuss the library and tensor flow
These are some tutorials
And here is some background on the field
And this is the github page
If you want a more specific answer, please give more details as to what sort of work you are interested in.
Edit: I'm presuming your question is more about the general field than any particular application. Your question is still too vague for this website, but I'll try to point you toward a few resources you might find interesting.
Tensorflow as used in image recognition typically operates on an ANN (Artificial Neural Network). What this means is that the tensorflow library handles the number crunching for the neural network, which I'm sure you can read all about with a quick google search.
The point is that tensorflow isn't a form of machine learning itself; it serves as a number-crunching library, similar to numpy in Python, for large-scale deep learning. You should read more here.
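To make the numpy comparison concrete, here is a throwaway sketch of tensorflow used as a plain numeric library, with no machine learning involved:

import numpy as np
import tensorflow as tf

a = np.arange(6.0).reshape(2, 3)

# The same matrix product, once with numpy and once with tensorflow ops.
print(np.matmul(a, a.T))
print(tf.matmul(a, tf.transpose(a)))  # tensorflow does the number crunching, on a GPU if available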

Tensorflow serving using the client

I successfully created a server that loads a TF saved_model, but now I want to send it queries and get predictions.
However, I'm having a hard time understanding how the client works and how to implement one.
All I found online is the basic tutorial, but it only gives the client code for MNIST, which doesn't fit my own model.
So can anyone show me how to use or implement the client for a different model?
Thanks
I really thank Google for making tensorflow serving open source; it is so helpful for people like me who want to put prediction models into production. But I have to admit that tensorflow serving is poorly documented, or rather, it assumes that its users already have pretty good knowledge of tensorflow. I was stuck for a long time trying to understand how it works. The website introduces the concepts and examples well, but there is something missing in between.
I recommend the tutorial here. This is the first part; you can also follow the second part, the link to which is in that article.
In general, when you export your .ckpt files to a servable model (a .pb file plus a variables folder), you have to define the input, output, and method name of your model and save them as a signature with tf.saved_model.signature_def_utils.build_signature_def.
In the article, you will find what I said above in this part:
tf.saved_model.signature_def_utils.build_signature_def(
    inputs={'images': predict_tensor_inputs_info},
    outputs={'scores': predict_tensor_scores_info},
    method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME)
You can follow how the author defined the input and output in the article and do the same for your customized model; for context, a sketch of the surrounding export code follows.
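Once that signature exists, the export is completed with a SavedModelBuilder. A sketch under the TF 1.x API, where x and y stand for your model's input and output tensors, sess is your session, and the path and the signature name 'predict_images' are just example values:

import tensorflow as tf

# x and y are your model's input and output tensors (hypothetical here).
predict_tensor_inputs_info = tf.saved_model.utils.build_tensor_info(x)
predict_tensor_scores_info = tf.saved_model.utils.build_tensor_info(y)

signature = tf.saved_model.signature_def_utils.build_signature_def(
    inputs={'images': predict_tensor_inputs_info},
    outputs={'scores': predict_tensor_scores_info},
    method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME)

# The version subdirectory ('1') is what tensorflow serving watches for.
builder = tf.saved_model.builder.SavedModelBuilder('/tmp/my_model/1')
builder.add_meta_graph_and_variables(
    sess, [tf.saved_model.tag_constants.SERVING],
    signature_def_map={'predict_images': signature})
builder.save()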
After that, in your client script you call the signature and feed input to the server; the server will then recognize which method to use and return the output. You can check how the author wrote the client script and find the corresponding parts that call the signature and feed the input.
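For completeness, a minimal gRPC client sketch matching the signature above; the model name, signature name, port, and input shape are assumptions you must adapt to your own export.

import grpc
import numpy as np
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

channel = grpc.insecure_channel('localhost:8500')  # default tensorflow serving gRPC port
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

request = predict_pb2.PredictRequest()
request.model_spec.name = 'my_model'                  # --model_name given to the server
request.model_spec.signature_name = 'predict_images'  # key used in signature_def_map
request.inputs['images'].CopyFrom(                    # 'images' matches the signature inputs
    tf.make_tensor_proto(np.zeros((1, 28, 28), dtype=np.float32)))

result = stub.Predict(request, timeout=10.0)
print(result.outputs['scores'])  # 'scores' matches the signature outputs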