Making UML sequence diagrams in VS 2010 RC
I've observed that there is no activation rectangle on the first object. Is this correct? Not according to my tutor, whom I have to quote:
"Finally, you have no activation rectangle for the userInterface instance, so the initial message could never have been sent."
But I'm thinking that if the VS team did it that way, it must/should be correct.
The activation rectangle was removed from the UML 2 specification because it is not a UML metamodel element. It was purely graphical, and no metamodel equivalent was possible.
I think that in this respect the UML 2 sequence diagram is a regression compared to UML 1.x. Just because the activation bar has no metamodel element does not mean it isn't important graphical information!
Some tools still draw activation rectangles with UML 2 because they refused to drop this important piece of UML graphical notation.
So VS 2010 is right: activation rectangles are no longer part of UML 2, but...
I need to classify an object into multiple classes. Normally we are familiar with multi-class classification over a single hierarchy, but in my case I have two levels of hierarchy. See the images below to get a clear picture of what I am talking about. If I classify an image, it should give me all three classes: one main class and two subclasses, for example Class-1-1-1 or Class-1-2-3.
A solution for either framework, TensorFlow or PyTorch, will work. Thank you.
If you know your hierarchy tree, wouldn't it be enough to do multi-class classification on the leaves (the final classes), and then look up the parent classes in the tree for a given prediction?
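A minimal sketch of that idea, using a hypothetical two-level hierarchy (the class names extend the Class-x-y-z pattern from the question):

```python
# Hypothetical hierarchy: each leaf class maps back to its ancestors.
PARENTS = {
    "Class-1-1-1": ("Class-1", "Class-1-1"),
    "Class-1-2-3": ("Class-1", "Class-1-2"),
    "Class-2-1-1": ("Class-2", "Class-2-1"),
}

def full_labels(leaf):
    """Given a leaf prediction, return (main class, subclass, leaf)."""
    top, mid = PARENTS[leaf]
    return (top, mid, leaf)

print(full_labels("Class-1-2-3"))  # ('Class-1', 'Class-1-2', 'Class-1-2-3')
```

The classifier itself only ever predicts leaves; the hierarchy is recovered afterwards by a table lookup, so no second model is needed.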
I am currently testing custom object detection with the TensorFlow API, but I don't quite understand the theory behind it.
If I, for example, download a version of MobileNet and train it on, let's say, red and green apples, does it forget all the things it has already been trained on? And if so, why is it beneficial to use MobileNet rather than building a CNN from scratch?
Thanks for any answers!
Does it forget all the things it has already been trained on?
Yes, if you re-train a CNN previously trained on a large database using a new database containing fewer classes, it will "forget" the old classes. However, the old pre-training can help it learn the new classes; this training strategy is called "transfer learning" or "fine-tuning", depending on the exact approach.
As a rule of thumb, it is generally not a good idea to create a new network architecture from scratch, as better networks probably already exist. You may want to implement a custom architecture if:
You are learning CNNs and deep learning
You have a specific need and you proved that other architectures won't fit or will perform poorly
Usually, one takes an existing pre-trained network and specializes it for the specific task using transfer learning.
A lot of scientific literature is available for free online if you want to learn. You can start with the YOLO series, and with R-CNN, Fast R-CNN, and Faster R-CNN for detection networks.
The main concept behind object detection is that it divides the input image into a grid of N patches, and then for each patch it generates a set of sub-patches with different aspect ratios; say it generates M rectangular sub-patches. In total you need to classify M×N images.
The idea is then to analyze each sub-patch within each patch. You pass the sub-patch to the classifier in your model and, depending on the training, it will classify it as containing a green apple, a red apple, or nothing. If it is classified as a red apple, then this sub-patch is the bounding box of the detected object.
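The grid-and-sub-patch generation above can be sketched in a few lines (the grid size and aspect ratios here are arbitrary illustration values):

```python
# Sketch: generate M rectangular sub-patches (anchors) per cell of an NxN grid.
def make_anchors(img_w, img_h, grid=4, ratios=(0.5, 1.0, 2.0)):
    cell_w, cell_h = img_w / grid, img_h / grid
    boxes = []
    for gy in range(grid):
        for gx in range(grid):
            cx = (gx + 0.5) * cell_w            # patch centre
            cy = (gy + 0.5) * cell_h
            for r in ratios:                    # one sub-patch per aspect ratio
                w = cell_w * (r ** 0.5)
                h = cell_h / (r ** 0.5)
                boxes.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return boxes

anchors = make_anchors(224, 224)
print(len(anchors))  # 4*4 grid cells x 3 ratios = 48 boxes to classify
```

Each of those 48 boxes would then be fed to the classifier, which is exactly the M×N classification count mentioned above.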
So actually, there are two parts you are interested in:
Generating as many sub-patches as possible to cover as many portions of the image as possible (of course, the more sub-patches, the slower your model will be), and
The classifier. The classifier is normally an already existing network (MobileNet, VGG, ResNet...). This part is commonly called the "backbone", and it extracts the features of the input image. You can either train the classifier from scratch, so that its weights are adjusted to your specific problem, or load the weights from another known problem and reuse them so you won't need to spend time training them. In the latter case, it will also classify the objects the classifier was originally trained on.
Take a look at the Mask R-CNN implementation; I find the way they explain the process very interesting. In this architecture you not only generate a bounding box but also segment the object of interest.
I'm trying to work out the best model to adapt for an open named entity recognition problem (biology/chemistry, so no dictionary of entities exists; they have to be identified from context).
Currently my best guess is to adapt Syntaxnet so that instead of tagging words as N, V, ADJ etc, it learns to tag as BEGINNING, INSIDE, OUT (IOB notation).
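To make the IOB idea concrete, here is a toy illustration with a hypothetical chemistry sentence (the tokens and tags are invented for the example; real tags would come from the model):

```python
# IOB tagging: B = begins an entity, I = inside it, O = outside any entity.
tokens = ["The", "sodium", "chloride", "sample", "dissolved"]
tags   = ["O",   "B",      "I",        "O",      "O"]

# Decode tag sequence back into entity spans.
entities, current = [], []
for tok, tag in zip(tokens, tags):
    if tag == "B":                      # start a new entity
        if current:
            entities.append(" ".join(current))
        current = [tok]
    elif tag == "I" and current:        # continue the current entity
        current.append(tok)
    else:                               # O tag closes any open entity
        if current:
            entities.append(" ".join(current))
        current = []
if current:
    entities.append(" ".join(current))

print(entities)  # ['sodium chloride']
```

Any tagger that emits one IOB label per token (Syntaxnet-style or LSTM+CRF) can be decoded this way.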
However I am not sure which of these approaches is the best?
Syntaxnet
word2vec
seq2seq (I think this is not the right one as I need it to learn on two aligned sequences, whereas seq2seq is designed for sequences of differing lengths as in translation)
I would be grateful for a pointer to the right method. Thanks!
Syntaxnet can be used for named entity recognition; e.g. see: Named Entity Recognition with Syntaxnet
word2vec alone isn't very effective for named entity recognition. I don't think seq2seq is commonly used either for that task.
As drpng mentions, you may want to look at tensorflow/tree/master/tensorflow/contrib/crf. Adding an LSTM before the CRF layer would help a bit, which gives something like:
LSTM+CRF code in TensorFlow: https://github.com/Franck-Dernoncourt/NeuroNER
I know this question has been asked several times before, but I didn't find much on Google except a few packages written by various authors. In any case, is there any plan to officially include the ROI pooling layer in TensorFlow? It is a vital component for object detection and other tasks, and not having access to it is a pain when using TensorFlow.
Any comments or alternate implementations (if verified) are welcome.
I was able to answer my own question with the paper above. You can use the tf.image.crop_and_resize function to crop any part of the feature map and resize it. Similar to ROI pooling, you can crop a bounding box (scaled down by the total downsampling factor, e.g. 32 in VGG16) and resize it to N×N (e.g. 7×7 in VGG16), which can then be fed to the fully connected layer.
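A minimal sketch of that ROI-pooling substitute, assuming TensorFlow 2.x; the feature-map shape and box coordinates are arbitrary example values:

```python
import tensorflow as tf

# Pretend feature map, e.g. VGG16 conv output: [batch, height, width, channels].
feature_map = tf.random.normal([1, 7, 7, 512])

# One box in normalized [y1, x1, y2, x2] coordinates on the feature map,
# and the batch index each box refers to.
boxes = tf.constant([[0.1, 0.1, 0.6, 0.6]])
box_indices = tf.constant([0])

# Crop the box region and resize it to a fixed 7x7 grid, like ROI pooling.
pooled = tf.image.crop_and_resize(feature_map, boxes, box_indices,
                                  crop_size=[7, 7])
print(pooled.shape)  # (1, 7, 7, 512)
```

The fixed crop_size is what makes this usable in place of ROI pooling: every region, whatever its size, comes out as the same-shape tensor for the fully connected layer.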
I created a model in WinBUGS, but when clicking the Model->Update menu I noticed that the adapting option is disabled, so the inference will include all MCMC samples from the very beginning. This is not the case for some of the WinBUGS examples. Has anyone seen this before? What kind of model setup can trigger disabling of the adapting option?
Not all models require an adaptation phase; it depends on which samplers are being used and whether those sampling algorithms require any tuning. For example, a Metropolis(-Hastings) algorithm requires tuning of the proposal distribution variance, but a Gibbs sampler has no parameter to tune. The choice of sampler (in JAGS, and I assume in BUGS also) is determined by the software from the structure of your model at compile time. An extreme example is a model with no data at all, in which case everything is simply forward-sampled from the priors and no adaptation takes place.
Note that you should still be able to burn in the model (run it without recording samples), so it is not true that all MCMC samples need to be included from the beginning. Adaptation is usually a relatively small part of the burn-in phase anyway.