Can Rasa NLU share the same spaCy model among multiple models?

I am using Rasa NLU. I have three models trained with the same pipeline but different training datasets. The pipeline uses spaCy for tokenization and for building word vectors.
When I load all three of those models into memory, exactly how many times does Rasa load the spaCy en_core_web_lg model into memory? Can we share the same spaCy model between multiple trained NLU models?

The spaCy model will be loaded into memory each time you train a model that uses it. It will, however, only be downloaded once, so in that sense the same model is shared by all NLU models trained in the same environment.
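For context, the expensive step on the spaCy side is the spacy.load call. As a minimal sketch (plain Python, not Rasa internals; the helper functions are hypothetical), sharing one loaded Language object looks like this:

import spacy

# Load the large English model once; this is the slow, memory-heavy step.
nlp = spacy.load("en_core_web_lg")

def tokenize(text):
    # Reuse the shared Language object instead of loading it again.
    return [token.text for token in nlp(text)]

def doc_vector(text):
    # The same shared object also provides word/document vectors.
    return nlp(text).vector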

Related

What is the difference between TFHub and Model Garden?

TensorFlow Hub is a repository of pre-trained models. Model Garden (also known as the Model Zoo) likewise hosts state-of-the-art models and provides facilities for downloading and using them, and both projects come from the TensorFlow team.
Why did TensorFlow create two separate concepts for a model repository?
When should we use TF Hub to retrieve a well-known model, and when should we use Model Garden to download one? What is the difference between them?
TF Hub provides trained models in SavedModel, TFLite, or TF.js format. These artifacts can be used for inference and some can be used in code for fine-tuning. TF Hub does not provide modeling library code to train your own models from scratch.
Model Garden is a modeling library for training BERT, image classification models, and more. Model Garden provides code for training your own models from scratch as well as some checkpoints to start from.
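As a concrete illustration of the TF Hub side, a minimal sketch of pulling a pre-trained model for inference (the Universal Sentence Encoder handle is just one public example) might look like:

import tensorflow_hub as hub

# Download (and cache) a pre-trained embedding model from TF Hub, then run inference.
embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")
embeddings = embed(["TF Hub serves ready-made models for inference."])
print(embeddings.shape)  # (1, 512) for this particular model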

TensorFlow vs TensorFlow Lite for a mobile app's ML data pipeline

I want to build an ML data pipeline for a recommender system in a dating mobile app.
I am currently at a very early stage, trying to figure out the infrastructure, and I am confused about TensorFlow versus TensorFlow Lite.
Can I build the system using TensorFlow and then, after training, hyperparameter tuning, etc., deploy the model in the backend?
Is it mandatory to use TensorFlow Lite whenever you want to use ML on mobile, or is it only needed when you actually want to train the model on a phone device?
TensorFlow Lite is mainly for inference use cases. After training TF models on the desktop/server side, you can convert the trained TF model to a corresponding TensorFlow Lite model and deploy it to mobile, applying techniques such as quantization along the way.
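For illustration, a minimal conversion sketch with the standard TFLite converter API (the SavedModel path is a placeholder) could look like:

import tensorflow as tf

# Convert a trained SavedModel to TFLite, with optional dynamic-range quantization.
converter = tf.lite.TFLiteConverter.from_saved_model("path/to/saved_model")  # placeholder path
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enables quantization
tflite_model = converter.convert()
with open("recommender.tflite", "wb") as f:
    f.write(tflite_model)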

How to see the differences between two TFLite models

I have two TensorFlow Lite models (both are YOLOv2 Tiny models):
Model A) Downloaded from the internet; detects and classifies objects with 80 classes. The .tflite file weighs 44.9 MB.
Model B) Trained by myself using Darknet; detects and classifies objects with 52 classes. The .tflite file weighs 20.8 MB. The model was converted to TFLite using Darkflow.
However, both on a mobile phone and on a computer, model B takes 10x more time to predict than model A (even though model B detects fewer classes and its file is lighter). Both models seem to work with input images of size 416x416 and use floating-point numbers.
What could be the reason for model A being faster than model B?
How can I find out why model A is faster?
One of the problems is that, since I did not train model A myself, I don't have its .cfg file with the whole setup...
You should try the following two approaches to gain more insight, since there can be several reasons why a model turns out slower than expected.
1) Inspect both networks with a tool like Netron. You can upload the flatbuffer (TF Lite) model file and visualize the network architecture after TF Lite conversion. There you can see where the two models differ; if, for example, model B contains additional Reshape operations or the like compared to model A, that could well be the reason. You can download Netron from https://github.com/lutzroeder/netron.
2) Measure the time the model spends on each of its layers. For this you can use the TF Lite benchmark tool provided directly in the TensorFlow repository: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/tools/benchmark/README.md. A rough end-to-end timing alternative in Python is sketched below.
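If you want a quick comparison from Python before setting up the benchmark binary, a rough timing sketch with the TF Lite Interpreter (model paths are placeholders; this measures whole-model latency, not per-layer times) could look like:

import time
import numpy as np
import tensorflow as tf

def average_latency(path, runs=10):
    # Load the .tflite model and feed it a random input of the right shape.
    interpreter = tf.lite.Interpreter(model_path=path)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    dummy = np.random.rand(*inp["shape"]).astype(inp["dtype"])
    interpreter.set_tensor(inp["index"], dummy)
    interpreter.invoke()  # warm-up run
    start = time.perf_counter()
    for _ in range(runs):
        interpreter.invoke()
    return (time.perf_counter() - start) / runs

print("Model A:", average_latency("model_a.tflite"))  # placeholder paths
print("Model B:", average_latency("model_b.tflite"))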

How can I get access to intermediate activation maps of the pre-trained models in NiftyNet?

I downloaded and successfully tested the brain parcellation demo of the NiftyNet package. However, this only gives me the final parcellation result of a pre-trained network, whereas I need access to the output of the intermediate layers too.
According to this demo, the following line downloads a pre-trained model and a test MR volume:
wget -c https://www.dropbox.com/s/rxhluo9sub7ewlp/parcellation_demo.tar.gz -P ${demopath}
where ${demopath} is the path to the demo folder. Extracting the downloaded file creates a .ckpt file which seems to contain a pre-trained TensorFlow model; however, I could not manage to load it into a TensorFlow session.
Is there a way to load the pre-trained model and get access to all of its intermediate activation maps? In other words, how can I load the pre-trained models from the NiftyNet library into a TensorFlow session so that I can explore the model or probe a certain intermediate layer for any given input image?
Finally, NiftyNet's website mentions that "a number of models from the literature have been (re)implemented in the NiftyNet framework". Are pre-trained weights of these models also available? The demo uses a pre-trained model called HighRes3DNet. If pre-trained weights of other models are available as well, where can I download those weights or saved TensorFlow models?
To answer your 'Finally' question first, NiftyNet has some network architectures implemented (e.g., VNet, UNet, DeepMedic, HighRes3DNet) that you can train on your own data. For a few of these, there are pre-trained weights for certain applications (e.g. brain parcellation with HighRes3DNet and abdominal CT segmentation with DenseVNet).
Some of these pre-trained weights are linked from the demos, like the parcellation one you linked to. We are starting to collect the pre-trained models into a model zoo, but this is still a work in progress.
Eli Gibson [NiftyNet developer]
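As a starting point for exploring the downloaded checkpoint itself, a minimal sketch using TensorFlow 1.x's checkpoint reader (the checkpoint path is a placeholder) can at least list the stored variables; fetching actual activation maps additionally requires rebuilding the network graph and running the desired intermediate tensors in a session:

import tensorflow as tf

# Open the downloaded checkpoint and list the variables it contains (TF 1.x API).
reader = tf.train.NewCheckpointReader("parcellation_demo/model.ckpt")  # placeholder path
for name, shape in sorted(reader.get_variable_to_shape_map().items()):
    print(name, shape)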

Can we use spaCy with MXNet

Can we use spaCy with MXNet to build a deep neural network (NLP)?
We are building an application using MXNet. How can we use spaCy with MXNet?
spaCy and MXNet serialize their models differently, so they are not directly compatible.
You can, however, leverage spaCy's pre-trained models as a preprocessing step for your text data and then feed the result into an MXNet model. Aim to get your text data into NDArray format (using mx.nd.array), as shown below.
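A minimal sketch of that hand-off, assuming en_core_web_lg for its 300-dimensional vectors, could look like:

import spacy
import mxnet as mx

# Use spaCy's pre-trained vectors as features, then hand them to MXNet as an NDArray.
nlp = spacy.load("en_core_web_lg")
doc = nlp("MXNet and spaCy can work together in a pipeline.")
features = mx.nd.array([token.vector for token in doc])  # shape: (num_tokens, 300)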
Also take a look at MXNet's Model Zoo (https://mxnet.apache.org/model_zoo/index.html), which contains a number of models for NLP tasks, Word2Vec embedding being one example.