Can we use spaCy with MXNet?

Can we use spaCy with MXNet to build a deep neural network (NLP)?
We are building an application using MXNet. How can we use spaCy with MXNet?

spaCy and MXNet serialize their models differently, so they are not directly compatible.
You can, however, leverage spaCy's pre-trained models as a preprocessing step for your text data, and then feed the result into an MXNet model. Aim to get your text data into NDArray format (using mx.nd.array).
Also take a look at MXNet's Model Zoo (https://mxnet.apache.org/model_zoo/index.html), which contains a number of models for NLP tasks; the Word2Vec embedding is one example.
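As a minimal sketch of that preprocessing idea (the en_core_web_md model and the use of per-document vectors as features are assumptions, not part of the original answer):

```python
import numpy as np
import mxnet as mx
import spacy

# Assumption: the medium English model, which ships with word vectors.
nlp = spacy.load("en_core_web_md")

texts = ["MXNet is a deep learning framework.", "spaCy is an NLP library."]

# Use spaCy to turn each text into a fixed-size document vector.
doc_vectors = [nlp(text).vector for text in texts]

# Convert the features into an MXNet NDArray, ready to feed into a network.
features = mx.nd.array(np.vstack(doc_vectors))
print(features.shape)  # (2, 300) for en_core_web_md
```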

Related

How to convert a classification model from fastText to a TensorFlow SavedModel to be used in BigQuery ML?

I have a trained model saved from fastText. The model was saved with model.save_model("model.bin").
I need to load the model into BigQuery ML and run it there. BigQuery ML allows importing a TensorFlow model, which is why I want to convert my fastText model to a TensorFlow model. I have searched a lot and am convinced that it can be done, but I have not found a clear explanation of how to do it. Any help would be appreciated.

How to do fine-tuning on a TFLite model

I would like to fine-tune a model on my own data. However, the model is distributed in TFLite format. Is there any way to extract the model architecture and parameters from the tflite file?
One approach could be to convert the TFLite file to another format and import it into a deep learning framework that supports training.
Something like ONNX, using tflite2onnx, and then import it into a framework of your choice. Not all frameworks can import from ONNX (PyTorch, for example, cannot). I believe you can train with ONNX Runtime and MXNet. I'm unsure whether you can train using TensorFlow.
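As a rough sketch of that conversion step (the file names are assumptions):

```python
import tflite2onnx

# Convert the TFLite flatbuffer into an ONNX graph (paths are assumptions).
tflite_path = "model.tflite"
onnx_path = "model.onnx"
tflite2onnx.convert(tflite_path, onnx_path)

# The resulting model.onnx can then be imported into a framework that
# supports loading ONNX models, e.g. MXNet.
```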
I'm not sure I understand what you need, but if you want to know the exact architecture of your model, you can use Netron to inspect it; it will show you the model graph layer by layer.
Also, for your information, TensorFlow Lite models are not meant to be fine-tuned. You need to fine-tune a regular TensorFlow model and then convert it to TensorFlow Lite.
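In other words, the usual workflow is roughly the following sketch (the SavedModel path and output file name are assumptions):

```python
import tensorflow as tf

# 1. Fine-tune a regular TensorFlow/Keras model first (not shown here),
#    then save it, e.g. as a SavedModel directory.
# 2. Convert the fine-tuned model to TensorFlow Lite.
converter = tf.lite.TFLiteConverter.from_saved_model("finetuned_model")
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```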

How to fine-tune spaCy's word vectors

I am predicting similarities of documents using the pre-trained spaCy word embeddings. Because I have a lot of domain-specific words, I want to fine-tune my vectors on a rather small data set containing my domain-specific vocabulary.
My idea was to just train the spaCy model again with my data. But since the word vectors in spaCy are built in, I am not sure how to do that. Is there a way to train the spaCy model again with my data?
After some research, I found out that I can train my own vectors using Gensim. I would have to download a pre-trained model, for example the Google News dataset model, and afterwards train it again with my data set. Is this the only way? Or is there a way to proceed with my spaCy model?
Any help is greatly appreciated.
Update: the right term here was "incremental training", and that's not possible with the pre-trained spaCy models.
It is, however, possible to perform incremental training on a Gensim model. I did that with the help of another pre-trained vector set (I went with the fastText model) and then trained this Gensim model again with my own corpus. This worked pretty well.
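A rough sketch of that incremental training with Gensim (the model path, corpus, and epoch count are assumptions):

```python
from gensim.models import FastText

# Load a pre-trained fastText model saved in Gensim's native format (path is an assumption).
model = FastText.load("pretrained_fasttext.model")

# Domain-specific corpus: a list of tokenized sentences.
domain_sentences = [
    ["domain", "specific", "term", "in", "context"],
    ["another", "domain", "sentence"],
]

# Incremental training: extend the vocabulary, then continue training on the new corpus.
model.build_vocab(domain_sentences, update=True)
model.train(domain_sentences, total_examples=len(domain_sentences), epochs=10)
```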
If you pre-trained word embeddings with fastText in your domain and would like to use them with spaCy, you can extend/replace the tokens of an existing spaCy model with your new fastText vocabulary and vectors using something similar to this:
https://github.com/explosion/spaCy/issues/2538#issuecomment-404888091
or from scratch:
https://spacy.io/usage/vectors-similarity#converting
The advantage of this approach is that (1) you can keep using spaCy and (2) if some tokens were present in the pre-trained spaCy model but not in your corpus, you will still be able to use them.
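A minimal sketch of that extend/replace idea (the token, the random vectors, and the en_core_web_md model are placeholders; in practice the vectors would come from your fastText/Gensim model and must match the spaCy model's vector dimensionality):

```python
import numpy as np
import spacy

nlp = spacy.load("en_core_web_md")  # 300-dimensional vectors

# Placeholder: domain-specific vectors, e.g. taken from a trained fastText model.
custom_vectors = {
    "mydomainterm": np.random.rand(300).astype("float32"),
}

# Add (or overwrite) vectors for these tokens in the existing spaCy vocabulary.
for word, vector in custom_vectors.items():
    nlp.vocab.set_vector(word, vector)

# Tokens from the original spaCy model keep their vectors, so both vocabularies remain usable.
print(nlp("mydomainterm").similarity(nlp("my domain text")))
```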

How to import trained DNNClassifier using C_API

I have trained a DNNClassifier using Python (a conda TensorFlow installation). The trained model needs to be used for evaluation via the C API. Is there a way to load both the graph and the weights of the trained model using the C API?
There is a way to load an h5 model and any data through the C API. Maybe some googling could help; I've found this article to be helpful.
As for DNNClassifier on the C API, I think you would have to implement it manually using the raw tensor operations of the C API. Correct me if I'm wrong.

Why use Keras as backend instead of using TensorFlow?

I see that there are many similar functions between TensorFlow and Keras, like argmax, boolean_mask, etc. I wonder why people have to use Keras as a backend along with TensorFlow instead of using TensorFlow alone.
Keras is not a backend; it is a high-level API for building and training neural networks. Keras is capable of running on top of TensorFlow, Theano, and CNTK. Most people prefer Keras due to its simplicity compared to lower-level libraries like TensorFlow. I recommend Keras for beginners in deep learning.
A Keras tensor is a tensor object from the underlying backend (Theano, TensorFlow or CNTK), which we augment with certain attributes that allow us to build a Keras model just by knowing the inputs and outputs of the model.
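To illustrate the simplicity argument, a minimal sketch of defining and compiling a small network with the high-level Keras API (layer sizes and input shape are arbitrary):

```python
from tensorflow import keras

# A small feed-forward binary classifier, expressed in a few lines of high-level Keras code.
model = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(10,)),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```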
Theano vs. TensorFlow
TensorFlow is necessary if you wish to use coremltools. Apple has promised support for architectures created using Theano, but I haven't seen it yet.
Keras requires slightly different syntactic sugar depending on the backend in use. I like the flexibility of TensorFlow input layers and the easy access to strong Google neural networks.