Can you integrate your pre-trained word embeddings in a custom spaCy model?

Currently I am trying to develop a spaCy model for NER in the Romanian legal domain. It was suggested that I use the specific word embeddings (WE) presented at the following link (the download links are on the last pages - slides 25, 26, 27):
https://www1.ids-mannheim.de/fileadmin/kl/CoRoLa_based_Word_Embeddings.pdf
I have already trained and tested a model without touching the pre-implemented WE, but I do not know how to use external WE when training a new spaCy model. Any relevant advice is appreciated, and a code example would be even better.

Yes: convert your vectors from word2vec text format with spacy init vectors, and then specify that vectors model as [initialize.vectors] in your config, along with include_static_vectors = true for the relevant tok2vec models.
A config excerpt:
[components.tok2vec.model.embed]
@architectures = "spacy.MultiHashEmbed.v1"
width = ${components.tok2vec.model.encode.width}
attrs = ["ORTH", "SHAPE"]
rows = [5000, 2500]
include_static_vectors = true
[initialize]
vectors = "my_vector_model"
You can also use spacy init config -o accuracy config.cfg to generate a sample config including vectors that you can edit and adjust as you need.
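For the conversion step itself, a hedged sketch of the init vectors call (the input file and output directory names are placeholders; ro is the language code for Romanian):
python -m spacy init vectors ro corola_vectors.txt ./my_vector_model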
See:
https://spacy.io/api/cli#init-vectors
https://spacy.io/usage/embeddings-transformers#static-vectors

Related

Different results in TfLite model vs model before quantization

I have taken an object detection model from the TF2 model zoo:
I took MobileNet and trained it on my own TFRecords.
I am using MobileNet because it is often found in examples of converting to TFLite, and that is what I need because I run it on an RPi3.
I am following ideas from the official example in the SageMaker docs
and the GitHub repo you can find here.
What is interesting, the accuracy after step 2) training and 3) deploying is pretty nice! My trucks are detected nicely with the custom-trained model.
However, when converted to TFLite the accuracy goes down, no matter whether I use the tflite_convert tool or the Python tf.lite.TFLiteConverter.
What is more, all detections are on the borders of the images, and usually in the bottom-right corner. Maybe I am not preparing the images correctly? Or is there some misunderstanding of the results?
You can check images I uploaded.
https://ibb.co/fSzfZvz
https://ibb.co/0GF101s
What could possibly go wrong?
I was lacking proper preprocessing of the image.
I used the pipeline config to build a detection object, which has a preprocess function that I utilized to build the tensor before feeding it into the Interpreter.
# Imports from the TF Object Detection API
from object_detection.utils import config_util
from object_detection.builders import model_builder

num_classes = 2
pipeline_config = "pipeline.config"  # path to the pipeline config used for training

# Build a detection model from the pipeline config; its preprocess()
# applies the same normalization and resizing as during training
configs = config_util.get_configs_from_pipeline_file(pipeline_config)
model_config = configs['model']
model_config.ssd.num_classes = num_classes
model_config.ssd.freeze_batchnorm = True
detection_model = model_builder.build(
    model_config=model_config, is_training=True)
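As a hedged sketch (file names are placeholders, and output parsing depends on your export), the preprocess function can then be used to build the input tensor for the TFLite Interpreter:

import tensorflow as tf

# Run the image through the model's own preprocess() so the
# normalization and resizing match what the network saw in training
image = tf.io.decode_image(tf.io.read_file("truck.jpg"), channels=3)
image = tf.cast(tf.expand_dims(image, 0), tf.float32)
preprocessed, _ = detection_model.preprocess(image)

# Feed the preprocessed tensor to the converted TFLite model
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]["index"]
interpreter.set_tensor(input_index, preprocessed.numpy())
interpreter.invoke()
detections = interpreter.get_tensor(interpreter.get_output_details()[0]["index"])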

How to use Hugging Face transformers with spaCy 3.0

Let's say that I want to include DistilBERT (https://huggingface.co/distilbert-base-uncased) from Hugging Face in a spaCy 3.0 pipeline. I think this is possible, and I found some code on how to convert this model for spaCy 2.0, but it doesn't work in v3.0. What I really want is to load this model using something like this:
nlp = spacy.load('path_to_distilbert')
Is it even possible, and could you please provide the exact steps to do that?
You can use spacy-transformers to this end. In spaCy v3, you can train custom pipelines using a config file, where you would define the transformer component using any HF model you like in components.transformer.model.name:
[components.transformer]
factory = "transformer"
max_batch_items = 4096
[components.transformer.model]
@architectures = "spacy-transformers.TransformerModel.v1"
name = "bert-base-cased"
tokenizer_config = {"use_fast": true}
[components.transformer.model.get_spans]
@span_getters = "spacy-transformers.doc_spans.v1"
[components.transformer.set_extra_annotations]
@annotation_setters = "spacy-transformers.null_annotation_setter.v1"
You can then train any other component (NER, textcat, ...) on top of this pretrained transformer model, and the transformer weights will be further finetuned, too.
You can read more about this in the docs here: https://spacy.io/usage/embeddings-transformers#transformers-training
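As a hedged in-code sketch of the same setup (assuming spacy-transformers is installed; the model name is swapped for the DistilBERT one from the question):

import spacy

# Add the transformer component to a blank pipeline, overriding only the
# HF model name; the other settings keep their factory defaults
nlp = spacy.blank("en")
nlp.add_pipe("transformer", config={"model": {"name": "distilbert-base-uncased"}})
nlp.initialize()

doc = nlp("Let's test the DistilBERT embeddings.")
print(doc._.trf_data.tensors[0].shape)  # transformer output for this Doc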
It appears that the only transformer that will work out of the box is their roberta-base model. The docs mention being able to connect to thousands of Hugging Face models, but there is no mention of how to add them to a spaCy pipeline.
In the meantime, if you want to use the RoBERTa model, you can do the following.
# In the shell: install spaCy with transformer support and download the pipeline
pip install spacy[transformers]
python -m spacy download en_core_web_trf

# Then, in Python:
import spacy
nlp = spacy.load("en_core_web_trf")

Can I use the spaCy command line tools to train an NER model containing an additional entity type?

I am trying to train spaCy models using just the python -m spacy train command line tool without writing any code of my own.
I have a training set of documents to which I have added OIL_COMPANY entity spans. I used gold.docs_to_json to create training files in the JSON-serializable format.
I can train starting from an empty model. However, if I try to extend the existing en_core_web_lg model I see the following error.
KeyError: "[E022] Could not find a transition with the name 'B-OIL_COMPANY' in the NER model."
So I need to be able to tell the command line tool to add OIL_COMPANY to an existing list of NER labels. The discussion in Training an additional entity type shows how to do this in code by calling add_label on the NER pipeline, but I don't see any command line option that does this.
Is it possible to extend an existing NER model to new entities with just the command line training tools, or do I have to write code?
Ines answered this for me on the Prodigy support forum.
I think what's happening here is that the spacy train command expects
the base model you want to update to already have all labels added
that you want to train. (It processes the data as a stream, so it's
not going to compile all labels upfront and silently add them on the
fly.) So if you want to update an existing pretrained model and add a
new label, you should be able to just add the label and save out the
base model:
ner = nlp.get_pipe("ner")
ner.add_label("YOUR_LABEL")
nlp.to_disk("./base-model")
This isn't quite writing no code but it's pretty close.
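As a slightly fuller sketch for this question (the output path is a placeholder):

import spacy

# Load the pretrained pipeline, add the new NER label, and save it out
# so it can be used as the base model for the train CLI
nlp = spacy.load("en_core_web_lg")
ner = nlp.get_pipe("ner")
ner.add_label("OIL_COMPANY")
nlp.to_disk("./base-model")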
See the spaCy docs for the CLI: https://spacy.io/api/cli
Train a model. Expects data in spaCy’s JSON format. On each epoch, a model will be saved out to the directory. Accuracy scores and model details will be added to a meta.json to allow packaging the model using the package command.
python -m spacy train [lang] [output_path] [train_path] [dev_path]
[--base-model] [--pipeline] [--vectors] [--n-iter] [--n-early-stopping]
[--n-examples] [--use-gpu] [--version] [--meta-path] [--init-tok2vec]
[--parser-multitasks] [--entity-multitasks] [--gold-preproc] [--noise-level]
[--orth-variant-level] [--learn-tokens] [--textcat-arch] [--textcat-multilabel]
[--textcat-positive-label] [--verbose]
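A hedged example invocation tying the two together (spaCy v2 CLI; paths are placeholders, and ./base-model is the model saved above):
python -m spacy train en ./output ./train.json ./dev.json --base-model ./base-model --pipeline ner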

Updating a BERT model through Huggingface transformers

I am attempting to update the pre-trained BERT model using an in-house corpus. I have looked at the Hugging Face transformers docs and I am a little stuck, as you will see below. My goal is to compute simple similarities between sentences using the cosine distance, but I need to update the pre-trained model for my specific use case.
If you look at the code below, which is taken precisely from the Hugging Face docs, I am attempting to "retrain" or update the model, and I assumed that SPECIAL_TOKEN_1 and SPECIAL_TOKEN_2 represent "new sentences" from my in-house data or corpus. Is this correct? In summary, I like the already pre-trained BERT model, but I would like to update it or retrain it using another in-house dataset. Any leads will be appreciated.
import tensorflow as tf
import tensorflow_datasets
from transformers import *

model = BertModel.from_pretrained('bert-base-uncased')
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

SPECIAL_TOKEN_1 = "dogs are very cute"
SPECIAL_TOKEN_2 = ("dogs are cute but i like cats better and my "
                   "brother thinks they are more cute")

tokenizer.add_tokens([SPECIAL_TOKEN_1, SPECIAL_TOKEN_2])
model.resize_token_embeddings(len(tokenizer))

# Train our model
model.train()
model.eval()
BERT is pre-trained on 2 tasks: masked language modeling (MLM) and next sentence prediction (NSP). The more important of the two is MLM (it turns out that the next sentence prediction task is not really that helpful for the model's language understanding capabilities - RoBERTa, for example, is only pre-trained on MLM).
If you want to further train the model on your own dataset, you can do so by using BertForMaskedLM from the Transformers library. This is BERT with a language modeling head on top, which allows you to perform masked language modeling (i.e. predicting masked tokens) on your own dataset. Here's how to use it:
from transformers import BertTokenizer, BertForMaskedLM
import torch

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForMaskedLM.from_pretrained('bert-base-uncased', return_dict=True)

# The labels are the token ids of the unmasked sentence; the model computes
# the MLM loss between its prediction for [MASK] and these labels
inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]

outputs = model(**inputs, labels=labels)
loss = outputs.loss
logits = outputs.logits
You can update the weights of BertForMaskedLM using loss.backward(), which is the main way of training PyTorch models. If you don't want to do this yourself, the Transformers library also provides a Python script which allows you to perform MLM really quickly on your own dataset. See here (section "RoBERTa/BERT/DistilBERT and masked language modeling"). You just need to provide a training and test file.
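A hedged sketch of one such update step, building on the snippet above (a real run would mask random tokens and loop over batches from your own corpus):

import torch

# One gradient step on the MLM loss computed above
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
outputs = model(**inputs, labels=labels)
outputs.loss.backward()   # backpropagate the MLM loss
optimizer.step()          # update the weights
optimizer.zero_grad()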
You don't need to add any special tokens. Examples of special tokens are [CLS] and [SEP], which are used for sequence classification and question answering tasks (among others). These are added by the tokenizer automatically. How do I know this? Because BertTokenizer inherits from PreTrainedTokenizer, and if you take a look at the documentation of its __call__ method here, you can see that the add_special_tokens parameter defaults to True.

How to generate .tf/.tflite files from Python

I am trying to generate a custom TensorFlow model (.tf/.tflite file) which I want to use in my mobile application.
I have gone through a few machine learning and TensorFlow blogs, and from there I started to generate a simple ML model.
https://www.datacamp.com/community/tutorials/tensorflow-tutorial
https://www.edureka.co/blog/tensorflow-object-detection-tutorial/
https://blog.metaflow.fr/tensorflow-how-to-freeze-a-model-and-serve-it-with-a-python-api-d4f3596b3adc
https://www.youtube.com/watch?v=ICY4Lvhyobk
All these are really nice, and they guided me through the steps below:
i) Install all necessary tools (TensorFlow, Python, Jupyter, etc.).
ii) Load the training and testing data.
iii) Run the TensorFlow session to train and evaluate the results.
iv) Take steps to increase the accuracy.
But I am not able to generate the .tf/.tflite files.
I tried the following code, but that generates an empty file.
converter = tf.contrib.lite.TFLiteConverter.from_session(sess, [], [])  # note: empty input/output tensor lists
model = converter.convert()
file = open('model.tflite', 'wb')
file.write(model)
I have checked a few answers on Stack Overflow, and according to my understanding, in order to generate the .tf files we need to create the .pb files, freeze the .pb file and then generate the .tf files.
But how can we achieve this?
TensorFlow provides the TFLiteConverter to convert a saved model to a TFLite model. For more details, see the tf.lite.TFLiteConverter documentation (https://www.tensorflow.org/api_docs/python/tf/lite/TFLiteConverter).
tf.lite.TFLiteConverter.from_saved_model() (recommended): Converts a SavedModel.
tf.lite.TFLiteConverter.from_keras_model(): Converts a Keras model.
tf.lite.TFLiteConverter.from_concrete_functions(): Converts concrete functions.
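A minimal sketch of the recommended path (TF 2.x; directory and file names are placeholders, assuming the model was exported with tf.saved_model.save or model.save):

import tensorflow as tf

# Convert a SavedModel directory into a .tflite flatbuffer and write it out
converter = tf.lite.TFLiteConverter.from_saved_model("./saved_model")
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)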