spaCy - English language model outperforms German language model on German text?

Is it by design that the English language model performs better on German salutation entities than the German model?
# pip install spacy
# python -m spacy download en
# python -m spacy download de
import spacy

nlp = spacy.load('en')
# Uncomment the line below to get worse results
# nlp = spacy.load('de')

# Process text
text = u"Das Auto kauft Herr Müller oder Frau Meier, Frank Muster"
doc = nlp(text)

# Find named entities
for entity in doc.ents:
    print(entity.text, entity.label_)
Expected result when using nlp = spacy.load('en'): all three PERSON entities are returned
Das Auto ORG
Herr Müller PERSON
Frau Meier PERSON
Frank Muster PERSON
Unexpected result when using nlp = spacy.load('de'): only one of the three PERSON entities is returned
Frank Muster PERSON
Info about spaCy
spaCy version: 2.0.12
Platform: Linux-4.17.2-1-ARCH-x86_64-with-arch-Arch-Linux
Python version: 3.6.5
Models: en, de

It's not by design, but it's certainly possible that this is a side-effect of the training data and the statistical predictions. The English model is trained on a larger NER corpus with more entity types, while the German model uses NER data based on Wikipedia.
In Wikipedia text, full names like "Frank Muster" are quite common, whereas forms like "Herr Muster" are typically avoided. This might explain why the model only labels the full name as a person and not the others. The example sentence also makes it easy for the English model to guess correctly: in English, capitalization is a much stronger indicator of a named entity than it is in German, which might explain why the English model consistently labels all capitalized multi-word spans as entities.
In any case, this is a good example of how subtle language-specific or stylistic conventions end up influencing a model's predictions. It also shows why you almost always want to fine-tune a model with more examples specific to your data. I do think that the German model will likely perform better on German texts overall, but if references like "Herr Müller" are common in your texts, you probably want to update the model with more examples of them in different contexts.
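A minimal fine-tuning sketch along those lines could look like this (spaCy v2 API; it assumes a version that has nlp.resume_training, and the example sentences, offsets and the "PER" label are placeholders you would replace with annotated data from your own texts):

import random
import spacy

# Placeholder examples – replace with real sentences from your own data
TRAIN_DATA = [
    ("Das Auto kauft Herr Müller.", {"entities": [(15, 26, "PER")]}),
    ("Frau Meier wohnt in Berlin.", {"entities": [(0, 10, "PER")]}),
]

nlp = spacy.load("de")
# Only update the NER component, leave tagger and parser untouched
other_pipes = [pipe for pipe in nlp.pipe_names if pipe != "ner"]
with nlp.disable_pipes(*other_pipes):
    optimizer = nlp.resume_training()
    for itn in range(10):
        random.shuffle(TRAIN_DATA)
        for text, annotations in TRAIN_DATA:
            nlp.update([text], [annotations], sgd=optimizer, drop=0.3)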

Related

Patient name extraction using MedSpacy

I was looking for some guidance on NER using MedSpaCy. I am aware of disease extraction using MedSpaCy, but the aim is to extract the patient name from medical reports using MedSpaCy.
The text is supposed to look like this:
patient:Jeromy, David (DOB)
Date range 2020 to 2022. Visited Dr Brian. Suffered from ...
This is the type of dataset I have; I want to extract the patient name from all pages of the medical reports using MedSpaCy. I know target rules can be helpful, but any further guidance would be appreciated.
Thanks & regards
If you find that the default SpaCy NER model is not sufficient, as it will not pick up names such as "Byrn, John", I have a couple of suggestions:
Train a custom NER component using SpaCy's Prodigy annotation tool, which you can use to easily label some examples of names. This is a rather simple task, so you can likely train a model with less than 100 diverse examples. Note: Prodigy is a paid tool, so see my other suggestions if you do not have access/are not willing to pay.
Train a custom NER component without Prodigy. Similar to the above approach, but slightly more involved. This Medium article provides a beginner-friendly introduction to doing so, and you can also refer to SpaCy's own documentation. You can provide SpaCy with some examples of texts and the entities you want extracted, like so:
TRAIN_DATA = [
    ('Patient: Byrn, John', {
        'entities': [(9, 19, 'PERSON')]
    }),
    ('John Byrn received 10mg of advil', {
        'entities': [(0, 9, 'PERSON')]
    })
]
Build rules based on existing SpaCy components. You can leverage existing SpaCy pipeline components (you don't necessarily need MedSpaCy for this), such as POS tagging and dependency parsing. For example, you can look for proper nouns in your documents to identify names (see the sketch after this list). Check out the docs on POS tagging here.
Try other pretrained NER models. There may be other models that are better suited to your task. Check out other models on SpaCy Universe, or even better, on the Hugging Face Hub, which contains some of the best models out there for every use case. An added bonus of the HF Hub is that you can try out the models on each model's page and assess the performance on some examples before you decide.
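As a rough illustration of suggestion 3, here is a minimal sketch using spaCy's Matcher (v3-style add signature) to grab proper-noun sequences following the word "patient"; the pattern, trigger word and example text are assumptions you would adapt to your actual reports:

import spacy
from spacy.matcher import Matcher
from spacy.util import filter_spans

nlp = spacy.load("en_core_web_sm")
matcher = Matcher(nlp.vocab)

# Assumed pattern: "patient", optional punctuation, then proper nouns,
# optionally split by a comma, e.g. "Patient: Byrn, John"
pattern = [
    {"LOWER": "patient"},
    {"IS_PUNCT": True, "OP": "?"},
    {"POS": "PROPN", "OP": "+"},
    {"TEXT": ",", "OP": "?"},
    {"POS": "PROPN", "OP": "*"},
]
matcher.add("PATIENT_NAME", [pattern])

doc = nlp("Patient: Byrn, John (DOB). Date range 2020 to 2022. Visited Dr Brian.")
spans = [doc[start:end] for _, start, end in matcher(doc)]
for span in filter_spans(spans):  # keep only the longest non-overlapping match
    print(span.text)              # includes the leading "Patient:" tokens; strip as needed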
Hope this helps!

Why don't spaCy transformer models do NER for non-English languages?

Why is it that spaCy transformer models for languages like Spanish (es_dep_news_trf) don't do named entity recognition, while for English (en_core_web_trf) they do?
In code:
import spacy
nlp=spacy.load("en_core_web_trf")
doc=nlp("my name is John Smith and I work at Apple and I like visiting the Eiffel Tower")
print(doc.ents)
(John Smith, Apple, the Eiffel Tower)
nlp=spacy.load("es_dep_news_trf")
doc=nlp("mi nombre es John Smith y trabajo en Apple y me gusta visitar la Torre Eiffel")
print(doc.ents)
()
Why doesn't the Spanish model extract entities while the English one does?
It has to do with the available training data. ner is only included for the trf models if the training data has NER annotation on the exact same data as for tagging and parsing.
Training trf models on partial annotation does not work well in practice and an independent NER component (as in the CNN pipelines) would mean including an additional transformer component in the pipeline, which would make the pipeline a lot larger and slower.
The spaCy models vary with regard to which NLP features they provide; this is just a result of how the respective authors created/trained them. For example, https://spacy.io/models/en#en_core_web_trf lists "ner" in its components, but https://spacy.io/models/es#es_dep_news_trf does not.
The Spanish https://spacy.io/models/es#es_core_news_lg (as well as the two smaller variants) does list "ner" in its components, so those models do return named entities:
>>> import spacy
>>> nlp=spacy.load("es_core_news_sm")
>>> doc=nlp("mi nombre es John Smith y trabajo en Apple y me gusta visitar la Torre Eiffel")
>>> print(doc.ents)
(John Smith, Apple, Torre Eiffel)
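If you need both the transformer-based tagging/parsing and entity recognition for Spanish, one simple (if heavier) workaround is to run two pipelines side by side; the following is just a sketch assuming both models are installed:

import spacy

nlp_trf = spacy.load("es_dep_news_trf")  # POS tagging, dependency parsing
nlp_ner = spacy.load("es_core_news_sm")  # includes an "ner" component

text = "mi nombre es John Smith y trabajo en Apple y me gusta visitar la Torre Eiffel"
doc_trf = nlp_trf(text)  # use this doc for POS tags and the dependency parse
doc_ner = nlp_ner(text)  # use this doc for named entities

print(doc_ner.ents)  # (John Smith, Apple, Torre Eiffel)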

Discrepancy documentation and implementation of spaCy vectors for German words?

According to documentation:
spaCy's small models (all packages that end in sm) don't ship with
word vectors, and only include context-sensitive tensors. [...]
individual tokens won't have any vectors assigned.
But when I use the de_core_news_sm model, the tokens do have entries for x.vector, and x.has_vector is True.
It looks like these are context vectors, but as far as I understood the documentation, only word vectors are accessible through the vector attribute, and sm models should have none. Why does this work for a "small model"?
has_vector behaves differently than you expect.
This is discussed in the comments on an issue raised on GitHub. The gist is that, since vectors are available, it returns True, even though those vectors are context vectors. Note that you can still use them, e.g. to compute similarity.
Quote from spaCy contributor Ines:
We've been going back and forth on how the has_vector should behave in
cases like this. There is a vector, so having it return False would be
misleading. Similarly, if the model doesn't come with a pre-trained
vocab, technically all lexemes are OOV.
Version 2.1.0 has been announced to include German word vectors.
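To see this behaviour in practice with the v2 models discussed here, a quick check might look like this (assuming de_core_news_sm is installed; the exact numbers will differ):

import spacy

nlp = spacy.load("de_core_news_sm")
doc = nlp("Herr Müller kauft ein Auto")

token = doc[1]
# has_vector is True even though this is a context-sensitive tensor row,
# not a pretrained word vector
print(token.text, token.has_vector, token.vector.shape)

# Similarity still works, it is just based on these context tensors
print(doc[0:2].similarity(doc[3:5]))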

Spacy says dependency parser not loaded

I installed spaCy v2.0.2 on Ubuntu 16.04. I then used
sudo python3 -m spacy download en
to download the English model.
After that I use Spacy as follows:
from spacy.lang.en import English
p = English(parser=True, tagger=True, entity=True)
d = p("This is a sentence. I am who I am.")
print(list(d.sents))
I get this error however:
File "doc.pyx", line 511, in __get__
ValueError: Sentence boundary detection requires the dependency parse, which requires a statistical model to be installed and loaded. For more info, see the documentation:
https://spacy.io/usage/models
I really can't figure out what is going on. I have this version of the 'en' model installed:
https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.0.0/en_core_web_sm-2.0.0.tar.gz
which I think is the default. Any help is appreciated. Thank you.
I think the problem here is quite simple – when you call this:
p = English(parser=True, tagger=True, entity=True)
... spaCy will load the English language class containing the language data and special-case rules, but not the model data and weights that enable the parser, tagger and entity recognizer to make predictions. This is by design, because spaCy has no way of knowing whether you want to load in model data and, if so, which package.
So if you want to load an English model, you'll have to use spacy.load(), which will take care of loading the data, and putting together the language and processing pipeline:
nlp = spacy.load('en_core_web_sm') # name of model, shortcut name or path
Under the hood, spacy.load() will look for an installed model package called en_core_web_sm, load it and check the model's meta data to determine which language the model needs (in this case, English) and which pipeline it supports (in this case, tagger, parser and NER). It then initialises an instance of English, creates the pipeline, loads in the binary data from the model package and returns the object so you can call it on your text. See this section for a more detailed explanation of this.
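Putting that together, the original snippet would become something like this (assuming the en_core_web_sm package is installed):

import spacy

nlp = spacy.load('en_core_web_sm')
doc = nlp("This is a sentence. I am who I am.")
# sentence boundaries now come from the dependency parse
print(list(doc.sents))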

SpaCy similarity score makes no sense

I am trying to figure out if I trust SpaCy's similarity function and I am getting confused. Here's my toy example:
import spacy
nlp = spacy.load('en')
doc1 = nlp(u'Unsalted butter')
doc2 = nlp(u'babi carrot peel babi carrot grim french babi fresh babi roundi fresh exot petit petit peel shred carrot dole shred')
doc1.similarity(doc2)
I get the similarity of 0.64. How can it be this high for two sentences with no overlapping tokens? Could someone please explain this to me? Thank you!
The problem is that you are using the en model, which is most likely linked to en_core_web_sm.
The en_core_web_sm model doesn't ship with pretrained GloVe vectors, so you are computing similarity with the vectors produced by the NER, PoS and dependency components.
These vectors encode structural information, e.g. having the same PoS tag or dependency role in the sentence. There is no semantic information encoded in them, so the weird result you are getting is expected.
Also have a look here.
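If you want a more meaningful score, a minimal sketch is to load a model that ships with real word vectors, such as en_core_web_md or en_core_web_lg (this assumes the md package is installed; the exact value will differ):

import spacy

nlp = spacy.load('en_core_web_md')  # includes pretrained word vectors
doc1 = nlp(u'Unsalted butter')
doc2 = nlp(u'babi carrot peel babi carrot grim french babi fresh babi roundi fresh exot petit petit peel shred carrot dole shred')
# similarity is now based on averaged word vectors, i.e. lexical semantics
print(doc1.similarity(doc2))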