Why don't spaCy transformer models do NER for non-English languages?

Why is it that spaCy transformer models for languages like Spanish (es_dep_news_trf) don't do named entity recognition, while the English one (en_core_web_trf) does?
In code:
import spacy
nlp=spacy.load("en_core_web_trf")
doc=nlp("my name is John Smith and I work at Apple and I like visiting the Eiffel Tower")
print(doc.ents)
(John Smith, Apple, the Eiffel Tower)
nlp=spacy.load("es_dep_news_trf")
doc=nlp("mi nombre es John Smith y trabajo en Apple y me gusta visitar la Torre Eiffel")
print(doc.ents)
()
Why doesn't the Spanish model extract entities when the English one does?

It has to do with the available training data: NER is only included in the trf models if the training data has NER annotation on the exact same data as the tagging and parsing annotation.
Training trf models on partial annotation does not work well in practice, and an independent NER component (as in the CNN pipelines) would mean including an additional transformer component in the pipeline, which would make the pipeline a lot larger and slower.
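This isn't mentioned in the answer above, but if you need both the transformer-based tagging/parsing and entities for Spanish, one possible workaround is to run one of the CNN pipelines for NER alongside the trf pipeline and copy its entities onto the trf Doc. A minimal sketch, assuming both models are installed:
import spacy

# Transformer pipeline for tagging/parsing, CNN pipeline for NER
nlp_trf = spacy.load("es_dep_news_trf")
nlp_ner = spacy.load("es_core_news_lg")

text = "mi nombre es John Smith y trabajo en Apple y me gusta visitar la Torre Eiffel"
doc = nlp_trf(text)
ner_doc = nlp_ner(text)

# Project the CNN pipeline's entities onto the transformer Doc via character offsets;
# char_span returns None if an offset doesn't line up with a token boundary.
spans = [doc.char_span(e.start_char, e.end_char, label=e.label_) for e in ner_doc.ents]
doc.ents = [s for s in spans if s is not None]
print(doc.ents)
Running two pipelines costs extra time and memory, in line with the trade-off described above.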

The spaCy models vary with regard to which NLP features they provide - this is just a result of how the respective authors created/trained them. For example, https://spacy.io/models/en#en_core_web_trf lists "ner" in its components, but https://spacy.io/models/es#es_dep_news_trf does not.
The Spanish https://spacy.io/models/es#es_core_news_lg (as well as the two smaller variants) does list "ner" in its components, so those models do return named entities:
>>> import spacy
>>> nlp=spacy.load("es_core_news_sm")
>>> doc=nlp("mi nombre es John Smith y trabajo en Apple y me gusta visitar la Torre Eiffel")
>>> print(doc.ents)
(John Smith, Apple, Torre Eiffel)
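You can also check which components a loaded pipeline contains via nlp.pipe_names. A quick sketch, assuming the models above are installed:
import spacy

print(spacy.load("es_dep_news_trf").pipe_names)  # no 'ner' component
print(spacy.load("es_core_news_sm").pipe_names)  # includes 'ner'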

Related

spaCy model does not recognize street names as entities

I have tried several models and the results are as follows:
en_core_web_sm
10030 W. Olivia Terrace DATE
en_core_web_md
W. Olivia Terrace FACT
en_core_web_lg
10030 CARDINAL
W. Olivia Terrace PERSON
How do I train a model with an entity type that recognizes streets?
Should I use regular expressions to identify these entities?
The English models are not trained to recognize street names. You can find the list of pretrained NER labels on the models page:
CARDINAL, DATE, EVENT, FAC, GPE, LANGUAGE, LAW, LOC, MONEY, NORP, ORDINAL, ORG, PERCENT, PERSON, PRODUCT, QUANTITY, TIME, WORK_OF_ART
To recognize a new type of entity, you can follow the documentation here: https://spacy.io/usage/training#example-new-entity-type. You'll have to create custom training data and update the model by calling nlp.update with the gold-standard annotations.
If your entities are very regular, it might be possible to just use a pattern-matching approach, in which case you can explore the docs here: https://spacy.io/usage/rule-based-matching
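As an illustration of the rule-based route, here is a minimal EntityRuler sketch (assuming spaCy v3; the STREET label and the token patterns are made up for this example, not something the pretrained model provides):
import spacy

nlp = spacy.load("en_core_web_sm")

# Add the rule-based matcher before the statistical NER so its matches take precedence
ruler = nlp.add_pipe("entity_ruler", before="ner")
patterns = [
    {
        "label": "STREET",  # custom, illustrative label
        "pattern": [
            {"IS_DIGIT": True, "OP": "?"},  # optional house number
            {"IS_TITLE": True, "OP": "+"},  # one or more capitalised words
            {"LOWER": {"IN": ["street", "terrace", "avenue", "road", "drive"]}},
        ],
    }
]
ruler.add_patterns(patterns)

doc = nlp("I live at 10030 Olivia Terrace in Los Angeles.")
print([(ent.text, ent.label_) for ent in doc.ents])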

spaCy model en_core_web_sm does not detect LANGUAGE entities

I wrote a small program to extract LANGUAGE entities from a text. I am using 'en_core_web_sm', but it doesn't detect anything other than DATE in the following sentence.
From the spaCy docs, https://spacy.io/models/en, I can see that 'en_core_web_sm' supports the LANGUAGE entity.
I think I am making some obvious mistake. Could someone please point out what I am doing wrong?
nlp2 = spacy.load("en_core_web_sm")
test_text = "korean chinese english spanish 2019-2-13 india america 2 years 6 months united states"
doc2 = nlp2(test_text)
for ent in doc2.ents:
    print(ent.label_, ent.text)
    print("\n")
Output
DATE 2 years 6 months
Which version of spaCy are you using? If I run this exact code snippet with the latest version at the time of writing, 2.2.4, I get this printout:
NORP korean
NORP chinese
LANGUAGE english
GPE india
GPE america
DATE 2 years 6 months
As you can see, the pre-trained model en_core_web_sm does indeed recognise LANGUAGE entities.
As a quick tip: spaCy's NER module works better on actual sentences, which is what it was trained on. From my results you can see that it also works on non-grammatical sequences of words, like your input, but it will make more mistakes because there is no grammatical context.
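If you're not sure which versions you have, a quick way to check both the library and the model version (just a sketch):
import spacy

nlp = spacy.load("en_core_web_sm")
print(spacy.__version__)     # version of the spaCy library
print(nlp.meta["version"])   # version of the installed en_core_web_sm model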

SpaCy NER doesn't seem to correctly recognize hyphenated names

First things first: I'm new to spaCy and just started to test it. I have to say that I'm impressed by its simplicity and the quality of the docs. Thanks!
Now, I'm trying to identify PER entities in a French text. It seems to work pretty well for most of them, but I noticed a recurring incorrect pattern: names with a hyphen are not correctly recognized (e.g. Pierre-Louis Durand will appear as two PER entities: "Pierre" and "Louis Durand").
See example:
import spacy
# nlp = spacy.load('fr')
nlp = spacy.load('fr_core_news_md')
description = ('C\'est Jean-Sébastien Durand qui leur a dit. Pierre Dupond n\'est pas venu à Boston comme attendu. '
'Louis-Jean s\'est trompé. Claire a bien choisi.')
text = nlp(description)
labels = set([w.label_ for w in text.ents])
for label in labels:
    entities = [e.string for e in text.ents if label==e.label_]
    entities = list(entities)
    print(label, entities)
output is:
LOC ['Boston ']
PER ['Jean', 'Sébastien Durand ', 'Pierre Dupond ', 'Louis', 'Jean ', 'Claire ']
It should be: "Jean-Sébastien Durand" and "Louis-Jean".
I'm not sure what to do here:
change the way tokens are extracted (I'm wondering about the side effects for non-PER entities) - I don't think this is the issue, since a PER can be an aggregation of multiple tokens
apply a magic setting somewhere so that hyphens can be kept together in NER for PER
train the model
go back to school ;-)
Thanks for your help (and yes I'm investigating by reading more, I love it)!
-TC
I initially thought this would be a mismatch between the tokenizer and the training data, but it's actually a problem with how the regex that handles some words with hyphens is loaded from the saved model.
A temporary fix for spaCy v2.2 models (which you have to apply every time after loading a French model) is to replace the problematic tokenizer setting with the correct default setting:
import spacy
from spacy.lang.fr import French
nlp = spacy.load("fr_core_news_md")
nlp.tokenizer.token_match = French.Defaults.token_match
description = ('C\'est Jean-Sébastien Durand qui leur a dit. Pierre Dupond n\'est pas venu à Boston comme attendu. '
'Louis-Jean s\'est trompé. Claire a bien choisi.')
text = nlp(description)
labels = set([w.label_ for w in text.ents])
for label in labels:
    entities = [e.text for e in text.ents if label==e.label_]
    entities = list(entities)
    print(label, entities)
Output:
PER ['Jean-Sébastien Durand', 'Pierre Dupond']
LOC ['Boston', 'Louis-Jean', 'Claire']
(The French NER model is trained on data from Wikipedia, so it still doesn't do very well on the entity types for this particular text.)

spaCy - English language model outperforms German language model on German text?

Is it by design that the English language model performs better on German salutation entities than the German model?
# pip install spacy
# python -m spacy download en
# python -m spacy download de
nlp = spacy.load('en')
# Uncomment line below to get less good results
# nlp = spacy.load('de')
# Process text
text = (u"Das Auto kauft Herr Müller oder Frau Meier, Frank Muster")
doc = nlp(text)
# Find named entities
for entity in doc.ents:
    print(entity.text, entity.label_)
Expected result when using nlp = spacy.load('en') - all three PERSON entities are returned:
Das Auto ORG
Herr Müller PERSON
Frau Meier PERSON
Frank Muster PERSON
Unexpected result when using nlp = spacy.load('de') - only one of the three PERSON entities is returned:
Frank Muster PERSON
Info about spaCy
spaCy version: 2.0.12
Platform: Linux-4.17.2-1-ARCH-x86_64-with-arch-Arch-Linux
Python version: 3.6.5
Models: en, de
It's not by design, but it's certainly possible that this is a side-effect of the training data and the statistical predictions. The English model is trained on a larger NER corpus with more entity types, while the German model uses NER data based on Wikipedia.
In Wikipedia text, full names like "Frank Muster" are quite common, whereas things like "Herr Muster" are typically avoided. This might explain why the model only labels the full name as a person and not the others. The example sentence also makes it easy for the English model to guess correctly – in English, capitalization is a much stronger indicator of a named entity than it is in German. This might explain why the model consistently labels all capitalised multi-word spans as entities.
In any case, this is a good example of how subtle language-specific or stylistic conventions end up influencing a model's predictions. It also shows why you almost always want to fine-tune a model with more examples specific to your data. I do think that the German model will likely perform better on German texts overall, but if references like "Herr Müller" are common in your texts, you probably want to update the model with more examples of them in different contexts.
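A minimal sketch of such an update, assuming spaCy v3 and de_core_news_sm; the two sentences, their character offsets and the number of iterations are made up for illustration, and a real update would need many more examples to avoid degrading the model:
import random
import spacy
from spacy.training import Example

nlp = spacy.load("de_core_news_sm")

# Hypothetical examples where "Herr"/"Frau" + surname is annotated as PER
TRAIN_DATA = [
    ("Herr Müller hat das Auto gekauft.", {"entities": [(0, 11, "PER")]}),
    ("Wir treffen Frau Meier am Montag.", {"entities": [(12, 22, "PER")]}),
]

optimizer = nlp.resume_training()
# Only update the shared tok2vec and the NER component
with nlp.select_pipes(enable=["tok2vec", "ner"]):
    for _ in range(20):
        random.shuffle(TRAIN_DATA)
        for text, annotations in TRAIN_DATA:
            example = Example.from_dict(nlp.make_doc(text), annotations)
            nlp.update([example], sgd=optimizer, drop=0.3)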

SpaCy similarity score makes no sense

I am trying to figure out whether I can trust spaCy's similarity function, and I am getting confused. Here's my toy example:
import spacy
nlp = spacy.load('en')
doc1 = nlp(u'Unsalted butter')
doc2 = nlp(u'babi carrot peel babi carrot grim french babi fresh babi roundi fresh exot petit petit peel shred carrot dole shred')
doc1.similarity(doc2)
I get a similarity of 0.64. How can it be this high for two sentences with no overlapping tokens? Could someone please explain this to me? Thank you!
The problem is that you are using the en model, which is most likely linked to en_core_web_sm.
The en_core_web_sm model doesn't ship pretrained GloVe vectors, so you are using the vectors produced by the NER, PoS and dependency components to compute similarity.
These vectors encode structural information, e.g. having the same PoS tag or dependency role in the sentence. There is no semantic information encoded in these vectors, so the weird result you are getting is to be expected.
Have a look here as well.
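To make the difference concrete, here is a small sketch (assuming en_core_web_sm and en_core_web_md are installed): the md and lg models ship static word vectors, while the sm model essentially does not, so similarity computed with the md model is the more meaningful number:
import spacy

nlp_sm = spacy.load("en_core_web_sm")
nlp_md = spacy.load("en_core_web_md")

# Shape of the static word-vector table: tiny/empty for sm, large for md
print(nlp_sm.vocab.vectors.shape)
print(nlp_md.vocab.vectors.shape)

doc1 = nlp_md("Unsalted butter")
doc2 = nlp_md("babi carrot peel babi carrot grim french babi fresh")
print(doc1.similarity(doc2))  # similarity based on averaged pretrained word vectors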