Spacy tokenizer add exception for n't - spacy

I want to convert n't to not using this code:
doc = nlp(u"this. isn't ad-versere")
special_case = [{ORTH: u"not"}]
nlp.tokenizer.add_special_case(u"n't",specia_case)
print [text.orth_ for text in doc]
But I get the output as:
[u'this', u'.', u'is', u"n't", u'ad', u'-', u'versere']
n't is still n't
How do I solve this problem?

The reason your logic doesn't work is that spaCy uses non-destructive tokenization. This means it always keeps a reference to the original input text, and you never lose any information.
The tokenizer exceptions and special cases let you define rules for how to split a string of text into a sequence of tokens – but they won't let you modify the original string. The ORTH values of the tokens plus whitespace always need to match the original text. So the tokenizer can split "isn't" into ["is", "n't"], but not into ["is", "not"].
To define a "normalised" form of the string, spaCy uses the NORM attribute, available as token.norm_. You can see this in the source of the tokenizer exceptions here – the norm of the token "n't" is "not". The NORM attribute is also used as a feature in the model, to ensure that tokens with the same norm receive similar representations (even if one is more frequent in the training data than the other).
So if you're interested in the normalised form, you can simply use the norm_ attribute instead:
>>> [t.norm_ for t in doc]
['this', '.', 'is', 'not', 'ad', '-', 'versere']
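If you want to register a special case like this yourself, you can keep the surface form and attach the norm to it. Here is a minimal sketch, assuming a standard English pipeline (the built-in exceptions already cover "isn't", so this is purely illustrative):
import spacy
from spacy.attrs import ORTH, NORM

nlp = spacy.load("en_core_web_sm")

# The ORTH values must still add up to the original string; only NORM differs.
special_case = [{ORTH: "is"}, {ORTH: "n't", NORM: "not"}]
nlp.tokenizer.add_special_case("isn't", special_case)

doc = nlp("this. isn't ad-versere")
print([t.orth_ for t in doc])   # surface forms unchanged: ..., 'is', "n't", ...
print([t.norm_ for t in doc])   # normalised forms: ..., 'is', 'not', ...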

Related

Spacy: How to specify/enforce a token's POS so that it's treated as PROPN in spaCy POS tagging?

I'm analyzing a dataset which contains a particular brand name. Instead of training a POS tagger from scratch, is there a way to supply the POS value of this brand name as PROPN, so that it's always treated as PROPN?
Not exactly. You can post-process your doc to correct the POS for individual tokens to PROPN, but the POS annotation for the surrounding tokens won't be affected. The tagger doesn't support providing partial annotation as a starting point, so there's no way to influence the tagger by providing it in advance.
To correct the POS for individual tokens, you can use the attribute_ruler component or you can write your own small custom component.
Note that the parser and NER components don't use the POS tags as features, so their analysis will not change even if you modify the POS.
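For example, using the attribute_ruler mentioned above (present in the default spaCy v3 pipelines, where it runs after the tagger, so its values take precedence); the brand name "acme" here is just a placeholder:
import spacy

nlp = spacy.load("en_core_web_sm")

# Add a pattern that forces the brand name to be tagged as a proper noun.
ruler = nlp.get_pipe("attribute_ruler")
ruler.add(patterns=[[{"LOWER": "acme"}]], attrs={"TAG": "NNP", "POS": "PROPN"})

doc = nlp("I bought an acme widget yesterday.")
print([(t.text, t.pos_) for t in doc])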

Mapping huggingface tokens to original input text

How can I map the tokens I get from huggingface DistilBertTokenizer to the positions of the input text?
e.g. I have a new GPU -> ["i", "have", "a", "new", "gp", "##u"] -> [(0, 1), (2, 6), ...]
I'm interested in this because, supposing I have attention values assigned to each token, I would like to show which part of the original text each one actually corresponds to, since the tokenized version is not friendly to people outside ML.
I have not found any solution to this yet, especially when there is an [UNK] token. Any insights would be appreciated. Thank you!
In newer versions of Transformers (since 2.8, it seems), calling the tokenizer returns an object of class BatchEncoding when the methods __call__, encode_plus and batch_encode_plus are used. You can use the method token_to_chars, which takes the index of a token in the batch and returns its character span in the original string.
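A small sketch of that mapping, assuming the fast (Rust-backed) tokenizer variant, since only fast tokenizers record the character offsets that token_to_chars needs:
from transformers import DistilBertTokenizerFast

tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")

text = "I have a new GPU"
enc = tokenizer(text)                              # returns a BatchEncoding

for idx, token in enumerate(enc.tokens()):
    if token in tokenizer.all_special_tokens:      # skip [CLS] and [SEP]
        continue
    span = enc.token_to_chars(idx)                 # CharSpan(start, end) in the original text
    print(token, (span.start, span.end), text[span.start:span.end])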

How to assign lexical features to new unanalyzable tokens in spaCy?

I'm working with spaCy, version 2.3. I have a not-quite-regular-expression scanner which identifies spans of text which I don't want analyzed any further. I've added a pipe at the beginning of the pipeline, right after the tokenizer, which uses the document retokenizer to make these spans into single tokens. I'd like the remainder of the pipeline to treat these tokens as proper nouns. What's the right way to do this? I've set the POS and TAG attrs in my calls to retokenizer.merge(), and those settings persist in the resulting sentence parse, but the dependency information on these tokens makes me doubt that my settings have had the desired impact. Is there a way to update the vocabulary so that the POS tagger knows that the only POS option for these tokens is PROPN?
Thanks in advance.
The tagger and parser are independent (the parser doesn't use the tags as features), so modifying the tags isn't going to affect the dependency parse.
The tagger doesn't overwrite any existing tags, so if a tag is already set, it doesn't modify it. (The existing tags don't influence its predictions at all, though, so the surrounding words are tagged the same way they would be otherwise.)
Setting TAG and POS in the retokenizer is a good way to set those attributes. If you're not always retokenizing and you want to set the TAG and/or POS based on a regular expression for the token text, then the best way to do this is a custom pipeline component that you add before the tagger that sets tags for certain words.
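A rough sketch of such a component, assuming a spaCy v2.3 pipeline as in the question (the regular expression and model name are placeholders):
import re
import spacy

nlp = spacy.load("en_core_web_sm")
PATTERN = re.compile(r"^[A-Z]{2,}\d+$")        # hypothetical pattern for the unanalyzable tokens

def preset_tags(doc):
    # Pre-set the fine-grained tag; the v2 tagger leaves existing tags untouched,
    # and assigning the tag also fills in the coarse POS via the tag map.
    for token in doc:
        if PATTERN.match(token.text):
            token.tag_ = "NNP"
    return doc

nlp.add_pipe(preset_tags, before="tagger")     # v2-style add_pipe with a plain function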
The transition-based parsing algorithm can't easily deal with partial dependencies in the input, so there isn't a straightforward solution here. I can think of a few things that might help:
The parser does respect pre-set sentence boundaries. If your skipped tokens are between sentences, you can set token.is_sent_start = True for that token and the following token so that the skipped token always ends up in its own sentence. If the skipped tokens are in the middle of a sentence or you want them to be analyzed as nouns in the sentence, then this won't help.
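A sketch of that idea, again assuming a v2-style pipeline; the surface form of the skipped token is a placeholder:
import spacy

nlp = spacy.load("en_core_web_sm")
SKIPPED = "UNKNOWN_SKIPPED"                    # hypothetical text of a skipped token

def isolate_skipped(doc):
    # Force each skipped token and the token after it to start a sentence, so the
    # parser treats the skipped token as its own one-token sentence.
    for i, token in enumerate(doc):
        if token.text == SKIPPED:
            if i > 0:                          # the first token is already a sentence start
                token.is_sent_start = True
            if i + 1 < len(doc):
                doc[i + 1].is_sent_start = True
    return doc

nlp.add_pipe(isolate_skipped, before="parser")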
The parser does use the token.norm feature, so if you set the NORM feature in the retokenizer to something extremely PROPN-like, you might have a better chance of getting the intended analysis. For example, if you're using a provided English model like en_core_web_sm, use a word you think would be a frequent similar proper noun in American newspaper text from 20 years ago, so if the skipped token should be like a last name, use "Bush" or "Clinton". It won't guarantee a better parse, but it could help.
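For example, a merge that sets TAG, POS, and a PROPN-like NORM in one go (in your setup this would happen in the component right after the tokenizer; the span and text here are just placeholders):
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The sigma-delta module failed.")    # "sigma-delta" stands in for a skipped span

span = doc.char_span(4, 15)                    # character offsets of "sigma-delta"
with doc.retokenize() as retokenizer:
    retokenizer.merge(span, attrs={"TAG": "NNP", "POS": "PROPN", "NORM": "Bush"})

print([(t.text, t.pos_, t.norm_) for t in doc])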
If you're using a model with vectors like en_core_web_lg, you can also set the vectors for the skipped token to be the same as a similar word (check that the similar word has a vector first). This is how to tell the model to refer to the same row in the vector table for UNKNOWN_SKIPPED as for Bush.
The simpler option (that duplicates the vectors in the vector table internally):
nlp.vocab.set_vector("UNKNOWN_SKIPPED", nlp.vocab["Bush"].vector)
The less elegant version that doesn't duplicate vectors underneath:
nlp.vocab.vectors.add("UNKNOWN_SKIPPED", row=nlp.vocab["Bush"].rank)
nlp.vocab["UNKNOWN_SKIPPED"].rank = nlp.vocab["Bush"].rank
(The second line is only necessary to get this to work for a model that's currently loaded. If you save it as a custom model after the first line with nlp.to_disk() and reload it, then only the first line is necessary.)
If you just have a small set of skipped tokens, you could update the parser with some examples containing these tokens, but this can be tricky to do well without affecting the accuracy of the parser for other cases.
The NORM and vector modifications will also influence the tagger, so if you choose them well, it's possible you'll get pretty close to the results you want.

Spacy tokenizer to handle final period in sentence

I'm using Spacy to tokenize sentences, and I know that the text I pass to the tokenizer will always be a single sentence.
In my tokenization rules, I would like non-final periods (".") to be attached to the text before them, so I updated the suffix rules to remove the rules that split on periods (this handles abbreviations correctly).
The exception, however, is that the very last period should be split into a separate token.
I see that the latest version of Spacy allows you to split tokens after the fact, but I'd prefer to do this within the Tokenizer itself so that other pipeline components are processing the correct tokenization.
Here is one solution that uses some post processing after the tokenizer:
I added "." to suffixes so that a period is always split into its own token.
I then used a regex to find non-final periods, generated a span with doc.char_span, and merged the span to a single token with span.merge.
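Here is roughly what that looks like end to end, assuming a v2-style pipeline and using doc.retokenize (the non-deprecated equivalent of span.merge); the regex and example sentence are placeholders:
import re
import spacy
from spacy.util import compile_suffix_regex

nlp = spacy.load("en_core_web_sm")

# Make "." a suffix so any trailing period is split off as its own token.
suffixes = list(nlp.Defaults.suffixes) + [r"\."]
nlp.tokenizer.suffix_search = compile_suffix_regex(suffixes).search

def merge_non_final_periods(doc):
    # Re-attach every period except the sentence-final one to the preceding text.
    spans = []
    for match in re.finditer(r"\S+\.", doc.text):
        if match.end() == len(doc.text.rstrip()):
            continue                           # leave the final period as its own token
        span = doc.char_span(match.start(), match.end())
        if span is not None and len(span) > 1:
            spans.append(span)
    with doc.retokenize() as retokenizer:
        for span in spans:
            retokenizer.merge(span)
    return doc

nlp.add_pipe(merge_non_final_periods, first=True)   # v2-style add_pipe, runs before everything else

doc = nlp("Dr. Smith et al. wrote this.")
print([t.text for t in doc])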
It would be nice to be able to do this within the tokenizer itself, if anyone knows how to do that.

How to treat numbers inside text strings when vectorizing words?

If I have a text string to be vectorized, how should I handle the numbers inside it? Or, if I feed a neural network with numbers and words, how can I keep the numbers as numbers?
I am planning on making a dictionary of all my words (as suggested here). In this case all strings will become arrays of numbers. How should I handle characters that are numbers? How do I output a vector that does not mix the word index with the number character?
Does converting numbers to strings weaken the information I feed the network?
Expanding on your discussion with user1735003 – let's consider both ways of representing numbers:
Treating the number as a string, considering it just another word, and assigning it an ID when forming the dictionary; or
Converting the numbers to actual words: '1' becomes 'one', '2' becomes 'two', and so on.
Does the second one change the context in any way? To verify, we can compute the similarity of the two representations using word2vec. The scores will be high if they appear in similar contexts.
For example, '1' and 'one' have a similarity score of 0.17, and '2' and 'two' have a similarity score of 0.23. These scores suggest that the contexts in which they are used are quite different.
By treating the numbers as another word, you are not changing the context, but by doing any other transformation on those numbers, you can't guarantee it's for the better. So it's better to leave them untouched and treat them as just another word.
Note: both word2vec and GloVe were trained by treating numbers as strings (case 1).
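If you want to reproduce this kind of similarity check yourself, here is a quick sketch with gensim (the vector file path is a placeholder, and some pretrained sets normalise digits away, in which case the lookup will raise a KeyError):
from gensim.models import KeyedVectors

# Any word2vec-format vectors whose vocabulary contains both the digit token
# and the spelled-out word will do.
kv = KeyedVectors.load_word2vec_format("vectors.bin", binary=True)

print(kv.similarity("1", "one"))
print(kv.similarity("2", "two"))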
The link you provide suggests that everything resulting from a .split(' ') is indexed – words, but also numbers, possibly smileys, and so on (I would still take care of punctuation marks). Unless you have more prior knowledge about your data or your problem, you could start with that.
EDIT
Example literally using your string and their code:
corpus = {'my car number 3'}
dictionary = {}
i = 1
for tweet in corpus:
    for word in tweet.split(" "):
        if word not in dictionary:
            dictionary[word] = i
            i += 1
print(dictionary)
# {'my': 1, '3': 4, 'car': 2, 'number': 3}
The following paper can be helpful: http://people.csail.mit.edu/mcollins/6864/slides/bikel.pdf
Specifically, page 7.
Before falling back to an <unknown> tag, they try to replace alphanumeric symbol combinations with common pattern-name tags, such as:
FourDigits (good for years)
I've tried to implement it and it gave great results.
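A minimal sketch of the idea (the pattern names and regexes here are illustrative, not the paper's exact inventory):
import re

# Map out-of-vocabulary tokens to coarse pattern-name tags before falling back to <unknown>.
PATTERNS = [
    (re.compile(r"^\d{4}$"), "FourDigits"),    # e.g. years like "1997"
    (re.compile(r"^\d{2}$"), "TwoDigits"),
    (re.compile(r"^\d+$"), "OtherNumber"),
    (re.compile(r"^[A-Z][a-z]+$"), "InitCap"),
    (re.compile(r"^[A-Z]+$"), "AllCaps"),
]

def normalize_token(token, vocabulary):
    if token in vocabulary:
        return token
    for regex, tag in PATTERNS:
        if regex.match(token):
            return tag
    return "<unknown>"

vocab = {"my", "car", "number"}
print([normalize_token(t, vocab) for t in "my car number 3 1997 NASA".split()])
# ['my', 'car', 'number', 'OtherNumber', 'FourDigits', 'AllCaps']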