Training a spaCy model for NER on French resumes doesn't give any results - spacy

Sample of the training data (input.json); the full JSON has only 100 resumes.
{"content": "Resume 1 text in french","annotation":[{"label":["diplomes"],"points":[{"start":1233,"end":1423,"text":"1995-1996 : Lycée Dar Essalam Rabat \n Baccalauréat scientifique option sciences Expérimentales "}]},{"label":["diplomes"],"points":[{"start":1012,"end":1226,"text":"1996-1998 : Faculté des Sciences Rabat \n C.E.U.S (Certificat des Etudes universitaires Supérieurs) option physique et chimie "}]},{"label":["diplomes"],"points":[{"start":812,"end":1004,"text":"1999-2000 : Faculté des Sciences Rabat \n Licence es sciences physique option électronique "}]},{"label":["diplomes"],"points":[{"start":589,"end":805,"text":"2002-2004 : Faculté des Sciences Rabat \nDESA ((Diplôme des Etudes Supérieures Approfondies) en informatique \n\ntélécommunication multimédia "}]},{"label":["diplomes"],"points":[{"start":365,"end":582,"text":"2014-2017 : Institut National des Postes et Télécommunications INPT Rabat \n Thèse de doctorat en informatique et télécommunication "}]},{"label":["adresse"],"points":[{"start":122,"end":157,"text":"Rue 34 n 17 Hay Errachad Rabat Maroc"}]}],"extras":null,"metadata":{"first_done_at":1586140561000,"last_updated_at":1586140561000,"sec_taken":0,"last_updated_by":"wP21IMXff9TFSNLNp5v0fxbycFX2","status":"done","evaluation":"NONE"}}
{"content": "Resume 2 text in french","annotation":[{"label":["diplomes"],"points":[{"start":1251,"end":1345,"text":"Lycée Oued El Makhazine - Meknès \n\n- Bachelier mention très bien \n- Option : Sciences physiques"}]},{"label":["diplomes"],"points":[{"start":1122,"end":1231,"text":"Classes préparatoires Moulay Youssef - Rabat \n\n- Admis au Concours National Commun CNC \n- Option : PCSI - PSI "}]},{"label":["diplomes"],"points":[{"start":907,"end":1101,"text":"Institut National des Postes et Télécommunications INPT - Rabat \n\n- Ingénieur d’État en Télécommunications et technologies de l’information \n- Option : MTE Management des Télécoms de l’entreprise"}]},{"label":["adresse"],"points":[{"start":79,"end":133,"text":"94, Hay El Izdihar, Avenue El Massira, Ouislane, MEKNES"}]}],"extras":null,"metadata":{"first_done_at":1586126476000,"last_updated_at":1586325851000,"sec_taken":0,"last_updated_by":"wP21IMXff9TFSNLNp5v0fxbycFX2","status":"done","evaluation":"NONE"}}
{"content": "Resume 3 text in french","annotation":[{"label":["adresse"],"points":[{"start":2757,"end":2804,"text":"N141 Av. El Hansali Agharass \nBouargane \nAgadir "}]},{"label":["diplomes"],"points":[{"start":262,"end":369,"text":"2009-2010 : Baccalauréat Scientifique, option : Sciences Physiques au Lycée Qualifiant \nIBN MAJJA à Agadir."}]},{"label":["diplomes"],"points":[{"start":125,"end":259,"text":"2010-2016 : Diplôme d’Ingénieur d’Etat, option : Génie Informatique, à l’Ecole \nNationale des Sciences Appliquées d’Agadir (ENSAA). "}]}],"extras":null,"metadata":{"first_done_at":1586141779000,"last_updated_at":1586141779000,"sec_taken":0,"last_updated_by":"wP21IMXff9TFSNLNp5v0fxbycFX2","status":"done","evaluation":"NONE"}}
{"content": "Resume 4 text in french","annotation":[{"label":["diplomes"],"points":[{"start":505,"end":611,"text":"2012 Baccalauréat Sciences Expérimentales option Sciences Physiques, Lycée Hassan Bno \nTabit, Ouled Abbou. "}]},{"label":["diplomes"],"points":[{"start":375,"end":499,"text":"2012–2015 Diplôme de licence en Informatique et Gestion Industrielle, IGI, Faculté des sciences \net Techniques, Settat, LST. "}]},{"label":["diplomes"],"points":[{"start":272,"end":367,"text":"2015–2017 Master Spécialité BioInformatique et Systèmes Complexes, BISC, ENSA , Tanger, \n\nBac+5."}]},{"label":["adresse"],"points":[{"start":15,"end":71,"text":"246 Hay Pam Eljadid OULED ABBOU \n26450 BERRECHID, Maroc "}]}],"extras":null,"metadata":{"first_done_at":1586127374000,"last_updated_at":1586327010000,"sec_taken":0,"last_updated_by":"wP21IMXff9TFSNLNp5v0fxbycFX2","status":"done","evaluation":"NONE"}}
{"content": "Resume 5 text in french","annotation":null,"extras":null,"metadata":{"first_done_at":1586139511000,"last_updated_at":1586139511000,"sec_taken":0,"last_updated_by":"wP21IMXff9TFSNLNp5v0fxbycFX2","status":"done","evaluation":"NONE"}}
Code that transforms this JSON data to spaCy format:
import json
import pickle

input_file = "input.json"
output_file = "output.json"

training_data = []
with open(input_file, 'r', encoding="utf8") as f:
    lines = f.readlines()

for line in lines:
    data = json.loads(line)
    print(data)
    text = data['content']
    entities = []
    # "annotation" can be null (see resume 5 above), so guard against it
    for annotation in (data['annotation'] or []):
        # this export has a single point per annotation
        point = annotation['points'][0]
        labels = annotation['label']
        if not isinstance(labels, list):
            labels = [labels]
        for label in labels:
            # spaCy expects the end offset to be exclusive, hence the +1
            entities.append((point['start'], point['end'] + 1, label))
    training_data.append((text, {"entities": entities}))

with open(output_file, 'wb') as fp:
    pickle.dump(training_data, fp)
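One thing worth checking before training: the converter adds +1 to the end offset, and several annotated spans in the sample carry trailing whitespace, so some spans may not line up with token boundaries, in which case spaCy can skip those entities during training. A small sanity check over the pickled data (a sketch; output.json is the pickle written above):

import pickle

with open("output.json", 'rb') as fp:
    training_data = pickle.load(fp)

for text, annotations in training_data:
    for start, end, label in annotations["entities"]:
        span = text[start:end]
        # flag spans with leading/trailing whitespace, which usually
        # means the offsets do not match token boundaries
        if span != span.strip():
            print(label, repr(span))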
Code for training the spaCy model:
import random
import datetime as dt
from pathlib import Path

import spacy

def train_spacy():
    TRAIN_DATA = training_data
    nlp = spacy.load('fr_core_news_md')  # load the pretrained French model
    # nlp.create_pipe works for built-ins that are registered with spaCy:
    # if 'ner' not in nlp.pipe_names:
    #     ner = nlp.create_pipe('ner')
    #     nlp.add_pipe(ner, last=True)
    ner = nlp.get_pipe("ner")
    # add labels
    for _, annotations in TRAIN_DATA:
        for ent in annotations.get('entities'):
            ner.add_label(ent[2])
    # get names of other pipes to disable them during training
    other_pipes = [pipe for pipe in nlp.pipe_names if pipe != 'ner']
    with nlp.disable_pipes(*other_pipes):  # only train NER
        optimizer = nlp.begin_training()
        for itn in range(20):
            print("Starting iteration " + str(itn))
            random.shuffle(TRAIN_DATA)
            losses = {}
            for text, annotations in TRAIN_DATA:
                nlp.update(
                    [text],          # batch of texts
                    [annotations],   # batch of annotations
                    # drop=0.2,      # dropout - make it harder to memorise data
                    sgd=optimizer,   # callable to update weights
                    losses=losses)
            print(itn, dt.datetime.now(), losses)
    output_dir = Path("new-model")
    if not output_dir.exists():
        output_dir.mkdir()
    nlp.meta['name'] = "addr_edu"  # rename model
    nlp.to_disk(output_dir)
    print("Saved model to", output_dir)

train_spacy()
When I test the model, this is what happens:
import spacy
nlp = spacy.load("new-model")
doc = nlp("Text of a Resume already trained on")
print(doc.ents)
# It prints out this ()
doc = nlp("Text of a Resume not trained on")
print(doc.ents)
# It prints out this ()
What I expect it to give me are the entities adresse (address) and diplomes (academic degrees) present in the text.
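To see exactly what the pipeline produces, it can help to print each entity with its label and character offsets rather than the bare tuple; a minimal sketch:

doc = nlp("Text of a Resume already trained on")
for ent in doc.ents:
    # ent.label_ is the predicted entity type, start_char/end_char the offsets
    print(ent.text, ent.label_, ent.start_char, ent.end_char)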
Edit 1
The sample data (input.json) at the very top is part of the data I get after annotating resumes on a text annotation platform.
I then have to transform it into spaCy format so I can give it to the model for training.
This is what a resume with its annotations looks like when I give it to the model:
training_data = [(
'Dr.XXXXXX XXXXXXX \n\n \nEmail : XXXXXXXXXXXXXXXXXXXXXXXX \n\nGSM : XXXXXXXXXX \n\nAdresse : XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX \n \n\n \n\nETAT CIVIL \n \n\nSituation de famille : célibataire \n\nNationalité : Marocaine \n\nNé le : 10 février 1983 \n\nLieu de naissance : XXXXXXXXXXXXXXXX \n\n \n FORMATION \n\n• 2014-2017 : Institut National des Postes et Télécommunications INPT Rabat \n Thèse de doctorat en informatique et télécommunication \n \n\n• 2002-2004 : Faculté des Sciences Rabat \nDESA ((Diplôme des Etudes Supérieures Approfondies) en informatique \n\ntélécommunication multimédia \n \n\n• 1999-2000 : Faculté des Sciences Rabat \n Licence es sciences physique option électronique \n \n\n• 1996-1998 : Faculté des Sciences Rabat \n C.E.U.S (Certificat des Etudes universitaires Supérieurs) option physique et chimie \n \n\n• 1995-1996 : Lycée Dar Essalam Rabat \n Baccalauréat scientifique option sciences Expérimentales \n\nSTAGE DE FORMATION \n\n• Du 03/03/2004 au 17/09/2004 : Stage de Projet de Fin d’Etudes à l’ INPT pour \nl’obtention du DESA (Diplôme des Etudes Supérieures Approfondies). \n\n Sujet : AGENT RMON DANS LA GESTION DE RESEAUX. \n\n• Du 03/06/2002 au 17/01/2003: Stage de Projet de Fin d’année à INPT \n Sujet : Mécanisme d’Authentification Kerbéros Dans un Réseau Sans fils sous Redhat. \n\nPUBLICATION \n\n✓ Ababou, Mohamed, Rachid Elkouch, and Mostafa Bellafkih and Nabil Ababou. "New \n\nstrategy to optimize the performance of epidemic routing protocol." International Journal \n\nof Computer Applications, vol. 92, N.7, 2014. \n\n✓ Ababou, Mohamed, Rachid Elkouch, and Mostafa Bellafkih and Nabil Ababou. "New \n\nStrategy to optimize the Performance of Spray and wait Routing Protocol." International \n\nJournal of Wireless and Mobile Networks v.6, N.2, 2014. \n\n✓ Ababou, Mohamed, Rachid Elkouch, and Mostafa Bellafkih and Nabil Ababou. "Impact of \n\nmobility models on Supp-Tran optimized DTN Spray and Wait routing." International \n\njournal of Mobile Network Communications & Telematics ( IJMNCT), Vol.4, N.2, April \n\n2014. \n\n✓ M. Ababou, R. Elkouch, M. Bellafkih and N. Ababou, "AntProPHET: A new routing \n\nprotocol for delay tolerant networks," Proceedings of 2014 Mediterranean Microwave \n\nSymposium (MMS2014), Marrakech, 2014, IEEE. \n\nmailto:XXXXXXXXXXXXXXXXXXXXXXXX\n\n\n✓ Ababou, Mohamed, et al. "BeeAntDTN: A nature inspired routing protocol for delay \n\ntolerant networks." Proceedings of 2014 Mediterranean Microwave Symposium \n\n(MMS2014). IEEE, 2014. \n\n✓ Ababou, Mohamed, et al. "ACDTN: A new routing protocol for delay tolerant networks \n\nbased on ant colony." Information Technology: Towards New Smart World (NSITNSW), \n\n2015 5th National Symposium on. IEEE, 2015. \n\n✓ Ababou, Mohamed, et al. "Energy-efficient routing in Delay-Tolerant Networks." RFID \n\nAnd Adaptive Wireless Sensor Networks (RAWSN), 2015 Third International Workshop \n\non. IEEE, 2015. \n\n✓ Ababou, Mohamed, et al. "Energy efficient and effect of mobility on ACDTN routing \n\nprotocol based on ant colony." Electrical and Information Technologies (ICEIT), 2015 \n\nInternational Conference on. IEEE, 2015. \n\n✓ Mohamed, Ababou et al. "Fuzzy ant colony based routing protocol for delay tolerant \n\nnetwork." 10th International Conference on Intelligent Systems: Theories and Applications \n\n(SITA). IEEE, 2015. 
\n\nARTICLES EN COURS DE PUBLICATION \n\n✓ Ababou, Mohamed, Rachid Elkouch, and Mostafa Bellafkih and Nabil Ababou.”Dynamic \n\nUtility-Based Buffer Management Strategy for Delay-tolerant Networks. “International \n\nJournal of Ad Hoc and Ubiquitous Computing, 2017. ‘accepté par la revue’ \n\n✓ Ababou, Mohamed, Rachid Elkouch, and Mostafa Bellafkih and Nabil Ababou. "Energy \n\nefficient routing protocol for delay tolerant network based on fuzzy logic and ant colony." \n\nInternational Journal of Intelligent Systems and Applications (IJISA), 2017. ‘accepté par la \n\nrevue’ \n\nCONNAISSANCES EN INFORMATIQUE \n\n \n\nLANGUES \n\nArabe, Français, anglais. \n\nLOISIRS ET INTERETS PERSONNELS \n\n \n\nVoyages, Photographie, Sport (tennis de table, footing), bénévolat. \n\nSystèmes : UNIX, DOS, Windows \n\nLangages : Séquentiels ( C, Assembleur), Requêtes (SQL), WEB (HTML, PHP, MySQL, \n\nJavaScript), Objets (C++, DOTNET,JAVA) , I.A. (Lisp, Prolog) \n\nLogiciels : Open ERP (Enterprise Resource Planning), AutoCAD, MATLAB, Visual \n\nBasic, Dreamweaver MX. \n\nDivers : Bases de données, ONE (Opportunistic Network Environment), NS3, \n\nArchitecture réseaux,Merise,... \n\n',
{'entities': [(1233, 1424, 'diplomes'), (1012, 1227, 'diplomes'), (812, 1005, 'diplomes'), (589, 806, 'diplomes'), (365, 583, 'diplomes'), (122, 158, 'adresse')]}
)]
I agree it is better to try training the model on just one resume, and test with that to see whether it learns.
I've changed the code; the difference is that I now train a blank model.
def train_spacy():
    TRAIN_DATA = training_data
    nlp = spacy.blank('fr')
    ner = nlp.create_pipe("ner")
    nlp.add_pipe(ner, last=True)
    # add labels
    for _, annotations in TRAIN_DATA:
        for ent in annotations.get('entities'):
            ner.add_label(ent[2])
    optimizer = nlp.begin_training()
    for itn in range(20):
        random.shuffle(TRAIN_DATA)
        losses = {}
        for text, annotations in TRAIN_DATA:
            nlp.update(
                [text],         # batch of texts
                [annotations],  # batch of annotations
                drop=0.1,       # dropout - make it harder to memorise data
                sgd=optimizer,  # callable to update weights
                losses=losses
            )
        print(itn, dt.datetime.now(), losses)
    return nlp
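Since this version returns the pipeline instead of writing it out, saving it would look something like this (the directory name is just an example):

nlp = train_spacy()
nlp.to_disk("new-model-blank")  # hypothetical output directory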
Here are the losses I get during training.
Here is the test; I test with the same resume that was used for training.
The good thing is that I no longer get the empty tuple: the model actually recognized something correctly, in this case the "adresse" entity.
But it won't recognize the "diplomes" entity, of which there are 5 in this resume, even though the model was trained on it.

Related

QGIS: Clipping a single raster by a shapefile's multiple features

I am trying to write raster-clipping code in QGIS. The intention is that, given a shapefile with 6 different polygons, the code goes through this shapefile and cuts the raster into 6 different images corresponding to the 6 polygons. So the question is about code for mass-clipping images from a raster using a shapefile with 6 features.
I've tried looping through the shapefile and including it as a vector layer for the clipping, but it gives me the error:
"in _parse_args a = os.fspath(a)
TypeError: expected str, bytes or os.PathLike object, not list"
How do I solve the problem and clip the raster? Any ideas or solutions?
from osgeo import gdal
import os
from pathlib import Path
# run inside the QGIS Python console, where these are available:
import processing
from qgis.core import QgsProject, QgsRasterLayer

# variable for the directory path
carpeta = r'C:\Users\User\Desktop\...\Script\Shapes'
# empty list to collect the files
listaArchivos = []
# everything in the directory; os.walk() lists directories and files
lstDir = os.walk(carpeta)
# build a list of the .shp files that exist in the directory
for root, dirs, files in lstDir:
    for fichero in files:
        (nombreFichero, extension) = os.path.splitext(fichero)
        if extension == ".shp":
            listaArchivos.append(nombreFichero + extension)
            # print(nombreFichero + extension)
print(listaArchivos)
print('LISTADO FINALIZADO')
print("longitud de la lista = ", len(listaArchivos))

basepath = Path().home().joinpath(r'C:\Users\User\Desktop\...\Script')
rasterPath = basepath.joinpath("Ortos")
# this passes the whole list to joinpath, which triggers the TypeError above
vectorPath = basepath.joinpath(listaArchivos)

# save the clips into a folder
dir_recorte = rasterPath.joinpath('Carpeta recortes')
if not os.path.exists(dir_recorte):
    os.mkdir(dir_recorte)

# clip the image
for lyr in rasterPath.glob("*.tif"):
    output_path = dir_recorte.joinpath("ID_uno_{}".format(lyr.name))
    parameters = {
        'ALPHA_BAND': False,
        'CROP_TO_CUTLINE': True,
        'DATA_TYPE': 0,
        'INPUT': lyr.as_posix(),
        'KEEP_RESOLUTION': False,
        'MASK': vectorPath.as_posix(),
        'MULTITHREADING': False,
        'OPTIONS': '',
        'OUTPUT': output_path.as_posix(),
        'SET_RESOLUTION': False,
        'SOURCE_CRS': None,
        'TARGET_CRS': None,
        'X_RESOLUTION': None,
        'Y_RESOLUTION': None
    }
    resultado = processing.run("gdal:cliprasterbymasklayer", parameters)
    QgsProject.instance().addMapLayer(QgsRasterLayer(resultado['OUTPUT'], output_path.stem, 'gdal'))
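For what it's worth, joinpath receives the whole listaArchivos list at once, which is exactly what the TypeError complains about; the loop is presumably meant to use one shapefile at a time. A sketch under that assumption (it reuses the parameters dict from above and assumes each polygon has been exported as its own .shp):

for shp in listaArchivos:
    vectorPath = basepath.joinpath('Shapes', shp)  # one path, not a list
    for lyr in rasterPath.glob("*.tif"):
        output_path = dir_recorte.joinpath("{}_{}".format(Path(shp).stem, lyr.name))
        parameters['INPUT'] = lyr.as_posix()
        parameters['MASK'] = vectorPath.as_posix()
        parameters['OUTPUT'] = output_path.as_posix()
        processing.run("gdal:cliprasterbymasklayer", parameters)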

UnboundLocalError: local variable 'range' referenced before assignment

I'm new to Python.
I just found an error in my code and I have no idea what the problem is. I have already searched multiple websites using the error message, but I haven't found a solution.
I didn't even put range into a variable, so it's kind of weird.
import turtle  # I haven't added colour yet because I don't know how; I think you have to pick a range of colours and then use a random command
import random

couleur = [(255, 127, 36),
           (238, 118, 33),
           (205, 102, 29),
           (255, 114, 86),
           (238, 106, 80),
           (205, 91, 69),
           (255, 127, 0),
           (238, 118, 0),
           (205, 102, 0),
           (139, 69, 0),
           (139, 69, 19)]

def random_color():
    return random.choice(couleur)

def briques():  # this function draws the row of bricks
    for i in range(12):  # I could have used nblong, but that would have looked odd; it is the number of rows of bricks
        for i in range(12):  # this is for the first row of bricks
            random_color()
            color(couleur)
            fillcolor(couleur)
            begin_fill()
By the way, this is a turtle project: we have to draw a house (in 2D) using turtle's features.
Maybe try defining i with no value before the for i in range loop, e.g.
i = " "
or
i = 0

Scrapy - how to send a form request

I'm a newbie at web scraping, and I'm trying to send a request with cookies, but I think I'm doing something wrong.
I receive cookies with the filter values from the search field.
First, tell me whether I need to install an extra package to send cookies in a request.
Second, do I need to activate something in the settings?
Here is my code.
I'm French, so the comments and the website are in French.
import scrapy
from ..items import DuproprioItem
from scrapy.http import FormRequest

# to remove the \xa0 encoding: var.replace(u'\xa0', ' ')
# how scrapy works with multiple pages and multiple links on each page:
# 1- the first function must always be named 'parse'
# 2- in this function we start by extracting the URLs of all the links, and we pass them to a new function to then extract the data
# 3- we collect the URLs of all the pages to build the pagination
# 4- we build a pagination that lets us fetch all the listings from all the pages
# 5- we create a new function that extracts the data from the listings
class BlogSpider(scrapy.Spider):
    name = 'remax_cookies'
    start_urls = [
        'https://www.remax-quebec.com/fr/infos/nous-joindre.rmx'
    ]

    def parse(self, response):
        yield scrapy.http.Request("https://www.remax-quebec.com/fr/recherche/residentielle/resultats.rmx", callback=self.parse_with_form, method='POST', cookies={
            'mode': 'criterias',
            'order': 'prix_asc',
            'query': '',
            'categorie': 'residentielle',
            'selectItemcategorie': 'residentielle',
            'minPrice': '100000',
            'selectItemminPrice': '100000',
            'maxPrice': '200000',
            'selectItemmaxPrice': '200000',
            'caracResi7': '_',
            'caracResi1': '_',
            'caracResi4': '_',
            'caracResi8': '_',
            'caracResi2': '_',
            'caracResi12': '_',
            'caracComm4': '_',
            'caracComm2': '_',
            'caracComm5': '_',
            'caracFarm3': '_',
            'caracFarm1': '_',
            'caracLand1': '_',
            'caracResi5': '_',
            'caracResi9': '_',
            'caracResi10': '_',
            'caracResi3': '_',
            'caracResi6': '_',
            'caracResi13': '_',
            'caracComm3': '_',
            'caracComm1': '_',
            'caracFarm2': '_',
            'uls': ''
        })

    def parse_with_form(self, response):
        # go get all the links on the page so we can extract the house listings
        links_fiche = response.css('a.property-thumbnail::attr(href)').extract()
        # extract all the links with a loop
        for fiche in links_fiche:
            # send every link to an outside scraping function that then extracts the data from the listings received (parse_fiche)
            yield response.follow(fiche, self.parse_fiche)

    # scrape the data received from the listings
    def parse_fiche(self, response):
        items = DuproprioItem()
        price = response.css('h4.Caption__Price span').css('::text').get().strip()
        items['price'] = price
        yield items
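As an aside, the values being sent here look like form fields rather than cookies, which is presumably why FormRequest is imported at the top. A sketch of posting the same filters as form data (field names copied from the question, abbreviated and untested against the site):

def parse(self, response):
    yield FormRequest(
        "https://www.remax-quebec.com/fr/recherche/residentielle/resultats.rmx",
        formdata={
            'mode': 'criterias',
            'categorie': 'residentielle',
            'minPrice': '100000',
            'maxPrice': '200000',
        },
        callback=self.parse_with_form,
    )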

Python beginner: Preprocessing a French text in Python and calculating the polarity with a lexicon

I am writing an algorithm in Python which processes a column of sentences and then gives the polarity (positive or negative) of each cell in my column of sentences. The script uses a list of negative and positive words from the NRC emotion lexicon (French version). I am having trouble writing the preprocess function. I have already written the count function and the polarity function, but since I have difficulty writing the preprocess function, I am not really sure whether those functions work.
The positive and negative words were in the same file (the lexicon), but I exported the positive and negative words separately because I did not know how to use the lexicon as it was.
My function counting occurrences of positive and negative words does not work, and I do not know why it always returns 0. I added positive words to each sentence, so they should appear in the dataframe:
Output:
[4 rows x 6 columns]
id Verbatim ... word_positive word_negative
0 15 Je n'ai pas bien compris si c'était destiné a ... ... 0 0
1 44 Moi aérien affable affaire agent de conservati... ... 0 0
2 45 Je affectueux affirmative te hais et la Foret ... ... 0 0
3 47 Je absurde accidentel accusateur accuser affli... ... 0 0
def count_occurences_Pos(text, word_list):
    '''Count occurences of words from a list in a text string.'''
    text_list = process_text(text)
    intersection = [w for w in text_list if w in word_list]
    return len(intersection)

csv_df['word_positive'] = csv_df['Verbatim'].apply(count_occurences_Pos, args=(lexiconPos, ))
This is my CSV data: lines 44 and 45 contain positive words and line 47 more negative words, but the positive and negative word columns are always empty; the function does not return the number of words, and the final column is always positive even though the last sentence is negative.
id;Verbatim
15;Je n'ai pas bien compris si c'était destiné a rester
44;Moi aérien affable affaire agent de conservation qui ne agraffe connais rien, je trouve que c'est s'emmerder pour rien, il suffit de mettre une multiprise
45;Je affectueux affirmative te hais et la Foret enchantée est belle de milles faux et les jeunes filles sont assises au bor de la mer
47;Je absurde accidentel accusateur accuser affliger affreux agressif allonger allusionne admirateur admissible adolescent agent de police Comprends pas la vie et je suis perdue
Here the full code :
# -*- coding: UTF-8 -*-
import codecs
import re
import os
import sys, argparse
import subprocess
import pprint
import csv
from itertools import islice
import pickle
import nltk
from nltk import tokenize
from nltk.tokenize import sent_tokenize, word_tokenize
from nltk.corpus import stopwords
import pandas as pd
try:
import treetaggerwrapper
from treetaggerwrapper import TreeTagger, make_tags
print("import TreeTagger OK")
except:
print("Import TreeTagger pas Ok")
from itertools import islice
from collections import defaultdict, Counter
csv_df = pd.read_csv('test.csv', na_values=['no info', '.'], encoding='Cp1252', delimiter=';')
#print(csv_df.head())
stopWords = set(stopwords.words('french'))
tagger = treetaggerwrapper.TreeTagger(TAGLANG='fr')
def process_text(text):
'''extract lemma and lowerize then removing stopwords.'''
text_preprocess =[]
text_without_stopwords= []
text = tagger.tag_text(text)
for word in text:
parts = word.split('\t')
try:
if parts[2] == '':
text_preprocess.append(parts[1])
else:
text_preprocess.append(parts[2])
except:
print(parts)
text_without_stopwords= [word.lower() for word in text_preprocess if word.isalnum() if word not in stopWords]
return text_without_stopwords
csv_df['sentence_processing'] = csv_df['Verbatim'].apply(process_text)
#print(csv_df['word_count'].describe())
print(csv_df)
lexiconpos = open('positive.txt', 'r', encoding='utf-8')
print(lexiconpos.read())
def count_occurences_pos(text, word_list):
'''Count occurences of words from a list in a text string.'''
text_list = process_text(text)
intersection = [w for w in text_list if w in word_list]
return len(intersection)
#csv_df['word_positive'] = csv_df['Verbatim'].apply(count_occurences_pos, args=(lexiconpos, ))
#print(csv_df)
lexiconneg = open('negative.txt', 'r', encoding='utf-8')
def count_occurences_neg(text, word_list):
'''Count occurences of words from a list in a text string.'''
text_list = process_text(text)
intersection = [w for w in text_list if w in word_list]
return len(intersection)
#csv_df['word_negative'] = csv_df['Verbatim'].apply(count_occurences_neg, args= (lexiconneg, ))
#print(csv_df)
def polarity_score(text):
''' give the polarity of each text based on the number of positive and negative word '''
positives_text =count_occurences_pos(text, lexiconpos)
negatives_text =count_occurences_neg(text, lexiconneg)
if positives_text > negatives_text :
return "positive"
else :
return "negative"
csv_df['polarity'] = csv_df['Verbatim'].apply(polarity_score)
#print(csv_df)
print(csv_df)
If you could also check whether the rest of the code is good, thank you.
I have found your error!
It comes from the polarity_score function.
It's just a typo: in your if statement you were comparing count_occurences_Pos and count_occurences_Neg, which are functions, instead of comparing the results of calling count_occurences_pos and count_occurences_neg.
Your code should be like this:
def Polarity_score(text):
    ''' give the polarity of each text based on the number of positive and negative word '''
    count_text_pos = count_occurences_Pos(text, word_list)
    count_text_neg = count_occurences_Neg(text, word_list)
    if count_text_pos > count_text_neg:
        return "Positive"
    else:
        return "negative"
In the future, try to use meaningful names for your variables to avoid this kind of error.
With correct variable names, your function should be:

def polarity_score(text):
    ''' give the polarity of each text based on the number of positive and negative word '''
    positives_text = count_occurences_pos(text, word_list)
    negatives_text = count_occurences_neg(text, word_list)
    if positives_text > negatives_text:
        return "Positive"
    else:
        return "negative"
Another improvement you can make in your count_occurences_pos and count_occurences_neg functions is to use sets instead of lists: your text and word_list can be converted to sets, and you can use set intersection to retrieve the positive words in them, because sets are much faster than lists for membership tests.
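A sketch of that idea, assuming positive.txt holds one word per line (note that the open(...) call in the question leaves a file object, not a word list, which is also consistent with the counts coming out as 0):

with open('positive.txt', 'r', encoding='utf-8') as f:
    lexiconpos = set(line.strip() for line in f)  # one lexicon word per line

def count_occurences(text, word_set):
    '''Count lexicon words in a text via set intersection (each distinct word counted once).'''
    return len(set(process_text(text)) & word_set)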

Embed a PDF in an R Markdown file and adapt pagination

I am finishing my PhD, and I need to embed some papers (in PDF format) somewhere in the middle of my R Markdown text.
When converting the R Markdown to PDF, I would like those PDF papers to be embedded in the output.
However, I would also like those PDF papers to be numbered consistently with the rest of the Markdown text.
How can I do it?
UPDATE: New error
By using \includepdf, I get this error:
output file: Tesis_doctoral_-_TEXTO.knit.md
! Undefined control sequence.
l.695 \includepdf
[pages=1-10, angle=90, pagecommand={}]{PDF/Paper1.pdf}
Here is how much of TeX's memory you used:
12157 strings out of 495028
174654 string characters out of 6181498
273892 words of memory out of 5000000
15100 multiletter control sequences out of 15000+600000
40930 words of font info for 89 fonts, out of 8000000 for 9000
14 hyphenation exceptions out of 8191
31i,4n,35p,247b,342s stack positions out of 5000i,500n,10000p,200000b,80000s
Error: Failed to compile Tesis_doctoral_-_TEXTO.tex. See Tesis_doctoral_-_TEXTO.log for more info.
Execution halted
EXAMPLE of the R Markdown code
---
title: Histología dental de los homininos de la Sierra de Atapuerca (Burgos, España)
  y patrón de estrategia de vida
author: "Mario Modesto-Mata"
date: "20 September 2018"
output:
  pdf_document:
    highlight: pygments
    number_sections: yes
    toc: yes
    toc_depth: 4
  word_document:
    toc: yes
    toc_depth: '4'
  html_document: default
csl: science.csl
bibliography: references.bib
header-includes:
  - \usepackage{pdfpages}
---
```{r opciones_base_scripts, message=FALSE, warning=FALSE, include=FALSE, paged.print=FALSE}
library(captioner)
tabla_nums <- captioner(prefix = "Tabla")
figura_nums <- captioner(prefix = "Figura")
anx_tabla_nums <- captioner(prefix = "Anexo Tabla")
```
# Resumen
Los estudios de desarrollo dental en homínidos han sido sesgados involuntariamente en especies pre-Homo y algunos especímenes Homo tempranos, que representan la condición primitiva con tiempos de formación dental más rápidos, respetan a los Neandertales posteriores y a los humanos modernos, que comparativamente tienen tiempos de formación más lentos.
## PDF Article
\includepdf[pages=1-22, pagecommand={}]{PDF/Paper1.pdf}
## Bayes
El desarrollo dental relativo se evaluó empleando un enfoque estadístico bayesiano (31).
This is the link to download the PDF
I had to remove a few things from your example, but after that it worked without problems:
---
title: Histología dental de los homininos de la Sierra de Atapuerca (Burgos, España)
  y patrón de estrategia de vida
author: "Mario Modesto-Mata"
date: "20 September 2018"
output:
  pdf_document:
    highlight: pygments
    number_sections: yes
    toc: yes
    toc_depth: 4
    keep_tex: yes
  word_document:
    toc: yes
    toc_depth: '4'
  html_document: default
header-includes:
  - \usepackage{pdfpages}
---
# Resumen
Los estudios de desarrollo dental en homínidos han sido sesgados involuntariamente en especies pre-Homo y algunos especímenes Homo tempranos, que representan la condición primitiva con tiempos de formación dental más rápidos, respetan a los Neandertales posteriores y a los humanos modernos, que comparativamente tienen tiempos de formación más lentos.
## PDF Article
\includepdf[pages=1-22, pagecommand={}, scale = 0.9]{Paper1.pdf}
## Bayes
El desarrollo dental relativo se evaluó empleando un enfoque estadístico bayesiano (31).
Result:
BTW, for something like a thesis I would use bookdown, since this gives you cross-referencing etc.
If that does not work for you, I suggest first looking at plain LaTeX, i.e. does the following LaTeX document work for you:
\documentclass{article}
\usepackage{pdfpages}
\begin{document}
foo
\includepdf[pages=1-22, pagecommand={}, scale = 0.9]{Paper1.pdf}
bar
\end{document}