UnboundLocalError: local variable 'range' referenced before assignment - variables

I'm new to Python.
I just found an error in my code and I have no idea what the problem is. I already searched multiple websites for the error message, but I haven't found a solution.
I didn't even put range into a variable, so it's kind of weird.
import turtle  # I haven't added colour yet because I don't know how; I think I need to pick a range of colours and then use a rand command
import random
couleur = [(255, 127, 36),
           (238, 118, 33),
           (205, 102, 29),
           (255, 114, 86),
           (238, 106, 80),
           (205, 91, 69),
           (255, 127, 0),
           (238, 118, 0),
           (205, 102, 0),
           (139, 69, 0),
           (139, 69, 19)]
def random_color():
    return random.choice(couleur)
def briques():  # this function draws the rows of bricks
    for i in range(12):  # I could have used nblong, but that would have been weird; it's the number of rows of bricks
        for i in range(12):  # this is for the first row of bricks
            random_color()
            color(couleur)
            fillcolor(couleur)
            begin_fill()
By the way, this is a turtle project; we have to draw a house (in 2D) using turtle's features.

Maybe try defining i before the for ... in range loop, e.g. i = " " or i = 0.
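For context, an UnboundLocalError on a builtin name such as range usually means that the same function assigns to that name somewhere further down, which makes Python treat it as a local variable for the whole function. A minimal hypothetical reproduction (not the asker's actual code) would be:

def briques():
    for i in range(12):  # raises UnboundLocalError here: 'range' is already considered local...
        pass
    range = 12           # ...because of this later assignment inside the same function

briques()

Removing or renaming that later assignment brings back the builtin range.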

Related

Why am I getting error TypeError: 'numpy.ndarray' object is not callable?

I am trying to use the Maclaurin series to estimate the sine of a number. This is my function:
from math import sin, factorial
import numpy as np
fac = np.vectorize(factorial)
def sin_x(x, terms=10):
    """Computes an approximation of sin(x) using
    a given number of terms of the Maclaurin series"""
    n = np.arange(terms)
    return np.sum[((-1)**n)(x**(2*n+1)) / fac(2*n+1)]
if __name__ == "__main__":
    print("Valeur actuelle:", sin(5))  # using the sine function from the math library
    print("N (terms)\tMaclaurin\tErreur")
    for n in range(1, 6):
        maclaurin = sin_x(5, terms=n)
        print(f"{n}\t\t{maclaurin:.03f}\t\t{sin(10) - maclaurin:.03f}")
and this is the error I get
PS C:\Users\tapef\Desktop\NUMPY-TUT> python maclaurin_sin.py
Valeur actuelle: -0.9589242746631385
N (terms) Maclaurin Erreur
Traceback (most recent call last):
File "C:\Users\tapef\Desktop\NUMPY-TUT\maclaurin_sin.py", line 20, in <module>
maclaurin = sin_x(5, terms=n)
File "C:\Users\tapef\Desktop\NUMPY-TUT\maclaurin_sin.py", line 12, in sin_x
return np.sum[((-1)**n)(x**(2*n+1)) / fac(2*n+1)]
TypeError: 'numpy.ndarray' object is not callable
How do I get rid of this error? Thanks
I've tried to use brackets instead of parentheses.
You forgot the multiplication between ((-1)**n) and (x**(2*n+1)). Furthermore, np.sum() needs to be called as a function, so your return statement becomes:
return np.sum(((-1)**n)*(x**(2*n+1)) / fac(2*n+1))
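Putting both fixes together, the function might look like this (a minimal sketch; everything else in the script stays as in the question):

from math import factorial
import numpy as np

fac = np.vectorize(factorial)

def sin_x(x, terms=10):
    """Approximate sin(x) using the first `terms` terms of the Maclaurin series."""
    n = np.arange(terms)
    # multiply the sign term and the power term explicitly, and call np.sum(...) as a function
    return np.sum(((-1)**n) * (x**(2*n + 1)) / fac(2*n + 1))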

Why does parallelization of pandas dataframes (which I manipulate using apply()) take forever when using multiprocessing, and how do I solve it?

I have a very large pandas dataframe (915000 rows) with two columns, where each row represents a newspaper article I have scraped (the entire database is around 2.5 GB). I have to manipulate it, and I read that parallelization could offer great gains. I first defined a function, add_features(df), which takes a dataframe, applies several transformations and returns the transformed dataframe. apply() is used, and all other functions I created that I call in add_features(df) are written inside add_features.
To perform the parallelization, I used the following code (which can be found in https://towardsdatascience.com/make-your-own-super-pandas-using-multiproc-1c04f41944a1 and on several other Stack Overflow pages):
from multiprocessing import Pool  # for parallelization
# importing other conventional packages

def add_features(df):
    # several functions being created
    # apply() being used on several columns of the dataframe, using functions previously created
    # return (df)

def parallelize_dataframe(df, func, n_cores=4):
    df_split = np.array_split(df, n_cores)
    pool = Pool(n_cores)
    df = pd.concat(pool.map(func, df_split))
    pool.close()
    pool.join()
    return df

transformed_df = parallelize_dataframe(toy_df, add_features)
where toy_df is a very small subset (100 rows) of my original, much larger database.
When I apply add_features() to toy_df directly, I get all the results I want. However, when I run the code above, it never stops running. I read a few other questions whose answers recommended writing add_features() in a .py file and importing it. I did that and checked that the imported function worked on toy_df (it did), but when I ran the code above (except for the part where I define add_features, since I actually imported it using from add_features_file import *, add_features_file.py being the file in which I defined the function), I faced the same problem.
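For what it's worth, a minimal sketch of that imported-function layout is shown below. The file and function names come from the question; the if __name__ == "__main__": guard and the line that loads toy_df are my assumptions, since multiprocessing generally requires the script's entry point to be protected (especially on Windows, where child processes re-import the main module):

import numpy as np
import pandas as pd
from multiprocessing import Pool
from add_features_file import add_features  # the function defined in its own .py file

def parallelize_dataframe(df, func, n_cores=4):
    df_split = np.array_split(df, n_cores)        # split the frame into n_cores chunks
    with Pool(n_cores) as pool:
        df = pd.concat(pool.map(func, df_split))  # run func on each chunk in parallel
    return df

if __name__ == "__main__":
    toy_df = pd.read_csv("toy_df.csv")  # hypothetical: load the 100-row subset
    transformed_df = parallelize_dataframe(toy_df, add_features)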
What I want is simply to be able to use this function to clean my dataframe through the parallelization process; basically, how do I stop it from running forever?
EDIT: I was asked to show the code behind the function add_features(x), but my main concern is with the parallelization process, as the function add_features works (I have tested it on a subset of my dataframe). The dataframe contains newspaper articles, the dates on which they were released, as well as website links.
import pandas as pd
df = pd.DataFrame({'day_dates': ['09/04/2020', '09/04/2020', '09/04/2020'],
'day_everything': ["Petrobrás diz que 53 trabalhadores de plataforma entraram em greve. Estado é de tensão.", "CNI e centrais sindicais podem acompanhar ação judicial. Ainda há dúvidas sobre prazo.", "Bancos públicos e privados negociam empréstimo. Prazo médio está sendo negociado."],'day_links': [link_1, link_2, link_3]})
The function of interest is quite large. For the most part I define individual functions inside it, and at the end I use these individual functions in the pandas apply() method to generate the final dataframe of interest. I did it this way because, when I defined the intermediate functions separately and then defined another function consisting of the apply() calls (which use the intermediate functions on single rows), there was a problem with local/implicitly global variables such that the parallelization part wouldn't even run (a "name is not defined" error would arise when the intermediate functions were called by add_features without being defined inside it). So I wrote all the intermediate functions inside the add_features function, the final chunk of which is effectively applied over the dataframe and performs the textual transformations. A reduced sketch of this structure is shown right below, followed by the full function.
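In miniature, the structure looks like this (a hypothetical, heavily reduced example; the helper and the column name are made up, not the real ones):

def add_features(df):
    def clean_sentence(s):  # hypothetical helper, defined inside add_features so the workers can see it
        return s.strip().lower()
    df = df.copy()
    df['clean'] = df['some_column'].apply(clean_sentence)
    return df

The actual add_features, as applied over the dataframe, is the following: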
def add_features(df):
def at_least_one_lower_case(sentence):
if sentence.isupper():
return ''
else:
return sentence
def no_dirty_expressions(article):
#words containing the parts below are removed entirely
palavras_a_tirar_caso_contenham=['-----------------------------------------------------------------','---------------------------------------------------------------','--------------------------------------------------------','-------------------------------------------------------','--------------------------------------------','------------------------------------------','------------------------------------','segunda-feira','terça-feira','quarta-feira','quinta-feira','sexta-feira','2ª-feira','3ª-feira','4ª-feira','5ª-feira','6ª-feira','2.ª','3.ª','4.ª','5.ª','6.ªs','6.ª','sábado','domingo','janeiro','fevereiro','março','abril','maio','junho','julho','agosto','setembro','outubro','novembro','dezembro','jan','fev','mar','abr','mai','jun','jul','ago','set','out','nov','dez','http','www.','quadrissemana','**','0-ações','2ª-feiraA','semestreAtuais','#','e-mail',]
#https://economia.estadao.com.br/noticias/negocios,iea-preco-agricola-sobe-1-87-na-terceira-semana-de-novembro,20041122p1280 https://economia.estadao.com.br/noticias/geral,taxas-cobradas-por-fundos-diminuem-ganhos,20021003p13232 https://economia.estadao.com.br/noticias/geral,saldo-em-conta-corrente-fica-em-us-725-milhoes-em-fevereiro,20060321p34175
palavras_a_tirar_caso_contenham_higher=[word.capitalize() for word in palavras_a_tirar_caso_contenham]
palavras_a_tirar_caso_contenham=list(set(palavras_a_tirar_caso_contenham+palavras_a_tirar_caso_contenham_higher))
pattern= ['/'] #put \u00C0-\u00FF after the letters for accent insensitivity #in case I want to remove something specific from a word without eliminating it entirely
combined_pattern = r'|'.join(pattern)
#the words below are removed individually from sentences
palavras_soltas_a_tirar=['--leia mais em','ler mais na pág','leia também','Leia Também','leia o especial','Leia o especial','Leia também','Leia Também', 'leia o','Leia o','(cotação)','(variação)','CORREÇÃO (OFICIAL)','CORREÇÃO(OFICIAL)', 'Este texto foi atualizado às']
palavras_soltas_a_tirar_higher=[word.capitalize() for word in palavras_soltas_a_tirar]
palavras_soltas_a_tirar=list(set(palavras_soltas_a_tirar+palavras_soltas_a_tirar_higher))
#the last sentence will be removed if it contains any of the expressions below
bad_last_sentences=['(com','(Por','(por','Por','por','colaborou', 'Colaborou', 'COLABORARAM', 'colaboraram','CONFIRA','Confira','confira','Não deixe de ver','veja abaixo as cotações de fechamento','*', '(Edição de','Este texto foi atualizado às','As informações são da Dow Jones','É proibido todo tipo de reprodução sem autorização por escrito do The New York Times']
bad_last_sentences_higher=[word.capitalize() for word in bad_last_sentences]
bad_last_sentences=list(set(bad_last_sentences+bad_last_sentences_higher))
#sentences containing any of the expressions below will be removed
bad_expressions=['TRADUÇÃO DE','Matéria alterada','matéria foi atualizada','Reportagem de','reportagem de','as informações são da Dow Jones','as informações são do jornal','BBC Brasil - Todos os direitos reservados','é proibido todo tipo de reprodução sem autorização por escrito da BBC','as informações são da Dow Jones','DOW JONES NEWSWIRES','CONFIRA','Confira','confira','não deixe de ver','veja abaixo as cotações de fechamento','telefones das lojas','veja abaixo o','Veja, abaixo, tabela','semestreAtuais','industrialAnoVagas','VarejoGrupoFevereiro','Ibovespa19951996199719981999200020012002','5319,5Ibovespa-1']
bad_expressions_higher=[word.capitalize() for word in bad_expressions]
bad_expressions=list(set(bad_expressions+bad_expressions_higher))
article=article.strip() #making sure there is no extra whitespace at the edges
if article[-1] != '.': #making sure the last character is a period
article=article+'.'
#remove dirty words that may contain a period, comma, colon, exclamation mark or question mark. I do this now because otherwise I risk creating more dirty words in the correct_error() step
pre_token=article.split()
replaced_token=[re.sub(combined_pattern,'', word) for word in pre_token if not any (symbol in word for symbol in palavras_a_tirar_caso_contenham)]
article_sem_palavras_que_contenham_sujeira=' '.join(replaced_token)
#remove loose words from inside sentences
def palavras_soltas_a_tirar_fun(sentence):
for sujeira in palavras_soltas_a_tirar:
if sujeira in sentence:
sentence=sentence.replace(sujeira,'')
return (sentence)
sentences=article_sem_palavras_que_contenham_sujeira.split('.')
replaced_sentences=[palavras_soltas_a_tirar_fun(sentence) for sentence in sentences]
article_sem_palavras_soltas='.'.join(replaced_sentences)
#remove dirty words identifiable via a regex that matches the word itself, rather than via a substring contained in them
def fun_remove_exact_regex(word):
exact_regex= [r'[0-9]{4}[-]+[0-9]{4}']
for pat in exact_regex:
if not re.search(pat,word):
return(word)
else:
return ''
pre_token_2=article_sem_palavras_soltas.split()
replaced_token_2=[fun_remove_exact_regex(word) for word in pre_token_2 ]
article_sem_palavras_que_contenham_exact_regex=' '.join(replaced_token_2)
article_sem_palavras_que_contenham_exact_regex=article_sem_palavras_que_contenham_exact_regex.strip() #making sure there is no extra whitespace at the edges
if article_sem_palavras_que_contenham_exact_regex[-1] != '.': #making sure the last character is a period
article_sem_palavras_que_contenham_exact_regex=article_sem_palavras_que_contenham_exact_regex+'.'
#remove the last sentence if it is dirty
sentences=article_sem_palavras_que_contenham_exact_regex.split('.')
last_sentence=article_sem_palavras_que_contenham_exact_regex.split('.')[-2]
no_dirty_last_sentence=sentences[:-2] if any (bad_last_sentence in last_sentence for bad_last_sentence in bad_last_sentences) else sentences
no_dirty_last_sentence='.'.join(no_dirty_last_sentence)
#remove any remaining dirty sentence in the middle
sentences=no_dirty_last_sentence.split('.')
clean_sentences=[sentence for sentence in sentences if not any(expression in sentence for expression in bad_expressions)]
not_upper_clean_sentences=[at_least_one_lower_case(sentence) for sentence in clean_sentences]
clean_text='.'.join(not_upper_clean_sentences)
return clean_text
def fun_split(text):
return text.split()
def at_least_one_letter(word):
if any(character.isalpha() for character in word):
return word
else:
return ''
def fun_no_tokens_with_no_letters_and_bad_symbols(list_of_tokens):
bad_symbols=['#','$'] #the last element covers tokens made of upper-case letters inside parentheses, like (PIB). I would lose (PIB) along the way, but that is fine as long as it comes after 'produto interno bruto'; a standalone PIB I do not lose. I do this because Bovespa stock tickers are often written this way. I used to replace these with empty spaces and ended up with a pile of empty words; now I remove such tokens entirely (and a normal word will hardly contain parentheses with upper-case letters, unless a space was forgotten)
pattern=[''] #put \u00C0-\u00FF after the letters for accent insensitivity #r'\([A-Z]+?\)' I decided not to eliminate words like (PIB)
combined_pattern = r'|'.join(pattern)
clean=[re.sub(combined_pattern,'',at_least_one_letter(word)) for word in list_of_tokens if not any(symbol in word for symbol in bad_symbols)]
return clean
def fun_clean_extreme(list_of_tokens):
def clean_extreme(word):
if any(character.isalnum() for character in word):
while not (word[0].isalnum()):
word=word[1:]
while not (word[-1].isalnum()):
word=word[:-1]
return (word)
else: #because if it did not account for non-alphanumeric tokens, it would cause problems
return ''
clean=[clean_extreme(word) for word in list_of_tokens]
return clean
def flatten_list(_2d_list):
flat_list = []
# Iterate through the outer list
for element in _2d_list:
if type(element) is list:
# If the element is of type list, iterate through the sublist
for item in element:
flat_list.append(item)
else:
flat_list.append(element)
return flat_list
def no_initial_repetition_of_vowel(word): #I will lose acronyms in this step
if len(word)>=2 and word not in ['Aaabar','Aaa.br','Aa3.br']:
if (word[0]=='A' and word[1]=='A') or (word[0]=='A' and word[1]=='a') or (word[0]=='a' and word[1]=='A') or (word[0]=='a' and word[1]=='a'):
index=1
splitat= index
l, r = word[:splitat], word[splitat:]
return [l,r]
else:
return(word)
else:
return(word)
def correct_dot(word):
if '.' in word:
index=word.find('.') #it will be in the middle
if word[index+1].isupper():
splitat= index
l, r = word[:splitat], word[splitat+1:]
return [l,r]
else:
return(word)
else:
return(word)
def correct_comma(word):
if ',' in word:
index=word.find(',') #it will be in the middle
splitat= index
l, r = word[:splitat], word[splitat+1:]
return [l,r]
else:
return(word)
def correct_comma_dot(word):
if ';' in word:
index=word.find(';') #it will be in the middle
splitat= index
l, r = word[:splitat], word[splitat+1:]
return [l,r]
else:
return(word)
def correct_exclamation(word):
if '!' in word:
index=word.find('!') #it will be in the middle
splitat= index
l, r = word[:splitat], word[splitat+1:]
return [l,r]
else:
return(word)
def correct_question_mark(word):
if '?' in word:
index=word.find('?') #it will be in the middle
splitat= index
l, r = word[:splitat], word[splitat+1:]
return [l,r]
else:
return(word)
def correct_two_dots(word):
if ':' in word:
index=word.find(':') #it will be in the middle
splitat= index
l, r = word[:splitat], word[splitat+1:]
return [l,r]
else:
return(word)
def no_two_hifens(word): #I don't want '2000--tesouro' #I assume that more than two hyphens between words may be table debris
pattern=r'^.*[0-9]*[ A-Za-z\u00C0-\u00FF]*.*[-]{2}.*[0-9]*[ A-Za-z\u00C0-\u00FF]*.*[ A-Za-z\u00C0-\u00FF]*$' #for accent insensitivity
if re.match(pattern,word):
index=word.find('--')
splitat= index
l, r = word[:splitat], word[splitat+2:]
return [l,r]
else:
return(word)
def single_hifen_exception(word):
special_list=['19,5%-Unica','13,7%-Anfavea','6,22%-IBGE']
if word in special_list:
index=word.find('-')
splitat= index
l, r = word[:splitat], word[splitat+1:]
return [r]
else:
return(word)
def correct_error(article):
article=flatten_list([no_initial_repetition_of_vowel(word) for word in article])
article=flatten_list([correct_dot(word) for word in article])
article=flatten_list([correct_comma(word) for word in article])
article=flatten_list([correct_two_dots(word) for word in article])
article=flatten_list([correct_exclamation(word) for word in article])
article=flatten_list([correct_question_mark(word) for word in article])
article=flatten_list([no_initial_repetition_of_vowel(word) for word in article])
article=flatten_list([no_two_hifens(word) for word in article])
article=flatten_list([single_hifen_exception(word) for word in article])
return article
def fun_no_symbols_except_hyphens_replaced_without_space(list_of_tokens):
patterns=[r'[^ \nA-Za-z0-9À-ÖØ-öø-ÿ-/]+'] #\u00C0-\u00FF
combined_pattern = r'|'.join(patterns)
clean=[re.sub(combined_pattern, '', word) for word in list_of_tokens]
return clean
def fun_no_spaces_in_middle(list_of_tokens):
clean=flatten_list([word.split() for word in list_of_tokens])
return clean
def fun_not_the_same_character(list_of_tokens):
def not_the_same_character(word):
if len(word)>=2:
if word == len(word) * word[0]:
return ''
else:
return word
else:
return ''
clean=[not_the_same_character(word) for word in list_of_tokens]
return clean
#ELIMINATING CERTAIN VOCABULARY
def fun_sep_letters_and_numb_hifen(list_of_tokens):
def separate_right_letters_from_left_numbers_when_hifen(word):
pat_numbers_before_letters_hifen=r'^[0-9]+[-]{1}[0-9]*[ A-Za-z\u00C0-\u00FF]+$' #we allow for a hyphen. For example, 145paises-membros would become paises-membros. The rule only applies if the number is NOT followed by a hyphen. I do this because I assume the newspaper may have missed a space here. I could even move this earlier in the pipeline
if re.match(pat_numbers_before_letters_hifen,word):
index=word.find('-')
splitat= index
l, r = word[:splitat], word[splitat+1:]
return r
else:
return word
def separate_left_letters_from_right_numbers_when_hifen(word):
pat_letters_before_numbers_hifen=r'^[ A-Za-z\u00C0-\u00FF]+[-]{1}[0-9]+$' #the hyphen cannot be glued to the letter, otherwise we would reduce covid-19 to just covid, which would not be the worst thing.
if re.match(pat_letters_before_numbers_hifen,word) and word not in ['covid-19','Covid-19']:
index=word.find('-')
splitat= index
l, r = word[:splitat], word[splitat:]
return l
else:
return word
clean=[separate_right_letters_from_left_numbers_when_hifen(separate_left_letters_from_right_numbers_when_hifen(word)) for word in list_of_tokens]
return clean
def sep_letter_and_numb(list_of_tokens):
def separate_right_letters_from_left_numbers(word):
pat_numbers_before_letters=r'^[0-9]+[ A-Za-z\u00C0-\u00FF]*[-]*[ A-Za-z\u00C0-\u00FF]+$' #we allow for a hyphen. For example, 145paises-membros would become paises-membros. The rule only applies if the number is NOT followed by a hyphen. I do this because I assume the newspaper may have missed a space here. I could even move this earlier in the pipeline
if re.match(pat_numbers_before_letters,word):
index=word.find(next(filter(str.isalpha,word)))
splitat= index
l, r = word[:splitat], word[splitat:]
return r
else:
return word
def separate_left_letters_from_right_numbers(word):
pat_letters_before_numbers=r'^[ A-Za-z\u00C0-\u00FF]+[0-9]*[0-9]+$' #the hyphen cannot be glued to the letter, otherwise we would reduce covid-19 to just covid, which would not be the worst thing.
if re.match(pat_letters_before_numbers,word) and word not in ['G20','G8']:
index=word.find(next(filter(str.isnumeric,word)))
splitat= index
l, r = word[:splitat], word[splitat:]
return l
else:
return word
clean=[separate_left_letters_from_right_numbers(separate_right_letters_from_left_numbers(word)) for word in list_of_tokens]
return clean
def fun_no_letters_surrounded_by_numbers(list_of_tokens):
patterns=[r'[0-9]+[ A-Za-z\u00C0-\u00FF]+[0-9]+[ A-Za-z\u00C0-\u00FF]*']
combined_pattern = r'|'.join(patterns)
clean=[word for word in list_of_tokens if not re.match(combined_pattern,word)]
return clean
def fun_no_single_character_token(list_of_tokens):
clean=[word for word in list_of_tokens if len(word) > 2]
return clean
def fun_no_stopwords(list_of_tokens):
nltk_stopwords = nltk.corpus.stopwords.words('portuguese') #it is a list
personal_stopwords = ['Esse','Essa','Essas','Esses','De','A','As','O','Os','Em','Também','Um','Uma','Uns','Umas','Eu','eu','Ele','ele','Nós','nós','Você','você','entanto','Entanto','ser','Ser','com','Com','como','Como','se','Se','portanto','Portanto','Enquanto','enquanto','No','no','Na','na','Nessa','Nesse','Nesses','Nessas','As','A','Os','às','Às','ante','Ante','entre','Entre','sim','não','Sim','Não','Onde','onde','Aonde','aonde','após','Após','ser','Ser','hoje','Se','vai','Hoje','Por','Quando','Também','Depois','Mesmo','Numa','Numas','Pelos','Aquele','Aquela','Aqueles','Aquelas','Aquilos','Há','Ou','Isso','Segundo','segundo','pois','Pois','outro','Outro','outros','Outros','outra','Outras','outras','Outra','Este','cada','Cada','para','Para','disso','Disso','dessa','desse','deste','desta','Deste','Desta','Destes','Destas','já','Já','Mas','mas','ao','Ao','porém','Porém','Este','mesma','Mais','cujo','cuja','cujos','cujas','caso','quanto','a partir','cujo','caso','quanto','devido','pelo','pela','pelas','à','do','disto','mesmas','mesmos','nós','primeiro','primeiros','primeira','primeiras','segunda','quanto','neste','nesta','nestes','nestas']
list_of_numbers=list(range(0, 100))
lista_numeros_extenso=[num2words(number,lang='pt_BR') for number in list_of_numbers]
initial_stopwords=nltk_stopwords+personal_stopwords+lista_numeros_extenso
initial_stopwords_lower=[word.lower() for word in initial_stopwords]
exceptions=['são','fora'] #this way we preserve 'São Paulo' and 'Fora Temer'
initial_stopwords_higher=[word.capitalize() for word in initial_stopwords_lower if not word in exceptions]
stopwords=list(set(initial_stopwords_lower+initial_stopwords_higher))
stopwords.sort()
clean=[word for word in list_of_tokens if word not in stopwords]
return clean
#Lemmatizing and lower-casing
! python -m spacy download pt_core_news_sm
nlp = spacy.load("pt_core_news_sm") #, disable=['parser', 'ner'] # just keep tagger for lemmatization
def fun_verb_lematized_lower_case(list_of_tokens):
#nlp = spacy.load("pt_core_news_sm") #, disable=['parser', 'ner'] # just keep tagger for lemmatization
spacy_object=nlp(' '.join(list_of_tokens))
clean=[token.lemma_.lower() if token.pos_=='VERB' else token.text.lower() for token in spacy_object]
return clean
def fun_verb_adv_adj_pron_det_lematized_lower_case(list_of_tokens):
#nlp = spacy.load("pt_core_news_sm") #, disable=['parser', 'ner'] # just keep tagger for lemmatization
spacy_object=nlp(' '.join(list_of_tokens))
clean=[token.lemma_.lower() if (token.pos_=='VERB' or token.pos_== 'ADV' or token.pos_== 'ADJ' or token.pos_== 'DET' or token.pos_== 'PRON') else token.text.lower() for token in spacy_object]
return clean
def fun_completely_lematized_lower_case(list_of_tokens):
#nlp = spacy.load("pt_core_news_sm") #, disable=['parser', 'ner'] # just keep tagger for lemmatization
spacy_object=nlp(' '.join(list_of_tokens))
clean=[token.lemma_.lower() for token in spacy_object]
return clean
def function_unique_tokens(text):
clean=set(text)
return clean
print ('no_dirty_expression')
df['no_dirty_expression']=df['everything'].apply(lambda x: no_dirty_expressions(x))
print ('tokenized')
df['tokenized']=df['no_dirty_expression'].apply(lambda x: fun_split(x))
print ('no_tokens_with_no_letters_and_bad_symbols')
df['no_tokens_with_no_letters_and_bad_symbols']=df['tokenized'].apply(lambda x: fun_no_tokens_with_no_letters_and_bad_symbols(x))
print ('no_symbols_in_extreme')
df['no_symbols_in_extreme']=df['no_tokens_with_no_letters_and_bad_symbols'].apply(lambda x: fun_clean_extreme(x))
print ('correction_errors')
df['correction_errors']=df['no_symbols_in_extreme'].apply(lambda x: correct_error(x))
print ('no_symbols_except_hyphens_replaced_without_space')
df['no_symbols_except_hyphens_replaced_without_space']=df['correction_errors'].apply(lambda x: fun_no_symbols_except_hyphens_replaced_without_space(x))
print ('no_symbols_in_extreme_again')
df['no_symbols_in_extreme_again']=df['no_symbols_except_hyphens_replaced_without_space'].apply(lambda x: fun_clean_extreme(x))
print ('no_spaces_in_middle')
df['no_spaces_in_middle']=df['no_symbols_in_extreme_again'].apply(lambda x: fun_no_spaces_in_middle(x))
print ('no_same_character_word')
df['no_same_character_word']=df['no_spaces_in_middle'].apply(lambda x: fun_not_the_same_character(x))
print ('again_no_tokens_with_no_letters_and_bad_symbols')
df['again_no_tokens_with_no_letters_and_bad_symbols']=df['no_same_character_word'].apply(lambda x: fun_no_tokens_with_no_letters_and_bad_symbols(x))
print ('no_letters_and_numbers_divided_by_hyphen')
df['no_letters_and_numbers_divided_by_hyphen']=df['again_no_tokens_with_no_letters_and_bad_symbols'].apply(lambda x: fun_sep_letters_and_numb_hifen(x))
print ('no_letters_and_numbers_glued')
df['no_letters_and_numbers_glued']=df['no_letters_and_numbers_divided_by_hyphen'].apply(lambda x: sep_letter_and_numb(x))
print ('no_letters_surrounded_by_numbers')
df['no_letters_surrounded_by_numbers']=df['no_letters_and_numbers_glued'].apply(lambda x: fun_no_letters_surrounded_by_numbers(x))
print ('no_single_character_token')
df['no_single_character_token']=df['no_letters_surrounded_by_numbers'].apply(lambda x: fun_no_single_character_token(x))
print ('no_stopwords')
df['no_stopwords']=df['no_single_character_token'].apply(lambda x: fun_no_stopwords(x))
print ('once_more_no_tokens_with_no_letters_and_bad_symbols')
df['once_more_no_tokens_with_no_letters_and_bad_symbols']=df['no_stopwords'].apply(lambda x: fun_no_tokens_with_no_letters_and_bad_symbols(x))
print ('verb_lematized_lower_case')
df['verb_lematized_lower_case']=df['once_more_no_tokens_with_no_letters_and_bad_symbols'].apply(lambda x: fun_verb_lematized_lower_case(x))
print ('verb_adv_adj_pron_det_lematized_lower_case')
df['verb_adv_adj_pron_det_lematized_lower_case']=df['once_more_no_tokens_with_no_letters_and_bad_symbols'].apply(lambda x: fun_verb_adv_adj_pron_det_lematized_lower_case(x))
print ('completely_lematized_lower_case')
df['completely_lematized_lower_case']=df['once_more_no_tokens_with_no_letters_and_bad_symbols'].apply(lambda x: fun_completely_lematized_lower_case(x))
print ('verb_final_no_tokens_with_no_letters_and_bad_symbols')
df['verb_final_no_tokens_with_no_letters_and_bad_symbols']=df['verb_lematized_lower_case'].apply(lambda x: fun_no_tokens_with_no_letters_and_bad_symbols(x))
print ('all_but_noun_final_no_tokens_with_no_letters_and_bad_symbols')
df['all_but_noun_final_no_tokens_with_no_letters_and_bad_symbols']=df['verb_adv_adj_pron_det_lematized_lower_case'].apply(lambda x: fun_no_tokens_with_no_letters_and_bad_symbols(x))
print ('complete_final_no_tokens_with_no_letters_and_bad_symbols')
df['complete_final_no_tokens_with_no_letters_and_bad_symbols']=df['completely_lematized_lower_case'].apply(lambda x: fun_no_tokens_with_no_letters_and_bad_symbols(x))
print ('unique_verb_tokens')
df['unique_verb_tokens']=df['verb_final_no_tokens_with_no_letters_and_bad_symbols'].apply(lambda x: function_unique_tokens(x))
print ('unique_all_but_nouns_tokens')
df['unique_all_but_nouns_tokens']=df['all_but_noun_final_no_tokens_with_no_letters_and_bad_symbols'].apply(lambda x: function_unique_tokens(x))
print ('unique_all_tokens')
df['unique_all_tokens']=df['complete_final_no_tokens_with_no_letters_and_bad_symbols'].apply(lambda x: function_unique_tokens(x))
return df
Once again, my main concern, given also a lack of time to finish my thesis, is to be able to perform the parallelization process with THIS function (though if I find some time, it will be a pleasure to try to make it more efficient). The only problem I have faced is that the parallelization chunk never stops running, although the function works and I have tested it on smaller dataframes.

Scrapy - how to send a form request

I'm a newbie in web scraping. I'm trying to send a request with cookies, but I think I'm doing something wrong.
I receive cookies with the filter values from the search field.
First, tell me if I need to install an extra package to send cookies in a request.
Second, do I need to activate something in the settings?
Here is my code.
I'm French, so the comments and the website are in French.
import scrapy
from ..items import DuproprioItem
from scrapy.http import FormRequest
# to remove the \xa0 encoding: var.replace(u'\xa0', ' ')
# how scrapy works with multiple pages and multiple links on each page
# 1- we must always create the first function with the name 'parse'
# 2- in this function we start by extracting the URLs of all the links and send them to a new function to then extract the data
# 3- we collect the URLs of all the pages to create the pagination
# 4- we create a pagination that lets us collect all the listings from all the pages
# 4- we create a new function that extracts the data from the listings
class BlogSpider(scrapy.Spider):
    name = 'remax_cookies'
    start_urls = [
        'https://www.remax-quebec.com/fr/infos/nous-joindre.rmx'
    ]
    def parse(self, response):
        yield scrapy.http.Request("https://www.remax-quebec.com/fr/recherche/residentielle/resultats.rmx", callback=self.parse_with_form, method='POST', cookies={
            'mode': 'criterias',
            'order': 'prix_asc',
            'query': '',
            'categorie': 'residentielle',
            'selectItemcategorie': 'residentielle',
            'minPrice': '100000',
            'selectItemminPrice': '100000',
            'maxPrice': '200000',
            'selectItemmaxPrice': '200000',
            'caracResi7': '_',
            'caracResi1': '_',
            'caracResi4': '_',
            'caracResi8': '_',
            'caracResi2': '_',
            'caracResi12': '_',
            'caracComm4': '_',
            'caracComm2': '_',
            'caracComm5': '_',
            'caracFarm3': '_',
            'caracFarm1': '_',
            'caracLand1': '_',
            'caracResi5': '_',
            'caracResi9': '_',
            'caracResi10': '_',
            'caracResi3': '_',
            'caracResi6': '_',
            'caracResi13': '_',
            'caracComm3': '_',
            'caracComm1': '_',
            'caracFarm2': '_',
            'uls': ''
        })
    def parse_with_form(self, response):
        # I fetch all the links on the page to follow, in order to extract the house listings
        links_fiche = response.css('a.property-thumbnail::attr(href)').extract()
        # I extract all the links with a loop
        for fiche in links_fiche:
            # I send each link to an external scraping function to then extract the data from the listings received (parse_fiche)
            yield response.follow(fiche, self.parse_fiche)
    # I scrape the data received from
    def parse_fiche(self, response):
        items = DuproprioItem()
        price = response.css('h4.Caption__Price span').css('::text').get().strip()
        items['price'] = price
        yield items
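As an aside, the FormRequest that is imported above is never used. If the goal is to submit the search criteria as form data instead of cookies, the usual pattern inside such a spider looks roughly like the sketch below (hypothetical and untested against this site; the field names are simply taken from the cookies dict above):

    def parse(self, response):
        # hypothetical: send the search criteria as POST form data rather than cookies
        yield FormRequest(
            "https://www.remax-quebec.com/fr/recherche/residentielle/resultats.rmx",
            formdata={
                'mode': 'criterias',
                'categorie': 'residentielle',
                'minPrice': '100000',
                'maxPrice': '200000',
            },
            callback=self.parse_with_form,
        )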

Training a spaCy model for NER on French resumes doesn't give any results

Sample of training data (input.json); the full JSON has only 100 resumes.
{"content": "Resume 1 text in french","annotation":[{"label":["diplomes"],"points":[{"start":1233,"end":1423,"text":"1995-1996 : Lycée Dar Essalam Rabat \n Baccalauréat scientifique option sciences Expérimentales "}]},{"label":["diplomes"],"points":[{"start":1012,"end":1226,"text":"1996-1998 : Faculté des Sciences Rabat \n C.E.U.S (Certificat des Etudes universitaires Supérieurs) option physique et chimie "}]},{"label":["diplomes"],"points":[{"start":812,"end":1004,"text":"1999-2000 : Faculté des Sciences Rabat \n Licence es sciences physique option électronique "}]},{"label":["diplomes"],"points":[{"start":589,"end":805,"text":"2002-2004 : Faculté des Sciences Rabat \nDESA ((Diplôme des Etudes Supérieures Approfondies) en informatique \n\ntélécommunication multimédia "}]},{"label":["diplomes"],"points":[{"start":365,"end":582,"text":"2014-2017 : Institut National des Postes et Télécommunications INPT Rabat \n Thèse de doctorat en informatique et télécommunication "}]},{"label":["adresse"],"points":[{"start":122,"end":157,"text":"Rue 34 n 17 Hay Errachad Rabat Maroc"}]}],"extras":null,"metadata":{"first_done_at":1586140561000,"last_updated_at":1586140561000,"sec_taken":0,"last_updated_by":"wP21IMXff9TFSNLNp5v0fxbycFX2","status":"done","evaluation":"NONE"}}
{"content": "Resume 2 text in french","annotation":[{"label":["diplomes"],"points":[{"start":1251,"end":1345,"text":"Lycée Oued El Makhazine - Meknès \n\n- Bachelier mention très bien \n- Option : Sciences physiques"}]},{"label":["diplomes"],"points":[{"start":1122,"end":1231,"text":"Classes préparatoires Moulay Youssef - Rabat \n\n- Admis au Concours National Commun CNC \n- Option : PCSI - PSI "}]},{"label":["diplomes"],"points":[{"start":907,"end":1101,"text":"Institut National des Postes et Télécommunications INPT - Rabat \n\n- Ingénieur d’État en Télécommunications et technologies de l’information \n- Option : MTE Management des Télécoms de l’entreprise"}]},{"label":["adresse"],"points":[{"start":79,"end":133,"text":"94, Hay El Izdihar, Avenue El Massira, Ouislane, MEKNES"}]}],"extras":null,"metadata":{"first_done_at":1586126476000,"last_updated_at":1586325851000,"sec_taken":0,"last_updated_by":"wP21IMXff9TFSNLNp5v0fxbycFX2","status":"done","evaluation":"NONE"}}
{"content": "Resume 3 text in french","annotation":[{"label":["adresse"],"points":[{"start":2757,"end":2804,"text":"N141 Av. El Hansali Agharass \nBouargane \nAgadir "}]},{"label":["diplomes"],"points":[{"start":262,"end":369,"text":"2009-2010 : Baccalauréat Scientifique, option : Sciences Physiques au Lycée Qualifiant \nIBN MAJJA à Agadir."}]},{"label":["diplomes"],"points":[{"start":125,"end":259,"text":"2010-2016 : Diplôme d’Ingénieur d’Etat, option : Génie Informatique, à l’Ecole \nNationale des Sciences Appliquées d’Agadir (ENSAA). "}]}],"extras":null,"metadata":{"first_done_at":1586141779000,"last_updated_at":1586141779000,"sec_taken":0,"last_updated_by":"wP21IMXff9TFSNLNp5v0fxbycFX2","status":"done","evaluation":"NONE"}}
{"content": "Resume 4 text in french","annotation":[{"label":["diplomes"],"points":[{"start":505,"end":611,"text":"2012 Baccalauréat Sciences Expérimentales option Sciences Physiques, Lycée Hassan Bno \nTabit, Ouled Abbou. "}]},{"label":["diplomes"],"points":[{"start":375,"end":499,"text":"2012–2015 Diplôme de licence en Informatique et Gestion Industrielle, IGI, Faculté des sciences \net Techniques, Settat, LST. "}]},{"label":["diplomes"],"points":[{"start":272,"end":367,"text":"2015–2017 Master Spécialité BioInformatique et Systèmes Complexes, BISC, ENSA , Tanger, \n\nBac+5."}]},{"label":["adresse"],"points":[{"start":15,"end":71,"text":"246 Hay Pam Eljadid OULED ABBOU \n26450 BERRECHID, Maroc "}]}],"extras":null,"metadata":{"first_done_at":1586127374000,"last_updated_at":1586327010000,"sec_taken":0,"last_updated_by":"wP21IMXff9TFSNLNp5v0fxbycFX2","status":"done","evaluation":"NONE"}}
{"content": "Resume 5 text in french","annotation":null,"extras":null,"metadata":{"first_done_at":1586139511000,"last_updated_at":1586139511000,"sec_taken":0,"last_updated_by":"wP21IMXff9TFSNLNp5v0fxbycFX2","status":"done","evaluation":"NONE"}}
Code that transforms this JSON data to spaCy format
import json
import pickle

input_file = "input.json"
output_file = "output.json"
training_data = []
lines = []
with open(input_file, 'r', encoding="utf8") as f:
    lines = f.readlines()
for line in lines:
    data = json.loads(line)
    print(data)
    text = data['content']
    entities = []
    for annotation in data['annotation']:
        point = annotation['points'][0]
        labels = annotation['label']
        if not isinstance(labels, list):
            labels = [labels]
        for label in labels:
            entities.append((point['start'], point['end'] + 1, label))
    training_data.append((text, {"entities": entities}))
with open(output_file, 'wb') as fp:
    pickle.dump(training_data, fp)
Code for training the spaCy model
import random
import datetime as dt
from pathlib import Path
import spacy

def train_spacy():
    TRAIN_DATA = training_data
    nlp = spacy.load('fr_core_news_md')  # create blank Language class
    # create the built-in pipeline components and add them to the pipeline
    # nlp.create_pipe works for built-ins that are registered with spaCy
    # if 'ner' not in nlp.pipe_names:
    #     ner = nlp.create_pipe('ner')
    #     nlp.add_pipe(ner, last=True)
    ner = nlp.get_pipe("ner")
    # add labels
    for _, annotations in TRAIN_DATA:
        for ent in annotations.get('entities'):
            ner.add_label(ent[2])
    # get names of other pipes to disable them during training
    other_pipes = [pipe for pipe in nlp.pipe_names if pipe != 'ner']
    with nlp.disable_pipes(*other_pipes):  # only train NER
        optimizer = nlp.begin_training()
        for itn in range(20):
            print("Starting iteration " + str(itn))
            random.shuffle(TRAIN_DATA)
            losses = {}
            for text, annotations in TRAIN_DATA:
                nlp.update(
                    [text],  # batch of texts
                    [annotations],  # batch of annotations
                    # drop=0.2,  # dropout - make it harder to memorise data
                    sgd=optimizer,  # callable to update weights
                    losses=losses)
            print(itn, dt.datetime.now(), losses)
    output_dir = "new-model"
    if output_dir is not None:
        output_dir = Path(output_dir)
        if not output_dir.exists():
            output_dir.mkdir()
        nlp.meta['name'] = "addr_edu"  # rename model
        nlp.to_disk(output_dir)
        print("Saved model to", output_dir)
train_spacy()
When I test the model this is what happens
import spacy
nlp = spacy.load("new-model")
doc = nlp("Text of a Resume already trained on")
print(doc.ents)
# It prints out this ()
doc = nlp("Text of a Resume not trained on")
print(doc.ents)
# It prints out this ()
What I expect it to give me are the entities adresse (address) and diplomes (academic degrees) present in the text.
Edit 1
The sample data (input.json) at the very top is part of the data I get after annotating resumes on a text annotation platform.
But I have to transform it to spaCy format so I can give it to the model for training.
This is what a resume with annotations looks like when I give it to the model
training_data = [(
'Dr.XXXXXX XXXXXXX \n\n \nEmail : XXXXXXXXXXXXXXXXXXXXXXXX \n\nGSM : XXXXXXXXXX \n\nAdresse : XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX \n \n\n \n\nETAT CIVIL \n \n\nSituation de famille : célibataire \n\nNationalité : Marocaine \n\nNé le : 10 février 1983 \n\nLieu de naissance : XXXXXXXXXXXXXXXX \n\n \n FORMATION \n\n• 2014-2017 : Institut National des Postes et Télécommunications INPT Rabat \n Thèse de doctorat en informatique et télécommunication \n \n\n• 2002-2004 : Faculté des Sciences Rabat \nDESA ((Diplôme des Etudes Supérieures Approfondies) en informatique \n\ntélécommunication multimédia \n \n\n• 1999-2000 : Faculté des Sciences Rabat \n Licence es sciences physique option électronique \n \n\n• 1996-1998 : Faculté des Sciences Rabat \n C.E.U.S (Certificat des Etudes universitaires Supérieurs) option physique et chimie \n \n\n• 1995-1996 : Lycée Dar Essalam Rabat \n Baccalauréat scientifique option sciences Expérimentales \n\nSTAGE DE FORMATION \n\n• Du 03/03/2004 au 17/09/2004 : Stage de Projet de Fin d’Etudes à l’ INPT pour \nl’obtention du DESA (Diplôme des Etudes Supérieures Approfondies). \n\n Sujet : AGENT RMON DANS LA GESTION DE RESEAUX. \n\n• Du 03/06/2002 au 17/01/2003: Stage de Projet de Fin d’année à INPT \n Sujet : Mécanisme d’Authentification Kerbéros Dans un Réseau Sans fils sous Redhat. \n\nPUBLICATION \n\n✓ Ababou, Mohamed, Rachid Elkouch, and Mostafa Bellafkih and Nabil Ababou. "New \n\nstrategy to optimize the performance of epidemic routing protocol." International Journal \n\nof Computer Applications, vol. 92, N.7, 2014. \n\n✓ Ababou, Mohamed, Rachid Elkouch, and Mostafa Bellafkih and Nabil Ababou. "New \n\nStrategy to optimize the Performance of Spray and wait Routing Protocol." International \n\nJournal of Wireless and Mobile Networks v.6, N.2, 2014. \n\n✓ Ababou, Mohamed, Rachid Elkouch, and Mostafa Bellafkih and Nabil Ababou. "Impact of \n\nmobility models on Supp-Tran optimized DTN Spray and Wait routing." International \n\njournal of Mobile Network Communications & Telematics ( IJMNCT), Vol.4, N.2, April \n\n2014. \n\n✓ M. Ababou, R. Elkouch, M. Bellafkih and N. Ababou, "AntProPHET: A new routing \n\nprotocol for delay tolerant networks," Proceedings of 2014 Mediterranean Microwave \n\nSymposium (MMS2014), Marrakech, 2014, IEEE. \n\nmailto:XXXXXXXXXXXXXXXXXXXXXXXX\n\n\n✓ Ababou, Mohamed, et al. "BeeAntDTN: A nature inspired routing protocol for delay \n\ntolerant networks." Proceedings of 2014 Mediterranean Microwave Symposium \n\n(MMS2014). IEEE, 2014. \n\n✓ Ababou, Mohamed, et al. "ACDTN: A new routing protocol for delay tolerant networks \n\nbased on ant colony." Information Technology: Towards New Smart World (NSITNSW), \n\n2015 5th National Symposium on. IEEE, 2015. \n\n✓ Ababou, Mohamed, et al. "Energy-efficient routing in Delay-Tolerant Networks." RFID \n\nAnd Adaptive Wireless Sensor Networks (RAWSN), 2015 Third International Workshop \n\non. IEEE, 2015. \n\n✓ Ababou, Mohamed, et al. "Energy efficient and effect of mobility on ACDTN routing \n\nprotocol based on ant colony." Electrical and Information Technologies (ICEIT), 2015 \n\nInternational Conference on. IEEE, 2015. \n\n✓ Mohamed, Ababou et al. "Fuzzy ant colony based routing protocol for delay tolerant \n\nnetwork." 10th International Conference on Intelligent Systems: Theories and Applications \n\n(SITA). IEEE, 2015. 
\n\nARTICLES EN COURS DE PUBLICATION \n\n✓ Ababou, Mohamed, Rachid Elkouch, and Mostafa Bellafkih and Nabil Ababou.”Dynamic \n\nUtility-Based Buffer Management Strategy for Delay-tolerant Networks. “International \n\nJournal of Ad Hoc and Ubiquitous Computing, 2017. ‘accepté par la revue’ \n\n✓ Ababou, Mohamed, Rachid Elkouch, and Mostafa Bellafkih and Nabil Ababou. "Energy \n\nefficient routing protocol for delay tolerant network based on fuzzy logic and ant colony." \n\nInternational Journal of Intelligent Systems and Applications (IJISA), 2017. ‘accepté par la \n\nrevue’ \n\nCONNAISSANCES EN INFORMATIQUE \n\n \n\nLANGUES \n\nArabe, Français, anglais. \n\nLOISIRS ET INTERETS PERSONNELS \n\n \n\nVoyages, Photographie, Sport (tennis de table, footing), bénévolat. \n\nSystèmes : UNIX, DOS, Windows \n\nLangages : Séquentiels ( C, Assembleur), Requêtes (SQL), WEB (HTML, PHP, MySQL, \n\nJavaScript), Objets (C++, DOTNET,JAVA) , I.A. (Lisp, Prolog) \n\nLogiciels : Open ERP (Enterprise Resource Planning), AutoCAD, MATLAB, Visual \n\nBasic, Dreamweaver MX. \n\nDivers : Bases de données, ONE (Opportunistic Network Environment), NS3, \n\nArchitecture réseaux,Merise,... \n\n',
{'entities': [(1233, 1424, 'diplomes'), (1012, 1227, 'diplomes'), (812, 1005, 'diplomes'), (589, 806, 'diplomes'), (365, 583, 'diplomes'), (122, 158, 'adresse')]}
)]
I agree it is better if we try to train the model on just one resume, and test with it to see if it learns.
I've changed the code; the difference is that now I try to train a blank model.
def train_spacy():
    TRAIN_DATA = training_data
    nlp = spacy.blank('fr')
    ner = nlp.create_pipe("ner")
    nlp.add_pipe(ner, last=True)
    ner = nlp.get_pipe("ner")
    # add labels
    for _, annotations in TRAIN_DATA:
        for ent in annotations.get('entities'):
            ner.add_label(ent[2])
    optimizer = nlp.begin_training()
    for itn in range(20):
        random.shuffle(TRAIN_DATA)
        losses = {}
        for text, annotations in TRAIN_DATA:
            nlp.update(
                [text],  # batch of texts
                [annotations],  # batch of annotations
                drop=0.1,  # dropout - make it harder to memorise data
                sgd=optimizer,  # callable to update weights
                losses=losses
            )
        print(itn, dt.datetime.now(), losses)
    return nlp
Here are the losses I get during training.
Here is the test; I test on the same resume used for training.
The good thing is that now I don't get the empty tuple; the model actually recognizes something correctly, in this case the "adresse" entity.
But it won't recognize the "diplomes" entity, of which there are 5 in this resume, even though it was trained on it.

Embed a PDF in an R Markdown file and adapt the pagination

I am finishing my PhD, and I need to embed some papers (in PDF format) somewhere in the middle of my R Markdown text.
When converting the R Markdown to PDF, I would like those PDF papers to be embedded in the conversion.
However, I would also like those PDF papers to be numbered consistently with the rest of the Markdown text.
How can I do it?
UPDATE: New error
By using \includepdf, I get this error:
output file: Tesis_doctoral_-_TEXTO.knit.md
! Undefined control sequence.
l.695 \includepdf
[pages=1-10, angle=90, pagecommand={}]{PDF/Paper1.pdf}
Here is how much of TeX's memory you used:
12157 strings out of 495028
174654 string characters out of 6181498
273892 words of memory out of 5000000
15100 multiletter control sequences out of 15000+600000
40930 words of font info for 89 fonts, out of 8000000 for 9000
14 hyphenation exceptions out of 8191
31i,4n,35p,247b,342s stack positions out of 5000i,500n,10000p,200000b,80000s
Error: Failed to compile Tesis_doctoral_-_TEXTO.tex. See Tesis_doctoral_-_TEXTO.log for more info.
Execution halted
EXAMPLE of the R Markdown code
---
title: Histología dental de los homininos de la Sierra de Atapuerca (Burgos, España)
  y patrón de estrategia de vida
author: "Mario Modesto-Mata"
date: "20 September 2018"
output:
  pdf_document:
    highlight: pygments
    number_sections: yes
    toc: yes
    toc_depth: 4
  word_document:
    toc: yes
    toc_depth: '4'
  html_document: default
csl: science.csl
bibliography: references.bib
header-includes:
  - \usepackage{pdfpages}
---
```{r opciones_base_scripts, message=FALSE, warning=FALSE, include=FALSE, paged.print=FALSE}
library(captioner)
tabla_nums <- captioner(prefix = "Tabla")
figura_nums <- captioner(prefix = "Figura")
anx_tabla_nums <- captioner(prefix = "Anexo Tabla")
```
# Resumen
Los estudios de desarrollo dental en homínidos han sido sesgados involuntariamente en especies pre-Homo y algunos especímenes Homo tempranos, que representan la condición primitiva con tiempos de formación dental más rápidos, respetan a los Neandertales posteriores y a los humanos modernos, que comparativamente tienen tiempos de formación más lentos.
## PDF Article
\includepdf[pages=1-22, pagecommand={}]{PDF/Paper1.pdf}
## Bayes
El desarrollo dental relativo se evaluó empleando un enfoque estadístico bayesiano (31).
This is the link to download the PDF
I had to remove a few things from your example, but after that it worked without problems:
---
title: Histología dental de los homininos de la Sierra de Atapuerca (Burgos, España)
  y patrón de estrategia de vida
author: "Mario Modesto-Mata"
date: "20 September 2018"
output:
  pdf_document:
    highlight: pygments
    number_sections: yes
    toc: yes
    toc_depth: 4
    keep_tex: yes
  word_document:
    toc: yes
    toc_depth: '4'
  html_document: default
header-includes:
  - \usepackage{pdfpages}
---
# Resumen
Los estudios de desarrollo dental en homínidos han sido sesgados involuntariamente en especies pre-Homo y algunos especímenes Homo tempranos, que representan la condición primitiva con tiempos de formación dental más rápidos, respetan a los Neandertales posteriores y a los humanos modernos, que comparativamente tienen tiempos de formación más lentos.
## PDF Article
\includepdf[pages=1-22, pagecommand={}, scale = 0.9]{Paper1.pdf}
## Bayes
El desarrollo dental relativo se evaluó empleando un enfoque estadístico bayesiano (31).
Result:
BTW, for something like a thesis I would use bookdown, since this gives you cross-referencing etc.
If that does not work for you, I suggest first looking at plain LaTeX, i.e. does the following LaTeX document work for you:
\documentclass{article}
\usepackage{pdfpages}
\begin{document}
foo
\includepdf[pages=1-22, pagecommand={}, scale = 0.9]{Paper1.pdf}
bar
\end{document}