R tidygraph: find the first parent in a hierarchy that meets criteria

Find the first approver within the reporting line who is a minimum of 2 grades higher (e.g. grade 0 is two grades higher than grade 2).
Employee Graph
I'm trying to create a new node attribute, Approver, which should populate for employees E, F, G, and H; all of them should identify Employee A as the approver.
I've been trying to figure this out all day but don't really get how to do a one-hop traversal, let alone several. My understanding is that a breadth-first search (BFS) walked backwards, as in map_bfs_back(), will start from the leaf nodes and work its way back up the tree. I'm getting really stuck on how to access different parts of the tree.
Any help much appreciated.
library(tidygraph)
library(visNetwork)
# sample graph
g <- tidygraph::create_tree(8, 2) %>%
  activate(nodes) %>%
  mutate(Employee = LETTERS[1:8],
         Grade = c(0, 1, 1, 1, 2, 2, 2, 2),
         label = paste("Emp", Employee, "Grade", Grade))
visIgraph(g, layout = "layout_as_tree", flip.y = FALSE, idToLabel = FALSE)
g %>% activate(nodes) %>%
  mutate(Approver = map_bfs_back(node_is_root(), .f = function(node, path, ...) {
    # If starting node grade - node grade >= 2, then Approver = Employee ID
  }))

Took a while to figure this out. Not sure if anyone else is interested in this sort of query, but hopefully it's helpful to someone.
g %>% activate(nodes) %>%
  mutate(P = map_bfs_chr(node_is_root(), .f = function(node, path, ...) {
    # .N() gives the node data of the graph; path$node holds the indices of
    # the ancestors on the root-to-current path, root first. Keep the
    # ancestors at least 2 grades higher, then take the nearest one.
    .N()$Employee[tail(path$node[.N()$Grade[node] - .N()$Grade[path$node] >= 2], 1)[1]]
  }))
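For readability, the same logic can be unpacked into named steps. This is only a sketch of the one-liner above: it relies on path being ordered from the root down, so the last eligible ancestor on the path is the closest one, and tail(eligible, 1)[1] yields NA when no ancestor qualifies.
g %>% activate(nodes) %>%
  mutate(Approver = map_bfs_chr(node_is_root(), .f = function(node, path, ...) {
    nodes <- .N()  # node data of the whole graph
    # ancestors on the root-to-current path that are >= 2 grades higher
    eligible <- path$node[nodes$Grade[node] - nodes$Grade[path$node] >= 2]
    nodes$Employee[tail(eligible, 1)[1]]  # nearest such ancestor, or NA
  }))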

Related

Pandas rolling window on an offset between 4 and 2 weeks in the past

I have a datafile with quality scores from different suppliers over a time range of 3 years. The end goal is to use machine learning to predict the quality label (good or bad) of a shipment based on supplier information.
I want to use the mean historic quality data over a specific period of time as an input feature in this model, using a pandas rolling window. The problem with this method is that pandas only allows you to create a window from t=0-x until t=0 for your rolling window, as presented below:
df['average_score t-2w'] = df['score'].rolling(window='14d',closed='left').mean()
And this is where the problem comes in. For my feature I want to use quality data from a period of 2 weeks, but not the 2 weeks directly before the corresponding shipment: a 2-week window starting at t = -4 weeks and ending at t = -2 weeks.
You would imagine that this could be solved by using the same string of code but changing the window as presented below:
df['average_score t-2w'] = df['score'].rolling(window='28d' - '14d',closed='left').mean()
This, or any other way of denoting this specific window, does not seem to work.
It seems like pandas does not offer a solution to this problem, so we built a workaround with the following solution:
import numpy as np
import pandas as pd

def time_shift_week(df):
    def _avg_score_interval_func(series):
        # The rolling window ends at the current timestamp; keep only the
        # rows between t-4 weeks (exclusive) and t-2 weeks (exclusive)
        current_time = series.index[-1]
        result = series[(series.index > (current_time - pd.Timedelta(value=4, unit='w')))
                        & (series.index < (current_time - pd.Timedelta(value=2, unit='w')))]
        return result.mean() if len(result) > 0 else 0.0

    temp_df = (df.groupby(by=["supplier", "timestamp"], as_index=False)
                 .aggregate({"score": np.mean})
                 .set_index('timestamp'))
    # transform keeps the result aligned with temp_df's rows; the 30-day
    # rolling window is wide enough to cover the 4-week lookback
    temp_df["w-42"] = (
        temp_df
        .groupby("supplier")["score"]
        .transform(lambda x:
                   x.rolling(window='30D', closed='both')
                    .apply(_avg_score_interval_func))
    )
    return temp_df.reset_index()
This results in a new df in which we find the average score per supplier per timestamp, which we can subsequently merge with the original data frame to obtain the new feature.
Doing it this way seems really cumbersome and overly complicated for the task I am trying to perform. Even though we have found a workaround, I am wondering if there is an easier method of doing this.
Is anyone aware of a less complicated way of performing this rolling window feature extraction?
While pandas does not have the custom date offset you need, calculating the mean is pretty simple: it's just sum divided by count. You can subtract the 14-day rolling window from the 28-day rolling window:
# Some sample data. All scores are sequential for easy verification
idx = pd.MultiIndex.from_product(
[list("ABC"), pd.date_range("2020-01-01", "2022-12-31")],
names=["supplier", "timestamp"],
)
df = pd.DataFrame({"score": np.arange(len(idx))}, index=idx).reset_index()
# Now compute the rolling average of score over the custom window.
# closed='left' means the current row is excluded from the window.
score = df.set_index("timestamp").groupby("supplier")["score"]
r28 = score.rolling("28d", closed="left")
r14 = score.rolling("14d", closed="left")
avg_score = (r28.sum() - r14.sum()) / (r28.count() - r14.count())
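avg_score comes back indexed by (supplier, timestamp), so it can be merged back onto the original frame. A minimal sketch, assuming one row per supplier/timestamp pair as in the sample data (the column name is made up); note that when both windows contain the same rows the denominator is zero and the feature is NaN:
df = df.merge(avg_score.rename("avg_score_t-4w_t-2w"),
              left_on=["supplier", "timestamp"], right_index=True, how="left")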

Is there any way to convert a portfolio class from PortfolioAnalytics into a data frame

I'm trying to find the optimal weights for a specific target return using the PortfolioAnalytics library and ROI optimization; however, even though I know the target return should be feasible and part of the efficient frontier, the ROI optimization does not find any solution.
The code that I'm using is the following:
for (i in 0:n) {
  target <- minret + i * Del
  p <- portfolio.spec(assets = colnames(t_EROAS))   # specification of asset classes
  p <- add.constraint(p, type = "full_investment")  # weights must sum to 1
  p <- add.constraint(portfolio = p, type = "box", min = 0, max = 1)  # long-only, no short positions
  p <- add.constraint(p,
                      type = "group",
                      groups = group_list,
                      group_min = VCONSMIN[, 1],
                      group_max = VCONSMAX[, 1])
  p <- add.constraint(p, type = "return", name = "mean", return_target = target)
  p <- add.objective(p, type = "risk", name = "var")
  eff.opt <- optimize.portfolio(t_EROAS, p, optimize_method = "ROI", trace = TRUE)
}
n = 30, but it only finds 27 portfolios, and the efficient frontier that I'm creating looks empty from portfolio 27 to portfolio 30; portfolios 28 and 29 seem to have no solution, but I'm not sure this is correct.
What I want is an efficient frontier in a data frame format with a fixed number of portfolios, and it seems that the only way to achieve this is by this method. Any help or any ideas?
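For the data-frame part of the question, one approach is to collect each solved optimization inside the loop and bind the rows at the end. This is only a sketch: it assumes PortfolioAnalytics' extractStats() accessor (which returns the objective measures and weights of a single optimize.portfolio result) and simply skips targets the ROI solver cannot satisfy:
library(PortfolioAnalytics)

frontier <- list()
for (i in 0:n) {
  target <- minret + i * Del
  # ... build p with the same constraints and objective as above ...
  eff.opt <- tryCatch(
    optimize.portfolio(t_EROAS, p, optimize_method = "ROI", trace = TRUE),
    error = function(e) NULL)  # infeasible targets are skipped
  if (!is.null(eff.opt)) {
    frontier[[length(frontier) + 1]] <- c(target = target, extractStats(eff.opt))
  }
}
frontier_df <- as.data.frame(do.call(rbind, frontier))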

Create the hierarchy rank of all employees given supervisor ID using Networkx

I have a dataframe with one row per employee, containing the employee ID and their supervisor's ID.
I want to generate a new column which will tell me the rank of the employee in the company hierarchy. For example, the rank of the CEO is 0. Someone who reports directly to the CEO has a rank of 1. Someone just below that, 2... and so on.
I have the supervisor ID for each employee. One employee can have only one supervisor. I have a feeling that I might be able to do this using Networkx, but I can't quite figure out how.
I figured it out myself. It's actually quite trivial.
I need to construct a graph, where the CEO is one of the nodes like every other employee.
import networkx as nx

# Create a graph using the networkx built-in function
G = nx.from_pandas_edgelist(df, 'employeeId', 'supervisorId')
# Calculate the distance of every employee from the CEO. Specify target = CEO
dikt_rank_of_employees = {x[0]: x[1] for x in nx.single_target_shortest_path_length(
    G, target='c511b73c4ad30dde1b6d2d57ab2d4ddc')}
# Use this dictionary to create a column in the dataframe
df['rank_of_employee'] = df['employeeId'].map(dikt_rank_of_employees)
The result is a rank_of_employee column giving each employee's distance from the CEO.
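A minimal sketch with made-up IDs shows the idea end to end (here the CEO row points at itself; whether your data does the same is an assumption):
import networkx as nx
import pandas as pd

df = pd.DataFrame({
    'employeeId':   ['ceo', 'ann', 'bob', 'cid'],
    'supervisorId': ['ceo', 'ceo', 'ceo', 'ann'],
})
G = nx.from_pandas_edgelist(df, 'employeeId', 'supervisorId')
ranks = dict(nx.single_target_shortest_path_length(G, target='ceo'))
df['rank_of_employee'] = df['employeeId'].map(ranks)
# ceo -> 0, ann -> 1, bob -> 1, cid -> 2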

Pandas manipulation: matching data from other columns to one column, applied uniquely to all rows

I have a model that predicts 10 words for a particular course in order of likelihood, and I'd like to keep the first 5 of those words that also appear in the course's description.
This is the format of the data:
course_name | course_title | course_description | predicted_word_10 ... predicted_word_1
Xmath 32 | Precalculus | Polynomial and rational functions, exponential... | directed scholars approach build african different visual cultures placed global
Xphilos 2 | Morality | Introduction to ethical and political philosop... | make presentation weekly european ways general range questions liberal speakers
My idea is, for each row, to start iterating from predicted_word_1 until I get the first 5 that are in the description. I'd like to save those words, in the order they appear, into additional columns description_word_1 ... description_word_5. (If there are fewer than 5 predicted words in the description, I plan to return NaN in the corresponding columns.)
To clarify with an example: if the course_description of a course is 'Polynomial and rational functions, exponential and logarithmic functions, trigonometry and trigonometric functions. Complex numbers, fundamental theorem of algebra, mathematical induction, binomial theorem, series, and sequences. ' and its first few predicted words are irrelevantword1, induction, exponential, logarithmic, irrelevantword2, polynomial, algebra...
I would want to return induction, exponential, logarithmic, polynomial, algebra, in that order, and do the same for the rest of the courses.
My attempt was to define an apply function that will take in a row and iterate from the first predicted word until it finds the first 5 that are in the description, but the part I am unable to figure out is how to create these additional columns that have the correct words for each course. This code will currently only keep the words for one course for all the rows.
def find_top_description_words(row):
    print(row['course_title'])
    description_words_index = 1
    for i in range(num_words_per_course):
        description = row.loc['course_description']
        word_i = row.loc['predicted_word_' + str(i + 1)]
        if (word_i in description) & (description_words_index <= 5):
            print(description_words_index)
            row['description_word_' + str(description_words_index)] = word_i
            description_words_index += 1

df.apply(find_top_description_words, axis=1)
The end goal of this data manipulation is to keep the top 10 predicted words from the model and the top 5 predicted words in the description so the dataframe would look like:
course_name course_title course_description top_description_word_1 ... top_description_word_5 predicted_word_1 ... predicted_word_10
Any pointers would be appreciated. Thank you!
If I understand correctly:
Create a new DataFrame with just the 10 predicted words:
pred_words_lists = df.apply(lambda x: list(x[3:].dropna())[::-1], axis = 1)
Note that each row now holds a list of predicted words. The order is preserved: the first non-empty predicted word comes first, the second one second, and so on.
Now let's create a new DataFrame:
pred_words_df = pd.DataFrame(pred_words_lists.tolist())
pred_words_df.columns = df.columns[:2:-1]
And the final DataFrame:
final_df = df[['course_name', 'course_title', 'course_description']].join(pred_words_df.iloc[:,0:11])
Hope this works.
EDIT
def common_elements(xx, yy):
    temp = pd.Series(range(0, len(xx)), index=xx)
    # reindex needs a unique index, so keep the first occurrence of each word
    temp = temp[~temp.index.duplicated()]
    return list(temp.reindex(yy).sort_values()[0:10].dropna().index)

pred_words_lists = df.apply(lambda x: common_elements(x[2].replace(',', '').split(), list(x[3:].dropna())), axis=1)
Does it satisfy your requirements?
Adapted solution (OP):
def get_sorted_descriptions_words(course_description, predicted_words, k):
    description_words = course_description.replace(',', '').split()
    predicted_words_list = list(predicted_words)
    predicted_words = pd.Series(range(0, len(predicted_words_list)), index=predicted_words_list)
    predicted_words = predicted_words[~predicted_words.index.duplicated()]
    ordered_description = predicted_words.reindex(description_words).dropna().sort_values()
    ordered_description_list = pd.Series(ordered_description.index).unique()[:k]
    return ordered_description_list
df.apply(lambda x: get_sorted_descriptions_words(x['course_description'], x.filter(regex=r'predicted_word_.*'), k), axis=1)
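To expand those per-course arrays into the top_description_word_1 ... top_description_word_5 columns from the end goal above, one more step works. A sketch, assuming k = 5, with NaN padding courses that have fewer than k matches, as planned:
import numpy as np
import pandas as pd

words = df.apply(lambda x: get_sorted_descriptions_words(
    x['course_description'], x.filter(regex=r'predicted_word_.*'), k), axis=1)
top_words = pd.DataFrame(
    [list(w) + [np.nan] * (k - len(w)) for w in words],
    columns=['top_description_word_%d' % i for i in range(1, k + 1)],
    index=df.index)
df = df.join(top_words)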

How to query a dataframe using a column of other dataframe in R

I have 2 dataframes in R and I want to query dataframe "x" using the column of dataframe "y" as the search parameter.
I have this code:
library(sqldf)

x <- c('The book is on the table', 'I hear birds outside', 'The electricity
came back')
x <- data.frame(x)
colnames(x) <- c('text')
x

y <- c('book', 'birds', 'electricity')
y <- data.frame(y)
colnames(y) <- c('search')
y

r <- sqldf("select * from x where text IN (select search from y)")
r
I think I need to use "like" here, but I don't know how. Can you help me?
If you want a sqldf solution, I think that this would work:
sqldf("select x.text, y.search FROM x JOIN y on x.text LIKE '%' || y.search || '%'")
##                          text      search
## 1    The book is on the table        book
## 2        I hear birds outside       birds
## 3 The electricity \ncame back electricity
You could use the fuzzyjoin package:
library(dplyr)
library(fuzzyjoin)
regex_join(
  mutate_if(x, is.factor, as.character),
  mutate_if(y, is.factor, as.character),
  by = c("text" = "search")
)
#                          text      search
# 1    The book is on the table        book
# 2        I hear birds outside       birds
# 3 The electricity \ncame back electricity
It's hard to know if this is what you want without a more varied test fixture. To add a little variation, I added an extra word to y$search: y <- c('book', 'birds', 'electricity', 'cat'). More variation would further clarify what you need.
Just want to know which words are in which statements? Use sapply and grepl:
> m = sapply(y$search, grepl, x$text)
> rownames(m) = x$text
> colnames(m) = y$search
> m
                             book birds electricity   cat
The book is on the table     TRUE FALSE       FALSE FALSE
I hear birds outside        FALSE  TRUE       FALSE FALSE
The electricity \ncame back FALSE FALSE        TRUE FALSE
Pulling out just the matching rows?
> library(magrittr) # to use the pipe, "%>%"
> data.table::setDT(x) # convert x to a data.table for easy row indexing
> x[sapply(y$search, grepl, x$text) %>% rowSums() %>% as.logical(), ]
                          text
1:    The book is on the table
2:        I hear birds outside
3: The electricity \ncame back
@Aurèle's solution gives the best result for pairing each text with the search term it matched. Note that if 'back' were also in y$search, the text 'The electricity \ncame back' would be reported twice in the join result, once per matched search term, so the join is better when uniqueness is not important.
So it largely depends on your desired output.