Extract recommendations for user from pivot table - pandas

I have the following pivot table with user/item purchase counts, which looks like this:
originalName  Red t-shirt  Black t-shirt  ...  Orange sweater  Pink sweater
customer                                  ...
165                   NaN            NaN  ...             NaN           NaN
265                   NaN            1.0  ...             NaN           NaN
288                   NaN            NaN  ...             NaN           NaN
368                   1.0            NaN  ...             NaN           2.0
396                   NaN            NaN  ...             3.0           NaN
I wrote a method to get related items for a given input item, using Pearson's correlation:
def get_related_items(name, M, num):
    number_of_orders = []
    for title in M.columns:
        if title == name:
            continue
        # pearson() is a helper (defined elsewhere) returning the Pearson correlation of two Series
        cor = pearson(M[name], M[title])
        if np.isnan(cor):
            continue
        else:
            number_of_orders.append((title, cor))
    number_of_orders.sort(key=lambda tup: tup[1], reverse=True)
    return number_of_orders[:num]
I am not sure what the logic should be to get the list of recommended items for a specific customer.
And how can I evaluate that?
Thanks!

import pandas as pd
import numpy as np
df = pd.DataFrame({'customer': [165, 265, 288, 268, 296],
                   'R_shirt': [np.nan, 1.0, np.nan, 1.0, np.nan],
                   'B_shirt': [np.nan, np.nan, 2.0, np.nan, np.nan],
                   'X_shirt': [5.0, np.nan, 2.0, np.nan, np.nan],
                   'Y_shirt': [3.0, np.nan, 2.0, 3.0, np.nan]
                   })
print(df)
customer R_shirt B_shirt X_shirt Y_shirt
0 165 NaN NaN 5.0 3.0
1 265 1.0 NaN NaN NaN
2 288 NaN 2.0 2.0 2.0
3 268 1.0 NaN NaN 3.0
4 296 NaN NaN NaN NaN
df['customer']=df['customer'].astype(str)
df=df.pivot_table(columns='customer')
customer = '165'
print(df)
customer 165 265 268 288
B_shirt NaN NaN NaN 2.0
R_shirt NaN 1.0 1.0 NaN
X_shirt 5.0 NaN NaN 2.0
Y_shirt 3.0 NaN 3.0 2.0
# Note: `df[customer] != np.nan` is always True, so dropna() is what actually removes the missing values.
best_for_customer = df[customer].dropna().to_frame().sort_values(by=customer, ascending=False)
print(best_for_customer)
165
X_shirt 5.0
Y_shirt 3.0
The variable customer holds the name of the customer you want to check.
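To turn this into actual recommendations (a rough sketch, not part of the original answers): combine get_related_items from the question with the customer's own purchases, and collect correlated items the customer has not bought yet. recommend_for_customer below is a hypothetical helper; it assumes M is the original customer-by-item pivot table from the question, not the transposed frame used above.
def recommend_for_customer(customer_id, M, num=5):
    bought = M.loc[customer_id].dropna().index.tolist()       # items this customer already purchased
    scores = {}
    for item in bought:
        for related, cor in get_related_items(item, M, num):
            if related in bought:
                continue                                       # skip items they already own
            scores[related] = scores.get(related, 0) + cor     # accumulate correlation scores
    return sorted(scores, key=scores.get, reverse=True)[:num]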

The expected rating for a user for a given item will be the weighted average of the ratings given to the item by other users, where the weights will be the Pearson coefficients you calculated. Then you can pick the item with the highest expected rating and recommend it.
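A minimal sketch of that idea, assuming M is the customer-by-item pivot table from the question (the name expected_rating and the use of pandas' built-in corr for the user-user Pearson weights are assumptions, not from the answer):
def expected_rating(customer_id, item, M):
    target = M.loc[customer_id]
    num, den = 0.0, 0.0
    for other_id, rating in M[item].dropna().items():       # users who bought/rated this item
        if other_id == customer_id:
            continue
        w = target.corr(M.loc[other_id])                     # Pearson weight between the two users
        if np.isnan(w):
            continue
        num += w * rating
        den += abs(w)
    return num / den if den else np.nan
Recommending then means picking the unrated item with the highest expected_rating; one simple way to evaluate is to hide a few known purchases per customer and check whether they end up ranked near the top.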


Replace: Could not convert string to float ' ' or '[text]'

I have a large CSV with some columns containing data as shown
I want to get rid of unnecessary text and convert the remaining digits from string to float to just leave the values.
I am using
Added['Household Occupants'] = Added['Household Occupants'].str.replace(r'[^0-9]', '').astype(float)
and
Added['Grocery Spend'] = Added['Grocery Spend'].str.replace(r'\D', '').astype(float)
and these do the job perfectly, but when I apply the same to the 'Electronics Spend' and 'Goods Spend' columns, I get errors like:
'ValueError: could not convert string to float: '''
and
'ValueError: could not convert string to float: 'Goods''
Any suggestions would be greatly appreciated!
Thanks in advance!
I think this is a good example where pandas.Series.name can be useful :
out = (
    Added.apply(lambda x: x.replace(rf"{x.name.capitalize()}\((\d+)\)", r"\1",
                                    regex=True)
                 .astype(float))
)
Another alternative with pandas.Series.str.extract:
out = Added.apply(lambda x: x.str.extract(r"\((\d+)\)", expand=False).astype(float))
# Output :
print(out)
Household Occupants Grocery Spend Electronics Spend Goods Spend
0 1.0 500.0 5000.0 50.0
1 4.0 45.0 200.0 0.0
2 NaN NaN NaN NaN
3 5.0 NaN NaN NaN
4 3.0 NaN NaN NaN
1337 4.0 NaN NaN NaN
1338 4.0 NaN NaN NaN
1339 4.0 200.0 NaN NaN
1340 4.0 NaN NaN NaN
1341 NaN NaN NaN NaN
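As a self-contained illustration of the second approach (the raw data is not shown in the question, so the input below is hypothetical; the only assumption is that the wanted number sits inside parentheses, e.g. "Household occupants(4)"):
import pandas as pd

Added = pd.DataFrame({
    "Household Occupants": ["Household occupants(1)", "Household occupants(4)", ""],
    "Goods Spend": ["Goods(50)", "Goods(0)", ""],
})
out = Added.apply(lambda x: x.str.extract(r"\((\d+)\)", expand=False).astype(float))
print(out)
#    Household Occupants  Goods Spend
# 0                  1.0         50.0
# 1                  4.0          0.0
# 2                  NaN          NaN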

Filtering/Querying Pandas DataFrame after multiple grouping/agg

I have a dataframe that I first group, counting QuoteLine items grouped by stock (1 - true, 0 - false) and mfg type (K - Kit, M - Manufactured, P - Purchased). Ultimately, I am interested in quotes where ALL items are either NonStock/Kit and/or Stock/['M','P']:
grouped = df.groupby(['QuoteNum', 'typecode', 'stock']).agg({"QuoteLine": "count"})
and I get this:
                         QuoteLine-count
QuoteNum typecode stock
10001    K        0                    1
10003    M        0                    1
10005    M        0                    3
                  1                    1
10006    M        1                    1
...      ...      ...                ...
26961    P        1                    1
26962    P        1                    1
26963    P        1                    2
26964    K        0                    1
         M        1                    2
If I unstack it twice:
grouped = df.groupby(['QuoteNum', 'typecode', 'stock']).agg({"QuoteLine": "count"}).unstack().unstack()
# I get
QuoteLine-count
stock 0 1
typecode K M P K M P
QuoteNum
10001 1.0 NaN NaN NaN NaN NaN
10003 NaN 1.0 NaN NaN NaN NaN
10005 NaN 3.0 NaN NaN 1.0 NaN
10006 NaN NaN NaN NaN 1.0 NaN
10007 2.0 NaN NaN NaN NaN NaN
... ... ... ... ... ... ...
26959 NaN NaN NaN NaN NaN 1.0
26961 NaN 1.0 NaN NaN NaN 1.0
26962 NaN NaN NaN NaN NaN 1.0
26963 NaN NaN NaN NaN NaN 2.0
26964 1.0 NaN NaN NaN 2.0 NaN
Now I need to filter out all records where (this is where I need help):
# pseudo-code
(stock == 0 and typecode in ['M','P']) -> values are NOT NaN (don't want those)
and
(stock == 1 and typecode='K') -> values are NOT NaN (don't want those either)
so I'm left with these records:
Basically: Columns "0/M, 0/P, 1/K" must be all NaNs and other columns have at least one non NaN value
QuoteLine-count
stock 0 1
typecode K M P K M P
QuoteNum
10001 1.0 NaN NaN NaN NaN NaN
10006 NaN NaN NaN NaN 1.0 NaN
10007 2.0 NaN NaN NaN NaN NaN
... ... ... ... ... ... ...
26959 NaN NaN NaN NaN NaN 1.0
26962 NaN NaN NaN NaN NaN 1.0
26963 NaN NaN NaN NaN NaN 2.0
26964 1.0 NaN NaN NaN 2.0 NaN
IIUC, use a boolean mask to set the rows that match your conditions to NaN, then unstack the desired levels:
# Shortcut (for readability)
lvl_vals = grouped.index.get_level_values
m1 = (lvl_vals('typecode') == 'K') & (lvl_vals('stock') == 0)
m2 = (lvl_vals('typecode').isin(['M', 'P'])) & (lvl_vals('stock') == 1)
grouped[m1|m2] = np.nan
out = grouped.unstack(level=['stock', 'typecode']) \
             .loc[lambda x: x.isna().all(axis=1)]
Output result:
>>> out
QuoteLine-count
stock 0 1
typecode K M M P
QuoteNum
10001 NaN NaN NaN NaN
10006 NaN NaN NaN NaN
26961 NaN NaN NaN NaN
26962 NaN NaN NaN NaN
26963 NaN NaN NaN NaN
26964 NaN NaN NaN NaN
The desired values could be obtained with as_index=False, but I am not sure whether they are in the desired format.
grouped = df.groupby(['QuoteNum', 'typecode', 'stock'], as_index=False).agg({"QuoteLine": "count"})
grouped[((grouped["stock"]==0) & (grouped["typecode"].isin(["M" ,"P"]))) | ((grouped["stock"]==1) & (grouped["typecode"].isin(["K"])))]
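For completeness, a hedged sketch of a third route that works on the original long-format df before any unstacking: flag the unwanted stock/typecode combinations directly and drop every quote that contains at least one of them (column names as in the question).
bad = ((df['stock'] == 0) & df['typecode'].isin(['M', 'P'])) | \
      ((df['stock'] == 1) & (df['typecode'] == 'K'))
good_quotes = df[~df['QuoteNum'].isin(df.loc[bad, 'QuoteNum'])]   # quotes with no unwanted rows
out = (good_quotes.groupby(['QuoteNum', 'typecode', 'stock'])['QuoteLine']
                  .count()
                  .unstack(['stock', 'typecode']))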

Creating new columns in Pandas dataframe reading csv file

I'm reading a simple csv file and creating a pandas dataframe. The csv file can have 1 row, 2 rows or 10 rows.
If the csv file has 1 row, then I want to create a few new columns; if it has <=2 rows, then a couple of new columns; and if it has 10 rows, then I want to create 10 new columns.
After reading the csv, my sample dataframe looks like below.
df=pd.read_csv('/home/abc/myfile.csv',sep=',')
print(df)
id  rate  amount  address  lb  ub  msa
 1  2.50     100    abcde  30  90  101
                           10  20  102
                                   103
                                   104
                                   105
                                   106
                                   107
                                   108
                                   109
                                   110
Case 1) If the dataframe has only 1 record, then I want to create new columns 'new_id', 'new_rate' & 'new_address' and assign them the values from the 'id', 'rate' and 'address' columns of the dataframe.
Expected Output:
id  rate  amount  address  lb  ub  msa  new_id  new_rate  new_address
 1  2.50     100    abcde  30  90  101       1      2.50        abcde
Case 2) If the dataframe has <=2 records, then I want to create, for the 1st record, 'lb_1' and 'ub_1' with values 30 and 90, and for the 2nd record 'lb_2' and 'ub_2' with values 10 and 20, taken from the dataframe.
Expected Output:
if there is only 1 row:
id  rate  amount  address  lb  ub  msa  lb_1  ub_1
 1  2.50     100    abcde  30  90  101    30    90
if there are 2 rows:
id  rate  amount  address  lb  ub  msa  lb_1  ub_1  lb_2  ub_2
 1  2.50     100    abcde  30  90  101    30    90    10    20
                           10  20  102
Case 3) If the dataframe has 10 records, then I want to create 10 new columns, i.e. msa_1, msa_2, ..., msa_10, and assign the respective values msa_1=101, msa_2=102, ..., msa_10=110 for each row of the dataframe.
Expected Output:
id  rate  amount  address  lb  ub  msa  msa_1  msa_2  msa_3  msa_4  msa_5  msa_6  msa_7  msa_8  msa_9  msa_10
 1  2.50     100    abcde  30  90  101    101    102    103    104    105    106    107    108    109     110
                           10  20  102
                                   103
                                   104
                                   105
                                   106
                                   107
                                   108
                                   109
                                   110
I'm trying to write the code as below, but for the 2nd and 3rd cases I'm not sure how to do it, and if there is a better way to handle all 3 cases, that would be great.
I'd appreciate it if anyone can show me the best way to get it done. Thanks in advance.
Case 1:
if df.shape[0] == 1:
    df.loc[(df.shape[0] == 1), "new_id"] = df["id"]
    df.loc[(df.shape[0] == 1), "new_rate"] = df["rate"]
    df.loc[(df.shape[0] == 1), "new_address"] = df["address"]
Case 2:
if df.shape[0] <= 2:
    for i in range(1, len(df.index) + 1):
        df[f'lb_{i}'] = df['lb']
        df[f'ub_{i}'] = df['ub']
Case 3:
if df.shape[0] <= 10:
    for i in range(1, len(df.index) + 1):
        df[f'msa_{i}'] = df['msa']
For case 2 and case 3, you can do something like this:
Case 2:
# case 2
df= pd.read_csv('test.txt')
lb_dict = { f'lb_{i}': value for i,value in enumerate(df['lb'].to_list(),start=1)}
lb_df = pd.DataFrame.from_dict(lb_dict, orient='index').transpose()
ub_dict = { f'ub_{i}': value for i,value in enumerate(df['ub'].to_list(),start=1)}
ub_df = pd.DataFrame.from_dict(ub_dict, orient='index').transpose()
final_df = pd.concat([df,lb_df,ub_df],axis =1)
print(final_df)
Output:
    id  rate  amount address  lb  ub  msa  lb_1  lb_2  ub_1  ub_2
0  1.0   2.5   100.0   abcde  30  90  101  30.0  10.0  90.0  20.0
1  NaN   NaN     NaN     NaN  10  20  102   NaN   NaN   NaN   NaN
For case 3 -
# case 3
df= pd.read_csv('test.txt')
msa_dict = { f'msa_{i}': value for i,value in enumerate(df['msa'].to_list(),start=1)}
msa_df = pd.DataFrame.from_dict(msa_dict, orient='index').transpose()
pd.concat([df,msa_df],axis =1)
Output:
    id  rate  amount address    lb    ub  msa  msa_1  msa_2  msa_3  msa_4  msa_5  msa_6  msa_7  msa_8  msa_9  msa_10
0  1.0   2.5   100.0   abcde  30.0  90.0  101  101.0  102.0  103.0  104.0  105.0  106.0  107.0  108.0  109.0   110.0
1  NaN   NaN     NaN     NaN  10.0  20.0  102    NaN    NaN    NaN    NaN    NaN    NaN    NaN    NaN    NaN     NaN
2  NaN   NaN     NaN     NaN   NaN   NaN  103    NaN    NaN    NaN    NaN    NaN    NaN    NaN    NaN    NaN     NaN
3  NaN   NaN     NaN     NaN   NaN   NaN  104    NaN    NaN    NaN    NaN    NaN    NaN    NaN    NaN    NaN     NaN
4  NaN   NaN     NaN     NaN   NaN   NaN  105    NaN    NaN    NaN    NaN    NaN    NaN    NaN    NaN    NaN     NaN
5  NaN   NaN     NaN     NaN   NaN   NaN  106    NaN    NaN    NaN    NaN    NaN    NaN    NaN    NaN    NaN     NaN
6  NaN   NaN     NaN     NaN   NaN   NaN  107    NaN    NaN    NaN    NaN    NaN    NaN    NaN    NaN    NaN     NaN
7  NaN   NaN     NaN     NaN   NaN   NaN  108    NaN    NaN    NaN    NaN    NaN    NaN    NaN    NaN    NaN     NaN
8  NaN   NaN     NaN     NaN   NaN   NaN  109    NaN    NaN    NaN    NaN    NaN    NaN    NaN    NaN    NaN     NaN
9  NaN   NaN     NaN     NaN   NaN   NaN  110    NaN    NaN    NaN    NaN    NaN    NaN    NaN    NaN    NaN     NaN
Solution -
I've just created a dictionary from the required column and then I concatenated it with the original dataframe column-wise.
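The same idea can be wrapped in a small helper and branched on the row count; the sketch below is not from the answer (spread_column is a made-up name) and assumes the column layout from the question.
def spread_column(frame, col):
    # one new column per row: col_1, col_2, ... holding that row's value
    wide = {f'{col}_{i}': v for i, v in enumerate(frame[col].tolist(), start=1)}
    return pd.DataFrame.from_dict(wide, orient='index').transpose()

if df.shape[0] == 1:
    df['new_id'], df['new_rate'], df['new_address'] = df['id'], df['rate'], df['address']
elif df.shape[0] <= 2:
    df = pd.concat([df, spread_column(df, 'lb'), spread_column(df, 'ub')], axis=1)
else:
    df = pd.concat([df, spread_column(df, 'msa')], axis=1)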

From 15 object variables to final target variable (0 or 1)

Can I go from 15 object variables to one final binary target variable?
Those 15 variables have ~10,000 different codes, and my dataset has about 21,000,000 records. What I'm trying to do is first replace the codes I want with 1 and the others with 0; then, if any of the fifteen variables is 1, the target variable will be 1, and if all fifteen are 0, the target variable will be 0.
I have tried to work with to_replace, astype, to_numeric and infer_objects without good results. For example, my dataset looks like this (head(5)):
D P1 P2 P3 P4 P5 P6 P7 P8 P9 P10 P11 P12 P13 P14 P15
41234 1234 4367 874 NAN NAN NAN 789 NAN NAN NAN NAN NAN NAN NAN NAN
42345 7657 4367 874 NAN NAN NAN 789 NAN NAN NAN NAN NAN NAN NAN NAN
34212 7654 4347 474 NAN NAN NAN 789 NAN NAN NAN NAN NAN NAN NAN NAN
34212 8902 4317 374 NAN 452 NAN 719 NAN NAN NAN NAN NAN NAN NAN NAN
19374 2564 4387 274 NAN 452 NAN 799 NAN NAN NAN NAN NAN NAN NAN NAN
I want to transform all NaN to 0 and the selected codes to 1, so that P1-P15 all become binary and I can then create a final P variable from them.
For example, if P1-P15 contain '3578', '9732', '4734', ... (I'm using about 200 codes), I want the value to become 1.
All other values should become 0.
The D variable should stay as it is.
The final dataset will be (D, P); then I will add the training variables.
Any ideas? The following code gives me wrong results.
selCodes=['3722','66']
dfnew['P']=(dfnew.loc[:,'PR1':].astype(str).isin(selCodes).any(axis=1).astype(int))
Take a look at a test dataset (left) and the new P (right). With the example code 3722, P should be 1.
IIUC, use DataFrame.isin:
# example select codes
selCodes = ['1234', '9732', '719']
df['P'] = (
    df.loc[:, 'P1':].astype(str)
      .isin(selCodes).any(axis=1).astype(int)
)
df = df[['D', 'P']]
Result:
D P
0 41234 1
1 42345 0
2 34212 0
3 34212 1
4 19374 0
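One possible reason the original attempt gave wrong results (an assumption, since the raw dtypes are not shown): if the P columns were parsed as floats because of the NaNs, astype(str) produces strings like '1234.0', which never match codes like '1234'. Stripping the trailing '.0' before the isin check avoids that:
# sketch: normalize float-parsed codes back to plain digit strings
codes = df.loc[:, 'P1':].astype(str).replace(r'\.0$', '', regex=True)   # '1234.0' -> '1234'
df['P'] = codes.isin(selCodes).any(axis=1).astype(int)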

For every row in pandas, do until sample ID change

How can I iterate over rows in a dataframe until the sample ID changes?
my_df:
ID loc_start
sample1 10
sample1 15
sample2 10
sample2 20
sample3 5
Something like:
samples = ["sample1", "sample2", "sample3"]
out = pd.DataFrame()
for sample in samples:
    if my_df["ID"] == sample:
        my_list = []
        for index, row in my_df.iterrows():
            other_list = [row.loc_start]
            my_list.append(other_list)
        my_list = pd.DataFrame(my_list)
        out = pd.merge(out, my_list)
Expected output:
sample1 sample2 sample3
10 10 5
15 20
I realize, of course, that this could be done more easily if my_df really looked like this. However, what I'm after is the principle of iterating over rows until a certain column value changes.
Based on the input & output provided, this should work.
You need to provide more info if you are looking for something else.
df.pivot(columns='ID', values = 'loc_start').rename_axis(None, axis=1).apply(lambda x: pd.Series(x.dropna().values))
output
sample1 sample2 sample3
0 10.0 10.0 5.0
1 15.0 20.0 NaN
Ben.T is correct that a pivot works here. Here is an example:
import pandas as pd
import numpy as np
df = pd.DataFrame(data=np.random.randint(0, 5, (10, 2)), columns=list("AB"))
# what does the df look like? Here, I consider column A to be analogous to your "ID" column
In [5]: df
Out[5]:
A B
0 3 1
1 2 1
2 4 2
3 4 1
4 0 4
5 4 2
6 4 1
7 3 1
8 1 1
9 4 0
# now do a pivot and see what it looks like
df2 = df.pivot(columns="A", values="B")
In [8]: df2
Out[8]:
A 0 1 2 3 4
0 NaN NaN NaN 1.0 NaN
1 NaN NaN 1.0 NaN NaN
2 NaN NaN NaN NaN 2.0
3 NaN NaN NaN NaN 1.0
4 4.0 NaN NaN NaN NaN
5 NaN NaN NaN NaN 2.0
6 NaN NaN NaN NaN 1.0
7 NaN NaN NaN 1.0 NaN
8 NaN 1.0 NaN NaN NaN
9 NaN NaN NaN NaN 0.0
Not quite what you wanted. With a little help from Jezreal's answer:
df3 = df2.apply(lambda x: pd.Series(x.dropna().values))
In [20]: df3
Out[20]:
A 0 1 2 3 4
0 4.0 1.0 1.0 1.0 2.0
1 NaN NaN NaN 1.0 1.0
2 NaN NaN NaN NaN 2.0
3 NaN NaN NaN NaN 1.0
4 NaN NaN NaN NaN 0.0
The empty spots in the dataframe have to be filled with something, and NaN is used by default. Is this what you wanted?
If, on the other hand, you wanted to perform an operation on your data you would use the groupby instead.
df2 = df.groupby(by="A", as_index=False).mean()
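If the goal really is to process the rows one ID at a time rather than reshape them, groupby also hands you each block of rows sharing an ID, with no manual "until the value changes" loop (a sketch using my_df from the question):
for sample_id, block in my_df.groupby("ID", sort=False):
    # `block` is the sub-DataFrame for this ID; process it as one unit
    print(sample_id, block["loc_start"].tolist())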