Replace: Could not convert string to float ' ' or '[text]' - pandas

I have a large CSV with some columns containing data as shown
I want to get rid of unnecessary text and convert the remaining digits from string to float to just leave the values.
I am using
Added['Household Occupants'] = Added['Household Occupants'].str.replace(r'[^0-9]', '', regex=True).astype(float)
and
Added['Grocery Spend'] = Added['Grocery Spend'].str.replace(r'\D', '', regex=True).astype(float)
and these do the job perfectly, but when I apply the same to the 'Electronics Spend' and 'Goods Spend' columns, I get errors like:
'ValueError: could not convert string to float: '''
and
'ValueError: could not convert string to float: 'Goods''
Any suggestions would be greatly appreciated!
Thanks in advance!

I think this is a good example where pandas.Series.name can be useful:
out = (
    Added.apply(lambda x: x.replace(rf"{x.name.capitalize()}\((\d+)\)", r"\1",
                                    regex=True)
                 .astype(float))
)
Another alternative with pandas.Series.str.extract:
out = Added.apply(lambda x: x.str.extract(r"\((\d+)\)", expand=False).astype(float))
# Output:
print(out)
      Household Occupants  Grocery Spend  Electronics Spend  Goods Spend
0                     1.0          500.0             5000.0         50.0
1                     4.0           45.0              200.0          0.0
2                     NaN            NaN                NaN          NaN
3                     5.0            NaN                NaN          NaN
4                     3.0            NaN                NaN          NaN
1337                  4.0            NaN                NaN          NaN
1338                  4.0            NaN                NaN          NaN
1339                  4.0          200.0                NaN          NaN
1340                  4.0            NaN                NaN          NaN
1341                  NaN            NaN                NaN          NaN
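A more defensive variant of the same cleanup (a sketch with made-up sample values mirroring the errors in the question): strip the non-digits, then let pd.to_numeric coerce anything left over, such as the empty string produced by ' ' or 'Goods', to NaN instead of raising:

```python
import pandas as pd

# Hypothetical sample data reconstructed from the error messages above.
Added = pd.DataFrame({
    "Electronics Spend": ["Electronics(5000)", "Electronics(200)", " "],
    "Goods Spend": ["Goods(50)", "Goods(0)", "Goods"],
})

for col in ["Electronics Spend", "Goods Spend"]:
    # Remove every non-digit character, then coerce the remainder.
    cleaned = Added[col].str.replace(r"\D", "", regex=True)
    Added[col] = pd.to_numeric(cleaned, errors="coerce")

print(Added)
```

Unlike .astype(float), errors="coerce" never raises on a cell that contains no digits at all; it just leaves a NaN there.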


Pandas DataFrame subtraction is getting an unexpected result. Concatenating instead?

I have two dataframes of the same size (510 x 6):
preds
0 1 2 3 4 5
0 2.610270 -4.083780 3.381037 4.174977 2.743785 -0.766932
1 0.049673 0.731330 1.656028 -0.427514 -0.803391 -0.656469
2 -3.579314 3.347611 2.891815 -1.772502 1.505312 -1.852362
3 -0.558046 -1.290783 2.351023 4.669028 3.096437 0.383327
4 -3.215028 0.616974 5.917364 5.275736 7.201042 -0.735897
... ... ... ... ... ... ...
505 -2.178958 3.918007 8.247562 -0.523363 2.936684 -3.153375
506 0.736896 -1.571704 0.831026 2.673974 2.259796 -0.815212
507 -2.687474 -1.268576 -0.603680 5.571290 -3.516223 0.752697
508 0.182165 0.904990 4.690155 6.320494 -2.326415 2.241589
509 -1.675801 -1.602143 7.066843 2.881135 -5.278826 1.831972
510 rows × 6 columns
outputStats
0 1 2 3 4 5
0 2.610270 -4.083780 3.381037 4.174977 2.743785 -0.766932
1 0.049673 0.731330 1.656028 -0.427514 -0.803391 -0.656469
2 -3.579314 3.347611 2.891815 -1.772502 1.505312 -1.852362
3 -0.558046 -1.290783 2.351023 4.669028 3.096437 0.383327
4 -3.215028 0.616974 5.917364 5.275736 7.201042 -0.735897
... ... ... ... ... ... ...
505 -2.178958 3.918007 8.247562 -0.523363 2.936684 -3.153375
506 0.736896 -1.571704 0.831026 2.673974 2.259796 -0.815212
507 -2.687474 -1.268576 -0.603680 5.571290 -3.516223 0.752697
508 0.182165 0.904990 4.690155 6.320494 -2.326415 2.241589
509 -1.675801 -1.602143 7.066843 2.881135 -5.278826 1.831972
510 rows × 6 columns
when I execute:
preds - outputStats
I expect a 510 x 6 dataframe with elementwise subtraction. Instead I get this:
0 1 2 3 4 5 0 1 2 3 4 5
0 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
1 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
2 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
3 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
4 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
... ... ... ... ... ... ... ... ... ... ... ... ...
505 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
506 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
507 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
508 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
509 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
I've tried dropping columns and the like, and that hasn't helped. I also get the same result with preds.subtract(outputStats). Any ideas?
There are many ways that two different values can appear the same when displayed. One of the main ways is if they are different types, but corresponding values for those types. For instance, depending on how you're displaying them, the int 1 and the str '1' may not be easily distinguished. You can also have whitespace characters, such as '1' versus ' 1'.
If the problem is that one set is int while the other is str, you can solve it by converting them all to int or all to str. To do the former, use df.columns = [int(col) for col in df.columns]; for the latter, df.columns = [str(col) for col in df.columns]. Converting to str is somewhat safer, since converting to int raises an error if a string isn't amenable to conversion (e.g. int('y') fails), but ints can be more useful because they keep the numerical structure.
You asked in a comment about dropping columns. You can do this with drop, passing axis=1 to tell it to drop columns rather than rows, or you can use the del keyword. But changing the column names should remove the need to drop columns at all.
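A minimal sketch of the likely failure mode (the labels and values here are invented, not the question's data): one frame has int column labels, the other str, so subtraction aligns nothing and produces doubled all-NaN columns; normalising the labels to one type restores elementwise subtraction.

```python
import pandas as pd

# Same-looking frames, but the column labels differ in type.
preds = pd.DataFrame([[1.0, 2.0]], columns=[0, 1])          # int labels
outputStats = pd.DataFrame([[0.5, 0.5]], columns=['0', '1'])  # str labels

# No label matches, so the result is the union of columns, all NaN.
bad = preds - outputStats
assert bad.isna().all().all() and bad.shape == (1, 4)

# Convert one side's labels so both are ints, then subtract again.
outputStats.columns = [int(c) for c in outputStats.columns]
good = preds - outputStats
print(good)
```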

Filtering/Querying Pandas DataFrame after multiple grouping/agg

I have a dataframe that I first group, counting QuoteLine items grouped by stock (1 = true, 0 = false) and mfg type (K = Kit, M = Manufactured, P = Purchased). Ultimately, I am interested in quotes where ALL items are either NonStock/Kit and/or Stock/['M','P']:
grouped = df.groupby(['QuoteNum', 'typecode', 'stock']).agg({"QuoteLine": "count"})
and I get this:
                         QuoteLine-count
QuoteNum typecode stock
10001    K        0                    1
10003    M        0                    1
10005    M        0                    3
                  1                    1
10006    M        1                    1
...      ...      ...                ...
26961    P        1                    1
26962    P        1                    1
26963    P        1                    2
26964    K        0                    1
         M        1                    2
If I unstack it twice:
grouped = df.groupby(['QuoteNum', 'typecode', 'stock']).agg({"QuoteLine": "count"}).unstack().unstack()
# I get
          QuoteLine-count
stock            0              1
typecode     K    M    P    K    M    P
QuoteNum
10001      1.0  NaN  NaN  NaN  NaN  NaN
10003      NaN  1.0  NaN  NaN  NaN  NaN
10005      NaN  3.0  NaN  NaN  1.0  NaN
10006      NaN  NaN  NaN  NaN  1.0  NaN
10007      2.0  NaN  NaN  NaN  NaN  NaN
...        ...  ...  ...  ...  ...  ...
26959      NaN  NaN  NaN  NaN  NaN  1.0
26961      NaN  1.0  NaN  NaN  NaN  1.0
26962      NaN  NaN  NaN  NaN  NaN  1.0
26963      NaN  NaN  NaN  NaN  NaN  2.0
26964      1.0  NaN  NaN  NaN  2.0  NaN
Now I need to filter out all records where, this is where I need help
# pseudo-code
(stock == 0 and typecode in ['M','P']) -> values are NOT NaN (don't want those)
and
(stock == 1 and typecode='K') -> values are NOT NaN (don't want those either)
so I'm left with these records:
Basically: Columns "0/M, 0/P, 1/K" must be all NaNs and other columns have at least one non NaN value
          QuoteLine-count
stock            0              1
typecode     K    M    P    K    M    P
QuoteNum
10001      1.0  NaN  NaN  NaN  NaN  NaN
10006      NaN  NaN  NaN  NaN  1.0  NaN
10007      2.0  NaN  NaN  NaN  NaN  NaN
...        ...  ...  ...  ...  ...  ...
26959      NaN  NaN  NaN  NaN  NaN  1.0
26962      NaN  NaN  NaN  NaN  NaN  1.0
26963      NaN  NaN  NaN  NaN  NaN  2.0
26964      1.0  NaN  NaN  NaN  2.0  NaN
IIUC, use a boolean mask to set rows that match your conditions to NaN, then unstack the desired levels:
# Shortcut (for readability)
lvl_vals = grouped.index.get_level_values

m1 = (lvl_vals('typecode') == 'K') & (lvl_vals('stock') == 0)
m2 = lvl_vals('typecode').isin(['M', 'P']) & (lvl_vals('stock') == 1)

grouped[m1 | m2] = np.nan
out = (grouped.unstack(level=['stock', 'typecode'])
              .loc[lambda x: x.isna().all(axis=1)])
Output result:
>>> out
         QuoteLine-count
stock          0         1
typecode       K    M    M    P
QuoteNum
10001        NaN  NaN  NaN  NaN
10006        NaN  NaN  NaN  NaN
26961        NaN  NaN  NaN  NaN
26962        NaN  NaN  NaN  NaN
26963        NaN  NaN  NaN  NaN
26964        NaN  NaN  NaN  NaN
The desired values could be obtained with as_index=False, but I am not sure if they are in the desired format.
grouped = df.groupby(['QuoteNum', 'typecode', 'stock'], as_index=False).agg({"QuoteLine": "count"})
grouped[((grouped["stock"] == 0) & grouped["typecode"].isin(["M", "P"]))
        | ((grouped["stock"] == 1) & grouped["typecode"].isin(["K"]))]
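Another option (a sketch on made-up rows mimicking the question's schema) is to skip the unstacking entirely and filter at the line level: mark each line as "good" (NonStock/Kit or Stock/M-P) and keep only quotes where every line is good, using a per-quote transform('all'):

```python
import pandas as pd

# Hypothetical sample rows in the question's schema.
df = pd.DataFrame({
    "QuoteNum":  [10001, 10003, 10005, 10005, 10006],
    "typecode":  ["K",   "M",   "M",   "M",   "M"],
    "stock":     [0,     0,     0,     1,     1],
    "QuoteLine": [1,     2,     3,     4,     5],
})

# A line is "good" when it is NonStock/Kit or Stock/Manufactured-Purchased.
good = ((df["stock"] == 0) & (df["typecode"] == "K")) | \
       ((df["stock"] == 1) & df["typecode"].isin(["M", "P"]))

# Keep only quotes where *every* line is good.
keep = df[good.groupby(df["QuoteNum"]).transform("all")]
print(keep["QuoteNum"].unique())
```

With these sample rows, quote 10005 is dropped because one of its lines is NonStock/M, while 10001 and 10006 survive.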

Extract recommendations for user from pivot table

I have the following pivot table of users/items with numbers of purchases:
originalName  Red t-shirt  Black t-shirt  ...  Orange sweater  Pink sweater
customer                                  ...
165                   NaN            NaN  ...             NaN           NaN
265                   NaN            1.0  ...             NaN           NaN
288                   NaN            NaN  ...             NaN           NaN
368                   1.0            NaN  ...             NaN           2.0
396                   NaN            NaN  ...             3.0           NaN
I wrote this method, which uses Pearson's correlation to get related items given one input item:
def get_related_items(name, M, num):
    number_of_orders = []
    for title in M.columns:
        if title == name:
            continue
        cor = pearson(M[name], M[title])
        if np.isnan(cor):
            continue
        number_of_orders.append((title, cor))
    number_of_orders.sort(key=lambda tup: tup[1], reverse=True)
    return number_of_orders[:num]
I am not sure what the logic should be to get the list of recommended items for a specific customer.
And how can I evaluate that?
Thanks!
import pandas as pd
import numpy as np

df = pd.DataFrame({'customer': [165, 265, 288, 268, 296],
                   'R_shirt': [np.nan, 1.0, np.nan, 1.0, np.nan],
                   'B_shirt': [np.nan, np.nan, 2.0, np.nan, np.nan],
                   'X_shirt': [5.0, np.nan, 2.0, np.nan, np.nan],
                   'Y_shirt': [3.0, np.nan, 2.0, 3.0, np.nan]})
print(df)
   customer  R_shirt  B_shirt  X_shirt  Y_shirt
0       165      NaN      NaN      5.0      3.0
1       265      1.0      NaN      NaN      NaN
2       288      NaN      2.0      2.0      2.0
3       268      1.0      NaN      NaN      3.0
4       296      NaN      NaN      NaN      NaN
df['customer'] = df['customer'].astype(str)
df = df.pivot_table(columns='customer')
customer = '165'
print(df)
customer  165  265  268  288
B_shirt   NaN  NaN  NaN  2.0
R_shirt   NaN  1.0  1.0  NaN
X_shirt   5.0  NaN  NaN  2.0
Y_shirt   3.0  NaN  3.0  2.0
best_for_customer = df[customer].to_frame().sort_values(by=customer, ascending=False).dropna()
print(best_for_customer)
         165
X_shirt  5.0
Y_shirt  3.0
The variable customer is the name of the customer that you want to check.
The expected rating for a user for a given item will be the weighted average of the ratings given to the item by other users, where the weights will be the Pearson coefficients you calculated. Then you can pick the item with the highest expected rating and recommend it.
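As a toy illustration of that weighted average (the item names, ratings, and correlations below are invented, not derived from the question's data):

```python
import pandas as pd

# Items this customer has already purchased/rated.
ratings = pd.Series({"Red t-shirt": 5.0, "Pink sweater": 2.0})
# Pearson correlations between those items and one candidate item.
weights = pd.Series({"Red t-shirt": 0.8, "Pink sweater": 0.2})

# Expected rating for the candidate item: correlation-weighted average
# of the customer's known ratings.
expected = (ratings * weights).sum() / weights.abs().sum()
print(expected)  # (5.0*0.8 + 2.0*0.2) / 1.0 = 4.4
```

Repeating this for every unrated item and sorting by the expected rating gives a ranked recommendation list; held-out purchases can then be used to evaluate it.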

High Dimensional Data with time series

I am trying to make a pandas dataframe to hold my experimental data. The data is described below:
I have ~300 individuals participating in an experiment made of ~200 trials, where each trial has a number of experimentally controlled parameters (~10 parameters). For every trial and every individual I have a timeseries of some measurement, which is 30 timepoints long.
What is the best way to structure this data into a dataframe? I will need to be able to do things like get the experimental values for every individual at a certain time during all the trials with certain parameters, or get average values over certain times and trials for an individual, etc. Basically, I will need to be able to slice this data in most conceivable ways.
Thanks!
EDIT: If you want to look at how I have my data at the moment, scroll down to the last 3 cells in this notebook: https://drive.google.com/file/d/1UZG_S2fg4MzaED8cLwE-nKHG0SHqevUr/view?usp=sharing
The data variable has all the parameters for each trial, and the interp_traces variable is an array of the timeseries measurements for each timepoint, individual, and trial.
I'd like to put everything in one thing if possible. The multi-index looks promising.
In my opinion you need a MultiIndex.
Sample:
individuals = list('ABCD')
trials = list('ab')
par = list('xyz')
dates = pd.date_range('2018-01-01', periods=5)
n = ['ind', 'trials', 'pars']
mux = pd.MultiIndex.from_product([individuals, trials, par], names=n)
df = pd.DataFrame(index=mux, columns=dates)
print(df)
2018-01-01 2018-01-02 2018-01-03 2018-01-04 2018-01-05
ind trials pars
A a x NaN NaN NaN NaN NaN
y NaN NaN NaN NaN NaN
z NaN NaN NaN NaN NaN
b x NaN NaN NaN NaN NaN
y NaN NaN NaN NaN NaN
z NaN NaN NaN NaN NaN
B a x NaN NaN NaN NaN NaN
y NaN NaN NaN NaN NaN
z NaN NaN NaN NaN NaN
b x NaN NaN NaN NaN NaN
y NaN NaN NaN NaN NaN
z NaN NaN NaN NaN NaN
C a x NaN NaN NaN NaN NaN
y NaN NaN NaN NaN NaN
z NaN NaN NaN NaN NaN
b x NaN NaN NaN NaN NaN
y NaN NaN NaN NaN NaN
z NaN NaN NaN NaN NaN
D a x NaN NaN NaN NaN NaN
y NaN NaN NaN NaN NaN
z NaN NaN NaN NaN NaN
b x NaN NaN NaN NaN NaN
y NaN NaN NaN NaN NaN
z NaN NaN NaN NaN NaN
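Once the frame exists, the MultiIndex supports the kinds of slices the question asks about, for example via pandas.IndexSlice (a sketch reusing the same toy layout as above, filled with random numbers so the slices return something):

```python
import numpy as np
import pandas as pd

# Same toy layout as the answer above.
individuals = list('ABCD')
trials = list('ab')
par = list('xyz')
dates = pd.date_range('2018-01-01', periods=5)
mux = pd.MultiIndex.from_product([individuals, trials, par],
                                 names=['ind', 'trials', 'pars'])
df = pd.DataFrame(np.random.rand(len(mux), len(dates)), index=mux, columns=dates)

# All rows for one individual:
a_rows = df.loc['A']
# Parameter 'x' in trial 'a' for every individual, at one timepoint:
idx = pd.IndexSlice
x_vals = df.loc[idx[:, 'a', 'x'], dates[2]]
# Average over time for one individual/trial:
a_mean = df.loc[('A', 'a')].mean(axis=1)
print(x_vals)
```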

Using pandas reindex with floats: interpolation

Can you explain this bizarre behaviour?
df = pd.DataFrame({'year': [1986, 1987, 1988], 'bomb': np.arange(3)}).set_index('year')
In [9]: df.reindex(np.arange(1986, 1988.125, .125))
Out[9]:
bomb
1986.000 0
1986.125 NaN
1986.250 NaN
1986.375 NaN
1986.500 NaN
1986.625 NaN
1986.750 NaN
1986.875 NaN
1987.000 1
1987.125 NaN
1987.250 NaN
1987.375 NaN
1987.500 NaN
1987.625 NaN
1987.750 NaN
1987.875 NaN
1988.000 2
In [10]: df.reindex(np.arange(1986, 1988.1, .1))
Out[10]:
bomb
1986.0 0
1986.1 NaN
1986.2 NaN
1986.3 NaN
1986.4 NaN
1986.5 NaN
1986.6 NaN
1986.7 NaN
1986.8 NaN
1986.9 NaN
1987.0 NaN
1987.1 NaN
1987.2 NaN
1987.3 NaN
1987.4 NaN
1987.5 NaN
1987.6 NaN
1987.7 NaN
1987.8 NaN
1987.9 NaN
1988.0 NaN
When the increment is anything other than .125, the new index values do not "find" the old rows that have matching values; i.e. there is a precision problem that is not being overcome. This is true even if I force the index to be a float before I try to interpolate. What is going on, and what is the right way to do this?
I've been able to get it to work with an increment of 0.1 by using
reindex(np.array(list(map(round, np.arange(1985, 2010 + dt, dt) * 10))) / 10.0)
By the way, I'm doing this as the first step in linearly interpolating a number of columns (e.g. "bomb" is one of them). If there's a nicer way to do that, I'd happily be set straight.
You are getting what you ask for. The reindex method only tries to conform the data onto the new index that you provide. As mentioned in the comments, you are probably looking for dates in the index. I guess you were expecting reindex to do the interpolation, though:
df2 = df.reindex(np.arange(1986, 1988.125, .125))
df2['bomb'].interpolate()
1986.000 0.000
1986.125 0.125
1986.250 0.250
1986.375 0.375
1986.500 0.500
1986.625 0.625
1986.750 0.750
1986.875 0.875
1987.000 1.000
1987.125 1.125
1987.250 1.250
1987.375 1.375
1987.500 1.500
1987.625 1.625
1987.750 1.750
1987.875 1.875
1988.000 2.000
Name: bomb
The inconsistency in your second example is probably due to floating-point accuracy. Stepping by 0.125 (i.e. 1/8) can be represented exactly in binary, but 0.1 cannot, so the value that should be 1987 is probably off by a tiny fraction:
1987.0 == 1987.0000000001
False
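The effect is easy to reproduce in miniature, and rounding the new index before reindexing is one way around it (a sketch with toy values, not the question's data):

```python
import numpy as np
import pandas as pd

# Multiples of 0.1 are not exact binary floats, so a label that prints
# as 0.3 need not equal the float literal 0.3.
idx = np.arange(0, 1, 0.1)
print(idx[3])  # 0.30000000000000004
assert idx[3] != 0.3

# Rounding the new index before reindexing makes the labels match again.
df = pd.DataFrame({'bomb': np.arange(3)}, index=[0.0, 0.3, 0.6])
out = df.reindex(np.round(np.arange(0, 0.7, 0.1), 1))
print(out)
```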
I think you are better off doing something like this, using a PeriodIndex:
In [39]: df=pd.DataFrame({'bomb':np.arange(3)})
In [40]: df
Out[40]:
bomb
0 0
1 1
2 2
In [41]: df.index = pd.period_range('1986','1988',freq='Y').asfreq('M')
In [42]: df
Out[42]:
bomb
1986-12 0
1987-12 1
1988-12 2
In [43]: df = df.reindex(pd.period_range('1986','1988',freq='M'))
In [44]: df
Out[44]:
bomb
1986-01 NaN
1986-02 NaN
1986-03 NaN
1986-04 NaN
1986-05 NaN
1986-06 NaN
1986-07 NaN
1986-08 NaN
1986-09 NaN
1986-10 NaN
1986-11 NaN
1986-12 0
1987-01 NaN
1987-02 NaN
1987-03 NaN
1987-04 NaN
1987-05 NaN
1987-06 NaN
1987-07 NaN
1987-08 NaN
1987-09 NaN
1987-10 NaN
1987-11 NaN
1987-12 1
1988-01 NaN
In [45]: df.iloc[0,0] = -1
In [46]: df['interp'] = df['bomb'].interpolate()
In [47]: df
Out[47]:
bomb interp
1986-01 -1 -1.000000
1986-02 NaN -0.909091
1986-03 NaN -0.818182
1986-04 NaN -0.727273
1986-05 NaN -0.636364
1986-06 NaN -0.545455
1986-07 NaN -0.454545
1986-08 NaN -0.363636
1986-09 NaN -0.272727
1986-10 NaN -0.181818
1986-11 NaN -0.090909
1986-12 0 0.000000
1987-01 NaN 0.083333
1987-02 NaN 0.166667
1987-03 NaN 0.250000
1987-04 NaN 0.333333
1987-05 NaN 0.416667
1987-06 NaN 0.500000
1987-07 NaN 0.583333
1987-08 NaN 0.666667
1987-09 NaN 0.750000
1987-10 NaN 0.833333
1987-11 NaN 0.916667
1987-12 1 1.000000
1988-01 NaN 1.000000
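For the original float-year index, a similar result can be had without a PeriodIndex (a sketch under the assumption that you want to keep years as floats): round the fine-grained index so its labels match the original years exactly, then interpolate on the index values so the result is linear in "year" rather than in row position:

```python
import numpy as np
import pandas as pd

# Same shape as the question's data: one value per whole year.
df = pd.DataFrame({'bomb': np.arange(3)}, index=[1986.0, 1987.0, 1988.0])

# Build the 0.1-step index, rounded so 1986.0/1987.0/1988.0 match exactly.
new_idx = np.round(np.linspace(1986, 1988, 21), 1)

# method='index' interpolates against the index values themselves.
out = df.reindex(new_idx)['bomb'].interpolate(method='index')
print(out.loc[1986.5])  # 0.5
```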