Cleaner way to copy a slice from one slightly-differently-named MultiIndex to another - pandas

I need to assign a slice of dfB to dfA. Each df has MultiIndex columns, where level=0 has a different name in each dataframe while level=1 has the same names in both.
A simple copy of the slice doesn't work because the MultiIndexes are not identical.
I have a gross hack that temporarily renames the slice (which is a Series) before it's assigned to the dataframe, but this is ugly. Is there a cleaner way to do this? I am not able to rename level=0 of either dataframe.
import pandas as pd

idx = pd.IndexSlice

af = pd.DataFrame(index=range(4), columns=pd.MultiIndex.from_product([['mike'], ['age', 'weight']]))
bf = pd.DataFrame(index=range(4), columns=pd.MultiIndex.from_product([['foo'], ['age', 'weight']]))
for i in range(4):
    bf.loc[i, idx['foo', 'age']] = 10 * i
    bf.loc[i, idx['foo', 'weight']] = "huge"

af.loc[1, idx['mike', 'age']] = bf.loc[1, idx['foo', 'age']]  # slice returns an int
af.loc[3, idx['mike', :]] = bf.loc[1, idx['foo', :]]          # slice returns a Series
print("3 is not copied")
print(af)  # nothing copied to row 3

# nasty hack. Can I do better?
gross = af.loc[3, idx['mike', :]].index
hack = bf.loc[1, idx['foo', :]].copy()
hack.index = gross
af.loc[3, idx['mike', :]] = hack
print("\n3 IS copied")
print(af)

To make the indices match, use rename:
print(bf.loc[1, idx['foo', :]].rename({'foo': 'mike'}, level=0))
mike  age         10
      weight    huge
Name: 1, dtype: object
af.loc[3, idx['mike', :]] = bf.loc[1, idx['foo', :]].rename({'foo':'mike'}, level=0)
print(af)
  mike
   age weight
0  NaN    NaN
1  NaN    NaN
2  NaN    NaN
3   10   huge
Or assign a NumPy array:
af.loc[3, idx['mike', :]] = bf.loc[1, idx['foo', :]].to_numpy()
print(af)
  mike
   age weight
0  NaN    NaN
1  NaN    NaN
2  NaN    NaN
3   10   huge
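If every row needs to be copied rather than just one, the same idea extends to a 2-D slice; a minimal sketch reusing the af/bf frames above (positional assignment, so no label alignment is attempted):
af.loc[:, idx['mike', :]] = bf.loc[:, idx['foo', :]].to_numpy()
print(af)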

Related

Iterating through a dataframe: alternative to a for loop

I have a very large dataframe. I did a for loop, but it is taking forever, and I am wondering if there is any alternative?
index    ids    year
0        1890   2001
1        2678   NaN
2        4780   NaN
3        9844   1999
The idea is to get an array of the ids of people who have NaN values in the 'year' column, so what I did was turn the NaN values into 0 and write this for loop:
df_nan = []
for i in range(0, len(df.index)):
    for j in range(0, len(df.columns)):
        if int(df.values[i, j]) == 0:
            df_nan.append(df.values[i, 0])
The for loop works (I tried it on a smaller dataframe), but I can't use it on the main one because it takes so long.
You can use filtering.
import numpy as np
import pandas as pd

df = pd.DataFrame({'ids': [1890, 2678, 4780, 9844], 'year': [2001, np.nan, np.nan, 1999]})
nan_rows = df[df['year'].isnull()]
ids = nan_rows['ids'].values
print(ids) # outputs: [2678 4780]
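The mask and the column selection can also be combined in a single step; a small sketch using the same df:
ids = df.loc[df['year'].isna(), 'ids'].to_numpy()  # boolean mask on 'year', then select 'ids'
print(ids)  # [2678 4780]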

Pandas: Merging multiple dataframes efficiently

I have a situation where I need to merge multiple dataframes, which I can do easily using the code below:
# Merge all the datasets together
df_prep1 = df_prep.merge(df1,on='e_id',how='left')
df_prep2 = df_prep1.merge(df2,on='e_id',how='left')
df_prep3 = df_prep2.merge(df3,on='e_id',how='left')
df_prep4 = df_prep3.merge(df_4,on='e_id',how='left')
df_prep5 = df_prep4.merge(df_5,on='e_id',how='left')
df_prep6 = df_prep5.merge(df_6,on='e_id',how='left')
But what I want to understand is whether there is a more efficient way to perform this merge, maybe using a helper function? If so, how could I achieve that?
You can use reduce from functools module to merge multiple dataframes:
from functools import reduce
dfs = [df_prep, df1, df2, df3, df_4, df_5, df_6]
out = reduce(lambda dfl, dfr: pd.merge(dfl, dfr, on='e_id', how='left'), dfs)
You can also put all your dfs into a list (or produce them from a function, a loop, etc.) and then keep one main df that you merge everything onto.
You can start with an empty df and iterate through. In your case, since you are doing a left merge, it looks like your df_prep should already have all of the e_id values that you want. You'll need to figure out what you want to do with any other columns whose names collide, e.g. you can have pandas append _x and _y to conflicting column names that you don't merge on, or rename them, etc. (a sketch of the suffixes option follows the output below). See this toy example:
main_df = pd.DataFrame({'e_id': [0, 1, 2, 3, 4]})
for x in range(3):
    dfx = pd.DataFrame({'e_id': [x], 'another_col' + str(x): [x * 10]})
    main_df = main_df.merge(dfx, on='e_id', how='left')
to get:
   e_id  another_col0  another_col1  another_col2
0     0           0.0           NaN           NaN
1     1           NaN          10.0           NaN
2     2           NaN           NaN          20.0
3     3           NaN           NaN           NaN
4     4           NaN           NaN           NaN
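If the frames share non-key column names, pd.merge can control those suffixes explicitly via its suffixes argument; a minimal sketch (the score column is hypothetical):
left = pd.DataFrame({'e_id': [0, 1], 'score': [1.0, 2.0]})
right = pd.DataFrame({'e_id': [0, 1], 'score': [9.0, 8.0]})
merged = left.merge(right, on='e_id', how='left', suffixes=('_left', '_right'))
print(merged.columns.tolist())  # ['e_id', 'score_left', 'score_right']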

How can I create columns from 2 related columns of lists in Python?

sampleID    testnames                     results
23939332    [32131, 34343, 35566]         [NEGATIVE, 0.234, 3.331]
32332323    [34343, 96958, 39550, 88088]  [0.312, 0.008, 0.1, 0.2]
The table above is what I have, and the one below is what I want to achieve:
sampleID    32131     34343   39550  88088  96958  35566
23939332    NEGATIVE  0.234   NaN    NaN    NaN    3.331
32332323    NaN       0.312   0.1    0.2    0.008  NaN
So I need to create columns from the unique values in the testnames column and fill the cells with the corresponding values from the results column.
Consider this as a sample from a very large dataset (table).
Here is a commented solution:
(df.set_index(['sampleID'])              # keep sampleID out of the expansion
   .apply(pd.Series.explode)             # expand testnames and results
   .reset_index()                        # reset the index
   .groupby(['sampleID', 'testnames'])   # group by sample and test
   .first()                              # set the expected shape
   .unstack())                           # pivot testnames into columns
It gives the result you expected, though with a different column order:
            results
testnames     32131  34343  35566 39550 88088  96958
sampleID
23939332   NEGATIVE  0.234  3.331   NaN   NaN    NaN
32332323        NaN  0.312    NaN   0.1   0.2  0.008
Let's see how it does on generated data:
import numpy as np
import pandas as pd

def build_df(n_samples, n_tests_per_sample, n_test_types):
    df = pd.DataFrame(columns=['sampleID', 'testnames', 'results'])
    test_types = np.random.choice(range(0, 100000), size=n_test_types, replace=False)
    for i in range(n_samples):
        testnames = list(np.random.choice(test_types, size=n_tests_per_sample))
        results = list(np.random.random(size=n_tests_per_sample))
        df = df.append({'sampleID': i, 'testnames': testnames, 'results': results}, ignore_index=True)
    return df
def reshape(df):
    df2 = (df.set_index(['sampleID'])              # keep the sampleID out of the expansion
             .apply(pd.Series.explode)             # expand testnames and results
             .reset_index()                        # reset the index
             .groupby(['sampleID', 'testnames'])   # group by sample and test
             .first()                              # set the expected shape
             .unstack())                           # pivot testnames into columns
    return df2
%time df = build_df(60000, 10, 100)
# Wall time: 9min 48s (yes, it was ugly)
%time df2 = reshape(df)
# Wall time: 1.01 s
reshape() breaks when n_test_types becomes too large, with ValueError: Unstacked DataFrame is too big, causing int32 overflow.
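On newer pandas (1.3+), the same pipeline can be written with DataFrame.explode on both list columns at once; a sketch that is otherwise equivalent and still subject to the same unstack size limit:
def reshape_explode(df):
    # explode both list columns in one call (requires pandas >= 1.3)
    long = df.explode(['testnames', 'results'])
    # keep the first result per (sampleID, testname), mirroring .first() above
    return long.groupby(['sampleID', 'testnames'])['results'].first().unstack()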

Series.replace cannot use dict-like to_replace and non-None value [duplicate]

I've got a pandas DataFrame filled mostly with real numbers, but there are a few NaN values in it as well.
How can I replace the NaNs with the averages of the columns they are in?
This question is very similar to this one: numpy array: replace nan values with average of columns but, unfortunately, the solution given there doesn't work for a pandas DataFrame.
You can simply use DataFrame.fillna to fill the NaNs directly:
In [27]: df
Out[27]:
          A         B         C
0 -0.166919  0.979728 -0.632955
1 -0.297953 -0.912674 -1.365463
2 -0.120211 -0.540679 -0.680481
3       NaN -2.027325  1.533582
4       NaN       NaN  0.461821
5 -0.788073       NaN       NaN
6 -0.916080 -0.612343       NaN
7 -0.887858  1.033826       NaN
8  1.948430  1.025011 -2.982224
9  0.019698 -0.795876 -0.046431

In [28]: df.mean()
Out[28]:
A   -0.151121
B   -0.231291
C   -0.530307
dtype: float64

In [29]: df.fillna(df.mean())
Out[29]:
          A         B         C
0 -0.166919  0.979728 -0.632955
1 -0.297953 -0.912674 -1.365463
2 -0.120211 -0.540679 -0.680481
3 -0.151121 -2.027325  1.533582
4 -0.151121 -0.231291  0.461821
5 -0.788073 -0.231291 -0.530307
6 -0.916080 -0.612343 -0.530307
7 -0.887858  1.033826 -0.530307
8  1.948430  1.025011 -2.982224
9  0.019698 -0.795876 -0.046431
The docstring of fillna says that value should be a scalar or a dict; however, it works with a Series as well. If you want to pass a dict, you could use df.mean().to_dict().
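A minimal sketch of the dict variant, reusing the df above:
col_means = df.mean().to_dict()   # {'A': -0.151121, 'B': -0.231291, 'C': -0.530307}
df_filled = df.fillna(col_means)  # same result as df.fillna(df.mean())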
Try:
sub2['income'].fillna((sub2['income'].mean()), inplace=True)
In [16]: df = DataFrame(np.random.randn(10,3))
In [17]: df.iloc[3:5,0] = np.nan
In [18]: df.iloc[4:6,1] = np.nan
In [19]: df.iloc[5:8,2] = np.nan
In [20]: df
Out[20]:
          0         1         2
0  1.148272  0.227366 -2.368136
1 -0.820823  1.071471 -0.784713
2  0.157913  0.602857  0.665034
3       NaN -0.985188 -0.324136
4       NaN       NaN  0.238512
5  0.769657       NaN       NaN
6  0.141951  0.326064       NaN
7 -1.694475 -0.523440       NaN
8  0.352556 -0.551487 -1.639298
9 -2.067324 -0.492617 -1.675794

In [22]: df.mean()
Out[22]:
0   -0.251534
1   -0.040622
2   -0.841219
dtype: float64
Apply the mean of each column per-column and fill:
In [23]: df.apply(lambda x: x.fillna(x.mean()), axis=0)
Out[23]:
          0         1         2
0  1.148272  0.227366 -2.368136
1 -0.820823  1.071471 -0.784713
2  0.157913  0.602857  0.665034
3 -0.251534 -0.985188 -0.324136
4 -0.251534 -0.040622  0.238512
5  0.769657 -0.040622 -0.841219
6  0.141951  0.326064 -0.841219
7 -1.694475 -0.523440 -0.841219
8  0.352556 -0.551487 -1.639298
9 -2.067324 -0.492617 -1.675794
Although the code below does the job, its performance takes a big hit when you deal with a DataFrame of ~100k records or more:
df.fillna(df.mean())
In my experience, one should replace NaN values (be it with the mean or the median) only where required, rather than applying fillna() over the whole DataFrame.
I had a DataFrame with 20 variables, and only 4 of them required NaN value treatment (replacement). I tried the above code (Code 1), along with a slightly modified version of it (Code 2), where I ran it selectively, i.e. only on the variables which had NaN values:
# ------------------------------------------------
# ----(Code 1) Treatment on overall DataFrame-----
df.fillna(df.mean())

# ------------------------------------------------
# ----(Code 2) Selective treatment----------------
for i in df.columns[df.isnull().any(axis=0)]:   # apply only to columns with NaN values
    df[i].fillna(df[i].mean(), inplace=True)
# --- df.isnull().any(axis=0) gives a True/False flag (a boolean Series),
# --- which, applied to df.columns[], identifies the columns with NaN values
Below is the performance I observed as I kept increasing the number of records in the DataFrame:
DataFrame with ~100k records
Code 1: 22.06 Seconds
Code 2: 0.03 Seconds
DataFrame with ~200k records
Code 1: 180.06 Seconds
Code 2: 0.06 Seconds
DataFrame with ~1.6 Million records
Code 1: code kept running endlessly
Code 2: 0.40 Seconds
DataFrame with ~13 Million records
Code 1: --did not even try, after seeing performance on 1.6 Mn records--
Code 2: 3.20 Seconds
Apologies for a long answer! Hope this helps!
If you want to impute missing values with mean and you want to go column by column, then this will only impute with the mean of that column. This might be a little more readable.
sub2['income'] = sub2['income'].fillna((sub2['income'].mean()))
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

# Read the data from a csv file
Dataset = pd.read_csv('Data.csv')
X = Dataset.iloc[:, :-1].values

# Use an imputer to fill missing values with the column mean
imputer = SimpleImputer(missing_values=np.nan, strategy='mean')
imputer = imputer.fit(X[:, 1:3])
X[:, 1:3] = imputer.transform(X[:, 1:3])
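The fit and transform steps can also be combined into a single call; a small sketch of the same idea:
X[:, 1:3] = SimpleImputer(missing_values=np.nan, strategy='mean').fit_transform(X[:, 1:3])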
Directly use df.fillna(df.mean()) to fill all the null values with the mean.
If you want to fill the null values of one column with that column's mean, you can use this.
Suppose x = df['Item_Weight']; here Item_Weight is the column name.
Here we are assigning the result back (fill the null values of x with the mean of x):
df['Item_Weight'] = df['Item_Weight'].fillna(df['Item_Weight'].mean())
If you want to fill the null values with some string instead, then use
(here Outlet_Size is the column name):
df.Outlet_Size = df.Outlet_Size.fillna('Missing')
Pandas: How to replace NaN (nan) values with the average (mean), median or other statistics of one column
Say your DataFrame is df and you have one column called nr_items. This is: df['nr_items']
If you want to replace the NaN values of your column df['nr_items'] with the mean of the column:
Use method .fillna():
mean_value=df['nr_items'].mean()
df['nr_item_ave']=df['nr_items'].fillna(mean_value)
I have created a new df column called nr_item_ave to store the new column with the NaN values replaced by the mean value of the column.
You should be careful when using the mean: if you have outliers, it is more advisable to use the median (a small sketch follows).
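A minimal sketch of the median variant, reusing the same hypothetical nr_items column:
median_value = df['nr_items'].median()
df['nr_item_med'] = df['nr_items'].fillna(median_value)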
Another option besides those above is:
df = df.groupby(df.columns, axis = 1).transform(lambda x: x.fillna(x.mean()))
It's less elegant than previous responses for mean, but it could be shorter if you desire to replace nulls by some other column function.
Using the imputer class from the sklearn library:
from sklearn.impute import SimpleImputer
missingvalues = SimpleImputer(missing_values=np.nan, strategy='mean')
missingvalues = missingvalues.fit(x[:, 1:3])
x[:, 1:3] = missingvalues.transform(x[:, 1:3])
Note: in recent versions the missing_values parameter value changed from 'NaN' to np.nan, and SimpleImputer (unlike the older Imputer) no longer accepts an axis argument.
I use this method to fill missing values with the average of a column:
fill_mean = lambda col: col.fillna(col.mean())
df = df.apply(fill_mean, axis=0)
You can also use value_counts to fill with the most frequent value; this works across different datatypes.
df = df.apply(lambda x:x.fillna(x.value_counts().index[0]))
Here is the value_counts api reference.

Case insensitive pandas.concat

How would I perform a case insensitive pandas.concat?
df1 = pd.DataFrame({"a":[1,2,3]},index=["a","b","c"])
df2 = pd.DataFrame({"b":[1,2,3]},index=["a","b","c"])
df1a = pd.DataFrame({"A":[1,2,3]},index=["A","B","C"])
pd.concat([df1, df2],axis=1)
   a  b
a  1  1
b  2  2
c  3  3
but this does not work:
pd.concat([df1, df1a],axis=1)
     a    A
A  NaN    1
B  NaN    2
C  NaN    3
a    1  NaN
b    2  NaN
c    3  NaN
Is there an easy way to do this?
I have the same question for concat on a Series.
This works for a DataFrame:
pd.DataFrame([11,21,31],index=pd.MultiIndex.from_tuples([("A",x) for x in ["a","B","c"]])).rename(str.lower)
but this does not work for a Series:
pd.Series([11,21,31],index=pd.MultiIndex.from_tuples([("A",x) for x in ["a","B","c"]])).rename(str.lower)
TypeError: descriptor 'lower' requires a 'str' object but received a 'tuple'
For renaming, DataFrames use:
def rename_axis(self, mapper, axis=1):
    index = self.axes[axis]
    if isinstance(index, MultiIndex):
        new_axis = MultiIndex.from_tuples([tuple(mapper(y) for y in x) for x in index], names=index.names)
    else:
        new_axis = Index([mapper(x) for x in index], name=index.name)
whereas when renaming Series:
result.index = Index([mapper_f(x) for x in self.index], name=self.index.name)
So my updated question is: how do I perform the rename / case-insensitive concat with a Series?
You can do this via rename:
pd.concat([df1, df1a.rename(index=str.lower)], axis=1)
EDIT:
If you want to do this with a MultiIndexed Series you'll need to set it manually, for now. There's a bug report over at the pandas GitHub repo waiting to be fixed (thanks @ViktorKerkez).
s.index = pd.MultiIndex.from_tuples(s.index.map(lambda x: tuple(map(str.lower, x))))
You can replace str.lower with whatever function you want to use to rename your index.
Note that you cannot use reindex in general here, because it tries to find values with the renamed index and thus it will return nan values, unless your rename results in no changes to the original index.
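If several frames need to be combined this way, a small hypothetical helper (concat_case_insensitive is not a pandas function, just a sketch for flat string indexes like df1/df1a above) can wrap the rename:
def concat_case_insensitive(frames, axis=1):
    # lower-case each frame's string index labels before concatenating
    return pd.concat([f.rename(index=str.lower) for f in frames], axis=axis)
print(concat_case_insensitive([df1, df1a]))  # now aligns on a, b, c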
For the MultiIndexed Series objects, if this is not a bug, you can do:
s.index = pd.MultiIndex.from_tuples(
    s.index.map(lambda x: tuple(map(str.lower, x)))
)
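A small end-to-end sketch of that fix on two hypothetical MultiIndexed Series whose keys differ only in case:
s1 = pd.Series([1, 2], index=pd.MultiIndex.from_tuples([("A", "x"), ("A", "y")]))
s2 = pd.Series([3, 4], index=pd.MultiIndex.from_tuples([("a", "X"), ("a", "Y")]))
# lower-case every level of both indexes so the rows line up
for s in (s1, s2):
    s.index = pd.MultiIndex.from_tuples(s.index.map(lambda t: tuple(map(str.lower, t))))
print(pd.concat([s1, s2], axis=1))  # rows align on ('a', 'x') and ('a', 'y')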