Pandas Pivot Table Sort Index Level 1 Not "Sticking" - pandas

I know this is a lot, but I really cannot pinpoint what is causing the problem.
Most of this code is just to demonstrate what I'm doing, but the short version is:
After reordering columns in a multi-indexed data frame (via
transposing and other methods), calling columns.levels returns the
original sorted levels instead of the new ones.
Given the following:
#Original data frame
import pandas as pd
import numpy as np  # np.sum is used in the pivot below
df = pd.DataFrame(
{'Year':[2012,2012,2012,2012,2012,2012,2013,2013,2013,2013,2013,2013,2014,2014,2014,2014,2014,2014],
'Type':['A','A','B','B','C','C','A','A','B','B','C','C','A','A','B','B','C','C'],
'Org':['a','c','a','b','a','c','a','b','a','c','a','c','a','b','a','c','a','b'],
'Enr':[3,5,3,6,6,4,7,89,5,3,7,34,4,64,3,6,7,44]
})
df.head()
Enr Org Type Year
0 3 a A 2012
1 5 c A 2012
2 3 a B 2012
3 6 b B 2012
4 6 a C 2012
#Pivoted
dfp = df.pivot_table(index=['Year'], columns=['Type', 'Org'], aggfunc=np.sum)\
    .sort_index(axis=0).sort_index(axis=1)
dfp
Enr
Type A B C
Org a b c a b c a b c
Year
2012 3.0 NaN 5.0 3.0 6.0 NaN 6.0 NaN 4.0
2013 7.0 89.0 NaN 5.0 NaN 3.0 7.0 NaN 34.0
2014 4.0 64.0 NaN 3.0 NaN 6.0 7.0 44.0 NaN
#Transposed
f=dfp.T
f
Year 2012 2013 2014
Type Org
Enr A a 3.0 7.0 4.0
b NaN 89.0 64.0
c 5.0 NaN NaN
B a 3.0 5.0 3.0
b 6.0 NaN NaN
c NaN 3.0 6.0
C a 6.0 7.0 7.0
b NaN NaN 44.0
c 4.0 34.0 NaN
#Sort the 'Type' level (index level 1) by the sum of the last column, then transpose back
ab2=f.groupby(level=1)[f.columns[-1]].transform(sum)
ab3=pd.concat([f,ab2],axis=1)
ab4=ab3.sort_values([ab3.columns[-1]],ascending=[0])
ab4=ab4.drop(ab4.columns[-1],axis=1,inplace=False)
g=ab4.T
g
Enr
Type A C B
Org a b c a b c a b c
Year
2012 3.0 NaN 5.0 6.0 NaN 4.0 3.0 6.0 NaN
2013 7.0 89.0 NaN 7.0 NaN 34.0 5.0 NaN 3.0
2014 4.0 64.0 NaN 7.0 44.0 NaN 3.0 NaN 6.0
If you do:
g.Enr.columns.levels
The result is:
FrozenList([['A', 'B', 'C'], ['a', 'b', 'c']])
My question is: Why is it not:
FrozenList([['A', 'C', 'B'], ['a', 'b', 'c']]) ?
I really need it to be the second one.
Thanks in advance!

A MultiIndex stores itself as a set of levels, which are the distinct possible values, and labels (codes), which are integer positions into those levels for the actual labels used. Changing the column order just reshuffles the codes; it does not change the levels themselves.
If you want the levels in the order in which they first appear, you could do something like this.
In [61]: c = g.Enr.columns
In [62]: [c.levels[i].take(pd.unique(c.labels[i]))
...: for i in range(len(c.levels))]
Out[62]:
[Index([u'A', u'C', u'B'], dtype='object', name=u'Type'),
Index([u'a', u'b', u'c'], dtype='object', name=u'Org')]
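If you need the .levels attribute itself to come back in that order, one option (just a sketch, assuming a pandas version where MultiIndex.from_tuples keeps the order of first appearance rather than sorting) is to rebuild the columns from their tuples:
c = g.Enr.columns
# from_tuples factorizes each level in order of first appearance, so the
# rebuilt index should report its levels as ['A', 'C', 'B'] and ['a', 'b', 'c']
new_cols = pd.MultiIndex.from_tuples(list(c), names=c.names)
new_cols.levels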

Related

How do I append an uneven column to an existing one?

I am having trouble appending the later values from column c onto column a within the same df using pandas. I have tried .append and .concat with ignore_index=True, but it is still not working.
import pandas as pd
d = {'a':[1,2,3,None, None], 'b':[7,8,9, None, None], 'c':[None, None, None, 5, 6]}
df = pd.DataFrame(d)
df['a'] = df['a'].append(df['c'], ignore_index=True)
print(df)
a b c
0 1.0 7.0 NaN
1 2.0 8.0 NaN
2 3.0 9.0 NaN
3 NaN NaN 5.0
4 NaN NaN 6.0
Desired:
a b c
0 1.0 7.0 NaN
1 2.0 8.0 NaN
2 3.0 9.0 NaN
3 5.0 NaN 5.0
4 6.0 NaN 6.0
This is what I would do: fill the NaN values in a with the corresponding values from c.
df['a'] = df['a'].fillna(df['c'])
print(df)
Output:
a b c
0 1.0 7.0 NaN
1 2.0 8.0 NaN
2 3.0 9.0 NaN
3 5.0 NaN 5.0
4 6.0 NaN 6.0
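An equivalent option, if you prefer, is Series.combine_first, which also takes values from c wherever a is NaN:
df['a'] = df['a'].combine_first(df['c'])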

How to get the column with the max NaN values in a pandas df?

I can show the counts with df.isnull().sum() and get the max value with df.isnull().sum().max(), but can someone tell me how to get the name of the column with the most NaNs?
Thank you all!
Use Series.idxmax with DataFrame.loc to filter the column with the most missing values:
df.loc[:, df.isnull().sum().idxmax()]
If you need to select multiple columns that tie for the maximum, compare the Series with its max value:
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'A': list('abcdef'),
    'B': [4, 5, np.nan, 5, np.nan, 4],
    'C': [7, 8, 9, np.nan, 2, np.nan],
    'D': [1, np.nan, 5, 7, 1, 0]
})
print (df)
A B C D
0 a 4.0 7.0 1.0
1 b 5.0 8.0 NaN
2 c NaN 9.0 5.0
3 d 5.0 NaN 7.0
4 e NaN 2.0 1.0
5 f 4.0 NaN 0.0
s = df.isnull().sum()
df = df.loc[:, s.eq(s.max())]
print (df)
B C
0 4.0 7.0
1 5.0 8.0
2 NaN 9.0
3 5.0 NaN
4 NaN 2.0
5 4.0 NaN
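If you only need the column name(s) rather than the filtered values, the s computed above can be reused, for example:
print(s.idxmax())                       # first column with the most NaNs, here 'B'
print(s[s.eq(s.max())].index.tolist())  # all columns tied for the most, ['B', 'C']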

How to perform a rolling window on a pandas DataFrame where each row contains NaN values that should not be replaced?

I have the following dataframe:
import numpy as np
import pandas as pd

df = pd.DataFrame([[0, 1, 2, 4, np.nan, np.nan, np.nan, 1],
                   [0, 1, 2, np.nan, np.nan, np.nan, np.nan, 1],
                   [0, 2, 2, np.nan, 2, np.nan, 1, 1]])
With output:
0 1 2 3 4 5 6 7
0 0 1 2 4 NaN NaN NaN 1
1 0 1 2 NaN NaN NaN NaN 1
2 0 2 2 NaN 2 NaN 1 1
with dtypes:
df.dtypes
0 int64
1 int64
2 int64
3 float64
4 float64
5 float64
6 float64
7 int64
Then the following rolling summation is applied:
df.rolling(window = 7, min_periods =1, axis = 'columns').sum()
And the output is as follows:
0 1 2 3 4 5 6 7
0 0.0 1.0 3.0 4.0 4.0 4.0 4.0 4.0
1 0.0 1.0 3.0 NaN NaN NaN NaN 4.0
2 0.0 2.0 4.0 NaN 2.0 2.0 3.0 5.0
I notice that the rolling window stops and starts again whenever the dtype of the next column is different.
However, I have a dataframe in which all columns are of the same object dtype:
df = df.astype('object')
Applying the same rolling sum to it gives:
0 1 2 3 4 5 6 7
0 0.0 1.0 3.0 7.0 7.0 7.0 7.0 8.0
1 0.0 1.0 3.0 3.0 3.0 3.0 3.0 4.0
2 0.0 2.0 4.0 4.0 6.0 6.0 7.0 8.0
My desired output, however, stops and starts again after a NaN value appears. It would look like:
0 1 2 3 4 5 6 7
0 0.0 1.0 3.0 7.0 NaN NaN NaN 8.0
1 0.0 1.0 3.0 NaN NaN NaN NaN 4.0
2 0.0 2.0 4.0 NaN 6.0 NaN 7.0 8.0
I figure there must be a way to leave the NaN values out of the sum while also not filling them in with values obtained from the rolling window.
Anything would help!
A workaround is:
Record where the NaN values are located:
nan = df.isnull()
Apply the rolling window:
df = df.rolling(window=7, min_periods=1, axis='columns').sum()
Then show only the values that were not NaN originally:
df[~nan]
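Putting the steps together (a sketch, assuming the object-dtype frame from the question and a pandas version where rolling with axis='columns' behaves as shown above):
import numpy as np
import pandas as pd

df = pd.DataFrame([[0, 1, 2, 4, np.nan, np.nan, np.nan, 1],
                   [0, 1, 2, np.nan, np.nan, np.nan, np.nan, 1],
                   [0, 2, 2, np.nan, 2, np.nan, 1, 1]]).astype('object')

nan = df.isnull()                                              # remember where the NaNs were
rolled = df.rolling(window=7, min_periods=1, axis='columns').sum()
print(rolled[~nan])                                            # put NaN back at the original positions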

prevent pandas.interpolate() from extrapolation

I'm having difficulty preventing pd.DataFrame.interpolate(method='index') from extrapolating.
Specifically:
>>> import numpy as np
>>> import pandas as pd
>>> df = pd.DataFrame({1: range(1, 5), 2: range(2, 6), 3: range(3, 7)}, index=[1, 2, 3, 4])
>>> df = df.reindex(range(6)).reindex(range(5), axis=1)
>>> df.iloc[3, 2] = np.nan
>>> df
0 1 2 3 4
0 NaN NaN NaN NaN NaN
1 NaN 1.0 2.0 3.0 NaN
2 NaN 2.0 3.0 4.0 NaN
3 NaN 3.0 NaN 5.0 NaN
4 NaN 4.0 5.0 6.0 NaN
5 NaN NaN NaN NaN NaN
So df is just a block of data surrounded by NaN, with an interior missing point at iloc[3, 2]. Now when I apply .interpolate() (along either the horizontal or vertical axis), my goal is to have ONLY that interior point filled, leaving the surrounding NaNs untouched. But somehow I'm not able to get it to work.
I tried:
>>> df.interpolate(method='index', axis=0, limit_area='inside')
0 1 2 3 4
0 NaN NaN NaN NaN NaN
1 NaN 1.0 2.0 3.0 NaN
2 NaN 2.0 3.0 4.0 NaN
3 NaN 3.0 4.0 5.0 NaN
4 NaN 4.0 5.0 6.0 NaN
5 NaN 4.0 5.0 6.0 NaN
Note the last row got filled, which is undesirable. (btw, I'd think the fill value should be linear extrapolation based on index, but it is just padding the last value, which is highly undesirable.)
I also tried combination of limit and limit_direction to no avail.
What would be the correct argument setting to get the desired result? Hopefully without some contorted masking (but that would work too). Thx.
OK, it turns out I'm running this on pandas 0.21, so the limit_area argument is silently ignored. Looks like from 0.24 onwards this works as expected. Case closed.
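For reference, on a version where limit_area is supported, the original call should fill only the interior point (4.0, the linear interpolation between 3.0 and 5.0 along the index) and leave the border NaNs untouched:
>>> df.interpolate(method='index', axis=0, limit_area='inside')  # only iloc[3, 2] should become 4.0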

How to select NaN values in pandas in a specific range

I have a dataframe like this:
import numpy as np
import pandas as pd

df = pd.DataFrame({'col1': [5, 6, np.nan, np.nan, np.nan, 4, np.nan, np.nan,
                            np.nan, np.nan, 7, 8, 8, np.nan, 5, np.nan]})
df:
col1
0 5.0
1 6.0
2 NaN
3 NaN
4 NaN
5 4.0
6 NaN
7 NaN
8 NaN
9 NaN
10 7.0
11 8.0
12 8.0
13 NaN
14 5.0
15 NaN
These NaN values should be replaced in the following way. The first selection should look like this.
2 NaN
3 NaN
4 NaN
5 4.0
6 NaN
7 NaN
8 NaN
9 NaN
And then these NaN values should be replaced with the only value in that selection, 4.
The second selection is:
13 NaN
14 5.0
15 NaN
and these NaN values should be replaced with 5.
With isnull() you can select the NaN values in a dataframe, but how are you able to filter/select these specific ranges in pandas?
This solution works if the missing values surround one non-missing value - create unique groups and replace within each group by forward and back filling:
#test for missing values
s = df['col1'].isna()
#create unique groups of consecutive missing/non-missing runs
v = s.ne(s.shift()).cumsum()
#keep runs containing only 1 value together with the missing-value runs around them
mask = v.map(v.value_counts()).eq(1) | s
#groups for replacement
g = mask.ne(mask.shift()).cumsum()
df['col2'] = df.groupby(g)['col1'].apply(lambda x: x.ffill().bfill())
print (df)
col1 col2
0 5.0 5.0
1 6.0 6.0
2 NaN 4.0
3 NaN 4.0
4 NaN 4.0
5 4.0 4.0
6 NaN 4.0
7 NaN 4.0
8 NaN 4.0
9 NaN 4.0
10 7.0 7.0
11 8.0 8.0
12 8.0 8.0
13 NaN 5.0
14 5.0 5.0
15 NaN 5.0
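To see why this works, you can inspect the intermediate helpers from the solution (a quick check using the same df):
print(pd.concat([df['col1'], s, v, mask, g], axis=1,
                keys=['col1', 'isna', 'runs', 'mask', 'groups']))
#Rows 2-9 fall into one group (the NaNs around the 4.0) and rows 13-15 into
#another (the NaNs around the 5.0), so ffill().bfill() within each group
#spreads the single non-missing value across the whole run.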