I have a datatable like this:
Run, test1, test2
1, 100, 102
2, 110, 100
3, 108, 105
I would like to have the 2 columns merged together like this:
Run, results
1, 100
1, 102
2, 110
2, 100
3, 108
3, 105
How do I do it in Pandas? Thanks a lot!
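For reference, a minimal reconstruction of the sample frame used in the answers below (column names as posted):
import pandas as pd

df = pd.DataFrame({'Run': [1, 2, 3],
                   'test1': [100, 110, 108],
                   'test2': [102, 100, 105]})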
Use stack, then convert the MultiIndex back to columns with a double reset_index:
df = (df.set_index('Run')
        .stack()                          # reshape to long format
        .reset_index(level=1, drop=True)  # drop the former column names
        .reset_index(name='results'))
print (df)
Run results
0 1 100.0
1 1 102.0
2 2 110.0
3 2 100.0
4 3 108.0
5 3 105.0
Or melt:
df = df.melt('Run', value_name='results').drop('variable', axis=1).sort_values('Run')
print (df)
Run results
0 1 100.0
3 1 102.0
1 2 110.0
4 2 100.0
2 3 108.0
5 3 105.0
NumPy solution with numpy.repeat:
import numpy as np

a = np.repeat(df['Run'].values, 2)          # each Run value appears twice
b = df[['test1','test2']].values.flatten()  # row-major flatten: test1, test2 per Run
df = pd.DataFrame({'Run': a, 'results': b}, columns=['Run','results'])
print (df)
Run results
0 1 100.0
1 1 102.0
2 2 110.0
3 2 100.0
4 3 108.0
5 3 105.0
This is how I achieved this:
Option 1
wide_to_long
pd.wide_to_long(df, stubnames='test', i='Run', j='LOL').reset_index().drop(columns='LOL')
Out[776]:
Run test
0 1 100.0
1 2 110.0
2 3 108.0
3 1 102.0
4 2 100.0
5 3 105.0
Note: I did not rename the column from test to results; I think keeping test as the new column name is better in your situation.
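If you do prefer results as the column name, a one-line follow-up sketch:
pd.wide_to_long(df, stubnames='test', i='Run', j='LOL').reset_index().drop(columns='LOL').rename(columns={'test': 'results'})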
Option 2
pd.concat
df = df.set_index('Run')
pd.concat([df[col] for col in df.columns], axis=0).reset_index().rename(columns={0: 'results'})
Out[786]:
Run results
0 1 100.0
1 2 110.0
2 3 108.0
3 1 102.0
4 2 100.0
5 3 105.0
Related
I have a dataframe like this:
df = pd.DataFrame([[1,2],
[1,4],
[1,5],
[2,65],
[2,34],
[2,23],
[2,45]], columns = ['label', 'score'])
Is there an efficient way to create a column score_winsor that winsorises the score column within the groups at the 1% level?
I tried this with no success:
df['score_winsor'] = df.groupby('label')['score'].transform(lambda x: max(x.quantile(.01), min(x, x.quantile(.99))))
You could use scipy's implementation of winsorize:
from scipy.stats.mstats import winsorize

df["score_winsor"] = df.groupby('label')['score'].transform(lambda row: winsorize(row, limits=[0.01, 0.01]))
Output (with groups this small, trimming 1% per tail rounds down to zero elements, so the values are unchanged):
>>> df
label score score_winsor
0 1 2 2
1 1 4 4
2 1 5 5
3 2 65 65
4 2 34 34
5 2 23 23
6 2 45 45
This works (unlike the built-in max/min used in the attempt above, np.maximum and np.minimum operate elementwise on a Series):
import numpy as np

df['score_winsor'] = df.groupby('label')['score'].transform(lambda x: np.maximum(x.quantile(.01), np.minimum(x, x.quantile(.99))))
Output
print(df.to_string())
label score score_winsor
0 1 2 2.04
1 1 4 4.00
2 1 5 4.98
3 2 65 64.40
4 2 34 34.00
5 2 23 23.33
6 2 45 45.00
Is it possible to do something like this:
df = pd.DataFrame({
"sort_by": ["a","a","a","a","b","b","b", "a"],
"x": [100.5,200,200,500,1,2,3, 200],
"y": [4000,2000,2000,1000,500.5,600.5,600.5, 100.5]
})
df = df.sort_values(by=["x","y"], ascending=False)
where I can sort by the sort_by column and use x and y to find the rank (using y to break ties), so the ideal output will be:
sort_by x y rank
a 500 1000 1
a 200 2000 2
a 200 2000 2
a 200 100.5 3
a 100.5 4000 4
b 3 600.5 1
b 2 600.5 2
b 1 500.5 3
Use factorize after sort_values:
df = df.sort_values(by=["x","y"], ascending=False)
df['rank'] = tuple(zip(df.x, df.y))  # combine x and y into a single sortable key
df['rank'] = df.groupby('sort_by', sort=False)['rank'].apply(lambda x: pd.Series(pd.factorize(x)[0] + 1)).values
df
Out[615]:
sort_by x y rank
3 a 500.0 1000.0 1
1 a 200.0 2000.0 2
2 a 200.0 2000.0 2
7 a 200.0 100.5 3
0 a 100.5 4000.0 4
6 b 3.0 600.5 1
5 b 2.0 600.5 2
4 b 1.0 500.5 3
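An alternative sketch without factorize, starting again from the original frame: sort with the group key first, flag every row that opens a group or whose (x, y) pair differs from the previous row's, and take a grouped cumulative sum, which yields the same dense, 1-based ranks:
df = df.sort_values(['sort_by', 'x', 'y'], ascending=[True, False, False])
# A row starts a new rank if it opens a group or its (x, y) pair
# differs from the previous row's; ties keep the previous rank.
new_rank = (df['sort_by'].ne(df['sort_by'].shift())
            | df[['x', 'y']].ne(df[['x', 'y']].shift()).any(axis=1))
df['rank'] = new_rank.astype(int).groupby(df['sort_by']).cumsum()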
Good morning.
I have a dataframe that can look either like this:
df1 =
zone date p1 p2
0 A 1 154 2
1 B 1 2647 7
2 C 1 0 0
3 A 2 1280 3
4 B 2 6809 20
5 C 2 288 5
6 A 3 2000 4
or like this:
df2 =
zone date p1 p2
0 A 1 154 2
1 B 1 2647 7
2 C 1 0 0
3 A 2 1280 3
4 B 2 6809 20
5 C 2 288 5
The only difference between the two is that the case may arise in which one or several zones, but not all, have data for the highest time period (column date). My desired result is to complete the dataframe up to a certain period (3 in the example), in the following way in each case:
df1_result =
zone date p1 p2
0 A 1 154 2
1 B 1 2647 7
2 C 1 0 0
3 A 2 1280 3
4 B 2 6809 20
5 C 2 288 5
6 A 3 2000 4
7 B 3 6809 20
8 C 3 288 5
df2_result =
zone date p1 p2
0 A 1 154 2
1 B 1 2647 7
2 C 1 0 0
3 A 2 1280 3
4 B 2 6809 20
5 C 2 288 5
6 A 3 1280 3
7 B 3 6809 20
8 C 3 288 5
I've tried different combinations of pivot and fillna with different methods, but I can't achieve the previous result.
I hope my explanation is clear.
Many thanks in advance.
You can use reindex to create entries for all dates in the range, and then forward-fill the last value into them.
import pandas as pd
df1 = pd.DataFrame([['A', 1,154, 2],
['B', 1,2647, 7],
['C', 1,0, 0],
['A', 2,1280, 3],
['B', 2,6809, 20],
['C', 2,288, 5],
['A', 3,2000, 4]],
columns=['zone', 'date', 'p1', 'p2'])
result = df1.groupby("zone").apply(lambda x: x.set_index("date").reindex(range(1, 4), method='ffill'))
print(result)
To get
zone p1 p2
zone date
A 1 A 154 2
2 A 1280 3
3 A 2000 4
B 1 B 2647 7
2 B 6809 20
3 B 6809 20
C 1 C 0 0
2 C 288 5
3 C 288 5
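To recover the flat shape of the desired output, a small follow-up sketch (the zone column is duplicated in the index, so drop it and reset):
result = result.drop(columns='zone').reset_index()
print(result)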
IIUC, you can reconstruct a pd.MultiIndex from your original df and use fillna with the max of each zone subgroup.
First, build your index:
import numpy as np

ind = df1.set_index(['zone', 'date']).index
levels = ind.levels
n = len(levels[0])
# zone codes tiled, date codes repeated -> all zones for date 1, then date 2, ...
# (assumes as many dates as zones; 3 each here)
codes = [np.tile(np.arange(n), n), np.repeat(np.arange(n), n)]
Then use the pd.MultiIndex constructor to reindex:
df1.set_index(['zone', 'date'])\
   .reindex(pd.MultiIndex(levels=levels, codes=codes))\
   .fillna(df1.groupby(['zone']).max())
p1 p2
zone date
A 1 154.0 2.0
B 1 2647.0 7.0
C 1 0.0 0.0
A 2 1280.0 3.0
B 2 6809.0 20.0
C 2 288.0 5.0
A 3 2000.0 4.0
B 3 6809.0 20.0
C 3 288.0 5.0
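(A reconstruction sketch for the second frame: per the question, df2 is simply df1 without its last row.)
df2 = df1.iloc[:-1].copy()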
To fill df2, just change df1 to df2 in the reindex/fillna chain above and you get:
p1 p2
zone date
A 1 154.0 2.0
B 1 2647.0 7.0
C 1 0.0 0.0
A 2 1280.0 3.0
B 2 6809.0 20.0
C 2 288.0 5.0
A 3 1280.0 3.0
B 3 6809.0 20.0
C 3 288.0 5.0
I suggest not copy/pasting the code directly and running it, but rather trying to understand the process and making slight changes if needed, depending on how different your original data frame is from what you posted.
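Under the same assumptions, a compact variant sketch of the reindex-and-fill idea using pd.MultiIndex.from_product plus a grouped forward fill; complete_periods is a hypothetical helper name, not part of either answer above:
import pandas as pd

def complete_periods(df, last):
    # Build the full (zone, date) grid up to `last` periods.
    full = pd.MultiIndex.from_product(
        [sorted(df['zone'].unique()), range(1, last + 1)],
        names=['zone', 'date'])
    out = df.set_index(['zone', 'date']).reindex(full)
    # Forward-fill each zone from its latest available row.
    return out.groupby(level='zone').ffill().reset_index()

print(complete_periods(df1, 3))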
I managed to solve this using if statements and for loops, but I'm looking for a less computationally expensive way to do it, e.g. using apply, map, or any other technique.
d = {1:10, 2:20, 3:30}
df
a b
1 35
1 nan
1 nan
2 nan
2 47
2 nan
3 56
3 nan
I want to fill missing values of column b according to dict d, i.e. output should be
a b
1 35
1 10
1 10
2 20
2 47
2 20
3 56
3 30
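For reference, a minimal reconstruction of the example (values taken from the question):
import numpy as np
import pandas as pd

d = {1: 10, 2: 20, 3: 30}
df = pd.DataFrame({'a': [1, 1, 1, 2, 2, 2, 3, 3],
                   'b': [35, np.nan, np.nan, np.nan, 47, np.nan, 56, np.nan]})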
You can use fillna or combine_first with the mapped column:
print (df['a'].map(d))
0 10
1 10
2 10
3 20
4 20
5 20
6 30
7 30
Name: a, dtype: int64
df['b'] = df['b'].fillna(df['a'].map(d))
print (df)
a b
0 1 35.0
1 1 10.0
2 1 10.0
3 2 20.0
4 2 47.0
5 2 20.0
6 3 56.0
7 3 30.0
df['b'] = df['b'].combine_first(df['a'].map(d))
print (df)
a b
0 1 35.0
1 1 10.0
2 1 10.0
3 2 20.0
4 2 47.0
5 2 20.0
6 3 56.0
7 3 30.0
And if all values are ints, add astype:
df['b'] = df['b'].fillna(df['a'].map(d)).astype(int)
print (df)
a b
0 1 35
1 1 10
2 1 10
3 2 20
4 2 47
5 2 20
6 3 56
7 3 30
If all values in column a are keys of the dict, it is also possible to use replace:
df['b'] = df['b'].fillna(df['a'].replace(d))
In my CSV data I have a column with the following data:
110.00
111.00
111.00 *
112.00
113.00
114.00
114.00 *
115.00
115.00 *
116.00
110.00
111.00
111.00 *
112.00
113.00
114.00
114.00 *
115.00
115.00 *
116.00
I read it into a data frame and I'd like to delete one of the rows with duplicate numbers, but only if they come immediately one after another. I marked the rows I'd like to remove with an *.
Thanks for any suggestions.
I think you can do this using .shift(), which can shift a series forward or backward (defaulting to one step forward). You want to keep rows if they're not the same as the previous ones, so something like:
df[df["A"] != df["A"].shift()]
For example:
>>> df = pd.DataFrame({"A": [1,2,1,2,2,3,3,3,1,2]})
>>> df["A"]
0 1
1 2
2 1
3 2
4 2
5 3
6 3
7 3
8 1
9 2
Name: A, dtype: int64
>>> df["A"].shift()
0 NaN
1 1
2 2
3 1
4 2
5 2
6 3
7 3
8 3
9 1
Name: A, dtype: float64
>>> df["A"] != df["A"].shift()
0 True
1 True
2 True
3 True
4 False
5 True
6 False
7 False
8 True
9 True
Name: A, dtype: bool
Leading up to:
>>> df[df["A"] != df["A"].shift()]
A
0 1
1 2
2 1
3 2
5 3
8 1
9 2
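Applied to the posted numbers, a sketch (the column name val is hypothetical):
import pandas as pd

# The posted column, with the starred duplicates still present.
s = pd.Series([110.0, 111.0, 111.0, 112.0, 113.0, 114.0, 114.0,
               115.0, 115.0, 116.0] * 2, name='val')
deduped = s[s != s.shift()].reset_index(drop=True)
print(deduped)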