Pandas transform rows with specific character

I am working on feature transformation and ran into this issue. Let me know what you think. Thanks!
I have a table like this
And I want to create an output column like this
Some info:
All the outputs will be based on numbers that end with a ':'
I have 100M+ rows in this table, so performance needs to be considered.
Let me know if you have some good ideas. Thanks!
Here is some copy and paste-able sample data:
df = pd.DataFrame({'Number': {0: '1000',1: '1000021', 2: '15:00', 3: '23424234',
4: '23423', 5: '3', 6 : '9:00', 7: '3423', 8: '32', 9: '7:00'}})

Solution #1:
You can use .str.contains(':') with np.where() to take the part before the ':' where one is present, and return np.nan otherwise. Then use ffill() to fill down over the NaN values:
import pandas as pd
import numpy as np
df = pd.DataFrame({'Number': {0: '1000',1: '1000021', 2: '15:00', 3: '23424234',
4: '23423', 5: '3', 6 : '9:00', 7: '3423', 8: '32', 9: '7:00'}})
df['Output'] = np.where(df['Number'].str.contains(':'), df['Number'].str.split(':').str[0], np.nan)
df['Output'] = df['Output'].ffill()
df
Solution #2 - Even easier, and potentially faster: use a regex with str.extract() and then ffill() again:
df['Output'] = df['Number'].str.extract(r'^(\d+):').ffill()
df
Out[1]:
Number Output
0 1000 NaN
1 1000021 NaN
2 15:00 15
3 23424234 15
4 23423 15
5 3 15
6 9:00 9
7 3423 9
8 32 9
9 7:00 7
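If performance matters at 100M+ rows, a quick way to compare the two variants is to scale the sample up and time both. This is only a sketch (the synthetic data size, helper names, and repeat counts are illustrative, and the numbers will vary by machine and pandas version):
import timeit
import numpy as np
import pandas as pd

big = pd.concat([df] * 100_000, ignore_index=True)  # ~1M rows built from the sample

def with_where(s):
    # Solution #1: np.where + split, then ffill
    out = np.where(s.str.contains(':'), s.str.split(':').str[0], np.nan)
    return pd.Series(out, index=s.index).ffill()

def with_extract(s):
    # Solution #2: regex extract, then ffill
    return s.str.extract(r'^(\d+):')[0].ffill()

print(timeit.timeit(lambda: with_where(big['Number']), number=3))
print(timeit.timeit(lambda: with_extract(big['Number']), number=3))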

I think this is what you are looking for:
import pandas as pd
c = ['Number']
d = ['1:00',100,1001,1321,3254,'15:00',20,60,80,90,'4:00',26,45,90,89]
df = pd.DataFrame(data=d,columns=c)
temp = df['Number'].str.split(":", n=1, expand=True)
df['New_Val'] = temp[0].ffill()
print(df)
The output of this will be as follows:
Number New_Val
0 1:00 1
1 100 1
2 1001 1
3 1321 1
4 3254 1
5 15:00 15
6 20 15
7 60 15
8 80 15
9 90 15
10 4:00 4
11 26 4
12 45 4
13 90 4
14 89 4
It looks like your DataFrame has string values; in the example above I treated them as a mix of numbers and strings.
Here's the solution if df['Number'] is all strings.
df1 = pd.DataFrame({'Number': {0: '1000',1: '1000021', 2: '15:00', 3: '23424234',
4: '23423', 5: '3', 6 : '9:00', 7: '3423', 8: '32', 9: '7:00'}})
temp = df1['Number'].str.split(":", n=1, expand=True)
# keep the part before ':' only for the rows that actually contained a ':'
temp.loc[temp[1].astype(bool), 'New_val'] = temp[0]
df1['New_val'] = temp['New_val'].ffill()
print (df1)
The output of df1 will be:
Number New_val
0 1000 NaN
1 1000021 NaN
2 15:00 15
3 23424234 15
4 23423 15
5 3 15
6 9:00 9
7 3423 9
8 32 9
9 7:00 7
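As an aside (not part of either answer above): on a mixed int/str column the .str accessor silently returns NaN for the non-string entries, which is why the mixed-type variant works without an explicit type check. A tiny illustration:
import pandas as pd

s = pd.Series(['1:00', 100, '15:00'])
# the int 100 is not a string, so .str.split() yields NaN for that row
print(s.str.split(':', n=1, expand=True))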


Sample and group by, and pivot time series data

I'm struggling to handle a complex (imho) operation on time series data.
I have a time series data set and would like to break it into non-overlapping, pivoted, grouped chunks. It is organized by customer, year, month, and value. For the purposes of this toy example, I am trying to break it out into a simple forecast of the next 3 months.
df = pd.DataFrame({'Customer': {0: 'a', 1: 'a', 2: 'a', 3: 'a', 4: 'a', 5: 'a', 6: 'a', 7: 'b', 8: 'b', 9: 'b'},
'Year': {0: 2020, 1: 2020, 2: 2020, 3: 2020, 4: 2020, 5: 2021, 6: 2021, 7: 2020, 8: 2020, 9: 2020},
'Month': {0: 8, 1: 9, 2: 10, 3: 11, 4: 12, 5: 1, 6: 2, 7: 1, 8: 2, 9: 3},
'Value': {0: 5.2, 1: 2.2, 2: 1.7, 3: 9.0, 4: 5.5, 5: 2.5, 6: 1.9, 7: 4.5, 8: 2.9, 9: 3.1}})
My goal is to create a dataframe where each row contains non-overlapping data in 3-month pivoted increments, so each row holds the 3 upcoming "Value" data points from that point in time. I'd also like this to include the most recent data, so if the row count isn't a multiple of 3, the oldest data is dropped (see customer a).
| Customer | Year | Month | Month1 | Month2 | Month3 |
| b | 2020 | 1 | 4.5 | 2.9 | 3.1 |
| a | 2020 | 9 | 2.2 | 1.7 | 9.0 |
| a | 2020 | 12 | 5.5 | 2.5 | 1.9 |
Much appreciated.
One way is to first sort_values on your df so the latest month comes first, assign group numbers, and drop rows that are not in complete groups of 3:
df = df.sort_values(["Year", "Month", "Customer"], ascending=False)
df["group"] = (df.groupby("Customer").cumcount()%3).eq(0).cumsum()
df = df[(df.groupby(["Customer", "group"])["Year"].transform("size").eq(3))]
df["num"] = (df.groupby("Customer").cumcount()%3+1).replace({1:3, 3:1})
print (df)
Customer Year Month Value group num
6 a 2021 2 1.9 2 3
5 a 2021 1 2.5 2 2
4 a 2020 12 5.5 2 1
3 a 2020 11 9.0 3 3
2 a 2020 10 1.7 3 2
1 a 2020 9 2.2 3 1
9 b 2020 3 3.1 5 3
8 b 2020 2 2.9 5 2
7 b 2020 1 4.5 5 1
Now you can pivot your data:
print (df.assign(Month=df["Month"].where(df["num"].eq(1)).bfill(),
                 Year=df["Year"].where(df["num"].eq(1)).bfill(),
                 num="Month"+df["num"].astype(str))
         .pivot(index=["Customer","Month","Year"], columns="num", values="Value")
         .reset_index())
num Customer Month Year Month1 Month2 Month3
0 a 9.0 2020.0 2.2 1.7 9.0
1 a 12.0 2020.0 5.5 2.5 1.9
2 b 1.0 2020.0 4.5 2.9 3.1
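One small cosmetic follow-up (a sketch, not part of the original answer): the where/bfill step temporarily introduces NaN, which is why Month and Year come out as floats above; casting after the pivot restores integers:
out = (df.assign(Month=df["Month"].where(df["num"].eq(1)).bfill(),
                 Year=df["Year"].where(df["num"].eq(1)).bfill(),
                 num="Month" + df["num"].astype(str))
         .pivot(index=["Customer", "Month", "Year"], columns="num", values="Value")
         .reset_index()
         .astype({"Month": int, "Year": int}))
print(out)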
There might be a better way to do this, but this will give you the output you want:
First we add a Customer_chunk column to give an ID to rows belonging to the same chunk, and we remove the extra rows.
df["Customer_chunk"] = (df[::-1].groupby("Customer").cumcount()) // 3
df = df.groupby(["Customer", "Customer_chunk"]).filter(lambda group: len(group) == 3)
Then we group by Customer and Customer_chunk to generate each column of the desired output.
df_grouped = df.groupby(["Customer", "Customer_chunk"])
colYear = df_grouped["Year"].first()
colMonth = df_grouped["Month"].first()
colMonth1 = df_grouped["Value"].first()
colMonth2 = df_grouped["Value"].nth(1)   # second value in each 3-row chunk (nth is 0-indexed)
colMonth3 = df_grouped["Value"].last()
And finally we create the output by merging every column.
df_output = pd.concat([colYear, colMonth, colMonth1, colMonth2, colMonth3], axis=1).reset_index().drop("Customer_chunk", axis=1)
Some steps feel a bit clunky; there's probably room for improvement in this code, but it shouldn't impact performance too much.
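If the first/nth/last trio feels clunky, here is one possible tightening (a sketch building on the chunk-filtered df produced above, i.e. it assumes the Customer_chunk column and the groups-of-3 filter are already in place):
import pandas as pd

grp = ["Customer", "Customer_chunk"]

# position of each row inside its chunk: 1, 2, 3 in chronological order
df["pos"] = df.groupby(grp).cumcount() + 1

# one wide block with the three values per chunk
values = df.pivot(index=grp, columns="pos", values="Value").add_prefix("Month")

# Year/Month of the first row of each chunk
firsts = df.groupby(grp)[["Year", "Month"]].first()

df_output = firsts.join(values).reset_index().drop("Customer_chunk", axis=1)
print(df_output)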

Converting a numpy array to a proper dataframe

I have a numpy array as the data below:
data = np.array([[1,2],[4,5],[7,8]])
I want to split it and convert it to a dataframe with the column names below, so that later I can take the first value of each row, as shown:
df_main:
value_items excluded_items
1 2
4 5
7 8
from which later I can take like
df:
value_items
1
4
7
df2:
excluded_items
2
5
8
I tried to convert it to a dataframe with the command
df = pd.DataFrame(data)
but it still resulted in an array of int32, so the splitting failed for me.
Use reshape to get a 2d array and also add the columns parameter:
df = pd.DataFrame(data.reshape(-1,2), columns=['value_items','excluded_items'])
Sample:
data = np.arange(785*2).reshape(1, 785, 2)
print (data)
[[[ 0 1]
[ 2 3]
[ 4 5]
...
[1564 1565]
[1566 1567]
[1568 1569]]]
print (data.shape)
(1, 785, 2)
df = pd.DataFrame(data.reshape(-1,2), columns=['value_items','excluded_items'])
print (df)
value_items excluded_items
0 0 1
1 2 3
2 4 5
3 6 7
4 8 9
.. ... ...
780 1560 1561
781 1562 1563
782 1564 1565
783 1566 1567
784 1568 1569
[785 rows x 2 columns]
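To then get the single-column frames mentioned in the question, plain column selection on df_main is enough (an illustrative follow-up using the 3x2 array from the question, which is already 2-D, so no reshape is needed):
import numpy as np
import pandas as pd

data = np.array([[1, 2], [4, 5], [7, 8]])
df_main = pd.DataFrame(data, columns=['value_items', 'excluded_items'])

df = df_main[['value_items']]        # first value of each row
df2 = df_main[['excluded_items']]    # second value of each row
print(df)
print(df2)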

Paired difference of columns in dataframe to generate dataframe with 1.3 million columns

I have a dataframe with 1600 columns.
The dataframe df looks like this, where the column names are 1, 3, 2:
Row Labels 1 3 2
41730Type1 9 6 5
41730Type2 14 12 20
41731Type1 2 15 5
41731Type2 3 20 12
41732Type1 8 10 5
41732Type2 8 18 16
I need to create the following dataframe df2 pythonically:
Row Labels (1, 2) (1, 3) (2, 3)
41730Type1 -4 -3 1
41730Type2 6 -2 -8
41731Type1 3 13 10
41731Type2 9 17 8
41732Type1 -3 2 5
41732Type2 8 10 2
where e.g. column (1, 2) is created by df[2] - df[1].
The column names for df2 are created by pairing the column headers of df1 such that the second element of each name is greater than the first, e.g. (1, 2), (1, 3), (2, 3).
The second challenge is: can a pandas dataframe support 1.3 million columns?
We can take combinations of the columns, then create the dict and concat it back:
import itertools
l = itertools.combinations(df.columns, 2)
d = {'{0[0]}|{0[1]}'.format(x): df[x[0]] - df[x[1]] for x in l}
newdf = pd.concat(d, axis=1)
1|3 1|2 3|2
RowLabels
41730Type1 3 4 1
41730Type2 2 -6 -8
41731Type1 -13 -3 10
41731Type2 -17 -9 8
41732Type1 -2 3 5
41732Type2 -10 -8 2
itertools.combinations seems the obvious choice, same as @YOBEN_S; here is a different route to the solution, using numpy arrays and a dictionary:
from itertools import combinations
import numpy as np

new_data = combinations(df.to_numpy().T, 2)
new_cols = combinations(df.columns, 2)
result = {key: np.subtract(arr1, arr2) if key[0] > key[1] else np.subtract(arr2, arr1)
          for (arr1, arr2), key in zip(new_data, new_cols)}
outcome = pd.DataFrame.from_dict(result,orient='index').sort_index().T
outcome
(1, 2) (1, 3) (3, 2)
0 -4 -3 1
1 6 -2 -8
2 3 13 10
3 9 17 8
4 -3 2 5
5 8 10 2
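As for the second challenge, whether a pandas dataframe can support 1.3 million columns: there is no hard limit on the column count, and a rough back-of-the-envelope estimate (an aside, assuming int64 values and the six rows shown) puts the raw data at only tens of megabytes; the practical concern is building that many columns one at a time, which is why both answers assemble everything from a dict in one go.
from math import comb

n_cols = comb(1600, 2)     # 1,279,200 pairwise-difference columns
n_rows = 6
bytes_per_cell = 8         # assuming int64 values
print(n_cols, n_cols * n_rows * bytes_per_cell / 1e6, "MB")  # roughly 61 MB of values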

How to substitute a column in a pandas dataframe with a series?

Let's say we have a dataframe df and a series s1 in pandas:
import numpy as np
import pandas as pd
df = pd.DataFrame(np.random.randn(10000,1000))
s1 = pd.Series(range(0,10000))
How can I modify df so that the column 42 become equal to s1?
How can I modify df so that the columns between 42 and 442 become equal to s1?
I would like to know the simplest way to do that but also a way to do that in place.
I think you first need a Series with the same length as the DataFrame, here 20:
np.random.seed(456)
df = pd.DataFrame(np.random.randn(20,10))
#print (df)
s1 = pd.Series(range(0,20))
#print (s1)
#set column by Series
df[8] = s1
#set Series to range of columns
cols = df.loc[:, 3:5].columns
df[cols] = pd.concat([s1] * len(cols), axis=1)
print (df)
0 1 2 3 4 5 6 7 8 9
0 -0.668129 -0.498210 0.618576 0 0 0 0.301966 0.449483 0 -0.315231
1 -2.015971 -1.130231 -1.111846 1 1 1 1.915676 0.920348 1 1.157552
2 -0.106208 -0.088752 -0.971485 2 2 2 -0.366948 -0.301085 2 1.141635
3 -1.309529 -0.274381 0.864837 3 3 3 0.670294 0.086347 3 -1.212503
4 0.120359 -0.358880 1.199936 4 4 4 0.389167 1.201631 4 0.445432
5 -1.031109 0.067133 -1.213451 5 5 5 -0.636896 0.013802 5 1.726135
6 -0.491877 0.254206 -0.268168 6 6 6 0.671070 -0.633645 6 1.813671
7 0.080433 -0.882443 1.152671 7 7 7 0.249225 1.385407 7 1.010374
8 0.307274 0.806150 0.071719 8 8 8 1.133853 -0.789922 8 -0.286098
9 -0.767206 1.094445 1.603907 9 9 9 0.083149 2.322640 9 0.396845
10 -0.740018 -0.853377 -2.039522 10 10 10 0.764962 -0.472048 10 -0.071255
11 -0.238565 1.077573 2.143252 11 11 11 1.542892 2.572560 11 -0.803516
12 -0.139521 -0.992107 -0.892619 12 12 12 0.259612 -0.661760 12 -1.508976
13 -1.077001 0.381962 0.205388 13 13 13 -0.023986 -1.293080 13 1.846402
14 -0.714792 -0.728496 -0.127079 14 14 14 0.606065 -2.320500 14 -0.992798
15 -0.127113 -0.563313 -0.101387 15 15 15 0.647325 -0.816023 15 -0.309938
16 -1.151304 -1.673719 0.074930 16 16 16 -0.392157 0.736714 16 1.142983
17 -1.247396 -0.471524 1.173713 17 17 17 -0.005391 0.426134 17 0.781832
18 -0.325111 0.579248 0.040363 18 18 18 0.361926 0.036871 18 0.581314
19 -1.057501 -1.814500 0.109628 19 19 19 -1.738658 -0.061883 19 0.989456
Timings
Here are some other solutions, but it seems the concat solution is the fastest:
np.random.seed(456)
df = pd.DataFrame(np.random.randn(1000,1000))
#print (df)
s1 = pd.Series(range(0,1000))
#print (s1)
#set column by Series
df[8] = s1
#set Series to range of columns
cols = df.loc[:, 42:442].columns
print (df)
In [310]: %timeit df[cols] = np.broadcast_to(s1.values[:, np.newaxis], (len(df),len(cols)))
1 loop, best of 3: 202 ms per loop
In [311]: %timeit df[cols] = np.repeat(s1.values[:, np.newaxis], len(cols), axis=1)
1 loop, best of 3: 208 ms per loop
In [312]: %timeit df[cols] = np.array([s1.values]*len(cols)).transpose()
10 loops, best of 3: 175 ms per loop
In [313]: %timeit df[cols] = pd.concat([s1] * len(cols), axis=1)
10 loops, best of 3: 53.8 ms per loop
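A quick sanity check that the block assignment did what was intended (illustrative):
# every column in the 42..442 block should now equal s1
assert df.loc[:, 42:442].eq(s1, axis=0).all().all()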

Nested groupby in DataFrame and aggregate multiple columns

I am trying to do nested groupby as follows:
>>> df1 = pd.DataFrame({'Date': {0: '2016-10-11', 1: '2016-10-11', 2: '2016-10-11', 3: '2016-10-11', 4: '2016-10-11',5: '2016-10-12'}, 'Stock': {0: 'ABC', 1: 'ABC', 2: 'ABC', 3: 'ABC', 4: 'ABC', 5: 'XYZ'}, 'Quantity': {0: 60,1: 50, 2: 40, 3: 30, 4: 20, 5: 10}, 'UiD':{0:1,1:1,2:1,3:2,4:2,5:3}, 'StartTime': {0: '08:00:00.241', 1: '08:00:00.243', 2: '12:34:23.563', 3: '08:14.05.908', 4: '18:54:50.100', 5: '10:08:36.657'}, 'Sign':{0:1,1:1,2:0,3:-1,4:0,5:-1}, 'leg1':{0:2,1:2,2:4,3:5,4:7,5:8}})
>>> df1
Date Quantity Sign StartTime Stock UiD leg1
0 2016-10-11 60 1 08:00:00.241 ABC 1 2
1 2016-10-11 50 1 08:00:00.243 ABC 1 2
2 2016-10-11 40 0 12:34:23.563 ABC 1 4
3 2016-10-11 30 -1 08:14.05.908 ABC 2 5
4 2016-10-11 20 0 18:54:50.100 ABC 2 7
5 2016-10-12 10 -1 10:08:36.657 XYZ 3 8
>>> dfg1=df1.groupby(['Date','Stock'])
>>> dfg1.apply(lambda x:x.groupby('UiD').first()).groupby(['Date','Stock']).apply(lambda x:np.sum(x['Quantity']))
Date Stock
2016-10-11 ABC 90
2016-10-12 XYZ 10
dtype: int64
>>>
>>> dfg1['leg1'].sum()
Date Stock
2016-10-11 ABC 20
2016-10-12 XYZ 8
Name: leg1, dtype: int64
So far so good. Now I am trying to concatenate the two results into a new DataFrame df2 as follows:
>>> df2 = pd.concat([dfg1['leg1'].sum(), dfg1.apply(lambda x:x.groupby('UiD').first()).groupby(['Date','Stock']).apply(lambda x:np.sum(x['Quantity']))],axis=1)
0 1
Date Stock
2016-10-11 ABC 20 90
2016-10-12 XYZ 8 10
>>>
I am wondering if there is a better way to rewrite the following line in order to avoid repeating groupby(['Date','Stock']):
dfg1.apply(lambda x:x.groupby('UiD').first()).groupby(['Date','Stock']).apply(lambda x:np.sum(x['Quantity']))
Also, this fails if ['Date','Stock'] contains 'UiD' as one of the keys, or if ['Date','Stock'] is replaced by just ['UiD'].
Please restate your question to be clearer. You want to groupby(['Date','Stock']), then:
- take only the first record for each UiD and sum (aggregate) its Quantity, but also
- sum all leg1 values for that Date,Stock combination (not just the first-for-each-UiD).
Is that right?
Anyway, you want to perform an aggregation (sum) on multiple columns, and the way to avoid repeating groupby(['Date','Stock']) is to keep one dataframe rather than stitching together two dataframes from two individual aggregate operations. Something like the following (I'll fix it once you confirm this is what you want):
def filter_first_UiD(g):
    # return g.groupby('UiD').first().agg(np.sum)
    return g.groupby('UiD').first().agg({'Quantity':'sum', 'leg1':'sum'})

df1.groupby(['Date','Stock']).apply(filter_first_UiD)
The way I dealt with the last scenario (avoiding the groupby failure when ['Date','Stock'] contains 'UiD' as one of the keys, or when ['Date','Stock'] is replaced by just ['UiD']) is as follows:
>>> keys = ['Date','Stock']
>>> df2 = pd.concat([dfg1['leg1'].sum(),
...                  dfg1['Quantity'].first() if 'UiD' in keys
...                  else dfg1.apply(lambda x: x.groupby('UiD').first())
...                           .groupby(keys)
...                           .apply(lambda x: np.sum(x['Quantity']))],
...                 axis=1)
But a more elegant solution is still an open question.
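One possibly more elegant route (a sketch, not from the thread): "first record per UiD" is just a drop_duplicates on the UiD key, after which each aggregate needs only one groupby, and it keeps working when 'UiD' is itself one of the grouping keys:
keys = ['Date', 'Stock']
uid_keys = keys if 'UiD' in keys else keys + ['UiD']

# Quantity: first row per (keys, UiD), summed per keys; leg1: summed over all rows.
df2 = (df1.drop_duplicates(subset=uid_keys)
          .groupby(keys)['Quantity'].sum()
          .to_frame('Quantity')
          .join(df1.groupby(keys)['leg1'].sum()))
print(df2)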