Pandas dataframe: 'Series' object has no attribute 'stack' on a groupby (more than 1 group)

I'm facing a 'Series' object has no attribute 'stack' error, but it does not always happen on my data set and I have not identified the root cause. Sometimes it works fine, sometimes it fails...
Here is the query, with var_max_num_by_grpby = 50:
df1['counterA'] = (df1.groupby(['id_type', 'start_date', 'freq'], as_index=True)
                      .apply(lambda x: pd.Series(i % var_max_num_by_grpby + 1 for i in range(len(x))))
                      .stack()
                      .values)
I added the .stack() call as a workaround for the case where my groupby has only 1 group...
I'm expecting column 'counterA' to hold a counter that increases from 1 to n and restarts every time a ['id_type', 'start_date', 'freq'] group reaches 50 rows (var_max_num_by_grpby).

Found the issue, hope this can help someone else...
The root cause is that my groupby sometimes returns only 1 group.
When the dataset returns 1 group, calling .stack() fixes the issue.
When the dataset returns more than 1 group, .stack() generates the error 'Series' object has no attribute 'stack' (though not every time).
I just added an if/else based on how many groups my dataset returns, like this:
# number of distinct groups
nb_groupby = len(df1.groupby(['id_type', 'start_date', 'freq']).nunique().reset_index())
print('number of distinct groups =', nb_groupby)

if nb_groupby == 1:
    df1['cpt_lot_50_max'] = (df1.groupby(['id_type', 'start_date', 'freq'], as_index=True)
                                .apply(lambda x: pd.Series(i % var_max_num_by_grpby + 1 for i in range(len(x))))
                                .stack()
                                .values)
else:
    df1['cpt_lot_50_max'] = (df1.groupby(['id_type', 'start_date', 'freq'], as_index=True)
                                .apply(lambda x: pd.Series(i % var_max_num_by_grpby + 1 for i in range(len(x))))
                                .values)
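As a side note, a simpler approach that sidesteps the apply/.stack() issue entirely may be groupby().cumcount(), which numbers the rows within each group no matter how many groups there are. A minimal sketch, not the original poster's code:

df1['counterA'] = (
    df1.groupby(['id_type', 'start_date', 'freq']).cumcount()  # 0..n-1 within each group
    % var_max_num_by_grpby + 1                                  # restart at 1 every 50 rows
)

Because cumcount() returns a Series aligned on df1's index, the result can be assigned directly without stacking or extracting .values.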

Related

Rolling apply lambda function based on condition

I have a dataframe with normalised (to 100) returns for 18 products (columns). I want to apply a lambda function which multiplies the next row by the previous row.
I can do:
df = df.rolling(2).apply(lambda x: (x[0] * x[1]), raw=True)
But some of my columns don't have values on row 1 (they go live on row 4). So I need to either:
1. Have a lambda function that starts only on row 4 yet applies to the entire df. I can create the first 4 rows manually.
2. Since my values are 100 until they go "live", have the lambda function apply only when the value does not equal 100.
I have tried both:
1.
df.iloc[3:, :] = df.iloc[3:, :].rolling(2).apply(lambda x: (x[0] * x[1]), raw=True)
2.
df = df.rolling(2).apply(lambda x: (x[0] * x[1]) if x[0] != 100 else x, raw=True)
But both meet with total failure.
Any advice welcomed - I've spent hours looking through the site and have yet to find any outcome that works for this situation.
So given the lack of responses I came up with a solution where I split my df in 2 parts and appended it back together.
My lambda function was also garbage; I needed something like:
df2 = df.copy()
for i in range(df2.index.size):
    if not i:
        continue
    df2.iloc[i] = df2.iloc[i - 1] * df.iloc[i]
df2
to actually achieve what I was after.
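For what it's worth, the loop above is just a running product of the rows, which pandas can compute directly with cumprod; a minimal sketch assuming df holds the per-row factors to multiply:

# each row becomes the product of itself and all rows above it,
# matching the loop df2.iloc[i] = df2.iloc[i - 1] * df.iloc[i]
df2 = df.cumprod()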

Group by multiple columns with custom function

I do this:
join = lambda x: ' '.join(x)
df_temp = df.groupby(['Id']).agg({'InfoType': join,'InfoLabel1': join, 'InfoLabel2': join})
and it works.
Is there any more efficient way (in terms of lines of code, etc.) to do this?
For example I did this:
df_temp = df.groupby(['Id'])['InfoType', 'InfoLabel1', 'InfoLabel2'].agg(lambda x: ', '.join(x))
but this outputs only the Id and InfoType columns.
P.S.
Hm, it seems that this is now also working:
df_temp = df.groupby(['Id'])['InfoType', 'InfoLabel1', 'InfoLabel2'].agg(lambda x: ', '.join(x))
This happened after I converted all columns to strings - the columns contained NAs too.
I have the impression that pandas was encountering some errors that it was not raising, and was simply outputting only the columns (i.e. 'InfoType') for which it did not encounter errors.
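For the record, on recent pandas versions the multi-column selection has to be a list of column names (the bare multi-name selection shown above was deprecated and later removed), and casting to str guards against NAs breaking the join. A minimal sketch, not necessarily the original setup:

df_temp = (
    df.groupby('Id')[['InfoType', 'InfoLabel1', 'InfoLabel2']]   # select the columns with a list
      .agg(lambda x: ', '.join(x.astype(str)))                   # NaN values become the string 'nan' instead of raising
)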

Error in using Pandas groupby.apply to drop duplication

I have a Pandas data frame which has some duplicate values, not rows. I want to use groupby.apply to remove the duplication. An example is as follows.
df = pd.DataFrame([['a', 1, 1], ['a', 1, 2], ['b', 1, 1]], columns=['A', 'B', 'C'])
   A  B  C
0  a  1  1
1  a  1  2
2  b  1  1
# My function
def get_uniq_t(df):
    if df.shape[0] > 1:
        df['D'] = df.C * 10 + df.B
        df = df[df.D == df.D.max()].drop(columns='D')
    return df
df = df.groupby('A').apply(get_uniq_t)
Then I get the value error message below. The issue seems to be related to creating the new column D: if I create column D outside the function, the code seems to run fine. Can someone help explain what causes the value error?
ValueError: Shape of passed values is (3, 3), indices imply (2, 3)
The problem with your code is that it attempts to modify
the original group.
Another problem is that this function should return a single row,
not a DataFrame.
Change your function to:
def get_uniq_t(df):
    iMax = (df.C * 10 + df.B).idxmax()
    return df.loc[iMax]
Then its application returns:
   A  B  C
A
a  a  1  2
b  b  1  1
Edit following the comment
In my opinion, modifying the original group is not allowed,
as it would indirectly modify the original DataFrame.
At the very least it displays a warning about this and is considered bad practice.
Search the Web for SettingWithCopyWarning for a more extensive description.
My code (get_uniq_t function) does not modify the original group.
It only returns one row from the current group.
The returned row is selected based on which row has the greatest value
of df.C * 10 + df.B. So when you apply this function, the result is a new
DataFrame whose consecutive rows are the results of this function
for consecutive groups.
You can perform an operation equivalent to modification by creating
some new content, e.g. as the result of a groupby instruction,
and then saving it under the same variable that so far held the source
DataFrame.
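As an aside, the same result can be reached without apply by computing the key once and taking the per-group idxmax; a minimal sketch on the same toy data:

import pandas as pd

df = pd.DataFrame([['a', 1, 1], ['a', 1, 2], ['b', 1, 1]], columns=['A', 'B', 'C'])

# index of the row with the largest C*10 + B inside each 'A' group
keep = (df.C * 10 + df.B).groupby(df.A).idxmax()
result = df.loc[keep]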

How to split a column into multiple columns and then count the null values in the new column in SQL or Pandas?

I have a relatively large table with thousands of rows and a few tens of columns. Some columns are metadata and others are numerical values. The problem I have is that some metadata values are incomplete or partial, that is, they are missing the string after a ":". I want to get a count of how many are missing the part after the colon.
If you look at the miniature example below, what I should get is a small table telling me that in group A, MetaData is complete for 2 entries and incomplete (missing the part after ":") for the other 2 entries. Ideally I would also like to get some statistics on SomeValue (count, max, min, etc.).
How do I do it in an SQL query or in Python Pandas?
It might turn out to be simple with some built-in function; however, I am not getting it right.
Data:
Group  MetaData  SomeValue
A      AB:xxx    20
A      AB:       5
A      PQ:yyy    30
A      PQ:       2
Expected output:
Group  MetaDataComplete  Count
A      Yes               2
A      No                2
No reason to use split functions (unless the value can contain a colon character.) I'm just going to assume that the "null" values (not technically the right word) end with :.
select
    "Group",
    case when MetaData like '%:' then 'No' else 'Yes' end as MetaDataComplete,
    count(*) as "Count"
from T
group by "Group", case when MetaData like '%:' then 'No' else 'Yes' end
You could also use right(MetaData, 1) = ':'.
Or supposing that values can contain their own colons, try charindex(':', MetaData) = len(MetaData) if you just want to ask whether the first colon is in the last position.
Here is an example:
In [1]:
import pandas as pd
import numpy as np

# 1 - Create the dataframe
cols = ['Group', 'MetaData', 'SomeValue']
data = [['A', 'AB:xxx', 20],
        ['A', 'AB:', 5],
        ['A', 'PQ:yyy', 30],
        ['A', 'PQ:', 2]]
df = pd.DataFrame(columns=cols, data=data)

# 2 - New data frame with split value columns
new = df["MetaData"].str.split(":", n=1, expand=True)
df["MetaData_1"] = new[0]
df["MetaData_2"] = new[1]

# 3 - Drop the old MetaData column
df.drop(columns=["MetaData"], inplace=True)

# 4 - Replace empty strings by NaN and count them
df.replace('', np.nan, inplace=True)
df.isnull().sum()

Out [1]:
Group         0
SomeValue     0
MetaData_1    0
MetaData_2    2
dtype: int64
From a SQL perspective, performing a split is painful, not to mention that using the split results requires running the query first and then querying the results:
SELECT
    Results.[Group],
    Results.MetaData,
    Results.MetaValue,
    COUNT(Results.MetaValue)
FROM (SELECT
          [Group],
          MetaData,
          SUBSTRING(MetaData, CHARINDEX(':', MetaData) + 1, LEN(MetaData)) AS MetaValue
      FROM VeryLargeTable) AS Results
GROUP BY Results.[Group],
         Results.MetaData,
         Results.MetaValue
If you're just after a count, you could also try the algorithmic approach: just loop over the data and use a regular expression with a negative lookahead.
import pandas as pd
import re

pattern = '.*:(?!.)'   # matches strings where nothing follows the last colon
missing = 0
not_missing = 0
for i in df['MetaData'].tolist():
    match = re.findall(pattern, i)
    if match:
        missing += 1
    else:
        not_missing += 1
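To produce the grouped table from the question (and the SomeValue statistics) in pandas, one option may be to flag completeness and group on it; a minimal, self-contained sketch on the sample data:

import pandas as pd
import numpy as np

df = pd.DataFrame({'Group': ['A', 'A', 'A', 'A'],
                   'MetaData': ['AB:xxx', 'AB:', 'PQ:yyy', 'PQ:'],
                   'SomeValue': [20, 5, 30, 2]})

# rows whose MetaData ends with ':' are considered incomplete
df['MetaDataComplete'] = np.where(df['MetaData'].str.endswith(':'), 'No', 'Yes')

# per-group count of complete/incomplete entries, plus basic SomeValue statistics
summary = (df.groupby(['Group', 'MetaDataComplete'])['SomeValue']
             .agg(['count', 'min', 'max'])
             .reset_index())
print(summary)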

Taking second last observed row

I am new to pandas. I know how to use drop_duplicates to take the last observed row in a dataframe. Is there any way that I can use it to take only the second-last observed row? Or any other way of doing it?
For example:
I would like to go from
df = pd.DataFrame(data={'A':[1,1,1,2,2,2],'B':[1,2,3,4,5,6]}) to
df1 = pd.DataFrame(data={'A':[1,2],'B':[2,5]})
The idea is to group the data by the duplicated column, then check the length of each group. If the length of the group is greater than or equal to 2, you can slice the second element of the group; if the group has a length of one, which means that the value is not duplicated, take index 0, which is the only element in the grouped data.
df.groupby(df['A']).apply(lambda x : x.iloc[1] if len(x) >= 2 else x.iloc[0])
The first answer I think was on the right track, but possibly not quite right. I have extended your data to include 'A' groups with two observations, and an 'A' group with one observation, for the sake of completeness.
import pandas as pd

df = pd.DataFrame(data={'A': [1, 1, 1, 2, 2, 2, 3, 3, 4],
                        'B': [1, 2, 3, 4, 5, 6, 7, 8, 9]})

def user_apply_func(x):
    if len(x) == 2:
        return x.iloc[0]
    if len(x) > 2:
        return x.iloc[-2]
    return

df.groupby('A').apply(user_apply_func)
Out[7]:
     A    B
A
1    1    2
2    2    5
3    3    7
4  NaN  NaN
For your reference, the apply method automatically passes each group's data frame as the first argument.
Also, as you are always going to be reducing each group of data to a single observation, you could also use the agg (aggregate) method. apply is more flexible in terms of the length of the sequences that can be returned, whereas agg must reduce the data to a single value.
df.groupby('A').agg(user_apply_func)
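As a further option, GroupBy.nth can pick the second-to-last row of each group directly; note that it simply drops groups with fewer than two rows instead of filling them with NaN, and depending on the pandas version the group key ends up either in the index or as a regular column. A minimal sketch:

import pandas as pd

df = pd.DataFrame(data={'A': [1, 1, 1, 2, 2, 2], 'B': [1, 2, 3, 4, 5, 6]})

# nth(-2) selects the second-to-last row of each group
second_last = df.groupby('A').nth(-2)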