How to translate a pandas group by without aggregation to pyspark?

I am trying to convert the following pandas line into pyspark:
df = df.groupby('ID', as_index=False).head(1)
Now, I am familiar with PySpark's df.groupby("col1", "col2") method, as well as the following to get whatever the first element is within a group:
df = df.withColumn("row_num", row_number().over(Window.partitionBy("ID").orderBy("SOME_DATE_COLUMN"))).where(col("row_num") < 2)
However, without an orderBy argument, this grouping and fetching of the first element in each group doesn't work (and I am literally trying to convert from pandas to spark, whatever the pandas line does):
An error occurred while calling o2547.withColumn.
: org.apache.spark.sql.AnalysisException: Window function row_number() requires window to be ordered, please add ORDER BY clause. For example SELECT row_number()(value_expr) OVER (PARTITION BY window_partition ORDER BY window_ordering) from table
Looking at the pandas groupby documentation, I cannot grasp what groupby does without a following sort/agg function applied to the groups; i.e. what is the default order within a group from which .head(1) fetches the first element?

It depends on the order of your pandas dataframe before the groupby. From the pandas groupby documentation:
Sort group keys. Get better performance by turning this off. Note this does not influence the order of observations within each group. Groupby preserves the order of rows within each group.
Converting the pandas behaviour exactly to pyspark is impossible, as pyspark dataframes aren't ordered. But if your data source can provide a row number or something like that, it is possible.
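For example, if the source data does carry a column that encodes the original row order, a minimal sketch could look like the following (the column name source_row_nr is hypothetical, standing in for whatever ordering column your data provides):

from pyspark.sql import Window
from pyspark.sql import functions as F

# Assumption: the data source provides a column ("source_row_nr") that
# reflects the original row order; without such a column there is no
# well-defined "first row per group" in Spark.
w = Window.partitionBy("ID").orderBy("source_row_nr")

first_per_group = (
    df.withColumn("row_num", F.row_number().over(w))
      .where(F.col("row_num") == 1)
      .drop("row_num")
)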

Related

Groupby does return previous df without changing it

import pandas as pd

df = pd.read_csv('../input/tipping/tips.csv')
df_1 = df.groupby(['day', 'time'])
df_1.head()
What am I missing here? It returns the previous dataframe to me without the groupby applied.
We can print it using the following:
df_1 = df.groupby(['day','time']).apply(print)
groupby doesn't work the way you are assuming, by the sounds of it. Calling head on the grouped dataframe returns the first 5 rows of each group, stitched back together in the original row order, which is why the output looks just like the original dataframe. You can use #tlentali's approach to print out each group, but df_1 will not be assigned the grouped dataframe that way; print returns None for every group, so that is all apply collects.
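As a quick illustration (a small sketch using the same tips data; column names are assumed from the standard tips dataset), head on a GroupBy returns a plain DataFrame with the first n rows of every group, kept in the original row order:

import pandas as pd

df = pd.read_csv('../input/tipping/tips.csv')

# first row of every (day, time) group, still a regular DataFrame
first_rows = df.groupby(['day', 'time']).head(1)
print(first_rows)

# compare: number of groups vs. number of rows returned
print(df.groupby(['day', 'time']).ngroups, len(first_rows))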
The way below gives a lot of control over how to show/display the groups and their keys
This might also help you understand more about how the grouped data frame structure in pandas works.
df_1 = df.groupby(['day', 'time'])

# for each (day, time) key and its grouped data
for key, group in df_1:
    # show the (day, time) key
    print(key)
    # display the head of the grouped data
    print(group.head())

Find the average of a column based on another Pandas?

I'm working on a jupyter notebook, and I would like to get the average 'pcnt_change' based on 'day_of_week'. How do I do this?
A simple groupby call would do the trick here.
If df is the pandas dataframe:
df.groupby('day_of_week').mean()
would return a dataframe with the average of all numeric columns, with day_of_week as the index. If you want only certain column(s) to be returned, select only the needed columns before the groupby call, e.g.:
df[['open_price', 'high_price', 'day_of_week']].groupby('day_of_week').mean()
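If only the pcnt_change column from the question is needed, an equivalent sketch (assuming those column names) is to select that single column before taking the mean:

# average pcnt_change per day_of_week, returned as a Series
df.groupby('day_of_week')['pcnt_change'].mean()

# or keep it as a DataFrame with day_of_week as a regular column
df.groupby('day_of_week', as_index=False)['pcnt_change'].mean()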

Ranking pandas df by ordered groups

Consider the following pre-ordered dataframe
import pandas as pd
df = pd.DataFrame({'Item':['Hat','Necklace','Bag','Bag','Necklace','Hat','Hat','Bag','Bag']})
I want to dense_rank the Item column without affecting the current order of the dataframe.
Namely, 'Hat' should be ranked 1, 'Necklace' 2 and 'Bag' 3, following their order of first appearance.
I have tried using
df['bad_rank'] = df['Item'].rank(ascending=False, method='dense').astype(int)
However, this is not what I want since 'Hat' is ranked second and should be ranked first.
I have also coded a dirty answer but I am surprised that there is not a simpler way using the rank method.
ordered_rank = dict(zip(df['Item'].unique(), range(1,len(df['Item'].unique())+1)))
df['good_rank'] = df['Item'].map(ordered_rank)
Would someone be willing to help me out?
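Not an authoritative answer, but one common, simpler alternative is to number the distinct items in order of first appearance, e.g. with pd.factorize or GroupBy.ngroup (a sketch built on the dataframe above):

import pandas as pd

df = pd.DataFrame({'Item': ['Hat', 'Necklace', 'Bag', 'Bag', 'Necklace',
                            'Hat', 'Hat', 'Bag', 'Bag']})

# factorize numbers values by first appearance (0-based), so add 1
df['good_rank'] = pd.factorize(df['Item'])[0] + 1

# equivalent: groupby with sort=False keeps first-appearance order
df['good_rank_alt'] = df.groupby('Item', sort=False).ngroup() + 1

Both leave the row order of the dataframe untouched and give 'Hat' rank 1, 'Necklace' rank 2 and 'Bag' rank 3.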

Filtering within groupBy(x).agg to create averages on different subsets of the Pyspark Dataframe

Normally when I have to make aggregations, I use something like the following code in PySpark:
import pyspark.sql.functions as f
df_aggregate = df.groupBy('id')\
    .agg(f.mean('value_col').alias('value_col_mean'))
Now I actually want to compute the average or mean on multiple subsets of the dataframe df (i.e. on different time windows, for example a mean for the last year, a mean for the last 2 years, etc.). I understand I could do df.filter(f.col(filter_col) >= condition).groupBy.... for every subset, but instead I would prefer to do this in one 'go'.
Is it possible to apply the filtering within the .agg(..) part of PySpark?
Edit
Example data for one id looks like (the real data contains many values for id):
You can put the conditions inside a when statement, and put them all inside .agg:
import pyspark.sql.functions as f
df_aggregate = df.withColumn('value_col', f.regexp_replace('value_col', ',', '.'))\
    .groupBy('id')\
    .agg(f.mean(f.when(last_year_condition, f.col('value_col'))).alias('value_col_mean_last_year'),
         f.mean(f.when(last_two_years_condition, f.col('value_col'))).alias('value_col_mean_last_two_years'))
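For illustration, the two conditions are ordinary boolean column expressions; a sketch assuming df has a date column named date:

import pyspark.sql.functions as f

# Hypothetical conditions, assuming a "date" column exists in df
last_year_condition = f.col('date') >= f.add_months(f.current_date(), -12)
last_two_years_condition = f.col('date') >= f.add_months(f.current_date(), -24)

Rows that fail a condition produce null inside when (there is no otherwise), and mean ignores nulls, so each alias averages only its own subset.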

How to convert pandas dataframe to single index after aggregation?

I have been playing with aggregation in a pandas dataframe. Consider the following dataframe:
df = pd.DataFrame({'a': [1, 2, 3, 4, 5, 6, 7, 8],
                   'batch': ['q', 'q', 'q', 'w', 'w', 'w', 'w', 'e'],
                   'c': [4, 1, 3, 4, 5, 1, 3, 2]})
I have to do aggregation on the batch column with mean for column a and min for column c.
I used the following method to do the aggregation:
agg_dict = {'a':{'a':'mean'},'c':{'c':'min'}}
aggregated_df = df.groupby("batch").agg(agg_dict)
The problem is that I want the final data frame to have the same columns as the original data frame with the slight difference of having the aggregated values present in each of the columns.
The result of the above aggregation is a multi-index data frame, and I am not sure how to convert it to a single-index data frame.
I followed the link: Reverting from multiindex to single index dataframe in pandas. But this didn't work, and the final output was still a multi-index data frame.
It would be great if someone could help.
You can try the following code:
df.groupby('batch').aggregate({'c': 'min', 'a': 'mean'})
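Alternatively (a sketch, not part of the original answer), named aggregation avoids the nested-dict spec entirely and yields single-level columns with the original names, with batch kept as a regular column:

aggregated_df = df.groupby('batch', as_index=False).agg(a=('a', 'mean'),
                                                        c=('c', 'min'))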