Question: In pandas, when dropping duplicates you can specify which occurrence to keep. Is there an equivalent in Spark DataFrames?
Pandas:
df.sort_values('actual_datetime', ascending=False).drop_duplicates(subset=['scheduled_datetime', 'flt_flightnumber'], keep='first')
Spark dataframe (I use Spark 1.6.0) doesn't have the keep option
df.orderBy(['actual_datetime']).dropDuplicates(subset=['scheduled_datetime', 'flt_flightnumber'])
Imagine scheduled_datetime and flt_flightnumber are columns 6 and 17. By creating keys based on the values of these columns we can also deduplicate:
def get_key(x):
    return "{0}{1}".format(x[6], x[17])

df = df.map(lambda x: (get_key(x), x)).reduceByKey(lambda x, y: x)
but how to specify to keep the first row and get rid of the other duplicates ? What about the last row ?
To everyone saying that dropDuplicates keeps the first occurrence - this is not strictly correct.
dropDuplicates keeps the 'first occurrence' of a sort operation - only if there is 1 partition. See below for some examples.
However this is not practical for most Spark datasets. So I'm also including an example of 'first occurrence' drop duplicates operation using Window function + sort + rank + filter.
See bottom of post for example.
This is tested in Spark 2.4.0 using pyspark.
dropDuplicates examples
import pandas as pd
# generating some example data with pandas, will convert to spark df below
df1 = pd.DataFrame({'col1':range(0,5)})
df1['datestr'] = '2018-01-01'
df2 = pd.DataFrame({'col1':range(0,5)})
df2['datestr'] = '2018-02-01'
df3 = pd.DataFrame({'col1':range(0,5)})
df3['datestr'] = '2018-03-01'
dfall = pd.concat([df1,df2,df3])
print(dfall)
col1 datestr
0 0 2018-01-01
1 1 2018-01-01
2 2 2018-01-01
3 3 2018-01-01
4 4 2018-01-01
0 0 2018-02-01
1 1 2018-02-01
2 2 2018-02-01
3 3 2018-02-01
4 4 2018-02-01
0 0 2018-03-01
1 1 2018-03-01
2 2 2018-03-01
3 3 2018-03-01
4 4 2018-03-01
# first example
# does not give first (based on datestr)
(spark.createDataFrame(dfall)
.orderBy('datestr')
.dropDuplicates(subset = ['col1'])
.show()
)
# dropDuplicates NOT based on occurrence of sorted datestr
+----+----------+
|col1| datestr|
+----+----------+
| 0|2018-03-01|
| 1|2018-02-01|
| 3|2018-02-01|
| 2|2018-02-01|
| 4|2018-01-01|
+----+----------+
# second example
# testing what happens with repartition
(spark.createDataFrame(dfall)
.orderBy('datestr')
.repartition('datestr')
.dropDuplicates(subset = ['col1'])
.show()
)
# dropDuplicates NOT based on occurrence of sorted datestr
+----+----------+
|col1| datestr|
+----+----------+
| 0|2018-02-01|
| 1|2018-01-01|
| 3|2018-02-01|
| 2|2018-02-01|
| 4|2018-02-01|
+----+----------+
#third example
# testing with coalesce(1)
(spark
.createDataFrame(dfall)
.orderBy('datestr')
.coalesce(1)
.dropDuplicates(subset = ['col1'])
.show()
)
# dropDuplicates based on occurrence of sorted datestr
+----+----------+
|col1| datestr|
+----+----------+
| 0|2018-01-01|
| 1|2018-01-01|
| 2|2018-01-01|
| 3|2018-01-01|
| 4|2018-01-01|
+----+----------+
# fourth example
# testing with reverse sort then coalesce(1)
(spark
.createDataFrame(dfall)
.orderBy('datestr', ascending = False)
.coalesce(1)
.dropDuplicates(subset = ['col1'])
.show()
)
# dropDuplicates based on occurrence of sorted datestr
+----+----------+
|col1| datestr|
+----+----------+
| 0|2018-03-01|
| 1|2018-03-01|
| 2|2018-03-01|
| 3|2018-03-01|
| 4|2018-03-01|
+----+----------+
window, sort, rank, filter example
# generating some example data with pandas
df1 = pd.DataFrame({'col1':range(0,5)})
df1['datestr'] = '2018-01-01'
df2 = pd.DataFrame({'col1':range(0,5)})
df2['datestr'] = '2018-02-01'
df3 = pd.DataFrame({'col1':range(0,5)})
df3['datestr'] = '2018-03-01'
dfall = pd.concat([df1,df2,df3])
# into spark df
df_s = (spark.createDataFrame(dfall))
from pyspark.sql import Window
from pyspark.sql.functions import rank, col
window = Window.partitionBy("col1").orderBy("datestr")
(df_s.withColumn('rank', rank().over(window))
.filter(col('rank') == 1)
.drop('rank')
.show()
)
+----+----------+
|col1| datestr|
+----+----------+
| 0|2018-01-01|
| 1|2018-01-01|
| 3|2018-01-01|
| 2|2018-01-01|
| 4|2018-01-01|
+----+----------+
# however this fails if ties/duplicates exist in the windowing partitions
# and so a tie breaker for the 'rank' function must be added
# generating some example data with pandas, will convert to spark df below
df1 = pd.DataFrame({'col1':range(0,5)})
df1['datestr'] = '2018-01-01'
df2 = pd.DataFrame({'col1':range(0,5)})
df2['datestr'] = '2018-01-01' # note duplicates in this dataset
df3 = pd.DataFrame({'col1':range(0,5)})
df3['datestr'] = '2018-03-01'
dfall = pd.concat([df1,df2,df3])
print(dfall)
col1 datestr
0 0 2018-01-01
1 1 2018-01-01
2 2 2018-01-01
3 3 2018-01-01
4 4 2018-01-01
0 0 2018-01-01
1 1 2018-01-01
2 2 2018-01-01
3 3 2018-01-01
4 4 2018-01-01
0 0 2018-03-01
1 1 2018-03-01
2 2 2018-03-01
3 3 2018-03-01
4 4 2018-03-01
# this will fail, since duplicates exist within the window partitions
# and no way to specify ranking style exists in pyspark rank() fn
df_s = spark.createDataFrame(dfall)  # rebuild the spark df from the new data containing duplicates
window = Window.partitionBy("col1").orderBy("datestr")
(df_s.withColumn('rank', rank().over(window))
.filter(col('rank') == 1)
.drop('rank')
.show()
)
+----+----------+
|col1| datestr|
+----+----------+
| 0|2018-01-01|
| 0|2018-01-01|
| 1|2018-01-01|
| 1|2018-01-01|
| 3|2018-01-01|
| 3|2018-01-01|
| 2|2018-01-01|
| 2|2018-01-01|
| 4|2018-01-01|
| 4|2018-01-01|
+----+----------+
# to deal with ties within window partitions, a tiebreaker column is added
from pyspark.sql import Window
from pyspark.sql.functions import rank, col, monotonically_increasing_id
window = Window.partitionBy("col1").orderBy("datestr",'tiebreak')
(df_s
.withColumn('tiebreak', monotonically_increasing_id())
.withColumn('rank', rank().over(window))
.filter(col('rank') == 1).drop('rank','tiebreak')
.show()
)
+----+----------+
|col1| datestr|
+----+----------+
| 0|2018-01-01|
| 1|2018-01-01|
| 3|2018-01-01|
| 2|2018-01-01|
| 4|2018-01-01|
+----+----------+
Use window and row_number functions.
Order by ascending or descending to select first or last.
from pyspark.sql import Window
from pyspark.sql import functions as f
window = Window.partitionBy("col1").orderBy(f.col("datestr").asc())
df_first = (df.withColumn('row', f.row_number().over(window))
              .filter(f.col('row') == 1)
              .drop('row'))
df_first.show()
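The same pattern with a descending sort keeps the last occurrence instead. A minimal sketch, reusing df, f, and Window from above:
# keep the last occurrence per col1 by sorting the window descending
window_desc = Window.partitionBy("col1").orderBy(f.col("datestr").desc())
df_last = (df.withColumn('row', f.row_number().over(window_desc))
             .filter(f.col('row') == 1)
             .drop('row'))
df_last.show()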
I did the following:
dataframe.groupBy("uniqueColumn").min("time")
This groups by the given column and, within each group, picks the minimum time (so the earliest row is kept and the others are discarded). Note that the aggregation itself returns only the grouping column and the minimum time.
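If the other columns are needed as well, one common follow-up (a sketch, reusing the names above) is to join the aggregate back to the original dataframe:
from pyspark.sql import functions as F

# the aggregate keeps only uniqueColumn and the minimum time
firsts = dataframe.groupBy("uniqueColumn").agg(F.min("time").alias("time"))

# join back on both columns to recover the full rows
result = dataframe.join(firsts, on=["uniqueColumn", "time"], how="inner")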
Solution 1:
Add a new row-number column (an incremental column), group on all the columns you are interested in, and keep only the row with the minimum row number within each group (you can include every column in the grouping except the row-number column itself). A sketch is below.
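A minimal sketch of solution 1, assuming the key columns from the question (scheduled_datetime, flt_flightnumber); note that monotonically_increasing_id only reflects the current row order, so sort first if "first" must follow a particular ordering:
from pyspark.sql import Window
from pyspark.sql import functions as F

keys = ["scheduled_datetime", "flt_flightnumber"]  # assumed dedup keys

df_num = df.withColumn("row_num", F.monotonically_increasing_id())
w = Window.partitionBy(*keys)

deduped = (df_num
           .withColumn("min_row", F.min("row_num").over(w))
           .filter(F.col("row_num") == F.col("min_row"))
           .drop("row_num", "min_row"))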
Solution 2:
Turn the dataframe into an RDD (df.rdd), key it on one or more (or all) of the columns, then reduce each group with a lambda that keeps only the row you are interested in and drops the rest, as sketched below.
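A minimal sketch of solution 2, with the keys and the "keep the earliest actual_datetime" rule assumed from the question:
# key each row by the dedup columns
keyed = df.rdd.map(lambda row: ((row['scheduled_datetime'], row['flt_flightnumber']), row))

# keep the row with the smallest actual_datetime per key (flip the comparison to keep the last)
kept = keyed.reduceByKey(lambda a, b: a if a['actual_datetime'] <= b['actual_datetime'] else b)

deduped = spark.createDataFrame(kept.values(), schema=df.schema)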
A friend of mine (Sameer) mentioned that the old solution below didn't work for him:
use the dropDuplicates method; by default it keeps the first occurrence.
You can use a window with row_number:
import pandas as pd
df1 = pd.DataFrame({'col1':range(0,5)})
df1['datestr'] = '2018-01-01'
df2 = pd.DataFrame({'col1':range(0,5)})
df2['datestr'] = '2018-02-01'
df3 = pd.DataFrame({'col1':range(0,5)})
df3['datestr'] = '2018-03-01'
dfall = spark.createDataFrame(pd.concat([df1,df2,df3]))
from pyspark.sql.window import Window
from pyspark.sql.functions import rank, col,row_number
window = Window.partitionBy('col1').orderBy(col('datestr'))
dfall.select('*', row_number().over(window).alias('posicion')).show()
dfall.select('*', row_number().over(window).alias('posicion')).where('posicion ==1').show()
+----+----------+--------+
|col1| datestr|posicion|
+----+----------+--------+
| 0|2018-01-01| 1|
| 0|2018-02-01| 2|
| 0|2018-03-01| 3|
| 1|2018-01-01| 1|
| 1|2018-02-01| 2|
| 1|2018-03-01| 3|
| 3|2018-01-01| 1|
| 3|2018-02-01| 2|
| 3|2018-03-01| 3|
| 2|2018-01-01| 1|
| 2|2018-02-01| 2|
| 2|2018-03-01| 3|
| 4|2018-01-01| 1|
| 4|2018-02-01| 2|
| 4|2018-03-01| 3|
+----+----------+--------+
+----+----------+--------+
|col1| datestr|posicion|
+----+----------+--------+
| 0|2018-01-01| 1|
| 1|2018-01-01| 1|
| 3|2018-01-01| 1|
| 2|2018-01-01| 1|
| 4|2018-01-01| 1|
+----+----------+--------+
I just did something perhaps similar to what you need, using PySpark's drop_duplicates.
The situation is this. I have 2 dataframes (coming from 2 files) which are exactly the same except for 2 columns: file_date (the file date extracted from the file name) and data_date (the row date stamp). Annoyingly, I have rows with the same data_date (and all other column values too) but a different file_date, because they get replicated in every newly arriving file along with one new row.
I needed to capture all rows from the new file, plus the one row left over from the previous file that is not in the new file. The remaining columns to the right of data_date are the same between the two files for the same data_date.
file_1_20190122 - df1
+------------+----------+----------+
|station_code| file_date| data_date|
+------------+----------+----------+
| AGGH|2019-01-22|2019-01-16| <- One row we want to keep where file_date 22nd
| AGGH|2019-01-22|2019-01-17|
| AGGH|2019-01-22|2019-01-18|
| AGGH|2019-01-22|2019-01-19|
| AGGH|2019-01-22|2019-01-20|
| AGGH|2019-01-22|2019-01-21|
| AGGH|2019-01-22|2019-01-22|
file_2_20190123 - df2
+------------+----------+----------+
|station_code| file_date| data_date|
+------------+----------+----------+
| AGGH|2019-01-23|2019-01-17| \/ ALL rows we want to keep where file_date 23rd
| AGGH|2019-01-23|2019-01-18|
| AGGH|2019-01-23|2019-01-19|
| AGGH|2019-01-23|2019-01-20|
| AGGH|2019-01-23|2019-01-21|
| AGGH|2019-01-23|2019-01-22|
| AGGH|2019-01-23|2019-01-23|
This will require us to sort and concat df's, then deduplicate them on all columns but one.
Let me walk you through.
union_df = df1.union(df2) \
.sort(['station_code', 'data_date'], ascending=[True, True])
+------------+----------+----------+
|station_code| file_date| data_date|
+------------+----------+----------+
| AGGH|2019-01-22|2019-01-16| <- keep
| AGGH|2019-01-23|2019-01-17| <- keep
| AGGH|2019-01-22|2019-01-17| x- drop
| AGGH|2019-01-22|2019-01-18| x- drop
| AGGH|2019-01-23|2019-01-18| <- keep
| AGGH|2019-01-23|2019-01-19| <- keep
| AGGH|2019-01-22|2019-01-19| x- drop
| AGGH|2019-01-23|2019-01-20| <- keep
| AGGH|2019-01-22|2019-01-20| x- drop
| AGGH|2019-01-22|2019-01-21| x- drop
| AGGH|2019-01-23|2019-01-21| <- keep
| AGGH|2019-01-23|2019-01-22| <- keep
| AGGH|2019-01-22|2019-01-22| x- drop
| AGGH|2019-01-23|2019-01-23| <- keep
Here we drop the already-sorted duplicate rows, deduplicating on every column except file_date.
nonduped_union_df = union_df \
.drop_duplicates(['station_code', 'data_date', 'time_zone',
'latitude', 'longitude', 'elevation',
'highest_temperature', 'lowest_temperature',
'highest_temperature_10_year_normal',
'another_50_columns'])
And the result holds ONE row with the earliest date from df1 that is not in df2, plus ALL rows from df2:
nonduped_union_df.select(['station_code', 'file_date', 'data_date',
'highest_temperature', 'lowest_temperature']) \
.sort(['station_code', 'data_date'], ascending=[True, True]) \
.show(30)
+------------+----------+----------+-------------------+------------------+
|station_code| file_date| data_date|highest_temperature|lowest_temperature|
+------------+----------+----------+-------------------+------------------+
| AGGH|2019-01-22|2019-01-16| 90| 77| <- df1 22nd
| AGGH|2019-01-23|2019-01-17| 90| 77| \/- df2 23rd
| AGGH|2019-01-23|2019-01-18| 91| 75|
| AGGH|2019-01-23|2019-01-19| 88| 77|
| AGGH|2019-01-23|2019-01-20| 88| 77|
| AGGH|2019-01-23|2019-01-21| 88| 77|
| AGGH|2019-01-23|2019-01-22| 90| 75|
| AGGH|2019-01-23|2019-01-23| 90| 75|
| CWCA|2019-01-22|2019-01-15| 23| -2|
| CWCA|2019-01-23|2019-01-16| 7| -8|
| CWCA|2019-01-23|2019-01-17| 28| -6|
| CWCA|2019-01-23|2019-01-18| 0| -13|
| CWCA|2019-01-23|2019-01-19| 25| -15|
| CWCA|2019-01-23|2019-01-20| -4| -18|
| CWCA|2019-01-23|2019-01-21| 27| -6|
| CWCA|2019-01-22|2019-01-22| 30| 17|
| CWCA|2019-01-23|2019-01-22| 30| 13|
| CWCO|2019-01-22|2019-01-15| 34| 29|
| CWCO|2019-01-23|2019-01-16| 33| 13|
| CWCO|2019-01-22|2019-01-16| 33| 13|
| CWCO|2019-01-22|2019-01-17| 23| 7|
| CWCO|2019-01-23|2019-01-17| 23| 7|
+------------+----------+----------+-------------------+------------------+
only showing top 30 rows
It may not be the best-suited answer for this case, but it's the one that worked for me.
Let me know if you get stuck somewhere.
BTW, if anyone can tell me how to select all columns in a df except one, without listing them in a list, I will be very thankful.
Regards
G
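(Regarding the BTW above: one common way, sketched here with the names from this answer, is a list comprehension over df.columns.)
# deduplicate on every column except file_date, without typing them all out
dedup_cols = [c for c in union_df.columns if c != 'file_date']
nonduped_union_df = union_df.drop_duplicates(dedup_cols)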
I would try this way:
Assuming your data_df looks like this, and we want to keep the rows with the highest value in col1 per datestr:
col1 datestr
0 2018-01-01
1 2018-01-01
2 2018-01-01
3 2018-01-01
4 2018-01-01
0 2018-02-01
1 2018-02-01
2 2018-02-01
3 2018-02-01
4 2018-02-01
0 2018-03-01
1 2018-03-01
2 2018-03-01
3 2018-03-01
4 2018-03-01
you can do:
from pyspark.sql import Window
import pyspark.sql.functions as F
w = Window.partitionBy('datestr')
data_df = (data_df
           .withColumn("max", F.max(F.col("col1")).over(w))
           .where(F.col("max") == F.col("col1"))
           .drop("max"))
this results in:
col1 datestr
4 2018-01-01
4 2018-02-01
4 2018-03-01
Given the below table:
+----+----------+
|col1| datestr|
+----+----------+
| 0|2018-01-01|
| 1|2018-01-01|
| 2|2018-01-01|
| 3|2018-01-01|
| 4|2018-01-01|
| 0|2018-02-01|
| 1|2018-02-01|
| 2|2018-02-01|
| 3|2018-02-01|
| 4|2018-02-01|
| 0|2018-03-01|
| 1|2018-03-01|
| 2|2018-03-01|
| 3|2018-03-01|
| 4|2018-03-01|
+----+----------+
You can do it in two steps:
Group the given table by col1 and pick the min date.
+----+----------+
|col1| datestr|
+----+----------+
| 0|2018-01-01|
| 1|2018-01-01|
| 2|2018-01-01|
| 3|2018-01-01|
| 4|2018-01-01|
+----+----------+
Left join the resultant table with the original table on col1 and min_datestr, as sketched below.
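A sketch of those two steps (the alias is an assumption; aliasing the min back to datestr keeps the join keys simple):
from pyspark.sql import functions as F

# step 1: min datestr per col1, aliased back to 'datestr' so the join key matches
min_dates = df.groupBy("col1").agg(F.min("datestr").alias("datestr"))

# step 2: left join the resultant table with the original table
result = min_dates.join(df, on=["col1", "datestr"], how="left")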
If the dataset isn't large, convert it to a pandas dataframe, drop duplicates keeping last or first, then convert back.
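A minimal sketch of that round trip, assuming the column names from the question and that the data fits in driver memory:
pdf = df.toPandas()
pdf = pdf.sort_values('actual_datetime').drop_duplicates(
    subset=['scheduled_datetime', 'flt_flightnumber'], keep='last')
df_deduped = spark.createDataFrame(pdf)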
Related
I have a sample dataframe:
df = spark.createDataFrame([('name1','id1',1,None,3),('name2','id2',None,2,5)],['NAME','personID','col1','col2','col3'])
My use case has 15 columns
What I would like to do, using case/when and a loop, is add new columns that correspond to each column from the original except the first two. Each new column will hold 1 if the original value is not null, otherwise 0.
I am aiming to get something like below:
+--------+--------+--------+-------+-------+-------+------+------+
|Name | ID | col1 | col2 | col3 | col1_N|col2_N|col3_N|
+--------+--------+--------+-------+-------+-------+------+------+
|name1 | id1 | 1 | Null | 3 | 1 | 0 | 1 |
|name2 | id2 | Null | 2 | 5 | 0 | 1 | 1 |
+--------+--------+--------+-------+-------+-------+------+------+
the first five columns are the original columns, the last three columns will be added with corresponding 1 or 0 from 'col1', 'col2', and 'col3' values.
The last code/s I am working on creates a new one but does not keep the original dataframe values.
from pyspark.sql.functions import when, col

df.select([when(col(c).isNotNull(), 1).otherwise(0).alias(c + '_N') for c in df.columns])
for which I get:
+-------+-------+-------+------+------+
| Name_N| ID_N | col1_N|col2_N|col3_N|
+-------+-------+-------+------+------+
| 1 | 1 | 1 | 0 | 1 |
| 1 | 1 | 0 | 1 | 1 |
+-------+-------+-------+------+------+
The above could have been acceptable but I need to keep the original values of Name and ID columns.
I got a TypeError with the below:
df.select(['*'],[when(col(c).isNotNull(), 1).otherwise(0).alias(c + '_N') for c in df.columns])
TypeError: Invalid argument, not a string or column: ['*'] of type <class 'list'>. For column literals, use 'lit', 'array', 'struct' or 'create_map' function.
I thought selecting all columns first would give me all the columns of the original.
UPDATE:
somehow this worked, but I only get the last column:
for c in df.columns[2:]:
sdf = df.withColumn(c+'_N', when(col(c).isNotNull(),1).otherwise(0))
but this is what I get:
+--------+--------+--------+-------+-------+------+
|Name | ID | col1 | col2 | col3 |col3_N|
+--------+--------+--------+-------+-------+------+
|name1 | id1 | 1 | Null | 3 | 1 |
|name2 | id2 | Null | 2 | 5 | 1 |
+--------+--------+--------+-------+-------+------+
I only got the _N column for the last original column.
Using a list comprehension as shown below will give the expected result.
df.select([col(c) if c in ['NAME', 'personID'] else when(col(c).isNotNull(), 1).otherwise(0).alias(f"{c}_N") for c in df.columns]).show()
+-----+--------+------+------+------+
| NAME|personID|col1_N|col2_N|col3_N|
+-----+--------+------+------+------+
|name1| id1| 1| 0| 1|
|name2| id2| 0| 1| 1|
+-----+--------+------+------+------+
Just fix your 1st approach by specifying a slice of columns and simplifying the boolean condition:
df.select([col(c).isNotNull().cast("integer").alias(c + '_N') for c in df.columns[2:]])
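To also keep the original columns (what the select(['*'], [...]) attempt above was aiming for), the comprehension can be unpacked instead of passed as a second list. A small sketch:
from pyspark.sql.functions import col

df.select(
    '*',
    *[col(c).isNotNull().cast("integer").alias(c + '_N') for c in df.columns[2:]]
).show()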
I have a dataframe as follows:
| created | id | date |value|
| 1650983874871 | x | 2020-05-08 | 5 |
| 1650367659030 | x | 2020-05-08 | 3 |
| 1639429213087 | x | 2020-05-08 | 2 |
| 1650983874871 | x | 2020-06-08 | 5 |
| 1650367659030 | x | 2020-06-08 | 3 |
| 1639429213087 | x | 2020-06-08 | 2 |
I want to get max of created for every date.
The table should look like :
| created | id | date |value|
| 1650983874871 | x | 2020-05-08 | 5 |
| 1650983874871 | x | 2020-06-08 | 5 |
I tried:
df2 = (
    df
    .groupby(['id', 'date'])
    .agg(
        F.max(F.col('created')).alias('created_max')
    )
)
df3 = df.join(df2, on=['id', 'date'], how='left')
But this is not working as expected.
Can anyone help me?
You need to make two changes.
The join condition needs to include created as well. Here I have changed alias to alias("created") to make the join easier. This will ensure a unique join condition (if there are no duplicate created values).
The join type must be inner.
df2 = (
df
.groupby(['id', 'date'])
.agg(
F.max(F.col('created')).alias('created')
)
)
df3 = df.join(df2, on=['id', 'date','created'], how='inner')
df3.show()
+---+----------+-------------+-----+
| id| date| created|value|
+---+----------+-------------+-----+
| x|2020-05-08|1650983874871| 5|
| x|2020-06-08|1650983874871| 5|
+---+----------+-------------+-----+
Instead of using the group by and joining, you can also use the Window in pyspark.sql:
from pyspark.sql import functions as func
from pyspark.sql.window import Window
df = df\
.withColumn('max_created', func.max('created').over(Window.partitionBy('date', 'id')))\
.filter(func.col('created')==func.col('max_created'))\
.drop('max_created')
Steps:
Get the max value based on the Window
Filter the row by using the matched timestamp
I have data like below
---------------------------------------------------|
|Id | DateTime | products |
|--------|-----------------------------|-----------|
| 1| 2017-08-24T00:00:00.000+0000| 1 |
| 1| 2017-08-24T00:00:00.000+0000| 2 |
| 1| 2017-08-24T00:00:00.000+0000| 3 |
| 1| 2016-05-24T00:00:00.000+0000| 1 |
I am using Window.unboundedPreceding, Window.unboundedFollowing as below to get the second most recent datetime.
sorted_times = Window.partitionBy('Id').orderBy(F.col('ModifiedTime').desc()).rangeBetween(Window.unboundedPreceding, Window.unboundedFollowing)
df3 = data.withColumn("second_recent", F.collect_list(F.col('ModifiedTime')).over(sorted_times).getItem(1))
But I get the results as below, getting the second date from the second row, which is the same as the first row:
------------------------------------------------------------------------------
|Id |DateTime | secondtime |Products
|--------|-----------------------------|----------------------------- |--------------
| 1| 2017-08-24T00:00:00.000+0000| 2017-08-24T00:00:00.000+0000 | 1
| 1| 2017-08-24T00:00:00.000+0000| 2017-08-24T00:00:00.000+0000 | 2
| 1| 2017-08-24T00:00:00.000+0000| 2017-08-24T00:00:00.000+0000 | 3
| 1| 2016-05-24T00:00:00.000+0000| 2017-08-24T00:00:00.000+0000 | 1
Please help me find the second latest datetime among the distinct datetimes.
Thanks in advance.
Use collect_set instead of collect_list for no duplicates:
df3 = data.withColumn(
"second_recent",
F.collect_set(F.col('LastModifiedTime')).over(sorted_times)[1]
)
df3.show(truncate=False)
#+-----+----------------------------+--------+----------------------------+
#|VipId|LastModifiedTime |products|second_recent |
#+-----+----------------------------+--------+----------------------------+
#|1 |2017-08-24T00:00:00.000+0000|1 |2016-05-24T00:00:00.000+0000|
#|1 |2017-08-24T00:00:00.000+0000|2 |2016-05-24T00:00:00.000+0000|
#|1 |2017-08-24T00:00:00.000+0000|3 |2016-05-24T00:00:00.000+0000|
#|1 |2016-05-24T00:00:00.000+0000|1 |2016-05-24T00:00:00.000+0000|
#+-----+----------------------------+--------+----------------------------+
Another way by using unordered window and sorting the array before taking second_recent:
from pyspark.sql import functions as F, Window
df3 = data.withColumn(
"second_recent",
F.sort_array(
F.collect_set(F.col('LastModifiedTime')).over(Window.partitionBy('VipId')),
False
)[1]
)
I have the following dataset with the name 'data':
+---------+-------------+------+
| name | subject| mark |
+---------+-------------+------+
| Anna| math| 80|
| Vlad| history| 67|
| Jack| art| 78|
| David| math| 71|
| Monica| art| 65|
| Alex| lit| 59|
| Mark| math| 82|
+---------+-------------+------+
I would like to do a map-reduce job.
The result should look like this or similar:
Anna, David : 1
Anna, Mark : 1
David, Mark : 1
Vlad, None : 1
Jack, Monica: 1
Alex, None : 1
I have tried to do the following:
data_new = data.select(['name', 'subject'])
data_new.show()
+---------+-------------+
| name | subject|
+---------+-------------+
| Anna| math|
| Vlad| history|
| Jack| art|
| David| math|
| Monica| art|
| Alex| lit|
| Mark| math|
+---------+-------------+
data_new.groupBy('name','subject').count().show(10)
However, this command does not give what I need.
You can do a self left join using the subject, get the distinct pairs, and add a column of 1.
import pyspark.sql.functions as F
result = df.alias('t1').join(df.alias('t2'),
F.expr("t1.subject = t2.subject and t1.name != t2.name"),
'left'
).select(
F.concat_ws(
', ',
F.greatest('t1.name', F.coalesce('t2.name', F.lit('None'))),
F.least('t1.name', F.coalesce('t2.name', F.lit('None')))
).alias('pair')
).distinct().withColumn('val', F.lit(1))
result.show()
+------------+---+
| pair|val|
+------------+---+
| Alex, None| 1|
| Anna, David| 1|
| Anna, Mark| 1|
| None, Vlad| 1|
| David, Mark| 1|
|Jack, Monica| 1|
+------------+---+
The process could be:
Group students with the same subject into an array
Call a udf function to create the permutations of the array items
Add a column that shows a number for each subject
Call the explode function to create separate rows for each item in the array
Let's do the steps one by one:
Step 1: Grouping
import pyspark.sql.functions as F
grouped_df = data_new.groupBy('subject').agg(F.collect_set('name').alias('students_array'))
Step 2: udf function
from itertools import permutations
from pyspark.sql.types import ArrayType, StringType
from pyspark.sql import functions as F

def permutation(df_col):
    return sorted(set(permutations(df_col)))

# wrap as a DataFrame-API udf returning an array of arrays of names
permutation_udf = F.udf(permutation, ArrayType(ArrayType(StringType())))

grouped_df = grouped_df.select('*', permutation_udf('students_array').alias('permutations'))
Step 3: Create a new digit value column for each subject
from pyspark.sql import Window
grouped_df = grouped_df.withColumn('subject_no', F.row_number().over(Window.orderBy('subject')))
Step 4: create separate rows
grouped_df.select(grouped_df.subject_no, F.explode(grouped_df.students_array)).show(truncate=False)
I need to be able to add new rows to a PySpark df, with values based upon the contents of other rows with a common id. There will eventually be millions of ids with lots of rows for each id. I have tried the below method, which works but seems overly complicated.
I start with a df in the format below (but in reality have more columns):
+-------+----------+-------+
| id | variable | value |
+-------+----------+-------+
| 1 | varA | 30 |
| 1 | varB | 1 |
| 1 | varC | -9 |
+-------+----------+-------+
Currently I am pivoting this df to get it in the following format:
+-----+------+------+------+
| id | varA | varB | varC |
+-----+------+------+------+
| 1 | 30 | 1 | -9 |
+-----+------+------+------+
On this df I can then use the standard withColumn and when functionality to add new columns based on the values in other columns. For example:
df = df.withColumn("varD", when((col("varA") > 16) & (col("varC") != -9)), 2).otherwise(1)
Which leads to:
+-----+------+------+------+------+
| id | varA | varB | varC | varD |
+-----+------+------+------+------+
| 1 | 30 | 1 | -9 | 1 |
+-----+------+------+------+------+
I can then pivot this df back to the original format leading to this:
+-------+----------+-------+
| id | variable | value |
+-------+----------+-------+
| 1 | varA | 30 |
| 1 | varB | 1 |
| 1 | varC | -9 |
| 1 | varD | 1 |
+-------+----------+-------+
This works but seems like it could, with millions of rows, lead to expensive and unnecessary operations. It feels like it should be doable without the need to pivot and unpivot the data. Do I need to do this?
I have read about Window functions and it sounds as if they may be another way to achieve the same result but to be honest I am struggling to get started with them. I can see how they can be used to generate a value, say a sum, for each id, or to find a maximum value but have not found a way to even get started on applying complex conditions that lead to a new row.
Any help to get started with this problem would be gratefully received.
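For reference, a minimal sketch of the kind of per-id window aggregate described above (finding a maximum value for each id), using the column names from this question:
from pyspark.sql import Window
from pyspark.sql import functions as F

w = Window.partitionBy("id")

# attaches the maximum 'value' seen for each id to every row of that id
df_with_max = df.withColumn("max_value", F.max("value").over(w))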
You can use pandas_udf for adding/deleting rows/columns on grouped data, and implement your processing logic in the pandas udf.
import pandas as pd
import pyspark.sql.functions as F
from pyspark.sql.types import StructType, StructField, IntegerType, StringType

row_schema = StructType(
    [StructField("id", IntegerType(), True),
     StructField("variable", StringType(), True),
     StructField("value", IntegerType(), True)]
)

@F.pandas_udf(row_schema, F.PandasUDFType.GROUPED_MAP)
def addRow(pdf):
    val = 1
    if (len(pdf.loc[(pdf['variable'] == 'varA') & (pdf['value'] > 16)]) > 0) & \
       (len(pdf.loc[(pdf['variable'] == 'varC') & (pdf['value'] != -9)]) > 0):
        val = 2
    return pdf.append(pd.Series([1, 'varD', val], index=['id', 'variable', 'value']), ignore_index=True)
df = spark.createDataFrame([[1, 'varA', 30],
[1, 'varB', 1],
[1, 'varC', -9]
], schema=['id', 'variable', 'value'])
df.groupBy("id").apply(addRow).show()
which results in:
+---+--------+-----+
| id|variable|value|
+---+--------+-----+
| 1| varA| 30|
| 1| varB| 1|
| 1| varC| -9|
| 1| varD| 1|
+---+--------+-----+