PySpark lead based on condition - dataframe

I have a dataset such as:
Condition | Date
0 | 2019/01/10
1 | 2019/01/11
0 | 2019/01/15
1 | 2019/01/16
1 | 2019/01/19
0 | 2019/01/23
0 | 2019/01/25
1 | 2019/01/29
1 | 2019/01/30
I would like to get the next value of the date column when condition == 1 was met.
The desired output would be something like:
Condition | Date | Lead
0 | 2019/01/10 | 2019/01/15
1 | 2019/01/11 | 2019/01/16
0 | 2019/01/15 | 2019/01/23
1 | 2019/01/16 | 2019/01/19
1 | 2019/01/19 | 2019/01/29
0 | 2019/01/23 | 2019/01/25
0 | 2019/01/25 | NaN
1 | 2019/01/29 | 2019/01/30
1 | 2019/01/30 | NaN
How can I perform that?
Please keep in mind it's a very large dataset which I will have to partition and group by a UUID, so the solution has to be somewhat performant.

To get the next value of the date column where condition == 1 was met, we can use the first window function with ignorenulls=True over a when() that keeps the date only where the condition is met; evaluated over a forward-looking frame, this emulates a conditional lead.
import sys
from pyspark.sql import functions as func
from pyspark.sql import Window as wd

data_sdf. \
    withColumn('dt_w_cond1_lead',
               func.first(func.when(func.col('cond') == 1, func.col('dt')), ignorenulls=True).
               over(wd.partitionBy().orderBy('dt').rowsBetween(1, sys.maxsize))
               ). \
    show()
# +----+----------+---------------+
# |cond| dt|dt_w_cond1_lead|
# +----+----------+---------------+
# | 0|2019-01-10| 2019-01-11|
# | 1|2019-01-11| 2019-01-16|
# | 0|2019-01-15| 2019-01-16|
# | 1|2019-01-16| 2019-01-19|
# | 1|2019-01-19| 2019-01-29|
# | 0|2019-01-23| 2019-01-29|
# | 0|2019-01-25| 2019-01-29|
# | 1|2019-01-29| 2019-01-30|
# | 1|2019-01-30| null|
# +----+----------+---------------+
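Note that wd.partitionBy() with no columns pulls every row into a single partition, which will not scale to the very large dataset described in the question. A minimal sketch of the same expression scoped by the UUID mentioned there (the column name uuid is an assumption, not from the original):
# 'uuid' is a hypothetical name for the per-group identifier from the question
w = wd.partitionBy('uuid').orderBy('dt').rowsBetween(1, sys.maxsize)

data_sdf.withColumn(
    'dt_w_cond1_lead',
    func.first(func.when(func.col('cond') == 1, func.col('dt')), ignorenulls=True).over(w)
).show()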

You can use the window function lead. As you said in the question, for better performance you will want the window split across more partitions.
Input:
from pyspark.sql import functions as F, Window as W
df = spark.createDataFrame(
    [(0, '2019/01/10'),
     (1, '2019/01/11'),
     (0, '2019/01/15'),
     (1, '2019/01/16'),
     (1, '2019/01/19'),
     (0, '2019/01/23'),
     (0, '2019/01/25'),
     (1, '2019/01/29'),
     (1, '2019/01/30')],
    ['Condition', 'Date'])
Script:
w = W.partitionBy('Condition').orderBy('Date')
df = df.withColumn('Lead', F.lead('Date').over(w))
df.show()
# +---------+----------+----------+
# |Condition| Date| Lead|
# +---------+----------+----------+
# | 0|2019/01/10|2019/01/15|
# | 0|2019/01/15|2019/01/23|
# | 0|2019/01/23|2019/01/25|
# | 0|2019/01/25| null|
# | 1|2019/01/11|2019/01/16|
# | 1|2019/01/16|2019/01/19|
# | 1|2019/01/19|2019/01/29|
# | 1|2019/01/29|2019/01/30|
# | 1|2019/01/30| null|
# +---------+----------+----------+
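On the real dataset, the same window can additionally be partitioned by the UUID mentioned in the question so each group is processed independently. A minimal sketch, with uuid as a hypothetical column name:
# 'uuid' is a hypothetical per-group identifier column from the question
w = W.partitionBy('uuid', 'Condition').orderBy('Date')
df = df.withColumn('Lead', F.lead('Date').over(w))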

Pyspark get rows with max value for a column over a window

I have a dataframe as follows:
| created | id | date |value|
| 1650983874871 | x | 2020-05-08 | 5 |
| 1650367659030 | x | 2020-05-08 | 3 |
| 1639429213087 | x | 2020-05-08 | 2 |
| 1650983874871 | x | 2020-06-08 | 5 |
| 1650367659030 | x | 2020-06-08 | 3 |
| 1639429213087 | x | 2020-06-08 | 2 |
I want to get max of created for every date.
The table should look like :
| created | id | date |value|
| 1650983874871 | x | 2020-05-08 | 5 |
| 1650983874871 | x | 2020-06-08 | 5 |
I tried:
df2 = (
    df
    .groupby(['id', 'date'])
    .agg(
        F.max(F.col('created')).alias('created_max')
    )
)
df3 = df.join(df2, on=['id', 'date'], how='left')
But this is not working as expected.
Can anyone help me?
You need to make two changes.
The join condition needs to include created as well. Here I have changed alias to alias("created") to make the join easier. This will ensure a unique join condition (if there are no duplicate created values).
The join type must be inner.
df2 = (
    df
    .groupby(['id', 'date'])
    .agg(
        F.max(F.col('created')).alias('created')
    )
)
df3 = df.join(df2, on=['id', 'date', 'created'], how='inner')
df3.show()
+---+----------+-------------+-----+
| id| date| created|value|
+---+----------+-------------+-----+
| x|2020-05-08|1650983874871| 5|
| x|2020-06-08|1650983874871| 5|
+---+----------+-------------+-----+
Instead of using the group by and join, you can also use a Window from pyspark.sql:
from pyspark.sql import functions as func
from pyspark.sql.window import Window
df = df\
    .withColumn('max_created', func.max('created').over(Window.partitionBy('date', 'id')))\
    .filter(func.col('created') == func.col('max_created'))\
    .drop('max_created')
Steps:
Get the max value over the Window
Filter the rows where created matches that max timestamp
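Both approaches keep every row that ties for the maximum created within a (date, id) group. If exactly one row per group is needed even when there are ties, a row_number-based variant (not from the original answers, just a sketch) is a common alternative:
from pyspark.sql import functions as func
from pyspark.sql.window import Window

# rank rows within each (date, id) group by created, newest first,
# then keep only the top-ranked row per group
w = Window.partitionBy('date', 'id').orderBy(func.col('created').desc())
df_one_per_group = df\
    .withColumn('rn', func.row_number().over(w))\
    .filter(func.col('rn') == 1)\
    .drop('rn')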

Check the elements in two columns of different dataframes

I have two dataframes.
Df1
Id | Name | Remarks
---------------------
1 | A | Not bad
1 | B | Good
2 | C | Very bad
Df2
Id | Name | Place |Job
-----------------------
1 | A | Can | IT
2 |C | Cbe | CS
4 |L | anc | ME
5 | A | cne | IE
Output
Id | Name | Remarks |Results
------------------------------
1 | A | Not bad |True
1 | B | Good |False
2 | C | Very bad |True
That is, the result should be True if the same Id and Name are present in both dataframes. I tried
df1['Results']=np.where(Df1['id','Name'].isin(Df2['Id','Name']),'true','false')
But it was not successful.
Use DataFrame.merge with the indicator parameter and compare both values:
df = Df1[['id','Name']].merge(Df2[['Id','Name']], indicator='Results', how='left')
df['Results'] = df['Results'].eq('both')
Your solution is possible by comparing the index values created with DataFrame.set_index using Index.isin:
df1['Results']= Df1.set_index(['id','Name']).index.isin(Df2.set_index(['id','Name']).index)
Or compare tuples from both columns:
df1['Results']= Df1[['id','Name']].agg(tuple, 1).isin(Df2[['id','Name']].agg(tuple, 1))
You can easily achieve this by merge as in @jezrael's answer.
You can also achieve it with np.where, a list comprehension and zip like below:
df1['Results']=np.where([str(i)+'_'+str(j)==str(k)+'_'+str(l) for i,j,k,l in zip(Df1['ID'],Df1['Name'],Df2['ID'],Df2['Name'])],True,False)

PySpark join on pipe-separated column

I have two data frames which I want to join. The catch is that one of the tables has a pipe-separated string, and one of the values in that string is what I want to join on. How do I do it in PySpark? Below is an example.
TABLE A has
+-------+--------------------+
|id | name |
+-------+--------------------+
| 613760|123|test|test2 |
| 613740|456|ABC |
| 598946|OMG|567 |
TABLE B has
+-------+--------------------+
|join_id| prod_type|
+-------+--------------------+
| 123 |Direct De |
| 456 |Direct |
| 567 |In |
Expected result: join Table A and Table B when a value inside Table A's pipe-separated name matches Table B's join_id. For instance, for TableA.id 613760 the name contains 123, so I want to join it with Table B's join_id 123; likewise 456 and 567.
Resultant Table
+--------------------+-------+
| name |join_Id|
+-------+------------+-------+
|123|test|test2 |123 |
|456|ABC |456 |
|OMG|567 |567 |
Can someone help me solve this? I am relatively new to PySpark and I am learning.
To solve your problem you need to:
split those pipe-separated strings,
then explode those values into separate rows; posexplode will do that for you: http://spark.apache.org/docs/2.1.0/api/python/pyspark.sql.html#pyspark.sql.functions.posexplode
from there an inner join and
finally a select will do the rest of the trick.
See the code below:
import pyspark.sql.functions as f
#First create the dataframes to test solution
table_A = spark.createDataFrame([(613760, '123|test|test2' ), (613740, '456|ABC'), (598946, 'OMG|567' )], ["id", "name"])
# +-------+--------------------+
# |id | name |
# +-------+--------------------+
# | 613760|123|test|test2 |
# | 613740|456|ABC |
# | 598946|OMG|567 |
table_B = spark.createDataFrame([('123', 'Direct De' ), ('456', 'Direct'), ('567', 'In' )], ["join_id", "prod_type"])
# +-------+--------------------+
# |join_id| prod_type|
# +-------+--------------------+
# | 123 |Direct De |
# | 456 |Direct |
# | 567 |In |
result = table_A \
    .select(
        'name',
        # split takes a regex, so the pipe delimiter must be escaped
        f.posexplode(f.split(f.col('name'), '\|')).alias('pos', 'join_id')) \
    .join(table_B, on='join_id', how='inner') \
    .select('name', 'join_id')
result.show(10, False)
# +--------------+-------+
# |name |join_id|
# +--------------+-------+
# |123|test|test2|123 |
# |456|ABC |456 |
# |OMG|567 |567 |
# +--------------+-------+
Hope that works. As you continue getting better at PySpark, I would recommend going through the functions in pyspark.sql.functions; that will take your skills to the next level.

Using PySpark window functions with conditions to add rows

I need to be able to add new rows to a PySpark df with values based upon the contents of other rows with a common id. There will eventually be millions of ids with lots of rows for each id. I have tried the below method, which works but seems overly complicated.
I start with a df in the format below (but in reality have more columns):
+-------+----------+-------+
| id | variable | value |
+-------+----------+-------+
| 1 | varA | 30 |
| 1 | varB | 1 |
| 1 | varC | -9 |
+-------+----------+-------+
Currently I am pivoting this df to get it in the following format:
+-----+------+------+------+
| id | varA | varB | varC |
+-----+------+------+------+
| 1 | 30 | 1 | -9 |
+-----+------+------+------+
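For reference, a minimal sketch of what this pivot step might look like (not part of the original question; it assumes the long-format df shown first):
from pyspark.sql import functions as F

# one row per id, one column per variable
df_wide = df.groupBy('id').pivot('variable').agg(F.first('value'))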
On this df I can then use the standard withColumn and when functionality to add new columns based on the values in other columns. For example:
df = df.withColumn("varD", when((col("varA") > 16) & (col("varC") != -9)), 2).otherwise(1)
Which leads to:
+-----+------+------+------+------+
| id | varA | varB | varC | varD |
+-----+------+------+------+------+
| 1 | 30 | 1 | -9 | 1 |
+-----+------+------+------+------+
I can then pivot this df back to the original format leading to this:
+-------+----------+-------+
| id | variable | value |
+-------+----------+-------+
| 1 | varA | 30 |
| 1 | varB | 1 |
| 1 | varC | -9 |
| 1 | varD | 1 |
+-------+----------+-------+
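A sketch of how the unpivot back to long format might look, using Spark SQL's stack (again, not from the original question):
# stack(n, label1, col1, label2, col2, ...) turns the wide columns back into rows
df_long = df_wide.select(
    'id',
    F.expr("stack(4, 'varA', varA, 'varB', varB, 'varC', varC, 'varD', varD) as (variable, value)")
)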
This works but seems like it could, with millions of rows, lead to expensive and unnecessary operations. It feels like it should be doable without the need to pivot and unpivot the data. Do I need to do this?
I have read about Window functions and it sounds as if they may be another way to achieve the same result but to be honest I am struggling to get started with them. I can see how they can be used to generate a value, say a sum, for each id, or to find a maximum value but have not found a way to even get started on applying complex conditions that lead to a new row.
Any help to get started with this problem would be gratefully received.
You can use a pandas_udf for adding/deleting rows/columns on grouped data, and implement your processing logic in the pandas UDF.
import pandas as pd
import pyspark.sql.functions as F
from pyspark.sql.types import StructType, StructField, IntegerType, StringType

row_schema = StructType(
    [StructField("id", IntegerType(), True),
     StructField("variable", StringType(), True),
     StructField("value", IntegerType(), True)]
)

@F.pandas_udf(row_schema, F.PandasUDFType.GROUPED_MAP)
def addRow(pdf):
    val = 1
    if (len(pdf.loc[(pdf['variable'] == 'varA') & (pdf['value'] > 16)]) > 0) & \
       (len(pdf.loc[(pdf['variable'] == 'varC') & (pdf['value'] != -9)]) > 0):
        val = 2
    return pdf.append(pd.Series([1, 'varD', val], index=['id', 'variable', 'value']), ignore_index=True)
df = spark.createDataFrame([[1, 'varA', 30],
                            [1, 'varB', 1],
                            [1, 'varC', -9]
                            ], schema=['id', 'variable', 'value'])
df.groupBy("id").apply(addRow).show()
which results in:
+---+--------+-----+
| id|variable|value|
+---+--------+-----+
| 1| varA| 30|
| 1| varB| 1|
| 1| varC| -9|
| 1| varD| 1|
+---+--------+-----+
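On Spark 3.x the GROUPED_MAP pandas_udf style is deprecated; the same grouped logic can be passed to applyInPandas instead. A minimal sketch reusing addRow and row_schema from above (with the decorator removed so addRow is a plain function):
# Spark 3.x variant: the output schema is supplied directly to applyInPandas
df.groupBy("id").applyInPandas(addRow, schema=row_schema).show()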

Pandas: need to create dataframe for weekly search per event occurrence

If I have this events dataframe df_e below:
|------|------------|-------|
| group| event date | count |
| x123 | 2016-01-06 | 1 |
| | 2016-01-08 | 10 |
| | 2016-02-15 | 9 |
| | 2016-05-22 | 6 |
| | 2016-05-29 | 2 |
| | 2016-05-31 | 6 |
| | 2016-12-29 | 1 |
| x124 | 2016-01-01 | 1 |
...
and also know t0, which is the beginning of time (let's say for x123 it's 2016-01-01), and tN, which is the end of the experiment from another dataframe df_s (2017-05-25), then how can I create the dataframe df_new, which should look like this:
|------|------------|---------------|--------|
| group| obs. weekly| lifetime, week| status |
| x123 | 2016-01-01 | 1 | 1 |
| | 2016-01-08 | 0 | 0 |
| | 2016-01-15 | 0 | 0 |
| | 2016-01-22 | 1 | 1 |
| | 2016-01-29 | 2 | 1 |
...
| | 2017-05-18 | 1 | 1 |
| | 2017-05-25 | 1 | 1 |
...
| x124 | 2017-05-18 | 1 | 1 |
| x124 | 2017-05-25 | 1 | 1 |
Explanation: take t0 and generate rows until tN, one per week. For each row R, check within that group whether an event date falls into R's week; if True, count how long in weeks it lives there and set status = 1 (alive), otherwise set the lifetime and status columns for this R to 0 (dead).
Questions:
1) How to generate dataframes per group given t0 and tN values, e.g. generate [group, obs. weekly, lifetime, status] columns for (tN - t0) / week rows?
2) How to accomplish the construction of such df_new dataframe explained above?
I can begin with this so far =)
import pandas as pd
# 1. generate dataframes per group to get the boundary within `t0` and `tN` from df_s dataframe, where each dataframe has "group, obs, lifetime, status" columns X (tN - t0 / week) rows filled with 0 values.
df_all = pd.concat([df_group1, df_group2])
def do_that(R):
    found_event_row = df_e.iloc[[R.group]]
    # check if found_event_row['date'] falls into R['obs'] week
    # if True, then find how long it's there

df_new = df_all.apply(do_that)
I'm not really sure if I get you, but group one is not related to group two, right? If that's the case, I think what you want is something like this:
import pandas as pd
df_group1 = df_group1.set_index('event date')
df_group1.index = pd.to_datetime(df_group1.index)  # convert the index to datetime so you can 'resample'
df_group1['lifetime, week'] = df_group1.resample('1W').apply(lambda x: yourfunction(x))
df_group1 = df_group1.reset_index()
df_group1['status'] = df_group1.apply(lambda x: 1 if x['lifetime, week'] > 0 else 0, axis=1)
# do the same with group2 and concat to create df_all
I'm not sure how you get 'lifetime, week' but all that's left is creating the function that generates it.
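For question 1 (generating the weekly grid from t0 to tN per group), a minimal pandas sketch; the group name and the t0/tN values are taken from the example in the question, and the rest is an assumption about how the grid is initialised:
import pandas as pd

# one observation date per week from t0 up to tN, anchored at t0
t0, tN = pd.Timestamp('2016-01-01'), pd.Timestamp('2017-05-25')
weeks = pd.date_range(t0, tN, freq='7D')

# grid for a single group, initialised as "dead"; repeat per group and pd.concat
df_grid = pd.DataFrame({
    'group': 'x123',
    'obs. weekly': weeks,
    'lifetime, week': 0,
    'status': 0,
})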