Window & Aggregate functions in PySpark SQL

After the answer by @Vaebhav, I realized the question was not set up correctly, so I have edited it with his code snippet.
I have the following table
from pyspark.sql import functions as F
from pyspark.sql.types import IntegerType, TimestampType, DoubleType

input_str = """
4219,2018-01-01 08:10:00,3.0,50.78,
4216,2018-01-02 08:01:00,5.0,100.84,
4217,2018-01-02 20:00:00,4.0,800.49,
4139,2018-01-03 11:05:00,1.0,400.0,
4170,2018-01-03 09:10:00,2.0,100.0,
4029,2018-01-06 09:06:00,6.0,300.55,
4029,2018-01-06 09:16:00,2.0,310.55,
4217,2018-01-06 09:36:00,5.0,307.55,
1139,2018-01-21 11:05:00,1.0,400.0,
2170,2018-01-21 09:10:00,2.0,100.0,
4218,2018-02-06 09:36:00,5.0,307.55,
4218,2018-02-06 09:36:00,5.0,307.55
""".split(",")

input_values = list(map(lambda x: x.strip() if x.strip() != '' else None, input_str))
cols = list(map(lambda x: x.strip() if x.strip() != 'null' else None,
                "customer_id,timestamp,quantity,price".split(',')))
n = len(input_values)
n_cols = 4
input_list = [tuple(input_values[i:i + n_cols]) for i in range(0, n, n_cols)]

sparkDF = sqlContext.createDataFrame(input_list, cols)
sparkDF = sparkDF.withColumn('customer_id', F.col('customer_id').cast(IntegerType())) \
                 .withColumn('timestamp', F.col('timestamp').cast(TimestampType())) \
                 .withColumn('quantity', F.col('quantity').cast(IntegerType())) \
                 .withColumn('price', F.col('price').cast(DoubleType()))
I want to calculate the aggregates as follows:
| trxn_date  | unique_cust_visits | next_7_day_visits | next_30_day_visits |
| ---------- | ------------------ | ----------------- | ------------------ |
| 2018-01-01 | 1 | 7 | 9 |
| 2018-01-02 | 2 | 6 | 8 |
| 2018-01-03 | 2 | 4 | 6 |
| 2018-01-06 | 2 | 2 | 4 |
| 2018-01-21 | 2 | 2 | 3 |
| 2018-02-06 | 1 | 1 | 1 |
where
trxn_date is the date from the timestamp column,
unique_cust_visits is the unique count of customers per day,
next_7_day_visits is the count of customer visits over a 7-day look-ahead window,
next_30_day_visits is the count of customer visits over a 30-day look-ahead window.
I want to write the code as a single SQL query.

You can achieve this by using a ROW rather than a RANGE frame type; a good explanation can be found here:
ROW - based on physical offsets from the position of the current input row
RANGE - based on logical offsets from the position of the current input row
Also, in your implementation a PARTITION BY clause would be redundant, as it won't create the required frames for a look-ahead.
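To make the distinction concrete, here is a minimal sketch (my own illustration, not part of the original answer) applying both frame types to the same toy data; the ROWS frame counts physical rows and ignores the gap in the ordering column, while the RANGE frame respects it:
from pyspark.sql import SparkSession, Window
import pyspark.sql.functions as F

spark = SparkSession.builder.getOrCreate()

# Three rows, with a gap between day 2 and day 5.
df = spark.createDataFrame([(1, 10), (2, 20), (5, 50)], ["day", "value"])

# ROWS frame: current row plus the next physical row, whatever its day value.
w_rows = Window.orderBy("day").rowsBetween(Window.currentRow, 1)

# RANGE frame: current day plus day + 1 in the ORDER BY values.
w_range = Window.orderBy("day").rangeBetween(Window.currentRow, 1)

df.select(
    "day",
    F.sum("value").over(w_rows).alias("sum_rows"),
    F.sum("value").over(w_range).alias("sum_range"),
).show()
# For day 2: sum_rows = 20 + 50 = 70, but sum_range = 20 because there is no row for day 3.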
Data Preparation
input_str = """
4219,2018-01-02 08:10:00,3.0,50.78,
4216,2018-01-02 08:01:00,5.0,100.84,
4217,2018-01-02 20:00:00,4.0,800.49,
4139,2018-01-03 11:05:00,1.0,400.0,
4170,2018-01-03 09:10:00,2.0,100.0,
4029,2018-01-06 09:06:00,6.0,300.55,
4029,2018-01-06 09:16:00,2.0,310.55,
4217,2018-01-06 09:36:00,5.0,307.55
""".split(",")
input_values = list(map(lambda x: x.strip() if x.strip() != '' else None, input_str))
cols = list(map(lambda x: x.strip() if x.strip() != 'null' else None, "customer_id timestamp quantity price".split('\t')))
n = len(input_values)
n_cols = 4
input_list = [tuple(input_values[i:i+n_cols]) for i in range(0,n,n_cols)]
sparkDF = sql.createDataFrame(input_list,cols)
sparkDF = sparkDF.withColumn('customer_id',F.col('customer_id').cast(IntegerType()))\
.withColumn('timestamp',F.col('timestamp').cast(TimestampType()))\
.withColumn('quantity',F.col('quantity').cast(IntegerType()))\
.withColumn('price',F.col('price').cast(DoubleType()))
sparkDF.show()
+-----------+-------------------+--------+------+
|customer_id| timestamp|quantity| price|
+-----------+-------------------+--------+------+
| 4219|2018-01-02 08:10:00| 3| 50.78|
| 4216|2018-01-02 08:01:00| 5|100.84|
| 4217|2018-01-02 20:00:00| 4|800.49|
| 4139|2018-01-03 11:05:00| 1| 400.0|
| 4170|2018-01-03 09:10:00| 2| 100.0|
| 4029|2018-01-06 09:06:00| 6|300.55|
| 4029|2018-01-06 09:16:00| 2|310.55|
| 4217|2018-01-06 09:36:00| 5|307.55|
+-----------+-------------------+--------+------+
Window Aggregates
sparkDF.createOrReplaceTempView("transactions")
sql.sql("""
SELECT
TO_DATE(timestamp) as trxn_date
,COUNT(DISTINCT customer_id) as unique_cust_visits
,SUM(COUNT(DISTINCT customer_id)) OVER (
ORDER BY 'timestamp'
ROWS BETWEEN CURRENT ROW AND 7 FOLLOWING
) as next_7_day_visits
FROM transactions
GROUP BY 1
""").show()
+----------+------------------+-----------------+
| trxn_date|unique_cust_visits|next_7_day_visits|
+----------+------------------+-----------------+
|2018-01-02| 3| 7|
|2018-01-03| 2| 4|
|2018-01-06| 2| 2|
+----------+------------------+-----------------+

Building upon @Vaebhav's answer, the required query in this case is:
sqlContext.sql("""
SELECT
TO_DATE(timestamp) as trxn_date
,COUNT(DISTINCT customer_id) as unique_cust_visits
,SUM(COUNT(DISTINCT customer_id)) OVER (
ORDER BY CAST(TO_DATE(timestamp) AS TIMESTAMP) DESC
RANGE BETWEEN INTERVAL 7 DAYS PRECEDING AND CURRENT ROW
) as next_7_day_visits
,SUM(COUNT(DISTINCT customer_id)) OVER (
ORDER BY CAST(TO_DATE(timestamp) AS TIMESTAMP) DESC
RANGE BETWEEN INTERVAL 30 DAYS PRECEDING AND CURRENT ROW
) as next_30_day_visits
FROM transactions
GROUP BY 1
ORDER by trxn_date
""").show()
| trxn_date  | unique_cust_visits | next_7_day_visits | next_30_day_visits |
| ---------- | ------------------ | ----------------- | ------------------ |
| 2018-01-01 | 1 | 7 | 9 |
| 2018-01-02 | 2 | 6 | 8 |
| 2018-01-03 | 2 | 4 | 6 |
| 2018-01-06 | 2 | 2 | 4 |
| 2018-01-21 | 2 | 2 | 3 |
| 2018-02-06 | 1 | 1 | 1 |
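For reference, here is a rough DataFrame-API equivalent of the same look-ahead counts (my own sketch, assuming the sparkDF built above; it converts the transaction date to a day number so that rangeBetween can express the 7- and 30-day frames):
from pyspark.sql import Window
import pyspark.sql.functions as F

daily = (sparkDF
         .groupBy(F.to_date("timestamp").alias("trxn_date"))
         .agg(F.countDistinct("customer_id").alias("unique_cust_visits")))

# Day number since epoch, so the RANGE frame can work on an integer offset.
day_nr = F.datediff(F.col("trxn_date"), F.to_date(F.lit("1970-01-01")))

w7 = Window.orderBy(day_nr).rangeBetween(0, 7)    # mirrors INTERVAL 7 DAYS
w30 = Window.orderBy(day_nr).rangeBetween(0, 30)  # mirrors INTERVAL 30 DAYS

(daily
 .withColumn("next_7_day_visits", F.sum("unique_cust_visits").over(w7))
 .withColumn("next_30_day_visits", F.sum("unique_cust_visits").over(w30))
 .orderBy("trxn_date")
 .show())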

Related

PySpark generate missing dates and fill data with previous value

I need help filling in missing values by adding new rows for this case.
This is just an example; in reality I have a lot of rows with different IDs.
Input dataframe:
| ID  | FLAG | DATE       |
| --- | ---- | ---------- |
| 123 | 1 | 01/01/2021 |
| 123 | 0 | 01/02/2021 |
| 123 | 1 | 01/03/2021 |
| 123 | 0 | 01/06/2021 |
| 123 | 0 | 01/08/2021 |
| 777 | 0 | 01/01/2021 |
| 777 | 1 | 01/03/2021 |
So I have a finite set of dates, and I want to cover every date up to the last one for each ID (in the example, for ID = 123: 01/01/2021, 01/02/2021, 01/03/2021, ... until 01/08/2021). Basically I could do a cross join with a calendar, but I don't know how I can fill the missing values with a rule or a filter after the cross join.
Expected output (the generated missing rows are marked in bold):
| ID  | FLAG | DATE       |
| --- | ---- | ---------- |
| 123 | 1 | 01/01/2021 |
| 123 | 0 | 01/02/2021 |
| 123 | 1 | 01/03/2021 |
| **123** | **1** | **01/04/2021** |
| **123** | **1** | **01/05/2021** |
| 123 | 0 | 01/06/2021 |
| **123** | **0** | **01/07/2021** |
| 123 | 0 | 01/08/2021 |
| 777 | 0 | 01/01/2021 |
| **777** | **0** | **01/02/2021** |
| 777 | 1 | 01/03/2021 |
You can first group by id to calculate the max and min dates, then use the sequence function to generate all the dates from min_date to max_date. Finally, join with the original dataframe and fill nulls with the last non-null value per group of id. Here's a complete working example:
Your input dataframe:
from pyspark.sql import Window
import pyspark.sql.functions as F
df = spark.createDataFrame([
    (123, 1, "01/01/2021"), (123, 0, "01/02/2021"),
    (123, 1, "01/03/2021"), (123, 0, "01/06/2021"),
    (123, 0, "01/08/2021"), (777, 0, "01/01/2021"),
    (777, 1, "01/03/2021")
], ["id", "flag", "date"])
Groupby id and generate all possible dates for each id:
all_dates_df = df.groupBy("id").agg(
    F.date_trunc("mm", F.max(F.to_date("date", "dd/MM/yyyy"))).alias("max_date"),
    F.date_trunc("mm", F.min(F.to_date("date", "dd/MM/yyyy"))).alias("min_date")
).select(
    "id",
    F.expr("sequence(min_date, max_date, interval 1 month)").alias("date")
).withColumn(
    "date", F.explode("date")
).withColumn(
    "date", F.date_format("date", "dd/MM/yyyy")
)
Now, left join with df and use last function over a Window partitioned by id to fill null values:
w = Window.partitionBy("id").orderBy("date")
result = all_dates_df.join(df, ["id", "date"], "left").select(
    "id",
    "date",
    *[F.last(F.col(c), ignorenulls=True).over(w).alias(c)
      for c in df.columns if c not in ("id", "date")]
)
result.show()
#+---+----------+----+
#| id| date|flag|
#+---+----------+----+
#|123|01/01/2021| 1|
#|123|01/02/2021| 0|
#|123|01/03/2021| 1|
#|123|01/04/2021| 1|
#|123|01/05/2021| 1|
#|123|01/06/2021| 0|
#|123|01/07/2021| 0|
#|123|01/08/2021| 0|
#|777|01/01/2021| 0|
#|777|01/02/2021| 0|
#|777|01/03/2021| 1|
#+---+----------+----+
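A caveat worth noting (my addition, not part of the original answer): the fill window above orders by the dd/MM/yyyy string, which sorts correctly here only because every date shares the same day of month and year. If the generated dates could fall on arbitrary days, ordering the window by a parsed date is safer:
# Order the fill window by a real date instead of the dd/MM/yyyy string;
# the join/select code stays exactly the same.
w = Window.partitionBy("id").orderBy(F.to_date("date", "dd/MM/yyyy"))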
Alternatively, you can find the range of dates between the DATE value in the current row and the following row, use sequence to generate all the intermediate dates, and then explode this array to fill in values for the missing dates.
from pyspark.sql import functions as F
from pyspark.sql import Window
data = [(123, 1, "01/01/2021",),
        (123, 0, "01/02/2021",),
        (123, 1, "01/03/2021",),
        (123, 0, "01/06/2021",),
        (123, 0, "01/08/2021",),
        (777, 0, "01/01/2021",),
        (777, 1, "01/03/2021",)]

df = spark.createDataFrame(data, ("ID", "FLAG", "DATE",)).withColumn("DATE", F.to_date(F.col("DATE"), "dd/MM/yyyy"))

window_spec = Window.partitionBy("ID").orderBy("DATE")
next_date = F.coalesce(F.lead("DATE", 1).over(window_spec), F.col("DATE") + F.expr("interval 1 month"))
end_date_range = next_date - F.expr("interval 1 month")

df.withColumn("Ranges", F.sequence(F.col("DATE"), end_date_range, F.expr("interval 1 month")))\
  .withColumn("DATE", F.explode("Ranges"))\
  .withColumn("DATE", F.date_format("date", "dd/MM/yyyy"))\
  .drop("Ranges").show(truncate=False)
Output
+---+----+----------+
|ID |FLAG|DATE |
+---+----+----------+
|123|1 |01/01/2021|
|123|0 |01/02/2021|
|123|1 |01/03/2021|
|123|1 |01/04/2021|
|123|1 |01/05/2021|
|123|0 |01/06/2021|
|123|0 |01/07/2021|
|123|0 |01/08/2021|
|777|0 |01/01/2021|
|777|0 |01/02/2021|
|777|1 |01/03/2021|
+---+----+----------+

Joining 2 dataframes pyspark

I am new to PySpark.
I have data in the 2 tables below, and I am using data frames.
Table1:
| Id  | Amount | Date       |
| --- | ------ | ---------- |
| 1 | £100 | 01/04/2021 |
| 1 | £50 | 08/04/2021 |
| 2 | £60 | 02/04/2021 |
| 2 | £20 | 06/05/2021 |
Table2:
| Id  | Status | Date       |
| --- | ------ | ---------- |
| 1 | S1 | 01/04/2021 |
| 1 | S2 | 05/04/2021 |
| 1 | S3 | 10/04/2021 |
| 2 | S1 | 02/04/2021 |
| 2 | S2 | 10/04/2021 |
I need to join those 2 data frames to produce the output below. For every record in Table1, we need to get the record from Table2 that is valid as of that Date, and vice versa. For example, Table1 has £50 for Id=1 on 08/04/2021, but Table2 has a record for Id=1 on 05/04/2021 where the status changed to S2; so for 08/04/2021 the status is S2. I am not sure how to express this in a join condition to get this output.
What's an efficient way of achieving this?
Expected Output:
| Id  | Status | Date       | Amount |
| --- | ------ | ---------- | ------ |
| 1 | S1 | 01/04/2021 | £100 |
| 1 | S2 | 05/04/2021 | £100 |
| 1 | S2 | 08/04/2021 | £50 |
| 1 | S3 | 10/04/2021 | £50 |
| 2 | S1 | 02/04/2021 | £60 |
| 2 | S2 | 10/04/2021 | £60 |
| 2 | S2 | 06/05/2021 | £20 |
Use a full join on Id and Date, then the lag window function to get the values of Status and Amount from the closest preceding Date row:
from pyspark.sql import Window
import pyspark.sql.functions as F
w = Window.partitionBy("Id").orderBy(F.to_date("Date", "dd/MM/yyyy"))
joined_df = df1.join(df2, ["Id", "Date"], "full").withColumn(
    "Status",
    F.coalesce(F.col("Status"), F.lag("Status").over(w))
).withColumn(
    "Amount",
    F.coalesce(F.col("Amount"), F.lag("Amount").over(w))
)
joined_df.show()
#+---+----------+------+------+
#| Id| Date|Amount|Status|
#+---+----------+------+------+
#| 1|01/04/2021| £100| S1|
#| 1|05/04/2021| £100| S2|
#| 1|08/04/2021| £50| S2|
#| 1|10/04/2021| £50| S3|
#| 2|02/04/2021| £60| S1|
#| 2|10/04/2021| £60| S2|
#| 2|06/05/2021| £20| S2|
#+---+----------+------+------+
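If one side of the full join can have several consecutive dates with no match on the other side, lag (which only looks one row back) would leave nulls. A more robust variant (my sketch, reusing the same df1 and df2) forward-fills with last over an unbounded preceding frame:
from pyspark.sql import Window
import pyspark.sql.functions as F

# Frame covering all rows from the start of the partition up to the current row.
w_fill = (Window.partitionBy("Id")
          .orderBy(F.to_date("Date", "dd/MM/yyyy"))
          .rowsBetween(Window.unboundedPreceding, Window.currentRow))

joined_df = df1.join(df2, ["Id", "Date"], "full").select(
    "Id",
    "Date",
    # Carry the most recent non-null value forward within each Id.
    F.last("Amount", ignorenulls=True).over(w_fill).alias("Amount"),
    F.last("Status", ignorenulls=True).over(w_fill).alias("Status"),
)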

Pandas - how to get the minimum value for each row from values across several columns

I have a pandas dataframe in the following structure:
|index | a | b | c | d | e |
| ---- | -- | -- | -- | -- | -- |
|0 | -1 | -2| 5 | 3 | 1 |
How can I get the minimum value for each row using only the positive values in columns a-e?
For the example row above, the minimum of (5,3,1) should be 1 and not (-2).
You can loop over all the rows and apply your condition to each row.
For example:
import pandas as pd

df = pd.DataFrame([{"a": -2, "b": 2, "c": 5}, {"a": 3, "b": 0, "c": -1}])
#    a  b  c
# 0 -2  2  5
# 1  3  0 -1

def my_condition(li):
    # Keep only the non-negative values before taking the minimum.
    li = [i for i in li if i >= 0]
    return min(li)

min_cel = []
for k, r in df.iterrows():
    li = r.to_dict().values()
    min_cel.append(my_condition(li))

df["min"] = min_cel
#    a  b  c  min
# 0 -2  2  5    2
# 1  3  0 -1    0
You can also write the same logic in one line:
df['min'] = df.apply(lambda row: min([i for i in row.to_dict().values() if i >= 0]), axis=1)
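For completeness, a vectorized alternative (my sketch, keeping the same i >= 0 filter as above) avoids the Python-level loop entirely:
import pandas as pd

df = pd.DataFrame([{"a": -2, "b": 2, "c": 5}, {"a": 3, "b": 0, "c": -1}])

# Replace negative values with NaN, then take the row-wise minimum (NaN is skipped).
df["min"] = df.where(df >= 0).min(axis=1)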

Visits during the last 2 years

I have a list of users and the dates of their visits. For every visit, I want to know how many times the user visited over the previous 2 years.
# Create toy example
import pandas as pd
import numpy as np
date_range = pd.date_range(pd.to_datetime('2010-01-01'),
                           pd.to_datetime('2016-01-01'), freq='D')
date_range = np.random.choice(date_range, 8)
visits = {'user': list(np.repeat(1, 4)) + list(np.repeat(2, 4)),
          'time': list(date_range)}
df = pd.DataFrame(visits)
df.sort_values(by=['user', 'time'], axis=0)
df = spark.createDataFrame(df).repartition(1).cache()
df.show()
What I am looking for is something like this:
time user nr_visits_during_2_previous_years
0 2010-02-27 1 0
2 2012-02-21 1 1
3 2013-04-30 1 1
1 2013-06-20 1 2
6 2010-06-23 2 0
4 2011-10-19 2 1
5 2011-11-10 2 2
7 2014-02-06 2 0
Suppose you create a dataframe with these values and you need to check for visits after 2015-01-01.
import pyspark.sql.functions as f
import pyspark.sql.types as t
df = spark.createDataFrame([("2014-02-01", "1"), ("2015-03-01", "2"), ("2017-12-01", "3"),
                            ("2014-05-01", "2"), ("2016-10-12", "1"), ("2016-08-21", "1"),
                            ("2017-07-01", "3"), ("2015-09-11", "1"), ("2016-08-24", "1"),
                            ("2016-04-05", "2"), ("2014-11-19", "3"), ("2016-03-11", "3")],
                           ["date", "id"])
Now, you need to change your date column from StringType to DateType and then filter the rows where the user visited after 2015-01-01.
df2 = df.withColumn("date",f.to_date('date', 'yyyy-MM-dd'))
df3 = df2.where(df2.date >= f.lit('2015-01-01'))
For the last part, just group by the id column and use count to get the number of visits by each user after 2015-01-01:
df3.groupby('id').count().show()
+---+-----+
| id|count|
+---+-----+
| 3| 3|
| 1| 4|
| 2| 2|
+---+-----+
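Note that the question actually asks for a rolling count per visit rather than a count after a fixed cutoff. That rolling version can be expressed with a RANGE frame over a day number (my sketch, assuming the toy df with user and time columns built in the question, and approximating 2 years as 730 days):
import pyspark.sql.functions as F
from pyspark.sql import Window

# Integer day number so that rangeBetween can express "the previous 730 days".
day_nr = F.datediff(F.col("time"), F.to_date(F.lit("1970-01-01")))

# Frame: visits from 730 days before the current one up to, but excluding, it.
w = Window.partitionBy("user").orderBy(day_nr).rangeBetween(-730, -1)

df.withColumn("nr_visits_during_2_previous_years", F.count(F.lit(1)).over(w)).show()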

Create New Column If Statement Based on Duplicate Rows in R

I want to create a new column based on whether or not a row is a duplicate. I have my data ordered by user # and then date. I want the new column to check whether the value in the first column is equal to the value in the row before, and then do the same for the date.
For example, I have the first two columns of data and want to create a boolean flag in the 3rd column indicating whether or not it was a new user on a new day:
User# Date Unique
1 1/1/17 1
1 1/1/17 0
1 1/2/17 1
2 1/1/17 1
3 1/1/17 1
3 1/2/17 1
This may give you what you are looking for:
library(dplyr)

User <- c(1, 1, 1, 2, 3, 3)
Date <- c("1/1/17", "1/1/17", "1/2/17", "1/1/17", "1/1/17", "1/2/17")
df <- data.frame(User, Date, stringsAsFactors = FALSE)

df <- df %>%
  group_by(User, Date) %>%
  mutate(Unique = if_else(duplicated(Date) == FALSE, 1, 0))
There might be a typo in the sample data set as the last row is unique per the given criteria
df1$Unique <- c(1, diff(df1$User) != 0 | diff(df1$Date) != 0)
User Date Unique
1 1 2017-01-01 1
2 1 2017-01-01 0
3 1 2017-01-02 1
4 2 2017-01-01 1
5 3 2017-01-01 1
6 3 2017-01-02 1
Update: if the users are stored as factors, then the following will work:
User <- c(1, 1, 1, 2, 3, 3)
User <- letters[User]
Date <- c("1/1/17", "1/1/17", "1/4/17", "1/1/17", "1/1/17", "1/2/17")
df1 <- data.frame(User, Date)
df1$Date <- as.Date(df1$Date, "%m/%d/%y")
df1$Unique <- c(1, diff(as.numeric(df1$User)) != 0 | diff(df1$Date) > 1)
User Date Unique
1 a 2017-01-01 1
2 a 2017-01-01 0
3 a 2017-01-04 1
4 b 2017-01-01 1
5 c 2017-01-01 1
6 c 2017-01-02 0