Spark mean of values as a column - dataframe

I am just starting to work with Spark and I have to create a column whose values are based on another data frame. My first data frame has an ID and a start date column, while the other one has a yield value, an acquired date, and an ID. I have to add a new column to the first data frame holding the mean of the yield values from the second data frame whose acquired dates fall within the 30 days before the start date. So the output should look something like this:
Table 1
ID start_date
1 01/12/2018
2 01/11/2019
Table 2
ID yield acquired_date
1 120 05/11/2019
1 100 05/11/2018
1 200 07/11/2018
1 200 08/11/2018
2 350 04/10/2020
2 300 04/10/2019
2 100 05/10/2019
output
ID start_date yield_mean
1 01/12/2018 250
2 01/11/2019 200
Note: the mean only accounts for rows whose acquired date falls within the 30 days before the start date, so row 0 and row 4 of Table 2 are not used.
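A minimal PySpark sketch of one way to do this, assuming the two data frames are named df1 and df2 (hypothetical names) and the dates are strings in the dd/MM/yyyy format shown above: join on ID, keep only the yields acquired in the 30 days before the start date, then average per ID. IDs with no qualifying rows will be absent from the result; left-join it back to df1 if they should appear with null means.

import pyspark.sql.functions as F

# df1: (ID, start_date), df2: (ID, yield, acquired_date) -- hypothetical names
df1 = df1.withColumn("start_date", F.to_date("start_date", "dd/MM/yyyy"))
df2 = df2.withColumn("acquired_date", F.to_date("acquired_date", "dd/MM/yyyy"))

result = (
    df1.join(df2, on="ID")
       # keep only yields acquired within the 30 days before start_date
       .where((F.col("acquired_date") < F.col("start_date")) &
              (F.col("acquired_date") >= F.date_sub(F.col("start_date"), 30)))
       .groupBy("ID", "start_date")
       .agg(F.avg("yield").alias("yield_mean"))
)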

Related

Multiplication of returns by company increasing in time (BHARs)

I have the following DataFrame, organized as panel data. It contains the daily returns of many companies on different days following the IPO date. day_diff represents the days that have passed since the IPO, and return_1 represents the daily individual return for that specific day and company, to which I have already added +1. Each company has its own company_tic and I have about 300 companies. My goal is to calculate the first component of the right-hand side of the BHAR equation, i.e. the cumulative product of the (1 + return) terms, so having results for each day_diff and company_tic, always starting at day 0, up to the last day of data (e.g. from day 0 to day 1, then from day 0 to day 2, from day 0 to day 3, and so on until my last day, which is day 730). I have tried df.groupby(['company_tic', 'day_diff'])['return_1'].expanding().prod() but it doesn't work. Any alternatives?
Index day_diff company_tic return_1
0 0 xyz 1.8914
1 1 xyz 1.0542
2 2 xyz 1.0016
3 0 abc 1.4398
4 1 abc 1.1023
5 2 abc 1.0233
... ... ... ...
[159236 rows x 3 columns]
I'm not sure I fully get what you want, but you might want to use cumprod instead of expanding().prod(). Note that grouping by both company_tic and day_diff puts every row in a group of its own, so there is nothing to accumulate; grouping by company_tic alone lets the product run across days.
Here's what I tried:
df['return_1_prod'] = df.groupby('company_tic')['return_1'].cumprod()
Output:
day_diff company_tic return_1 return_1_prod
0 0 xyz 1.8914 1.891400
1 1 xyz 1.0542 1.993914
2 2 xyz 1.0016 1.997104
3 0 abc 1.4398 1.439800
4 1 abc 1.1023 1.587092
5 2 abc 1.0233 1.624071
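For completeness, a self-contained version built from the sample rows above. The explicit sort is worth stating: cumprod accumulates in row order, so the rows must be ordered by day_diff within each company for the running product to start at day 0.

import pandas as pd

df = pd.DataFrame({
    "day_diff": [0, 1, 2, 0, 1, 2],
    "company_tic": ["xyz", "xyz", "xyz", "abc", "abc", "abc"],
    "return_1": [1.8914, 1.0542, 1.0016, 1.4398, 1.1023, 1.0233],
})

# sort so that cumprod runs from day 0 upward within each company
df = df.sort_values(["company_tic", "day_diff"])
df["return_1_prod"] = df.groupby("company_tic")["return_1"].cumprod()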

Get average of rows grouped by value intervals

I have a table as follows:
ID | Value
1 5
1 1000
1 1500
2 1000
2 1800
3 40
3 1000
3 1200
3 2000
3 2500
I want to obtain the average value for each ID, grouped into buckets of a given range r. For instance, if r=1000, the expected result would be:
ID | Value
1 5
1 1250
2 1400
3 40
3 1100
3 2250
I have seen that this can be done with time intervals as seen here. My question is, how can I perform this type of group by operation for integer/float types?
You could try this way:
SELECT id, AVG(value) AS AvgValue
FROM (SELECT id, value, FLOOR(value/1000) AS value_range FROM yourtable) t
GROUP BY id, value_range
FLOOR (rather than ROUND) is what matches the expected output: FLOOR(1500/1000) = 1, so 1000 and 1500 land in the same bucket for ID 1 and average to 1250. The alias is value_range because range is a reserved word in some dialects.
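For comparison, the same bucketing expressed in pandas (a sketch built from the sample data above; floor division by r reproduces the expected buckets):

import pandas as pd

df = pd.DataFrame({
    "ID":    [1, 1, 1, 2, 2, 3, 3, 3, 3, 3],
    "Value": [5, 1000, 1500, 1000, 1800, 40, 1000, 1200, 2000, 2500],
})

r = 1000
# integer-divide each value by r to get its bucket, then average per (ID, bucket)
out = (df.assign(bucket=df["Value"] // r)
         .groupby(["ID", "bucket"], as_index=False)["Value"]
         .mean())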

Is it possible to set a dynamic window frame bound in the SQL OVER(ROWS BETWEEN ...) clause?

Consider the following table, describing a patient's medication plan. For example, the first row describes that the patient with patient_id = 1 is treated from timestamp 0 to 4. At time = 0, the patient has not yet received any medication (kum_amount_start = 0). At time = 4, the patient has received a cumulative amount of 100 units of a certain drug. It can be assumed that the drug is given at a constant rate. Regarding the first row, this means that the drug is given at a rate of 25 units/h.
patient_id | starttime [h] | endtime [h] | kum_amount_start | kum_amount_end
1          | 0             | 4           | 0                | 100
1          | 4             | 5           | 100              | 300
1          | 5             | 15          | 300              | 550
1          | 15            | 18          | 550              | 700
2          | 0             | 3           | 0                | 150
2          | 3             | 6           | 150              | 350
2          | 6             | 10          | 350              | 700
2          | 10            | 15          | 700              | 1100
2          | 15            | 19          | 1100             | 1500
I want to add the two columns "kum_amount_start_last_6hr" and "kum_amount_end_last_6hr" that describe the amount that has been given within the last 6 hours of the treatment (for the respective timestamps start, end).
I've been stuck on this problem for a while now.
I tried to tackle it with something like this
SUM(kum_amount) OVER (PARTITION BY patient_id ROWS BETWEEN "dynamic window size" PRECEDING AND CURRENT ROW)
but I'm not sure whether this is the right approach.
I would be very happy if you could help me out here, thanks!
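A ROWS frame cannot take a data-dependent bound, but a RANGE frame makes the bound value-based: the offset is measured in units of the ORDER BY column, so "last 6 hours" becomes RANGE BETWEEN 6 PRECEDING AND CURRENT ROW (supported directly in some databases, e.g. PostgreSQL 11+). Below is a sketch of the idea in PySpark, whose rangeBetween mirrors SQL's RANGE frame; it assumes the table above is loaded as a DataFrame named doses (hypothetical) and ignores interpolation for intervals that only partially overlap the 6-hour window.

from pyspark.sql import Window
import pyspark.sql.functions as F

# amount actually given during each interval
doses = doses.withColumn("amount",
                         F.col("kum_amount_end") - F.col("kum_amount_start"))

# value-based frame: all rows of the same patient whose endtime lies
# in [endtime - 6, endtime]
w = Window.partitionBy("patient_id").orderBy("endtime").rangeBetween(-6, 0)

doses = doses.withColumn("kum_amount_end_last_6hr", F.sum("amount").over(w))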

SQL Teradata - create a new column in a query that multiplies a column by 2 if a certain value is true

I have a SQL query that exports 2 columns, cost and months. The months column has a value of either 6 or 12. I want to create a new column that checks the months column: if the value is 6, multiply the cost column by 2; if the value is 12, just copy the same number from the cost column. Sample data:
cost months
100 6
200 12
400 6
expected result:
cost months total
100 6 200
200 12 200
400 6 800
A simple case statement should work:
select
cost,
months,
case when months = 6 then cost * 2
else cost
end as total
from <your table>

PowerPivot formula for row-wise weighted average

I have a table in PowerPivot which contains the logged data of a traffic control camera mounted on a road. This table is filled with the velocity and the number of vehicles that passed this camera during specific time spans (e.g. 14:10 - 15:25). Now I want to know: how can I get the average velocity of cars for a specific hour and list the results in a separate table with 24 rows (hours 0 - 23), where the second column of each row is the weighted average velocity of that hour? A sample of my stat_table data is given below:
count vel hour
----- --- ----
133 96.00237 15
117 91.45705 21
81 81.90521 6
2 84.29946 21
4 77.7841 18
1 140.8766 17
2 56.14951 14
6 71.72839 13
4 64.14309 9
1 60.949 17
1 77.00728 21
133 100.3956 6
109 100.8567 15
54 86.6369 9
1 83.96901 17
10 114.6556 21
6 85.39127 18
1 76.77993 15
3 113.3561 2
3 94.48055 2
In a separate PowerPivot table I have 24 rows and 2 columns, but when I enter my formula, all of the rows get updated with the same number. My formula is:
=sumX(FILTER(stat_table, stat_table[hour]=[hour]), stat_table[count] * stat_table[vel])/sumX(FILTER(stat_table, stat_table[hour]=[hour]), stat_table[count])
Create a new calculated column named "WeightedVelocity" as follows:
WeightedVelocity = [count] * [vel]
Then create a measure "WeightedAverage" as follows:
WeightedAverage = SUM(stat_table[WeightedVelocity]) / SUM(stat_table[count])
Use the measure "WeightedAverage" in the VALUES area of the pivot table and the "hour" column in ROWS to get the desired result. For hour 15, for example, this gives (133*96.00237 + 109*100.8567 + 1*76.77993) / (133 + 109 + 1) ≈ 98.1, rather than the unweighted mean of the three velocities.
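Outside PowerPivot, the same per-hour weighted average is easy to cross-check in pandas; a sketch using the three hour-15 sample rows above:

import pandas as pd

stat = pd.DataFrame({
    "count": [133, 109, 1],
    "vel":   [96.00237, 100.8567, 76.77993],
    "hour":  [15, 15, 15],
})

# weighted average velocity per hour: sum(count * vel) / sum(count)
weighted = ((stat["count"] * stat["vel"]).groupby(stat["hour"]).sum()
            / stat["count"].groupby(stat["hour"]).sum())
print(weighted)  # hour 15 -> ~98.1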