I have a table that looks like this:
Name   Post         Like  Share  Comment  Date
-----------------------------------------------
Sita   test data 1  5     2      4        28/4/2015
Munni  test data 2  5     2      5        27/4/2015
Shila  test data 3  1     3      1        22/4/2015
Ram    Test data 4  5     0      5        1/4/2015
Sam    Test data 5  4     0      2        2/4/2015
Jadu   Test data 6  1     5      2        30/3/2015
Madhu  Test data 7  5     0      4        10/4/2015
Now I want my result set like this:
Type          Name   Post         Like  Share  Comment  Date
--------------------------------------------------------------
Today         Sita   test data 1  5     2      4        28/4/2015
Last 7 Days   Sita   test data 1  5     2      4        28/4/2015
Last 7 Days   Munni  test data 2  5     2      5        27/4/2015
Last 7 Days   Shila  test data 3  1     3      1        22/4/2015
Last 30 Days  Sita   test data 1  5     2      4        28/4/2015
Last 30 Days  Munni  test data 2  5     2      5        27/4/2015
Last 30 Days  Shila  test data 3  1     3      1        22/4/2015
Last 30 Days  Ram    Test data 4  5     0      5        1/4/2015
Last 30 Days  Sam    Test data 5  4     0      2        2/4/2015
Last 30 Days  Jadu   Test data 6  1     5      2        30/3/2015
Last 30 Days  Madhu  Test data 7  5     0      4        10/4/2015
Today must have only today's posts. Last 7 Days must have today's posts plus the last 7 days' posts. Last 30 Days must have all posts from the last 30 days.
A couple of unions with different case statements to get the date range would work.
Use UNION ALL and DATEADD:
select 'Today' as Type, Name, Post, [Like], Share, Comment, [Date]
from yourtable
where cast([Date] as date) = cast(getdate() as date)
union all
select 'Last 7 Days' as Type, Name, Post, [Like], Share, Comment, [Date]
from yourtable
where [Date] >= dateadd(day, -7, cast(getdate() as date))
union all
select 'Last 30 Days' as Type, Name, Post, [Like], Share, Comment, [Date]
from yourtable
where [Date] >= dateadd(day, -30, cast(getdate() as date))
The casts keep the comparison to whole days; comparing [Date] directly to getdate() would rarely match, because getdate() includes the current time.
BTW, terrible choice for column names (don't use reserved words).
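If you would rather not repeat the query for each bucket, a single pass joined to a small list of ranges gives the same result. This is a sketch, not part of the original answer, reusing the yourtable placeholder and a table value constructor (SQL Server 2008+):

-- Each row of the VALUES list defines one bucket and how far back it reaches
select r.Type, t.Name, t.Post, t.[Like], t.Share, t.Comment, t.[Date]
from yourtable t
join (values ('Today', 0),
             ('Last 7 Days', 7),
             ('Last 30 Days', 30)) as r(Type, DaysBack)
  on t.[Date] >= dateadd(day, -r.DaysBack, cast(getdate() as date))
order by r.DaysBack, t.[Date] desc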
I'm trying to prepare my data to create a burndown visual. As you can see below, the Rate column isn't simply A - B; it carries forward the previous value when B is NULL.
I've tried some case statements using LAG and SUM, but to no avail.
Some direction on the case statement, or an optimal solution, would be ideal.
For example, this is how my data looks:
ID  A   B
1   20  NULL
2   20  3
3   20  NULL
4   20  7
5   20  NULL
6   20  NULL
7   20  NULL
8   20  5
9   20  7
And I want a rate column that looks like this.
ID  A   B     Rate
1   20  NULL  20
2   20  3     17
3   20  NULL  17
4   20  7     10
5   20  NULL  10
6   20  NULL  10
7   20  NULL  10
8   20  5     5
9   20  7     -2
Thanks to #Larnu for the guidance.
Here is a solution for when your data is partitioned by some group ID and ordered by a date or row ID.
SELECT
    GROUP_ID,
    ROW_ID,
    COL_A,
    COL_B,
    COL_A - SUM(ISNULL(COL_B, 0)) OVER (PARTITION BY GROUP_ID ORDER BY ROW_ID
                                        ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS Rate
FROM yourtable
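Applied to the sample table in the question, which has no group column, the same running sum collapses to a single partition. A sketch, assuming the sample data lives in a table called burndown with columns ID, A and B (names not from the original post):

-- Running total of B (treating NULL as 0) subtracted from A carries the previous Rate forward
SELECT
    ID,
    A,
    B,
    A - SUM(ISNULL(B, 0)) OVER (ORDER BY ID
                                ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS Rate
FROM burndown;

On the nine sample rows this reproduces the Rate column shown above (20, 17, 17, 10, 10, 10, 10, 5, -2).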
I have a view that converts fiscal year periods to calendar periods, creating a new column called "NewPeriod". I would then like to create a date from this "NewPeriod" column using the Date() function: Date(Year, NewPeriod, "1"). I am unable to use NewPeriod in the Date function. Is there a way I can accomplish this in the same view?
SELECT DISTINCT
    company_code,
    Period,
    Year,
    CASE company_code
        WHEN 1 THEN
            CASE Period
                WHEN 4 THEN 1
                WHEN 5 THEN 2
                WHEN 6 THEN 3
                WHEN 7 THEN 4
                WHEN 8 THEN 5
                WHEN 9 THEN 6
                WHEN 10 THEN 7
                WHEN 11 THEN 8
                WHEN 12 THEN 9
                WHEN 1 THEN 10
                WHEN 2 THEN 11
                WHEN 3 THEN 12
                ELSE Period
            END
        ELSE Period
    END AS NewPeriod
FROM
    `table`
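A column alias generally cannot be referenced elsewhere in the same SELECT list, so one common workaround (not shown in the original post) is to wrap the CASE in a derived table or CTE and build the date from the alias in the outer query. A sketch, assuming a BigQuery-style DATE(year, month, day) function and the same view body; NewPeriodDate is a hypothetical name:

-- The inner query computes NewPeriod; the outer query is free to use the alias
SELECT
    company_code,
    Period,
    Year,
    NewPeriod,
    DATE(Year, NewPeriod, 1) AS NewPeriodDate
FROM (
    SELECT DISTINCT
        company_code,
        Period,
        Year,
        CASE company_code
            WHEN 1 THEN
                CASE Period
                    WHEN 4 THEN 1   WHEN 5 THEN 2   WHEN 6 THEN 3
                    WHEN 7 THEN 4   WHEN 8 THEN 5   WHEN 9 THEN 6
                    WHEN 10 THEN 7  WHEN 11 THEN 8  WHEN 12 THEN 9
                    WHEN 1 THEN 10  WHEN 2 THEN 11  WHEN 3 THEN 12
                    ELSE Period
                END
            ELSE Period
        END AS NewPeriod
    FROM `table`
) AS converted;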
Using SQL Server Management Studio. My data set is as below.
ID Days Value Threshold
A 1 10 30
A 2 20 30
A 3 34 30
A 4 25 30
A 5 20 30
B 1 5 15
B 2 10 15
B 3 12 15
B 4 17 15
B 5 20 15
I want to run a query so that, for each ID, only the rows from the point where the threshold is reached onwards are selected. I also want to create a new days column that starts counting at 1 from the first selected row. The expected output for the above dataset looks like this:
ID Days Value Threshold NewDayColumn
A 3 34 30 1
A 4 25 30 2
A 5 20 30 3
B 4 17 15 1
B 5 20 15 2
It doesn't matter if the data goes below the threshold in later rows; I want to count the first row where the threshold is crossed as 1 and keep counting rows for that ID.
Thank you!
You can use window functions for this. Here is one method:
select t.*, row_number() over (partition by id order by days) as newDayColumn
from (select t.*,
             min(case when value > threshold then days end) over (partition by id) as threshold_days
      from t
     ) t
where days >= threshold_days;
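If you want to try it against the sample data first, here is a quick setup sketch (the temp table #t is illustrative, not from the original post):

-- Load the sample rows from the question into a temp table
CREATE TABLE #t (ID varchar(10), Days int, Value int, Threshold int);
INSERT INTO #t (ID, Days, Value, Threshold) VALUES
    ('A', 1, 10, 30), ('A', 2, 20, 30), ('A', 3, 34, 30), ('A', 4, 25, 30), ('A', 5, 20, 30),
    ('B', 1,  5, 15), ('B', 2, 10, 15), ('B', 3, 12, 15), ('B', 4, 17, 15), ('B', 5, 20, 15);

-- Same query as above, pointed at the temp table
select t.*, row_number() over (partition by id order by days) as newDayColumn
from (select t.*,
             min(case when value > threshold then days end) over (partition by id) as threshold_days
      from #t t
     ) t
where days >= threshold_days;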
The table looks like below:
testid stepid serverid duration
1 1 1 10
1 2 1 11
2 1 2 12
2 2 2 13
3 1 1 14
3 2 1 15
4 1 2 16
4 2 2 17
4 tests ran on two servers. Each test has 2 steps. I would like to calculate the average duration of each step across all tests on the two servers, for a given set of test IDs. For example, if the given test IDs are 1 and 2, the final table looks like this:
stepid avg_duration
1 (10 + 12) / 2
2 (11 + 13) / 2
This is just a group by, right?
select stepid, avg(duration)
from t
where testid in (1, 2)
group by stepid;
Note: You might want avg(duration*1.0) if you want "normal" division.
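For reference, applying that note to the sample data gives 11.0 for step 1, i.e. (10 + 12) / 2, and 12.0 for step 2, i.e. (11 + 13) / 2; a sketch using the same placeholder table t:

-- avg over duration * 1.0 forces decimal rather than integer division
select stepid, avg(duration * 1.0) as avg_duration
from t
where testid in (1, 2)
group by stepid
order by stepid;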
If I have data from week 1 to week 52 and I want a 4-week moving average lagged by 1 week, how can I write a SQL query for this? For example, for week 5 I want the week 1-week 4 average, for week 6 the week 2-week 5 average, and so on.
I have the columns week and target_value in table A.
Sample data is like this:
Week target_value
1 20
2 10
3 10
4 20
5 60
6 20
So the output I want will start from week 5, since a full week 1-week 4 window is only available from there.
Output data will look like:
Week Output
5 15 (20+10+10+20)/4=15 Moving Average week1-week4
6 25 (10+10+20+60)/4=25 Moving Average week2-week5
The data is in Hive, but I can move it to Oracle if it is simpler to do this there.
SELECT
    Week,
    (SELECT ISNULL(AVG(B.target_value), A.target_value)
     FROM tblA B
     WHERE B.Week < A.Week
       AND B.Week >= A.Week - 4
    ) AS Moving_Average
FROM tblA A
The ISNULL keeps you from getting a null for your first week since there is no week 0. If you want it to be null, then just leave the ISNULL function out.
If you want it to start at week 5 only, then add the following line to the end of the SQL that I wrote:
WHERE A.Week > 4
Results:
Week Moving_Average
1 20
2 20
3 15
4 13
5 15
6 25
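Since the question mentions Hive (or Oracle) rather than SQL Server, a window-function version may be easier to run there. A sketch, assuming the same tblA with one row per week and no gaps, restricted to week 5 onwards as requested:

-- Average of the 4 rows before the current one = average of weeks (Week-4)..(Week-1)
SELECT Week, Output
FROM (
    SELECT
        Week,
        AVG(target_value) OVER (ORDER BY Week
                                ROWS BETWEEN 4 PRECEDING AND 1 PRECEDING) AS Output
    FROM tblA
) t
WHERE Week >= 5;

On the sample data this gives 15 for week 5 and 25 for week 6, matching the expected output.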