In my SQL Server 2008 stored procedure, I have a table variable with RecordID, TotalMinutes, ProcessID.
Declare @tblSum table(RecordID int, TotalMinutes int, ProcessID int)
RecordID is my primary key, TotalMinutes holds the minutes for each record, and ProcessID identifies the process; each process appears multiple times in my data.
Here is an example of my data:
RecordID  TotalMinutes  ProcessID
---------------------------------
1         10            1
2         20            1
3         30            1
4         10            2
5         60            2
6         10            2
7         10            3
8         55            3
9         60            3
10        15            4
My plan is to return the data with a FinalMinutes column that totals TotalMinutes over all rows sharing the same ProcessID, and put it in a new table variable, just like the table below:
RecordID  TotalMinutes  ProcessID  FinalMinutes
-----------------------------------------------
1         10            1          60
2         20            1          60
3         30            1          60
4         10            2          80
5         60            2          80
6         10            2          80
7         10            3          125
8         55            3          125
9         60            3          125
10        15            4          15
I cannot use GROUP BY since it would collapse the result into 4 rows. I need to keep every row and all of its data, and just add a FinalMinutes column in a new table variable.
Here is one way, using the SUM() OVER() windowed aggregate function:
Select *,
       FinalMinutes = sum(TotalMinutes) over (partition by ProcessID)
From yourtable
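If you specifically need the result in a second table variable, a minimal sketch along these lines should work on SQL Server 2008 (the name @tblFinal is just an example, not from the question):
Declare @tblFinal table(RecordID int, TotalMinutes int, ProcessID int, FinalMinutes int)

Insert Into @tblFinal (RecordID, TotalMinutes, ProcessID, FinalMinutes)
Select RecordID,
       TotalMinutes,
       ProcessID,
       sum(TotalMinutes) over (partition by ProcessID)  -- per-process total, repeated on every row
From @tblSum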
I'm trying to prepare my data to create a burndown visual. As you can see below, the Rate column isn't simply A - B; it carries forward the previous value when B is null.
I've tried some CASE statements using LAG and SUM, but to no avail.
Some direction on the CASE statement, or an optimal solution, would be ideal.
For example, this is how my data looks:
ID  A   B
1   20  NULL
2   20  3
3   20  NULL
4   20  7
5   20  NULL
6   20  NULL
7   20  NULL
8   20  5
9   20  7
And I want a rate column that looks like this.
ID  A   B     Rate
1   20  NULL  20
2   20  3     17
3   20  NULL  17
4   20  7     10
5   20  NULL  10
6   20  NULL  10
7   20  NULL  10
8   20  5     5
9   20  7     -2
Thanks to @Larnu for the guidance.
Here is the solution when you have your data partitioned by some group ID and ordered by some date or row ID.
SELECT
    GROUP_ID,
    ROW_ID,
    COL_A,
    COL_B,
    COL_A - SUM(ISNULL(COL_B, 0)) OVER (PARTITION BY GROUP_ID
                                        ORDER BY ROW_ID
                                        ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS Rate
FROM your_table
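Applied directly to the example above, which has no group column, the same pattern looks like this (a sketch; the table name Burndown is just a placeholder for your table of ID, A, B):
SELECT
    ID,
    A,
    B,
    A - SUM(ISNULL(B, 0)) OVER (ORDER BY ID
                                ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS Rate
FROM Burndown
ORDER BY ID;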
My data has the following structure:
ID  Month  Year  Revenue
1   1      20    860
1   2      20    22
1   5      20    339
2   3      20    12098
3   3      20    12
3   4      20    10
3   6      20    9
3   7      20    122
3   8      20    11
There are 1000s of IDs and I want to select a random sample of 100 IDs. So if I randomly select ID 3, I need all rows of data for ID 3. I have to use SQL for this. I welcome any suggestions.
You can use the following queries, which pick 100 IDs at random and then return every row for those IDs.
For SQL Server:
Select t.* from table_name t
where t.ID in (select top 100 ID from table_name group by ID order by NEWID());
For MySQL:
Select t.* from table_name t
join (select ID from table_name group by ID order by RAND() limit 100) r on r.ID = t.ID;
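If you need to reuse the same sample across several queries, one sketch for SQL Server is to materialize the sampled IDs first (table_name and #sampled_ids are placeholder names):
-- Pick 100 random IDs once and keep them in a temp table.
Select top 100 ID
into #sampled_ids
from table_name
group by ID
order by NEWID();

-- Pull every row belonging to the sampled IDs.
Select t.*
from table_name t
join #sampled_ids s on s.ID = t.ID;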
Using SQL Server Management Studio. My data set is as below.
ID  Days  Value  Threshold
A   1     10     30
A   2     20     30
A   3     34     30
A   4     25     30
A   5     20     30
B   1     5      15
B   2     10     15
B   3     12     15
B   4     17     15
B   5     20     15
I want to run a query so that, for each ID, only the rows from the point the threshold is first exceeded onward are selected. Also, I want to create a new days column that starts at 1 at the first selected row. The expected output for the above dataset will look like this:
ID  Days  Value  Threshold  NewDayColumn
A   3     34     30         1
A   4     25     30         2
A   5     20     30         3
B   4     17     15         1
B   5     20     15         2
It doesn't matter if the data drops back below the threshold in later rows; I want to take the first row where the threshold is crossed as 1 and keep counting rows for that ID.
Thank you!
You can use window functions for this. Here is one method:
select t.*,
       row_number() over (partition by id order by days) as newDayColumn
from (select t.*,
             min(case when value > threshold then days end) over (partition by id) as threshold_days
      from t
     ) t
where days >= threshold_days;
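An equivalent sketch uses a running count of threshold hits instead of the minimum day; it needs SQL Server 2012 or later for the ORDER BY inside SUM() OVER(), and the table name t and column names simply follow the example data (switch the comparison to >= if "reached" should include hitting the threshold exactly):
select ID, Days, Value, Threshold,
       row_number() over (partition by ID order by Days) as NewDayColumn
from (select *,
             -- running number of rows so far (per ID) whose Value exceeded Threshold
             sum(case when Value > Threshold then 1 else 0 end)
                 over (partition by ID order by Days
                       rows between unbounded preceding and current row) as hits
      from t
     ) x
where hits > 0;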
I am trying to change something like this:
Index  Record  Time
1      10      100
1      10      200
1      10      300
1      10      400
1      3       500
1      10      600
1      10      700
2      10      800
2      10      900
2      10      1000
3      5       1100
3      5       1200
3      5       1300
into this:
Index  CountSeq  Record  LastTime
1      4         10      400
1      1         3       500
1      2         10      700
2      3         10      1000
3      3         5       1300
I am trying to apply this logic per unique index -- I just included three indexes to show the outcome.
So for a given index I want to combine rows by streaks of the same Record. Notice that the first four entries for Index 1 have Record 10, but it is more succinct to say that there were 4 entries with Record 10, ending at time 400. Then I repeat the process going forward, in sequence.
In short I am trying to perform a count-grouping over sequential chunks of the same Record, within each index. In other words I am NOT looking for this:
select index, count(*) as countseq, record, max(time) as lasttime
from Table1
group by index,record
That combines everything with the same record, whereas I want the groups separated at sequence breaks.
Is there a way to do this in SQL?
It's hard to solve your problem without a single primary key, so I'll assume you have a primary key column that increases with each row (primkey). This query would return the same table with a 'diff' column that has value 1 if the previous primkey row has the same index and record as the current one, and 0 otherwise:
SELECT p1.*,
       IF(EXISTS (SELECT 1
                  FROM yourTable p2
                  WHERE p2.primkey = p1.primkey - 1
                    AND p2.`index` = p1.`index`
                    AND p2.record = p1.record), 1, 0) AS diff
FROM yourTable p1
If you use a temporary variable that increases each time the IF expression returns 0, you would get a result like this:
primkey  Index  Record  Time  diff
1        1      10      100   1
2        1      10      200   1
3        1      10      300   1
4        1      10      400   1
5        1      3       500   2
6        1      10      600   3
7        1      10      700   3
8        2      10      800   4
9        2      10      900   4
10       2      10      1000  4
11       3      5       1100  5
12       3      5       1200  5
13       3      5       1300  5
That would solve your problem; you would just add 'diff' to the GROUP BY clause.
Unfortunately I can't test it on SQLite, but you should be able to use variables like this.
It's probably a dirty workaround, but I couldn't find any better way; hope it helps.
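If your database supports window functions (SQLite 3.25+, MySQL 8+, SQL Server, Postgres), a more standard sketch is the row-number-difference ("gaps and islands") trick; it reuses the Table1 name from the question, and Index is double-quoted since it is a reserved word (adjust the quoting to your engine):
SELECT "Index",
       COUNT(*)  AS CountSeq,
       Record,
       MAX(Time) AS LastTime
FROM (
    SELECT *,
           -- rows in the same unbroken streak of one Record share the same difference
           ROW_NUMBER() OVER (PARTITION BY "Index" ORDER BY Time)
         - ROW_NUMBER() OVER (PARTITION BY "Index", Record ORDER BY Time) AS grp
    FROM Table1
) x
GROUP BY "Index", Record, grp
ORDER BY MIN(Time);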
From a given table I want to be able to sum values having the same number (should be easy, right?)
Problem: a given value can be assigned to anywhere from 2 to n consecutive numbers.
For some reason this information is stored in a single row describing the value, the starting number and the ending number, as below.
TABLE A
id | starting_number | ending_number | value
----+-----------------+---------------+-------
  1 |               2 |             5 |     8
  2 |               0 |             3 |     5
  3 |               4 |             6 |     6
  4 |               7 |             8 |    10
For instance the first row means:
value '8' is assigned to numbers: 2, 3 and 4 (5 is excluded)
So, I would like the following intermediary result table:
TABLE B
id | number | value
----+--------+-------
  1 |      2 |     8
  1 |      3 |     8
  1 |      4 |     8
  2 |      0 |     5
  2 |      1 |     5
  2 |      2 |     5
  3 |      4 |     6
  3 |      5 |     6
  4 |      7 |    10
So I can sum 'value' for elements having an identical 'number':
SELECT number, sum(value)
FROM B
GROUP BY number
TABLE C
number | sum(value)
--------+------------
      2 |         13
      3 |          8
      4 |         14
      0 |          5
      1 |          5
      5 |          6
      7 |         10
I don't know how to do this and didn't find any answer on the web (maybe I wasn't searching with the appropriate keywords...).
Any ideas?
You can do what you want with generate_series(). So, TableB is basically:
select id, generate_series(starting_number, ending_number - 1, 1) as n, value
from tableA;
Your aggregation is then:
select n, sum(value)
from (select id, generate_series(starting_number, ending_number - 1, 1) as n, value
      from tableA
     ) a
group by n;
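If you prefer to keep the set-returning function out of the select list, the same thing can be written with a lateral join (PostgreSQL 9.3+; a sketch reusing the question's table and column names, with made-up aliases):
select s.n as number, sum(a.value) as sum_value
from tableA a
cross join lateral generate_series(a.starting_number, a.ending_number - 1, 1) as s(n)
group by s.n
order by s.n;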