Using a temporary extended table to make a sum - SQL

From a given table I want to be able to sum values having the same number (should be easy, right?)
Problem: a given value can be assigned to anywhere from 2 to n consecutive numbers.
For some reason this information is stored in a single row describing the value, the starting number and the ending number, as below.
TABLE A
 id | starting_number | ending_number | value
----+-----------------+---------------+-------
  1 |               2 |             5 |     8
  2 |               0 |             3 |     5
  3 |               4 |             6 |     6
  4 |               7 |             8 |    10
For instance the first row means:
value '8' is assigned to numbers: 2, 3 and 4 (5 is excluded)
So, I would like the following intermediary result table:
TABLE B
 id | number | value
----+--------+-------
  1 |      2 |     8
  1 |      3 |     8
  1 |      4 |     8
  2 |      0 |     5
  2 |      1 |     5
  2 |      2 |     5
  3 |      4 |     6
  3 |      5 |     6
  4 |      7 |    10
so that I can sum 'value' for rows having an identical 'number':
SELECT number, sum(value)
FROM B
GROUP BY number
TABLE C
 number | sum(value)
--------+------------
      2 |         13
      3 |          8
      4 |         14
      0 |          5
      1 |          5
      5 |          6
      7 |         10
I don't know how to do this and didn't find any answer on the web (maybe I'm not searching with the appropriate keywords...).
Any idea?

You can do what you want with generate_series(). So, TableB is basically:
select id, generate_series(starting_number, ending_number - 1, 1) as n, value
from tableA;
Your aggregation is then:
select n, sum(value)
from (select id, generate_series(starting_number, ending_number - 1, 1) as n, value
from tableA
) a
group by n;
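
For readability, the same two steps can be combined with a CTE; this is just a sketch assuming PostgreSQL (where generate_series() is available):

with b as (
    -- expand each range into one row per number
    select id, generate_series(starting_number, ending_number - 1, 1) as n, value
    from tableA
)
select n, sum(value)
from b
group by n
order by n;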

Related

Burndown analysis in SQL Server Management Studio

I'm trying to prepare my data to create a burndown visual. As you can see the Rate column isn't simply A - B, as it carries forward the previous value if B is null.
I've tried some case statements using LAG and sums, but to no avail.
Some direction on the case statement or an optimal solution would be ideal.
For example, this is how my data looks:
ID  A   B
1   20  NULL
2   20  3
3   20  NULL
4   20  7
5   20  NULL
6   20  NULL
7   20  NULL
8   20  5
9   20  7
And I want a rate column that looks like this.
ID  A   B     Rate
1   20  NULL  20
2   20  3     17
3   20  NULL  17
4   20  7     10
5   20  NULL  10
6   20  NULL  10
7   20  NULL  10
8   20  5     5
9   20  7     -2
Thanks to @Larnu for the guidance.
Here is the solution when you have your data partitioned by some group ID and ordered by some data or row ID.
SELECT
    GROUP_ID,
    ROW_ID,
    COL_A,
    COL_B,
    COL_A - SUM(ISNULL(COL_B, 0)) OVER (
        PARTITION BY GROUP_ID
        ORDER BY ROW_ID
        ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
    )
FROM table
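
Applied to the sample data above there is no group column, so the PARTITION BY can be dropped; a minimal sketch (the table name burndown is assumed) would be:

SELECT
    ID,
    A,
    B,
    -- subtract the running total of B (NULLs treated as 0) from A
    A - SUM(ISNULL(B, 0)) OVER (
        ORDER BY ID
        ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
    ) AS Rate
FROM burndown;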

Calculate scattered rows' average in SQL Server?

The table looks like below:
testid  stepid  serverid  duration
1       1       1         10
1       2       1         11
2       1       2         12
2       2       2         13
3       1       1         14
3       2       1         15
4       1       2         16
4       2       2         17
Four tests ran on two servers. Each test has 2 steps. I would like to calculate the average duration of each step across all tests on the 2 servers, for a given set of test ids. For example, if the given test ids are 1 and 2, the final table looks like below:
stepid  avg_duration
1       (10 + 12) / 2
2       (11 + 13) / 2
This is just a group by, right?
select stepid, avg(duration)
from t
where testid in (1, 2)
group by stepid;
Note: You might want avg(duration*1.0) if you want "normal" division.
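For example, if duration is an integer column, a sketch of the decimal variant (keeping the table name t from above) would be:

select stepid, avg(duration * 1.0) as avg_duration  -- multiply by 1.0 to force decimal division
from t
where testid in (1, 2)
group by stepid;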

Possible to group by counts?

I am trying to change something like this:
Index  Record  Time
1      10      100
1      10      200
1      10      300
1      10      400
1      3       500
1      10      600
1      10      700
2      10      800
2      10      900
2      10      1000
3      5       1100
3      5       1200
3      5       1300
into this:
Index  CountSeq  Record  LastTime
1      4         10      400
1      1         3       500
1      2         10      700
2      3         10      1000
3      3         5       1300
I am trying to apply this logic per unique index -- I just included three indexes to show the outcome.
So for a given index I want to combine them by streaks of the same Record. So notice that the first four entries for Index 1 have Record 10, but it is more succinct to say that there were 4 entries with record 10, ending at time 400. Then I repeat the process going forward, in sequence.
In short I am trying to perform a count-grouping over sequential chunks of the same Record, within each index. In other words I am NOT looking for this:
select index, count(*) as countseq, record, max(time) as lasttime
from Table1
group by index,record
Which combines everything by the same record whereas I want them to be separated by sequence breaks.
Is there a way to do this in SQL?
It's hard to solve your problem without a single primary key, so I'll assume you have a primary key column that increases by one on each row (primkey). This query would return the same table with a 'diff' column that has value 1 if the previous primkey row has the same index and record as the current one, and 0 otherwise:
SELECT *,
    IF((SELECT index, record FROM yourTable p2 WHERE p1.primkey = p2.primkey)
     = (SELECT index, record FROM yourTable p2 WHERE p1.primkey - 1 = p2.primkey), 1, 0) AS diff
FROM yourTable p1
If you use a temporary variable that increases each time the IF expression is false, you would get a result like this:
primkey  Index  Record  Time  diff
1        1      10      100   1
2        1      10      200   1
3        1      10      300   1
4        1      10      400   1
5        1      3       500   2
6        1      10      600   3
7        1      10      700   3
8        2      10      800   4
9        2      10      900   4
10       2      10      1000  4
11       3      5       1100  5
12       3      5       1200  5
13       3      5       1300  5
That would solve your problem: you would just add 'diff' to the GROUP BY clause.
Unfortunately I can't test it on SQLite, but you should be able to use variables like this.
It's probably a dirty workaround, but I couldn't find any better way; hope it helps.
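
For illustration, assuming the diff column has been materialized into a subquery or temporary table called tableWithDiff (a hypothetical name), the final aggregation would then be:

select index, record, count(*) as countseq, max(time) as lasttime
from tableWithDiff
group by index, record, diff  -- diff separates the streaks that share the same index and record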

Reorder sort_order in a table with SQLite

I have this table:
id  sort_ord
0   6
1   7
2   2
3   3
4   4
5   5
6   8
Why does this query:
UPDATE table
SET sort_ord=(
SELECT count(*)
FROM table AS pq
WHERE sort_ord<table.sort_ord
ORDER BY sort_ord
)
WHERE(sort_ord>=0)
Produce:
id  sort_ord
0   4
1   5
2   0
3   1
4   2
5   4
6   6
I was expecting every sort_ord value to be reduced by 2.
The behaviour is documented here: https://www.sqlite.org/isolation.html
As that page explains, within a single UPDATE statement the subquery (the SELECT count(*)) is not isolated from the changes the UPDATE itself is making: the rows are updated one by one, and each new evaluation of the subquery reads the rows that have already been rewritten.
So when the row with id 5 (sort_ord = 5) is being updated, ids 0 through 4 already hold their new values, the count of rows with a smaller sort_ord is therefore 4 instead of 3, and that is the value that gets written.
At that moment the table looks like this (the rows marked ** have not been updated yet):
id  sort_ord
0   4
1   5
2   0
3   1
4   2
5   5   **
6   8   **
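
If the goal is to renumber sort_ord without the subquery seeing its own partial updates, one possible workaround (a sketch, not tested; the table name tbl is assumed) is to precompute the counts into a temporary table first and update from that:

-- snapshot the new positions before touching the original table
CREATE TEMP TABLE new_order AS
SELECT id,
       (SELECT count(*) FROM tbl AS pq WHERE pq.sort_ord < tbl.sort_ord) AS new_sort
FROM tbl;

-- apply the precomputed positions
UPDATE tbl
SET sort_ord = (SELECT new_sort FROM new_order WHERE new_order.id = tbl.id);

DROP TABLE new_order;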

MDX: iif condition on the value of dimension

I have one virtual cube consisting of 2 cubes.
Example of the fact table of the 1st cube:
id  object_id  time_id  date_id  state
1   10         2        1        0
2   11         5        1        0
3   10         7        1        1
4   10         3        1        0
5   11         4        1        0
6   11         7        1        1
7   10         8        1        0
8   11         5        1        0
9   10         7        1        1
10  10         9        1        2
Where state is: 0 - Ok, 1 - Down, 2 - Unknown.
For this cube I have one measure, StateCount; it should count states for each object_id.
Here, for example, we have this result:
for 10: 3 times Ok, 2 times Down, 1 time Unknown
for 11: 3 times Ok, 1 time Down
Second cube looks like this:
id  object_id  time_id  date_id  status
1   10         2        1        0
2   11         5        1        0
3   10         7        1        1
4   10         3        1        1
5   11         4        1        1
Where status is: 0 - out, 1 - in. I keep this in StatusDim.
In this table I keep records that should not be counted. If a record has status 1, it must be excluded from the count.
If we intersect these tables and use StateCount, we should receive this result:
for 10: 2 times Ok, 1 time Down, 1 time Unknown
for 11: 2 times Ok, 1 time Down
As far as I know, I must use a calculated member with an IIF condition. Currently I'm trying something like this:
WITH MEMBER [Measures].[StateTimeCountDown] AS (
    IIF(
        [StatusDimDown.DowntimeHierarchy].[DowntimeStatus].CurrentMember.MemberValue <> "in",
        [Measures].[StateTimeCount],
        null
    )
)
The multidimensional way to do this would be to make attributes from your state and status columns (hopefully with user-understandable members, i.e. using "Ok" and not "0"). Then you can just use a normal count measure on the fact tables and slice by these attributes. No need for complex calculation definitions.