select case when number is distinct then 1 else 0 - sql

I want to get output such that the first time a number appears within an account group it shows 1, and any later occurrence shows 0 (or null). The same logic applies to the other account groups.
The logic I can think of is something like:
select *,
       case when number happens first time then 1 else null end
         over (partition by account order by number)
from table
account  number  expected output
-------  ------  ---------------
abc          20                1
abc          20                0
abc          30                1
def          20                1
def          30                1
def          30                0

Use lag:
select *,
       case when number = lag(number) over (partition by account order by number)
            then 0 else 1
       end as val
from table_name
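
For instance, run against the sample data from the question (a quick sketch; num stands in for number, which is a reserved word in Oracle):

with t (account, num) as (
  select 'abc', 20 from dual union all
  select 'abc', 20 from dual union all
  select 'abc', 30 from dual union all
  select 'def', 20 from dual union all
  select 'def', 30 from dual union all
  select 'def', 30 from dual
)
select t.*,
       -- ordering by num makes duplicates adjacent within each account,
       -- so a repeated value compares equal to its predecessor
       case when num = lag(num) over (partition by account order by num)
            then 0 else 1
       end as val
from t;

This returns 1/0/1 for abc and 1/1/0 for def, matching the expected output above.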

You almost had it!
select account, number,
       case when Row_Number() OVER (partition by account order by number) = 1 THEN 1 END ExpOut
from table

@PaulX's answer is close, but the partitioning isn't quite right. You can do:
-- CTE for sample data
with your_table (account, num) as (
  select 'abc', 20 from dual
  union all select 'abc', 20 from dual
  union all select 'abc', 30 from dual
  union all select 'def', 20 from dual
  union all select 'def', 30 from dual
  union all select 'def', 30 from dual
)
select account, num,
       case when row_number() over (partition by account, num order by null) = 1
            then 1
            else 0
       end as output
from your_table;
ACCOUNT        NUM     OUTPUT
------- ---------- ----------
abc             20          1
abc             20          0
abc             30          1
def             20          1
def             30          1
def             30          0
(adjusted for legal column names; hopefully you don't actually have quoted identifiers...)
If you want nulls rather than zeros, just leave out the else 0 part. This assumes that by 'first' you mean the first row returned in your result set; with only the columns you showed there is no obvious alternative. If you actually have other columns, particularly if any of them drive the ordering of the result set, you can apply the same ordering inside the analytic clause to keep the two consistent.
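
For example, if the result set were ordered by some other column, say a hypothetical entry_date (not in the original data), the same ordering can be reused inside the analytic clause; a sketch:

with your_table (account, num, entry_date) as (
  select 'abc', 20, date '2021-01-01' from dual union all
  select 'abc', 20, date '2021-01-02' from dual union all
  select 'abc', 30, date '2021-01-03' from dual union all
  select 'def', 20, date '2021-01-01' from dual union all
  select 'def', 30, date '2021-01-02' from dual union all
  select 'def', 30, date '2021-01-03' from dual
)
select account, num,
       -- no ELSE branch: repeated (account, num) pairs come back as null rather than 0
       case when row_number() over (partition by account, num order by entry_date) = 1
            then 1
       end as output
from your_table
order by account, entry_date;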

Related

sql pivot multiple and partially similar row values into multiple unknown number of cols

To SELECT multiple CASE-WHEN expressions into a single row per ID, I have used MAX aggregation with GROUP BY.
SELECT table1.IDvar,
MAX(CASE WHEN table2.var1 = 'foo' THEN table2.var2 END) AS condition1,
MAX(CASE WHEN table2.var1 = 'bar' THEN table2.var2 END) AS condition2
FROM table1
FULL JOIN table2 ON table1.IDvar = table2.table1_IDvar
GROUP BY table1.IDvar
However, I have observed that a criterion such as foo in the CASE-WHEN-THEN-END expression may occur multiple times, that is, in multiple rows, each with different values in the columns of interest (THEN column-of-interest END). This means taking the MAX or MIN drops data that may be of interest. It is not known in advance how many rows there are for each value in criteria_col, and thus in the cols_of_interest.
Sample data e.g.:

IDvar_foreign_key  criteria_col  col_of_interest1  col_of_interest2
x1                 foo           01-01-2021        100
x1                 foo           01-06-2021        2000
x1                 foo           01-08-2021        0
x1                 bar           01-08-2021        300
Note: the actual table does contain a unique identifier or primary key.
Q: Are there ways to pivot certain columns/tables in a db schema without possibly dropping some values?
An output something like this:

IDvar_foreign_key  foo_1_col_of_interest1  foo_1_col_of_interest2  foo_2_col_of_interest1  foo_2_col_of_interest2  foo_3_col_of_interest1  foo_3_col_of_interest2
x1                 01-01-2021              100                     01-06-2021              2000                    01-08-2021              0
Edit
@lemon and @MTO suggest dynamic queries are necessary; otherwise I was considering whether not using aggregation would do.
Dynamic Pivot in Oracle's SQL
Pivot rows to columns without aggregate
TSQL Pivot without aggregate function
You can use the MIN and MAX aggregation functions, and to get the correlated minimum and maximum values of col_of_interest2 you can use KEEP (DENSE_RANK ...):
SELECT t1.IDvar,
       MIN(CASE WHEN t2.criteria_col = 'foo' THEN t2.col_of_interest1 END)
         AS foo_1_col_of_interest1,
       -- col_of_interest2 taken from the row where the 'foo' col_of_interest1 is smallest
       MIN(col_of_interest2) KEEP (
         DENSE_RANK FIRST ORDER BY
           CASE WHEN t2.criteria_col = 'foo' THEN t2.col_of_interest1 END
           ASC NULLS LAST
       ) AS foo_1_col_of_interest2,
       MAX(CASE WHEN t2.criteria_col = 'foo' THEN t2.col_of_interest1 END)
         AS foo_2_col_of_interest1,
       -- col_of_interest2 taken from the row where the 'foo' col_of_interest1 is largest
       MAX(col_of_interest2) KEEP (
         DENSE_RANK FIRST ORDER BY
           CASE WHEN t2.criteria_col = 'foo' THEN t2.col_of_interest1 END
           DESC NULLS LAST
       ) AS foo_2_col_of_interest2
FROM   table1 t1
       FULL JOIN table2 t2
         ON t1.IDvar = t2.table1_IDvar
GROUP BY t1.IDvar
Which, for the sample data:
CREATE TABLE table1 ( idvar ) AS
SELECT 1 FROM DUAL UNION ALL
SELECT 2 FROM DUAL UNION ALL
SELECT 3 FROM DUAL;
CREATE TABLE table2 ( table1_idvar, criteria_col, col_of_interest1, col_of_interest2 ) AS
SELECT 1, 'foo', DATE '2021-01-01', 100 FROM DUAL UNION ALL
SELECT 1, 'foo', DATE '2021-03-01', 500 FROM DUAL UNION ALL
SELECT 1, 'foo', DATE '2021-06-01', 2000 FROM DUAL UNION ALL
SELECT 1, 'bar', DATE '2021-06-01', 2000 FROM DUAL UNION ALL
SELECT 2, 'foo', DATE '2021-01-02', 200 FROM DUAL UNION ALL
SELECT 2, 'foo', DATE '2021-03-02', 300 FROM DUAL UNION ALL
SELECT 2, 'bar', DATE '2021-06-02', 400 FROM DUAL UNION ALL
SELECT 3, 'foo', DATE '2021-01-03', 700 FROM DUAL;
Outputs:

IDVAR  FOO_1_COL_OF_INTEREST1  FOO_1_COL_OF_INTEREST2  FOO_2_COL_OF_INTEREST1  FOO_2_COL_OF_INTEREST2
-----  ----------------------  ----------------------  ----------------------  ----------------------
    1  2021-01-01 00:00:00                        100  2021-06-01 00:00:00                       2000
    2  2021-01-02 00:00:00                        200  2021-03-02 00:00:00                        300
    3  2021-01-03 00:00:00                        700  2021-01-03 00:00:00                        700
SQL (not just Oracle) requires each query to have a known, fixed number of columns; if you want a dynamic number of columns then you should perform the pivot in whatever third-party application (Java, C#, PHP, etc.) that you are using to talk to the database.
If you want to pivot a fixed maximum number of columns then you can use the ROW_NUMBER analytic function. For example, if you want the 3 minimum values for col_of_interest1 then you can use:
SELECT idvar,
MAX(CASE WHEN criteria_col = 'foo' AND rn = 1 THEN col_of_interest1 END)
AS foo_1_col_of_interest1,
MAX(CASE WHEN criteria_col = 'foo' AND rn = 1 THEN col_of_interest2 END)
AS foo_1_col_of_interest2,
MAX(CASE WHEN criteria_col = 'foo' AND rn = 2 THEN col_of_interest1 END)
AS foo_2_col_of_interest1,
MAX(CASE WHEN criteria_col = 'foo' AND rn = 2 THEN col_of_interest2 END)
AS foo_2_col_of_interest2,
MAX(CASE WHEN criteria_col = 'foo' AND rn = 3 THEN col_of_interest1 END)
AS foo_3_col_of_interest1,
MAX(CASE WHEN criteria_col = 'foo' AND rn = 3 THEN col_of_interest2 END)
AS foo_3_col_of_interest2
FROM (
SELECT t1.IDvar,
criteria_col,
col_of_interest1,
col_of_interest2,
ROW_NUMBER() OVER (
PARTITION BY t1.IDvar, criteria_col
ORDER BY col_of_interest1, col_of_interest2
) AS rn
FROM table1 t1
FULL JOIN table2 t2
ON t1.IDvar = t2.table1_IDvar
WHERE criteria_col IN ('foo' /*, 'bar', 'etc'*/)
)
GROUP BY idvar
Which outputs:
IDVAR  FOO_1_COL_OF_INTEREST1  FOO_1_COL_OF_INTEREST2  FOO_2_COL_OF_INTEREST1  FOO_2_COL_OF_INTEREST2  FOO_3_COL_OF_INTEREST1  FOO_3_COL_OF_INTEREST2
-----  ----------------------  ----------------------  ----------------------  ----------------------  ----------------------  ----------------------
    1  2021-01-01 00:00:00                        100  2021-03-01 00:00:00                        500  2021-06-01 00:00:00                       2000
    2  2021-01-02 00:00:00                        200  2021-03-02 00:00:00                        300  null                    null
    3  2021-01-03 00:00:00                        700  null                    null                    null                    null
An unknown number of columns is an issue that you can't ignore. The question is whether there are any reasonable expected limits. I question who would get any meaningful insight from a resulting dataset with hundreds of columns; it doesn't make sense. If you could set the limit to 10 or 20 or whatever like that, then you could build a datagrid structure using PIVOT where the number of columns would be the same and the data within could be placed as in your question.
Just as an example of how: here is code that does it with up to 6 pairs of your data of interest (COL_DATE and COL_VALUE); it could be 20 or 30 or ...
First your sample data and some preparation for pivoting (CTE named grid):
WITH -- S a m p l e d a t a
tbl AS
(
SELECT 1 "ID", 'foo' "CRITERIA", DATE '2021-01-01' "INTEREST_1", 100 "INTEREST_2" FROM DUAL UNION ALL
SELECT 1, 'foo', DATE '2021-03-01', 500 FROM DUAL UNION ALL
SELECT 1, 'foo', DATE '2021-06-01', 2000 FROM DUAL UNION ALL
SELECT 1, 'bar', DATE '2021-06-01', 2000 FROM DUAL UNION ALL
SELECT 2, 'foo', DATE '2021-01-02', 200 FROM DUAL UNION ALL
SELECT 2, 'foo', DATE '2021-03-02', 300 FROM DUAL UNION ALL
SELECT 2, 'bar', DATE '2021-06-02', 400 FROM DUAL UNION ALL
SELECT 3, 'foo', DATE '2021-01-03', 700 FROM DUAL
),
grid AS
(SELECT * FROM
( Select ID "ID", CRITERIA "GRP", INTEREST_1 "COL_DATE", INTEREST_2 "COL_VALUE",
Count(*) OVER(Partition By ID, CRITERIA) "ROWS_TOT",
ROW_NUMBER() OVER(Partition By ID, CRITERIA Order By ID, CRITERIA) "RN_GRP_ID",
ROW_NUMBER() OVER(Partition By ID, CRITERIA Order By ID, CRITERIA) "RN_GRP_ID_2"
From tbl t )
ORDER BY ID ASC, GRP DESC, ROWS_TOT DESC
),
Result (grid)
ID GRP COL_DATE COL_VALUE ROWS_TOT RN_GRP_ID RN_GRP_ID_2
---------- --- --------- ---------- ---------- ---------- -----------
1 foo 01-JAN-21 100 3 3 3
1 foo 01-JUN-21 2000 3 2 2
1 foo 01-MAR-21 500 3 1 1
1 bar 01-JUN-21 2000 1 1 1
2 foo 02-MAR-21 300 2 2 2
2 foo 02-JAN-21 200 2 1 1
2 bar 02-JUN-21 400 1 1 1
3 foo 03-JAN-21 700 1 1 1
... next is pivoting (another CTE named grid_pivot) and designing another grid that will be populated with your data of interest...
grid_pivot AS
( SELECT
ID, GRP, ROWS_TOT,
MAX(GRP_1_LINK) "GRP_1_LINK", CAST(Null as DATE) "GRP_1_DATE", CAST(Null as NUMBER) "GRP_1_VALUE",
MAX(GRP_2_LINK) "GRP_2_LINK", CAST(Null as DATE) "GRP_2_DATE", CAST(Null as NUMBER) "GRP_2_VALUE",
MAX(GRP_3_LINK) "GRP_3_LINK", CAST(Null as DATE) "GRP_3_DATE", CAST(Null as NUMBER) "GRP_3_VALUE",
MAX(GRP_4_LINK) "GRP_4_LINK", CAST(Null as DATE) "GRP_4_DATE", CAST(Null as NUMBER) "GRP_4_VALUE",
MAX(GRP_5_LINK) "GRP_5_LINK", CAST(Null as DATE) "GRP_5_DATE", CAST(Null as NUMBER) "GRP_5_VALUE",
MAX(GRP_6_LINK) "GRP_6_LINK", CAST(Null as DATE) "GRP_6_DATE", CAST(Null as NUMBER) "GRP_6_VALUE"
-- ... ... ... ...
FROM
( Select *
From ( Select * From grid )
PIVOT ( Max(RN_GRP_ID) "LINK" --Min(RN_GRP_ID) "GRP_FROM",
FOR RN_GRP_ID_2 IN(1 "GRP_1", 2 "GRP_2", 3 "GRP_3", 4 "GRP_4", 5 "GRP_5", 6 "GRP_6" ) ) -- ... ...
Order By ROWS_TOT DESC, GRP DESC, ID ASC
)
GROUP BY GRP, ROWS_TOT, ID
ORDER BY ROWS_TOT DESC, GRP DESC, ID ASC
)
Result (grid_pivot)
ID GRP ROWS_TOT GRP_1_LINK GRP_1_DATE GRP_1_VALUE GRP_2_LINK GRP_2_DATE GRP_2_VALUE GRP_3_LINK GRP_3_DATE GRP_3_VALUE GRP_4_LINK GRP_4_DATE GRP_4_VALUE GRP_5_LINK GRP_5_DATE GRP_5_VALUE GRP_6_LINK GRP_6_DATE GRP_6_VALUE
---------- --- ---------- ---------- ---------- ----------- ---------- ---------- ----------- ---------- ---------- ----------- ---------- ---------- ----------- ---------- ---------- ----------- ---------- ---------- -----------
1 foo 3 1 2 3
2 foo 2 1 2
3 foo 1 1
1 bar 1 1
2 bar 1 1
... and, finally, mixing grid_pivot data with grid data using 6 left joins to fit 6 pairs of your data of interest into the grid.
SELECT gp.ID, gp.GRP,
g1.COL_DATE "GRP_1_DATE", g1.COL_VALUE "GRP_1_VALUE",
g2.COL_DATE "GRP_2_DATE", g2.COL_VALUE "GRP_2_VALUE",
g3.COL_DATE "GRP_3_DATE", g3.COL_VALUE "GRP_3_VALUE",
g4.COL_DATE "GRP_4_DATE", g4.COL_VALUE "GRP_4_VALUE",
g5.COL_DATE "GRP_5_DATE", g5.COL_VALUE "GRP_5_VALUE",
g6.COL_DATE "GRP_6_DATE", g6.COL_VALUE "GRP_6_VALUE"
-- ... ... ... ...
FROM grid_pivot gp
LEFT JOIN grid g1 ON(g1.ID = gp.ID And g1.GRP = gp.GRP And g1.RN_GRP_ID = gp.GRP_1_LINK)
LEFT JOIN grid g2 ON(g2.ID = gp.ID And g2.GRP = gp.GRP And g2.RN_GRP_ID = gp.GRP_2_LINK)
LEFT JOIN grid g3 ON(g3.ID = gp.ID And g3.GRP = gp.GRP And g3.RN_GRP_ID = gp.GRP_3_LINK)
LEFT JOIN grid g4 ON(g4.ID = gp.ID And g4.GRP = gp.GRP And g4.RN_GRP_ID = gp.GRP_4_LINK)
LEFT JOIN grid g5 ON(g5.ID = gp.ID And g5.GRP = gp.GRP And g5.RN_GRP_ID = gp.GRP_5_LINK)
LEFT JOIN grid g6 ON(g6.ID = gp.ID And g6.GRP = gp.GRP And g6.RN_GRP_ID = gp.GRP_6_LINK)
-- ... ... ... ...
ORDER BY gp.ROWS_TOT DESC, gp.GRP DESC, gp.ID ASC
Result:
ID GRP GRP_1_DATE GRP_1_VALUE GRP_2_DATE GRP_2_VALUE GRP_3_DATE GRP_3_VALUE GRP_4_DATE GRP_4_VALUE GRP_5_DATE GRP_5_VALUE GRP_6_DATE GRP_6_VALUE
---------- --- ---------- ----------- ---------- ----------- ---------- ----------- ---------- ----------- ---------- ----------- ---------- -----------
1 foo 01-MAR-21 500 01-JUN-21 2000 01-JAN-21 100
2 foo 02-JAN-21 200 02-MAR-21 300
3 foo 03-JAN-21 700
1 bar 01-JUN-21 2000
2 bar 02-JUN-21 400
Anyway, you will probably need a dynamic solution, so this could still be interesting for something else; who knows what, when and where...

How to find the last non null value of a column and recursively find the sum value of another column

Suppose I have a column A, and the currently fetched value of A is null. I need to go back through the previous rows and find the last non-null value of column A. Then I need to find the sum of another column B from the point where that non-null value occurs up to the current point. After that I add the sum of B to A, which becomes the new value of A.
For finding the column A non null value I have written the query as
nvl(last_value(nullif(A,0)) ignore nulls over (order by A),0)
But I need to do the calculation of B as mentioned above.
Can anyone please help me out?
Sample data:

A     B   date
null  20  14/06/2019
null  40  13/06/2019
10    50  12/06/2019
Here the value of A on 14/06/2019 should be replaced by the sum of B down to 12/06/2019 plus the value of A on 12/06/2019 (the first non-null value of A): 20 + 40 + 50 + 10 = 120.
If you have version 12c or higher:
with t( A,B, dte ) as
(
select null, 20, date'2019-06-14' from dual union all
select null, 40, date'2019-06-13' from dual union all
select 10 ,50, date'2019-06-12' from dual
)
select * from t
match_recognize(
  order by dte desc
  measures
    nvl(
      first(a),      -- A of the first row in the match, if it is non-null
      y.a + sum(b)   -- otherwise: the first non-null A below, plus the B values of every matched row
    ) as a,
    first(b)   as b,
    first(dte) as dte
  after match skip to next row   -- restart at the next row so every input row yields a result
  pattern(x* y{0,1})             -- a (possibly empty) run of null-A rows, then at most one non-null row
  define x as a is null,
         y as a is not null
);
  A          B DTE
--- ---------- ----------
120         20 2019-06-14
100         40 2019-06-13
 10         50 2019-06-12
Use a conditional count to divide the data into separate groups, then use this group for the analytic calculation:
select a, b, dt, grp,
       sum(nvl(a, 0) + nvl(b, 0)) over (partition by grp order by dt) val
from (
  select a, b, dt,
         count(case when a is not null then 1 end) over (order by dt) grp
  from t
)
order by dt desc
Sample result:
 A   B  DT          GRP  VAL
--  --  ----------  ---  ---
    20  2019-06-14    4  120
    40  2019-06-13    4  100
10  50  2019-06-12    4   60
 5   2  2019-06-11    3    7
 6   1  2019-06-10    2    7
     3  2019-06-09    1   14
 7   4  2019-06-08    1   11
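For reference, here is a self-contained version of the above; the extra sample rows are inferred from the result set shown:

with t (a, b, dt) as (
  select null, 20, date '2019-06-14' from dual union all
  select null, 40, date '2019-06-13' from dual union all
  select 10,   50, date '2019-06-12' from dual union all
  select 5,     2, date '2019-06-11' from dual union all
  select 6,     1, date '2019-06-10' from dual union all
  select null,  3, date '2019-06-09' from dual union all
  select 7,     4, date '2019-06-08' from dual
)
select a, b, dt, grp,
       sum(nvl(a, 0) + nvl(b, 0)) over (partition by grp order by dt) as val
from (
  -- grp increments at every non-null A (scanning by date ascending),
  -- so each group runs from one non-null A up to just before the next
  select a, b, dt,
         count(case when a is not null then 1 end) over (order by dt) as grp
  from t
)
order by dt desc;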
I think what you want can be handled by using sum(<column>) over (...) together with the last_value(...) over (...) function, as below:
with t( A, B, "date" ) as
(
  select null, 20, date'2019-06-14' from dual union all
  select null, 40, date'2019-06-13' from dual union all
  select 10,   50, date'2019-06-12' from dual
)
select nvl(a,
           -- ordering by the constant 1 makes every row a peer, so both windows
           -- cover the whole set: the total of B plus the single non-null A
           sum(b) over (order by 1)
             + last_value(a) ignore nulls over (order by 1 desc)
          ) as a,
       b, "date"
from t;
A B date
--- -- ----------
120 20 14.06.2019
120 40 13.06.2019
10 50 12.06.2019

Oracle - Order by Alpha numeric

I need to order the rows in my result set by a varchar2 column that holds grade levels K-12.
Example:
ID Grade Expense
1 1 500
1 10 500
1 11 500
1 12 500
1 2 500
1 3 500
1 4 500
1 5 500
1 6 500
1 7 500
1 8 500
1 9 500
1 K 500
This is my order by clause, which works, but I would like the row with Grade = K to be the first row for each ID in my result set.
order by ID, to_number(regexp_substr(grade, '^[[:digit:]]*'))
As it stands, the row with Grade = K is last, not first. How can I make it the first row for each ID in my result set?
ID Grade Expense
1 K 500
1 1 500
1 2 500
1 3 500
1 4 500
1 5 500
1 6 500
1 7 500
1 8 500
1 9 500
1 10 500
1 11 500
1 12 500
Thanks in advance
Simply use a CASE statement to map K to something below 1. This has the advantage that if you add a Pre-K later, you can modify the CASE to handle it as well.
With CTE as
(SELECT '1' as grade from dual union
SELECT '2' from dual union
select '10' from dual union
select 'K' from dual)
SELECT * FROM CTE
ORDER BY CASE GRADE when 'K' then -1
else to_number(regexp_substr(grade, '^[[:digit:]]*')) end
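
For instance, if a hypothetical 'PK' (Pre-K) grade showed up later, the CASE just grows another branch; a sketch:

With CTE as
(SELECT 'PK' as grade from dual union
 SELECT 'K' from dual union
 SELECT '1' from dual union
 select '10' from dual)
SELECT * FROM CTE
ORDER BY CASE GRADE when 'PK' then -2   -- Pre-K sorts before K
                    when 'K' then -1
         else to_number(regexp_substr(grade, '^[[:digit:]]*')) end

This sorts PK, K, 1, 10.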
This is a bit of a kludge, but since the regex for 'K' returns null, change the order by to:
order by ID, nvl(to_number(regexp_substr(grade, '^[[:digit:]]*')),0)
This will return 0 for 'K' and sort it properly.
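Against the same sample grades as above, that looks like this (the regex finds no digits in 'K', to_number of the null result stays null, and nvl maps it to 0 so it sorts first):

With CTE as
(SELECT '1' as grade from dual union
 SELECT '2' from dual union
 select '10' from dual union
 select 'K' from dual)
SELECT * FROM CTE
ORDER BY nvl(to_number(regexp_substr(grade, '^[[:digit:]]*')), 0)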
You can do the following:
WITH g1 AS (
SELECT 1 AS id, TO_CHAR(level) AS grade, 500 AS expense FROM dual
CONNECT BY level <= 12
UNION
SELECT 1, 'K', 500 FROM dual
UNION
SELECT 1, 'J', 500 FROM dual
)
SELECT g1.*, TO_NUMBER(REGEXP_SUBSTR(grade, '^\d+'))
, DECODE(grade, 'K', -1, TO_NUMBER(REGEXP_SUBSTR(grade, '^\d+')))
FROM g1
ORDER BY DECODE(grade, 'K', -1, TO_NUMBER(REGEXP_SUBSTR(grade, '^\d+'))) NULLS LAST
In this query I'm using CONNECT BY to build your grade table; of course you'll want to ignore that part. Note I added an extra row with a J for the grade level.
In my order by I am using DECODE() so that if grade = 'K', it gives a value of -1. For any grades that can be converted to numeric values (that is, if they start with at least one digit), I use a regex to get as many digits as possible (you can use [:digit:] or [0-9] in place of \d; but \d is nice and short).
I am specifying NULLS LAST so that any rows for which grade cannot be converted to a number, other than K, will be last.
I'm including the extra computed columns just to give a glimpse into what is actually going on and how the values are generated. They aren't needed for the query.
Only change the ORDER BY clause, this way:
order by ID asc, decode(grade,'K',-1,grade) asc
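
A caveat with this approach: DECODE's return type here is numeric (the first result is -1), so the default grade is implicitly converted with TO_NUMBER, and any non-numeric grade other than 'K' (say the 'J' from the previous answer) would raise ORA-01722. A sketch of a safer variant of the same idea:

order by ID asc,
         decode(grade, 'K', -1,
                to_number(regexp_substr(grade, '^[[:digit:]]*'))) asc

Non-numeric grades then come back as null and sort last, consistent with the NULLS LAST behaviour of the earlier answer.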

one sql instead of 2 for counting

I have read a thread on this, but when I tried it I couldn't manage to make it work.
I want to count all the male and females from a table like so:
Select
count(case when substr(id,1, 1) in (1,2) then 1 else 0 end) as M,
count(case when substr(id,1, 1) in (3,4) then 1 else 0 end) as F
from users where activated=1
The idea is that a user whose id starts with 1 or 2 is male.
My table has 3 male entries, 2 of which are activated, yet it returns the following (the case expression doesn't work):
M,F
2,2
Any input would be appreciated
id activated
123 1
234 0
154 1
You should use SUM instead. COUNT will count all non-null values.
Select
SUM(case when substr(id,1, 1) in (1,2) then 1 else 0 end) as M,
SUM(case when substr(id,1, 1) in (3,4) then 1 else 0 end) as F
from users where activated=1
COUNT will give you the number of non-null values, whatever they are. Try SUM instead.
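
Alternatively, COUNT does work if you simply drop the ELSE 0, so non-matching rows yield null and are not counted; a sketch of the same query:

Select
  count(case when substr(id,1,1) in (1,2) then 1 end) as M,  -- no ELSE: non-males give null
  count(case when substr(id,1,1) in (3,4) then 1 end) as F
from users where activated=1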
If your Oracle version is 11g or later (REGEXP_COUNT was introduced in 11g), as an alternative you can use the REGEXP_COUNT function. I assume that the ID column is of number data type, so in the example it is explicitly converted to varchar2 using the TO_CHAR function. If the data type of the ID column is varchar2 or char, then there is no need for any data type conversion.
Here is an example:
SQL> create table M_F(id, activated) as(
2 select 123, 1 from dual union all
3 select 234, 0 from dual union all
4 select 434, 1 from dual union all
5 select 154, 1 from dual
6 );
Table created
SQL> select sum(regexp_count(to_char(id), '^[12]')) as M
2 , sum(regexp_count(to_char(id), '^[34]')) as F
3 from M_F
4 where activated = 1
5 ;
M F
---------- ----------
2 1

How to transpose recordset columns into rows

I have a query whose code looks like this:
SELECT DocumentID, ComplexSubquery1 ... ComplexSubquery5
FROM Document
WHERE ...
ComplexSubquery1 through ComplexSubquery5 are all numerical fields that are calculated using, duh, complex subqueries.
I would like to use this query as a subquery to a query that generates a summary like the following one:
Field DocumentCount Total
1 dc1 s1
2 dc2 s2
3 dc3 s3
4 dc4 s4
5 dc5 s5
Where:
dc<n> = SUM(CASE WHEN ComplexSubquery<n> > 0 THEN 1 END)
s<n>  = SUM(CASE WHEN Field = n THEN ComplexSubquery<n> END)
How could I do that in SQL Server?
NOTE: I know I could avoid the problem by discarding the original query and using unions:
SELECT '1' AS TypeID,
       SUM(CASE WHEN ComplexSubquery1 > 0 THEN 1 END) AS DocumentCount,
       SUM(ComplexSubquery1) AS Total
FROM (SELECT DocumentID, BLARGH ... AS ComplexSubquery1) T
UNION ALL
SELECT '2' AS TypeID,
       SUM(CASE WHEN ComplexSubquery2 > 0 THEN 1 END) AS DocumentCount,
       SUM(ComplexSubquery2) AS Total
FROM (SELECT DocumentID, BLARGH ... AS ComplexSubquery2) T
UNION ALL
...
But I want to avoid this route, because redundant code makes my eyes bleed. (Besides, there is a real possibility that the number of complex subqueries will grow in the future.)
WITH Document(DocumentID, Field) As
(
SELECT 1, 1 union all
SELECT 2, 1 union all
SELECT 3, 2 union all
SELECT 4, 3 union all
SELECT 5, 4 union all
SELECT 6, 5 union all
SELECT 7, 5
), CTE AS
(
SELECT DocumentID,
Field,
(select 10) As ComplexSubquery1,
(select 20) as ComplexSubquery2,
(select 30) As ComplexSubquery3,
(select 40) as ComplexSubquery4,
(select 50) as ComplexSubquery5
FROM Document
)
SELECT Field,
SUM(CASE WHEN RIGHT(Query,1) = Field AND QueryValue > 0 THEN 1 END ) AS DocumentCount,
SUM(CASE WHEN RIGHT(Query,1) = Field THEN QueryValue END ) AS Total
FROM CTE
UNPIVOT (QueryValue FOR Query IN
(ComplexSubquery1, ComplexSubquery2, ComplexSubquery3,
ComplexSubquery4, ComplexSubquery5)
)AS unpvt
GROUP BY Field
Returns
Field DocumentCount Total
----------- ------------- -----------
1 2 20
2 1 20
3 1 30
4 1 40
5 2 100
I'm not 100% positive from your example, but perhaps the PIVOT operator will help you out here? I think if you selected your original query into a temporary table, you could pivot on the document ID and get the sums for the other queries.
I don't have much experience with it though, so I'm not sure how complex you can get with your subqueries - you might have to break it down.