I have a table in Postgres that stores 10x10 matrices, where each matrix row has its own entry, defined as:
id, matrix_id, row_id, col1, col2, col3...
I'd like to compute the trace (sum of main diagonal) for every matrix identified by its matrix_id, that is, for every matrix_id, I would like to get (col1 where row_id=1) + (col2 where row_id=2) + (col3 where row_id=3)...
I've tried grouping by matrix_id, but then I can't use subqueries. I tried something like:
select matrix_id, (select col1 where row_id=1) + (col2 where row_id=2) +
(col3 where row_id=3) ... from matrix group by matrix_id;
but it doesn't work this way.
How could I do that?
As long as they are all 10x10 matrices, use a CASE expression like so:
select matrix_id,
sum(
case row_id
when 1 then col1
when 2 then col2
when 3 then col3
when 4 then col4
when 5 then col5
when 6 then col6
when 7 then col7
when 8 then col8
when 9 then col9
when 10 then col10
end
) as trace
from matrix
group by matrix_id;
Had variable-sized matrices been allowed, you could transpose columns to rows via to_jsonb() and then sum where row_id = <column suffix>.
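Here is a runnable sketch of the CASE/SUM technique against SQLite, with 3x3 matrices for brevity (table and sample data are invented; the 10x10 version just adds more WHEN arms):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE matrix (matrix_id INTEGER, row_id INTEGER,
                         col1 INTEGER, col2 INTEGER, col3 INTEGER);
    -- matrix 1: the identity (trace 3); matrix 2: trace 1 + 5 + 9 = 15
    INSERT INTO matrix VALUES
        (1, 1, 1, 0, 0), (1, 2, 0, 1, 0), (1, 3, 0, 0, 1),
        (2, 1, 1, 2, 3), (2, 2, 4, 5, 6), (2, 3, 7, 8, 9);
""")

rows = conn.execute("""
    SELECT matrix_id,
           SUM(CASE row_id
                   WHEN 1 THEN col1
                   WHEN 2 THEN col2
                   WHEN 3 THEN col3
               END) AS trace
    FROM matrix
    GROUP BY matrix_id
    ORDER BY matrix_id
""").fetchall()
print(rows)  # [(1, 3), (2, 15)]
```

The CASE picks the diagonal element of each row, and the SUM over the group adds them up per matrix_id.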
EDIT TO ADD
Based on your comment, you really should update your version of PostgreSQL. That said, try a CTE to filter on the new trace column:
with traces as (
select matrix_id,
sum(
case row_id
when 1 then col1
when 2 then col2
when 3 then col3
when 4 then col4
when 5 then col5
when 6 then col6
when 7 then col7
when 8 then col8
when 9 then col9
when 10 then col10
end
) as trace
from matrix
group by matrix_id
)
select *
from traces
where trace > 100;
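The CTE filter can be sketched the same way (again SQLite with invented 3x3 sample data, and a threshold of 10 instead of 100):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE matrix (matrix_id INTEGER, row_id INTEGER,
                         col1 INTEGER, col2 INTEGER, col3 INTEGER);
    INSERT INTO matrix VALUES
        (1, 1, 1, 0, 0), (1, 2, 0, 1, 0), (1, 3, 0, 0, 1),  -- trace 3
        (2, 1, 1, 2, 3), (2, 2, 4, 5, 6), (2, 3, 7, 8, 9);  -- trace 15
""")

rows = conn.execute("""
    WITH traces AS (
        SELECT matrix_id,
               SUM(CASE row_id WHEN 1 THEN col1
                               WHEN 2 THEN col2
                               WHEN 3 THEN col3 END) AS trace
        FROM matrix
        GROUP BY matrix_id
    )
    SELECT *
    FROM traces
    WHERE trace > 10
""").fetchall()
print(rows)  # [(2, 15)]
```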
I've put together a reconciliation tool in SQL Server which identifies the number of record breaks by field (col 2 - col 4) between two identical (data types/structure) sources. The output returned is in the format below, grouped on col 1.
Col1 Col2 Col3 Col4
X 0 0 1
Y 0 1 1
Z 1 0 1
I am trying to manipulate the output so that it provides a list of the Col 1 identifier and the names of any columns (col 2 - col 4) which have breaks (value > 0).
The expected output based on the above data would look like this.
Col1 FieldBreak
X Col2
Y Col3
Y Col4
Z Col2
Z Col4
I'm fairly new to SQL (6 months of professional experience) and am stuck. Any help would be much appreciated!
In any database, you can use:
select col1, 'col2' as col
from t
where col2 = 1
union all
select col1, 'col3' as col
from t
where col3 = 1
union all
select col1, 'col4' as col
from t
where col4 = 1;
There are probably more efficient methods, but those depend on the database. And for a small table efficiency may not be a concern.
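A runnable sketch of the UNION ALL unpivot, using SQLite and the sample data from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (col1 TEXT, col2 INTEGER, col3 INTEGER, col4 INTEGER);
    INSERT INTO t VALUES ('X', 0, 0, 1), ('Y', 0, 1, 1), ('Z', 1, 0, 1);
""")

rows = conn.execute("""
    SELECT col1, 'col2' AS col FROM t WHERE col2 = 1
    UNION ALL
    SELECT col1, 'col3' FROM t WHERE col3 = 1
    UNION ALL
    SELECT col1, 'col4' FROM t WHERE col4 = 1
    ORDER BY col1, col
""").fetchall()
print(rows)
# [('X', 'col4'), ('Y', 'col3'), ('Y', 'col4'), ('Z', 'col2'), ('Z', 'col4')]
```

This matches the expected Col1/FieldBreak output above, one row per flagged column.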
In SQL Server, you would unpivot using apply:
select t.col1, v.*
from t cross apply
(values ('col2', t.col2), ('col3', t.col3) . . .
) v(col, val)
where v.val > 0;
If you have a lot of columns, you can construct the expression using a SQL statement (from INFORMATION_SCHEMA.COLUMNS) and/or using a spreadsheet.
I have a table like this one
ID Col1 Col2 Col3
-- ---- ---- ----
1 7 NULL 12
2 2 46 NULL
3 NULL NULL NULL
4 245 1 792
I wanted a query that yields the following result
ID Col1 Col2 Col3 MIN
-- ---- ---- ---- ---
1 7 NULL 12 7
2 2 46 NULL 2
3 NULL NULL NULL NULL
4 245 1 792 1
I mean, I want a column containing the minimum value out of Col1, Col2, and Col3 for each row, ignoring NULL values. A previous question (What's the best way to select the minimum value from multiple columns?) has an answer for non-NULL values. I need a query as efficient as possible, since the table is huge.
A simple chain of comparisons would work if NULLs never appeared:
Select Id,
Case When Col1 < Col2 And Col1 < Col3 Then Col1
When Col2 < Col1 And Col2 < Col3 Then Col2
Else Col3
End As MIN
From YourTableNameHere
But any comparison involving NULL evaluates to NULL, so those When branches fall through for your data. Assuming you can define some "max" value (I'll use 9999 here) that your real values will never exceed:
Select Id,
Case When Col1 < COALESCE(Col2, 9999)
And Col1 < COALESCE(Col3, 9999) Then Col1
When Col2 < COALESCE(Col1, 9999)
And Col2 < COALESCE(Col3, 9999) Then Col2
Else Col3
End As MIN
From YourTableNameHere;
You didn't specify which version of Teradata you're using. If you're using version 14+ then you can use least.
Unfortunately least will return null if any of its arguments are null. From the docs:
LEAST supports 1-10 numeric values.
If numeric_value is the data type of the first argument, the return
data type is numeric. The remaining arguments in the input list must
be the same or compatible types. If either input parameter is NULL,
NULL is returned.
But you can get around that by using coalesce as Joe did in his answer; wrapping the result in nullif restores NULL for the all-NULL row:
select id,
least(coalesce(col1,9999),
coalesce(col2,9999),
coalesce(col3,9999)) as min_val
from mytable
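SQLite's scalar min() behaves the same way as Teradata's least (it returns NULL as soon as any argument is NULL), so the coalesce workaround can be sketched there; adding nullif() turns the 9999 sentinel back into NULL for the all-NULL row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE mytable (id INTEGER, col1 INTEGER, col2 INTEGER, col3 INTEGER);
    INSERT INTO mytable VALUES
        (1, 7, NULL, 12), (2, 2, 46, NULL),
        (3, NULL, NULL, NULL), (4, 245, 1, 792);
""")

rows = conn.execute("""
    SELECT id,
           NULLIF(MIN(COALESCE(col1, 9999),
                      COALESCE(col2, 9999),
                      COALESCE(col3, 9999)), 9999) AS min_val
    FROM mytable
    ORDER BY id
""").fetchall()
print(rows)  # [(1, 7), (2, 2), (3, None), (4, 1)]
```

(In SQLite, MIN with more than one argument is the scalar function, not the aggregate.)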
This might work, but note that (per the docs quoted above) least returns NULL as soon as any argument is NULL, so by itself it does not ignore NULLs:
Select id, Col1, Col2, Col3, least(Col1, Col2, Col3) as MIN From YourTableNameHere
In this way you don't need to check for NULLs: the aggregate min() simply ignores them. Just use min over a subquery (note that, depending on the database, this kind of correlated derived table may require LATERAL or APPLY):
select tbl.id, tbl.col1, tbl.col2, tbl.col3,
(select min(v.col)
from (
select col1 as col from tbl_name t where t.id=tbl.id
union all
select col2 as col from tbl_name t where t.id=tbl.id
union all
select col3 as col from tbl_name t where t.id=tbl.id
) v)
from tbl_name tbl
Output:
ID  Col1  Col2  Col3  MIN
1   7     NULL  12    7
2   2     46    NULL  2
3   NULL  NULL  NULL  NULL
4   245   1     792   1
Just modify your query with coalesce():
Select Id,
(Case When Col1 <= coalesce(Col2, col3, col1) And
Col1 <= coalesce(Col3, col2, col1)
Then Col1
When Col2 <= coalesce(Col1, col3, col2) And
Col2 <= coalesce(Col3, col1, col2)
Then Col2
Else Col3
End) As MIN
From YourTableNameHere;
This doesn't require inventing a "magic" number or over-complicating the logic.
I found this solution to be more efficient than multiple CASE expressions, which get extremely lengthy when evaluating several columns in one row.
Also, I can't take credit for this solution: I found it on some website a year or so ago. Today I needed a refresher on this logic and couldn't find it anywhere, so I dug up my old code and decided to share it here.
Creating your test table:
create table #testTable(ID int, Col1 int, Col2 int, Col3 int)
Insert into #testTable values(1,7,null,12)
Insert into #testTable values(2,2,46,null)
Insert into #testTable values(3,null,null,null)
Insert into #testTable values(4,245,1,792)
Finding min value in row data:
Select ID, Col1, Col2, Col3,
(SELECT Min(v)
FROM ( VALUES (Col1), (Col2), (Col3) ) AS value(v)) [MIN]
from #testTable
order by ID
Using SSRS 2008R2
I have a matrix set up displaying % values in each cell:
ColGrp1 ColGrp2 ColGrp3
RowGrp1 5% 80% 50%
RowGrp2 .. .. ..
RowGrp3 .. .. ..
The expression deriving the percentage value is as follows:
=Sum(Fields!FieldX.Value)
/
Count(Fields!FieldX.Value)
Field X contains either a 0 or a 1 in the dataset, so I'm seeing the percentage of the 1's in the data above - this is fine.
My issue is that I need to include the min, max and avg values for each row group:
Col1 Col2 Col3 Min Max Avg
5% 80% 50% .. .. ..
.. .. .. .. .. ..
.. .. .. .. .. ..
The avg value is fine as I have just created a column outside of the column group and used the same expression as above.
However, I'm unable to find a way to get the Min and Max % values.
Any ideas?
SELECT Col1, Col2, Col3,
CASE WHEN Col1 <= Col2 AND Col1 <= Col3 THEN Col1
WHEN Col2 <= Col3 THEN Col2
ELSE Col3
END AS Min,
CASE WHEN Col1 >= Col2 AND Col1 >= Col3 THEN Col1
WHEN Col2 >= Col3 THEN Col2
ELSE Col3
END AS Max,
((Col1 + Col2 + Col3) / 3) AS Avg
FROM table
(Using <= and >= so that ties don't fall through to Col3.)
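A runnable sketch of this row-wise min/max/avg in SQLite (table name, integer percentages, and sample rows are invented; <= / >= comparisons are used so that ties don't fall through to Col3, and note the integer division in the Avg column):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE grp (row_grp TEXT, col1 INTEGER, col2 INTEGER, col3 INTEGER);
    INSERT INTO grp VALUES ('RowGrp1', 5, 80, 50), ('RowGrp2', 20, 20, 60);
""")

rows = conn.execute("""
    SELECT row_grp, col1, col2, col3,
           CASE WHEN col1 <= col2 AND col1 <= col3 THEN col1
                WHEN col2 <= col3 THEN col2
                ELSE col3 END AS min_pct,
           CASE WHEN col1 >= col2 AND col1 >= col3 THEN col1
                WHEN col2 >= col3 THEN col2
                ELSE col3 END AS max_pct,
           (col1 + col2 + col3) / 3 AS avg_pct  -- integer division here
    FROM grp
    ORDER BY row_grp
""").fetchall()
print(rows)
# [('RowGrp1', 5, 80, 50, 5, 80, 45), ('RowGrp2', 20, 20, 60, 20, 60, 33)]
```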
Try this:
SELECT
*,
(select MIN(col) from (VALUES(Col1),(Col2),(Col3),......) x([col])) [MIN],
(select MAX(col) from (VALUES(Col1),(Col2),(Col3),......) x([col])) [MAX]
FROM
(
-- your query --
) AS T
Reference : Get the minimum value between several columns
Is it possible to output a resultset as a grid? For example I output the following resultset using sql:
col1 col2 col3 col4 col5 col6 col7 col8 col9
10 23 54 12 23 45 56 24 2
but instead of the output forming one long row is there a function I can use to get it to output as:
col1 col2 col3
10 23 54
12 23 45
56 24 2
So I'm essentially breaking the results row every three columns.
Also, the output would be a combination of various calculations performed on the data in joined SQL tables, just in case that makes a difference.
If you don't mind manually defining the columns to split, and the format will remain fixed you can use CROSS APPLY ... VALUES to unpivot the data. e.g.
SELECT c.Col1, c.Col2, c.Col3
FROM T
CROSS APPLY
( VALUES
(Col1, Col2, Col3),
(Col4, Col5, Col6),
(Col7, Col8, Col9)
) c (Col1, Col2, Col3);
Example on SQL Fiddle
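CROSS APPLY ... VALUES is SQL Server-specific; the same fixed three-column reshape can be sketched portably with UNION ALL (SQLite here, using the row from the question; note that UNION ALL makes no row-order guarantee without an ORDER BY):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (col1 INTEGER, col2 INTEGER, col3 INTEGER,
                    col4 INTEGER, col5 INTEGER, col6 INTEGER,
                    col7 INTEGER, col8 INTEGER, col9 INTEGER);
    INSERT INTO t VALUES (10, 23, 54, 12, 23, 45, 56, 24, 2);
""")

rows = conn.execute("""
    SELECT col1, col2, col3 FROM t
    UNION ALL
    SELECT col4, col5, col6 FROM t
    UNION ALL
    SELECT col7, col8, col9 FROM t
""").fetchall()
print(sorted(rows))  # [(10, 23, 54), (12, 23, 45), (56, 24, 2)]
```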
I am using Toad for Oracle and I have run into a few issues.
Aliasing - when I want to use the same column twice?!
Let's assume we have a table x which has col1, col2, col3. Col1 contains customer contact numbers (211, 212, 213, and more).
And there is another table, y, that has col1, col4, col5. Col1 in both tables is the same. Col4 shows whether a number is main or secondary.
Table y
(Col1,col4,col5)
(211,Main,v)
(212,Secondary,s)
(213,Secondary,w)
What I want to do is as follow :
SELECT col2, col1 as mainNumbet, col1 as secondNumber
FROM x
WHERE mainNumber IN (SELECT col1
FROM y
WHERE col4 = 'main')
AND SecondNumber IN (SELECT col1
FROM y
WHERE col4 = "secondary")
But it reports an error!?
There are several problems with your code.
Perhaps this is what you want:
SELECT x.col2,
CASE WHEN col4 = 'main' THEN x.col1 END AS mainNumber,
CASE WHEN col4 = 'secondary' THEN x.col1 END AS secondNumber
FROM x
JOIN y
ON x.col1 = y.col1
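A runnable sketch of this JOIN + CASE approach in SQLite; the col2 values ('cust_a', 'cust_b') are invented since the question doesn't say what col2 holds, and col4 is stored lowercase to match the CASE comparisons:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE x (col1 INTEGER, col2 TEXT);
    CREATE TABLE y (col1 INTEGER, col4 TEXT, col5 TEXT);
    INSERT INTO x VALUES (211, 'cust_a'), (212, 'cust_a'), (213, 'cust_b');
    INSERT INTO y VALUES (211, 'main', 'v'), (212, 'secondary', 's'),
                         (213, 'secondary', 'w');
""")

rows = conn.execute("""
    SELECT x.col2,
           CASE WHEN col4 = 'main' THEN x.col1 END AS mainNumber,
           CASE WHEN col4 = 'secondary' THEN x.col1 END AS secondNumber
    FROM x
    JOIN y ON x.col1 = y.col1
    ORDER BY x.col1
""").fetchall()
print(rows)
# [('cust_a', 211, None), ('cust_a', None, 212), ('cust_b', None, 213)]
```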
You don't say what col2 is, but you are taking the same column (col1) from the same row of the same table and trying to assign two different meanings to it (mainNumber and secondNumber):
SELECT col2, col1 as mainNumbet, col1 as secondNumber
FROM x
If COL1 is unique on 'y', then it can only be the main OR the secondary, so this should work
SELECT col2, col1 as number, (select col4 from y where y.col1=x.col1) type
FROM x
If COL1 is NOT unique on 'y', then it can be a main and a secondary, so this should work
SELECT col2, col1 as number,
(select col4 from y where y.col1=x.col1 and col4 = 'main' and rownum=1) m_ind,
(select col4 from y where y.col1=x.col1 and col4 = 'secondary' and rownum=1) s_ind
FROM x
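The rownum=1 trick is Oracle-specific; LIMIT 1 plays the same role in this SQLite sketch (sample data invented, with 'cust_a'/'cust_b' standing in for col2, which the question doesn't describe):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE x (col1 INTEGER, col2 TEXT);
    CREATE TABLE y (col1 INTEGER, col4 TEXT, col5 TEXT);
    INSERT INTO x VALUES (211, 'cust_a'), (212, 'cust_a'), (213, 'cust_b');
    INSERT INTO y VALUES (211, 'main', 'v'), (212, 'secondary', 's'),
                         (213, 'secondary', 'w');
""")

rows = conn.execute("""
    SELECT col2, col1 AS number,
           (SELECT col4 FROM y
            WHERE y.col1 = x.col1 AND col4 = 'main' LIMIT 1) AS m_ind,
           (SELECT col4 FROM y
            WHERE y.col1 = x.col1 AND col4 = 'secondary' LIMIT 1) AS s_ind
    FROM x
    ORDER BY col1
""").fetchall()
print(rows)
# [('cust_a', 211, 'main', None), ('cust_a', 212, None, 'secondary'),
#  ('cust_b', 213, None, 'secondary')]
```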