I have the following schema
CREATE TABLE QUOTE (id int, amount int);
CREATE TABLE QUOTE_LINE (id int, quote_id int, line_amount int);
INSERT INTO QUOTE VALUES(1, 100);
INSERT INTO QUOTE VALUES(2, 200);
INSERT INTO QUOTE VALUES(3, 100);
INSERT INTO QUOTE VALUES(4, 300);
INSERT INTO QUOTE_LINE VALUES(1, 1, 5);
INSERT INTO QUOTE_LINE VALUES(2, 1, 6);
INSERT INTO QUOTE_LINE VALUES(3, 1, 4);
INSERT INTO QUOTE_LINE VALUES(4, 1, 2);
INSERT INTO QUOTE_LINE VALUES(1, 2, 5);
INSERT INTO QUOTE_LINE VALUES(2, 2, 5);
INSERT INTO QUOTE_LINE VALUES(3, 2, 5);
INSERT INTO QUOTE_LINE VALUES(4, 2, 5);
And I need to run the following query:
SELECT QUOTE.id,
line_amount*12 AS amount,
amount*2 as amount_doubled
from QUOTE_LINE
LEFT JOIN QUOTE ON QUOTE_LINE.quote_id=QUOTE.id;
The 3rd line in the query, amount*2 AS amount_doubled, needs to reference the amount calculated in the prior line, i.e. line_amount*12 AS amount.
However, if I run this query, it picks the amount from the QUOTE table instead of the amount that was calculated. How can I make my query use the calculated amount without changing the name of the calculated field?
Here is the sqlfiddle for this:
http://sqlfiddle.com/#!17/914b2/1
Note: I understand that I can create a sub-query, CTE or a lateral join, but the tables I am working with are very wide and the queries have many joins. As such, I need to keep the LEFT JOINs, and I don't always know whether a calculated field will be duplicated in a JOINed table or not. Table structures change.
Move the definition to the FROM clause using a LATERAL JOIN:
select q.id, v.amount, v.amount * 2 as amount_doubled
from QUOTE_LINE ql
left join QUOTE q
  on ql.quote_id = q.id
CROSS JOIN LATERAL
  (values (line_amount*12)) v(amount);
You can also use a subquery or CTE, but I like the lateral join method.
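For reference, a minimal sketch of the CTE variant mentioned above (same tables and columns as in the question):
WITH ql AS (
    SELECT quote_id, line_amount * 12 AS amount
    FROM QUOTE_LINE
)
SELECT q.id, ql.amount, ql.amount * 2 AS amount_doubled
FROM ql
LEFT JOIN QUOTE q ON ql.quote_id = q.id;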
Note: I would expect QUOTE to be the first table in the LEFT JOIN.
Qualify all column names with the table name and use a subquery:
SELECT q.id,
q.amount,
q.amount * 2 AS amount_doubled
FROM (SELECT quote.id,
             quote_line.line_amount * 12 AS amount
      FROM quote_line
      LEFT JOIN quote
        ON quote_line.quote_id = quote.id
     ) AS q;
A little simple algebra resolves the issue easily. The calculated amount is 12 times line_amount, and amount_doubled is 2 times that. So:
select q.id
, ql.line_amount*12 as amount
, ql.line_amount*12*2 as amount_doubled
from quote_line ql
left join quote q
on ql.quote_id = q.id;
However, child LEFT JOIN parent seems strange, as it basically says "give me the quote line amounts even when there is no quote". One would hope a FK from line to quote would prevent that from happening.
If so, then an inner join would suffice. Further, if the id is the only column needed from quote, the join can be removed by taking quote_id from quote_line, perhaps reducing to:
select ql.quote_id as id
, ql.line_amount*12 as amount
, ql.line_amount*24 as amount_doubled
from quote_line ql;
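For completeness, the foreign key alluded to above could be declared like this (a sketch; it assumes QUOTE.id is unique and not null):
ALTER TABLE QUOTE ADD PRIMARY KEY (id);
ALTER TABLE QUOTE_LINE ADD FOREIGN KEY (quote_id) REFERENCES QUOTE (id);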
I have my table in this structure. I am trying to find all the unique IDs whose words do not appear in the list. How can I achieve this in MS SQL Server?
id word
1 hello
2 friends
2 world
3 cat
3 dog
2 country
1 phone
4 eyes
I have a list of words
List
phone
eyes
hair
body
Expected Output
Excluding the words from the list, I need all the unique IDs. In this case that is:
2
3
1 & 4 are not in the output as their words appear in the list.
I tried the below code
Select count(distinct ID)
from Table1
where word not in ('phone','eyes','hair','body')
I also tried NOT EXISTS, which did not work.
You can also use GROUP BY
SELECT id
FROM Table1
GROUP BY id
HAVING MAX(CASE WHEN word IN('phone', 'eyes', 'hair', 'body') THEN 1 ELSE 0 END) = 0
One way to do it is to use not exists, where the inner query is linked to the outer query by id and is filtered by the search words.
First, create and populate a sample table (please save us this step in your future questions):
DECLARE @T AS TABLE (
id int,
word varchar(20)
)
INSERT INTO @T VALUES
(1, 'hello'),
(2, 'friends'),
(2, 'world'),
(3, 'cat'),
(3, 'dog'),
(2, 'country'),
(1, 'phone'),
(4, 'eyes')
The query:
SELECT DISTINCT id
FROM @T t0
WHERE NOT EXISTS
(
SELECT 1
FROM @T t1
WHERE word IN('phone', 'eyes', 'hair', 'body')
AND t0.Id = t1.Id
)
Result:
id
2
3
SELECT t.id FROM dbo.table AS t
WHERE NOT EXISTS (SELECT 1 FROM dbo.table AS t2
INNER JOIN
(VALUES('phone'),('eyes'),('hair'),('body')) AS lw(word)
ON t2.word = lw.word
AND t2.id = t.id)
GROUP BY t.id;
You can try this as well; it stages the excluded words and ids in table variables:
DECLARE @T AS TABLE (id int, word varchar(20))
INSERT INTO @T VALUES
(1, 'hello'),
(2, 'friends'),
(2, 'world'),
(3, 'cat'),
(3, 'dog'),
(2, 'country'),
(1, 'phone'),
(4, 'eyes')
DECLARE @tblNotUsed AS TABLE (id int, word varchar(20))
DECLARE @tblNotUsedIds AS TABLE (id int)
INSERT INTO @tblNotUsed VALUES
(1, 'phone'),
(2, 'eyes'),
(3, 'hair'),
(4, 'body')
INSERT INTO @tblNotUsedIds (id)
SELECT t.id FROM @T AS t INNER JOIN @tblNotUsed AS nu ON nu.word = t.word
SELECT DISTINCT id FROM @T
WHERE id NOT IN (SELECT id FROM @tblNotUsedIds)
The nice thing about SQL is there are sometimes many ways to do things. Here is one way: place your list of known values into a #temp table and then run something like this.
Select * from dbo.maintable
EXCEPT
Select * from #tempExcludeValues
The results will give you all records that aren't in your predefined list. A second way is to do the join as Larnu mentioned in the comment above. NOT IN is typically not the fastest option on larger datasets; JOINs are generally a more efficient way of filtering data than an IN or NOT IN clause.
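As a minimal sketch, the same EXCEPT idea applied to this thread's sample data (the @T table variable from the answers above):
-- ids whose words all fall outside the exclusion list
SELECT id FROM @T
EXCEPT
SELECT id FROM @T WHERE word IN ('phone', 'eyes', 'hair', 'body')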
I have T-SQL Table below.
ID Cost MaxCost
-------------------------------
2 200 300
3 400 1000
6 20 100
The above table must have 10 rows with IDs 1 to 10, so it's missing 7 rows. How do I insert the missing rows with the proper IDs? The cost & maxcost for missing rows will be zero. Do I need to create a temp table that holds the numbers 1 to 10?
No need for a temp table; a simple tally derived table and a LEFT OUTER JOIN are sufficient:
CREATE TABLE #tab(ID INT, Cost INT, MaxCost INT);
INSERT INTO #tab(ID, Cost, MaxCost)
VALUES (2, 200,300),(3, 400, 1000) ,(6, 20, 100);
DECLARE #range_start INT = 1
,#range_end INT = 10;
;WITH tally AS
(
SELECT TOP 1000 r = ROW_NUMBER() OVER (ORDER BY name)
FROM master..spt_values
)
INSERT INTO #tab(id, Cost, MaxCost)
SELECT t.r, 0, 0
FROM tally t
LEFT JOIN #tab c
ON t.r = c.ID
WHERE t.r BETWEEN #range_start AND #range_end
AND c.ID IS NULL;
SELECT *
FROM #tab
ORDER BY ID;
EDIT:
A tally table is simply a number table. There are many ways to build one with a subquery:
recursive cte (see the sketch after this list)
ROW_NUMBER() from system table that holds many values (used here)
UNION ALL and CROSS JOIN
VALUES(...)
using OPENJSON (SQL Server 2016+)
...
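For illustration, a minimal sketch of the recursive-CTE option; it reuses the @range_end variable declared above, and the OPTION hint lifts SQL Server's default 100-level recursion cap:
;WITH tally AS
(
    SELECT 1 AS r
    UNION ALL
    SELECT r + 1 FROM tally WHERE r < @range_end
)
SELECT r
FROM tally
OPTION (MAXRECURSION 0);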
TOP 1000 will generate only 1000 records; if you know that you need more, you can use:
SELECT TOP 1000000 r = ROW_NUMBER() OVER (ORDER BY (SELECT 1))
FROM master..spt_values c
CROSS JOIN master..spt_values c2;
Since your number of rows is low, you could just define the data explicitly...
CREATE TABLE Data(ID INT, Cost INT, MaxCost INT);
INSERT INTO Data(ID, Cost, MaxCost) VALUES(2, 200, 300);
INSERT INTO Data(ID, Cost, MaxCost) VALUES(3, 400, 1000);
INSERT INTO Data(ID, Cost, MaxCost) VALUES(6, 20, 100);
and the query...
select *
from (VALUES(1),(2),(3),(4),(5),(6),(7),(8),(9),(10)) RowNums (ID)
left outer join Data on RowNums.ID = Data.ID
The first part defines a column ID with rows 1-10; it then left outer joins to your data. The beauty of this is that it is very readable.
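If the missing rows should actually be inserted with zero cost, a sketch building on the same VALUES list might look like this:
-- insert only the IDs that are not already present
insert into Data (ID, Cost, MaxCost)
select RowNums.ID, 0, 0
from (VALUES(1),(2),(3),(4),(5),(6),(7),(8),(9),(10)) RowNums (ID)
left outer join Data on RowNums.ID = Data.ID
where Data.ID is null;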
I like to Google for new and better ways to do things, and so I stumbled over this post. Well, what worked well in SQL 7 and still works well in SQL 2016 is to just use an outer join and look for NULL values (NULL marks the missing rows):
insert into DestTable (keyCol1,col1,col2,col3...)
select keyCol1,col1,col2,col3,...
from SourceTable as s
left outer join DestTable as d on d.KeyCol1=s.KeyCol1
where d.KeyCol1 is null
and ...
Feel free to test it: wrap your statement in a transaction, delete a few rows, and watch them come back in the select statement that would normally insert the rows into the destination table...
BEGIN TRAN
--> delete a subset of the data, in this case 5 rows
set rowcount 5;
-->delete and show what is deleted
delete from DestTable
OUTPUT deleted.*, 'DELETED' as [Action];
--> Perform the select to see if the rows that are returned match the deleted rows
--insert into DestTable (keyCol1,col1,col2,col3...)
Select keyCol1,col1,col2,col3,...
from SourceTable as s
left outer join DestTable as d on d.KeyCol1=s.KeyCol1
where d.KeyCol1 is null
and ...
ROLLBACK
Another way would be a T-SQL MERGE; look that up if you also need to update and optionally delete...
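For reference, a rough sketch of that MERGE alternative, reusing the placeholder table and column names from the snippets above (extend it with WHEN MATCHED / WHEN NOT MATCHED BY SOURCE clauses if you also need updates or deletes):
MERGE DestTable AS d
USING SourceTable AS s
    ON d.KeyCol1 = s.KeyCol1
WHEN NOT MATCHED BY TARGET THEN
    INSERT (KeyCol1, col1, col2, col3)
    VALUES (s.KeyCol1, s.col1, s.col2, s.col3);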
I have an interesting query I'm trying to figure out. I have a view which is getting a column added to it. This column is pivoted data coming from other tables, formed into a single row. Now, I need to wipe out duplicate entries in this pivoted data. LISTAGG is great for getting the data onto a single row, but I need to make it unique. While I know how to make it unique, I'm tripping up on the fact that correlated sub-queries only go 1 level deep, so I'm not really sure how to get a distinct list of values. I can get it to work just fine if I don't do the distinct. Anyone out there able to work some SQL magic?
Sample data:
drop table test;
drop table test_widget;
create table test (id number, description Varchar2(20));
create table test_widget (widget_id number, test_fk number, widget_type varchar2(20));
insert into test values(1, 'cog');
insert into test values(2, 'wheel');
insert into test values(3, 'spring');
insert into test_widget values(1, 1, 'A');
insert into test_widget values(2, 1, 'A');
insert into test_widget values(3, 1, 'B');
insert into test_widget values(4, 1, 'A');
insert into test_widget values(5, 2, 'C');
insert into test_widget values(6, 2, 'C');
insert into test_widget values(7, 2, 'B');
insert into test_widget values(8, 3, 'A');
insert into test_widget values(9, 3, 'C');
insert into test_widget values(10, 3, 'B');
insert into test_widget values(11, 3, 'B');
insert into test_widget values(12, 3, 'A');
commit;
Here is an example of the query that works, but shows duplicate data:
SELECT A.ID
, A.DESCRIPTION
, (SELECT LISTAGG (WIDGET_TYPE, ', ') WITHIN GROUP (ORDER BY WIDGET_TYPE)
FROM TEST_WIDGET
WHERE TEST_FK = A.ID) widget_types
FROM TEST A
Here is an example of what does NOT work due to the depth of where I try to reference the ID:
SELECT A.ID
, A.DESCRIPTION
, (SELECT LISTAGG (WIDGET_TYPE, ', ') WITHIN GROUP (ORDER BY WIDGET_TYPE)
FROM (SELECT DISTINCT WIDGET_TYPE
FROM TEST_WIDGET
WHERE TEST_FK = A.ID))
WIDGET_TYPES
FROM TEST A
Here is what I want displayed:
1 cog A, B
2 wheel B, C
3 spring A, B, C
If anyone knows this off the top of their head, that would be fantastic! Otherwise, I can post up some sample create statements to help you with dummy data to figure out the query.
You can apply the distinct in a subquery, which also has the join - avoiding the level issue:
SELECT ID
, DESCRIPTION
, LISTAGG (WIDGET_TYPE, ', ')
WITHIN GROUP (ORDER BY WIDGET_TYPE) AS widget_types
FROM (
SELECT DISTINCT A.ID, A.DESCRIPTION, B.WIDGET_TYPE
FROM TEST A
JOIN TEST_WIDGET B
ON B.TEST_FK = A.ID
)
GROUP BY ID, DESCRIPTION
ORDER BY ID;
ID DESCRIPTION WIDGET_TYPES
---------- -------------------- --------------------
1 cog A, B
2 wheel B, C
3 spring A, B, C
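As an aside: if you happen to be on Oracle 19c or later, LISTAGG also accepts DISTINCT directly (not available on older releases), so a sketch like the following should work without the derived table:
SELECT A.ID
     , A.DESCRIPTION
     , (SELECT LISTAGG(DISTINCT WIDGET_TYPE, ', ') WITHIN GROUP (ORDER BY WIDGET_TYPE)
        FROM TEST_WIDGET
        WHERE TEST_FK = A.ID) widget_types
FROM TEST A;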
I was in a unique situation using the Pentaho reports writer and some inconsistent data. The Pentaho writer uses Oracle to query data, but has limitations. The data pieces were unique but not classified in a consistent manner, so I created a nested listagg inside of a left join to present the data the way I wanted to:
left join
(
select staff_id, listagg(thisThing, ' --- '||chr(10) ) within group (order by thisThing) as SCHED_1 from
(
SELECT
staff_id, RPT_STAFF_SHIFTS.ORGANIZATION||': '||listagg(
RPT_STAFF_SHIFTS.DAYS_OF_WEEK
, ',' ) within group (order by BEGIN_DATE desc)
as thisThing
FROM "RPT_STAFF_SHIFTS" where "RPT_STAFF_SHIFTS"."END_DATE" is null
group by staff_id, organization)
group by staff_id
) schedule_1 on schedule_1.staff_id = "RPT_STAFF"."STAFF_ID"
where "RPT_STAFF"."STAFF_ID" ='555555'
This is a different approach from using the nested query, but in some situations it might work better, since it takes the level issue into account when developing the query and adds an extra step to fully concatenate the results.
With the following table definition:
CREATE TABLE Nodes(id INTEGER, child INTEGER);
INSERT INTO Nodes(id, child) VALUES(1, 10);
INSERT INTO Nodes(id, child) VALUES(1, 11);
INSERT INTO Nodes(id, child) VALUES(1, 12);
INSERT INTO Nodes(id, child) VALUES(10, 100);
INSERT INTO Nodes(id, child) VALUES(10, 101);
INSERT INTO Nodes(id, child) VALUES(10, 102);
INSERT INTO Nodes(id, child) VALUES(2, 20);
INSERT INTO Nodes(id, child) VALUES(2, 21);
INSERT INTO Nodes(id, child) VALUES(2, 22);
INSERT INTO Nodes(id, child) VALUES(20, 200);
INSERT INTO Nodes(id, child) VALUES(20, 201);
INSERT INTO Nodes(id, child) VALUES(20, 202);
With the following query:
WITH RECURSIVE members(base, id, level) AS (
SELECT n1.id, n1.id, 0
FROM Nodes n1
LEFT OUTER JOIN Nodes n2 ON n2.child = n1.id
WHERE n2.id IS NULL
UNION
SELECT m.base, n.child, m.level + 1
FROM members m
INNER JOIN Nodes n ON m.id=n.id
)
SELECT m.id, m.level
FROM members m
WHERE m.base IN (1)
Is the outer WHERE clause optimized in a recursive CTE? An alternative that I have considered using is:
WITH RECURSIVE members(id, level) AS (
VALUES (1, 0)
UNION
SELECT n.child, m.level + 1
FROM members m
INNER JOIN Nodes n ON m.id=n.id
)
SELECT m.id, m.level
FROM members m
but it has the problem of not being able to create a view out of it. Therefore, if the performance difference between the two is minimal, I'd prefer to create a view out of the recursive CTE and then just query that.
To be able to apply the WHERE clause to the queries inside the CTE, the database would be required to prove that all values in the first column are unchanged by the recursion and go back to the base query, and, in general, that it is not possible for any filtered-out row to have any children that could show up in the result of the query, or affect the CTE in any other way. Such a prover does not exist.
See restriction 22 of Subquery flattening.
To see why your first query is non-optimal, try running both with UNION ALL instead of just UNION. With the sample data given, the first will return 21 rows while the second returns only 7.
The duplicate rows in the actual first query are subsequently eliminated by performing a sort and duplicate elimination, while this step is not necessary in the actual second query.
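To make the comparison concrete, here is the first query with UNION ALL substituted (a sketch for experimenting; with the sample data it returns 21 rows, while the same substitution in the second query returns only 7):
WITH RECURSIVE members(base, id, level) AS (
    SELECT n1.id, n1.id, 0
    FROM Nodes n1
    LEFT OUTER JOIN Nodes n2 ON n2.child = n1.id
    WHERE n2.id IS NULL
    UNION ALL
    SELECT m.base, n.child, m.level + 1
    FROM members m
    INNER JOIN Nodes n ON m.id = n.id
)
SELECT m.id, m.level
FROM members m
WHERE m.base IN (1);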
Here is the code that will help you understand my question:
create table con ( content_id number);
create table mat ( material_id number, content_id number, resolution number, file_location varchar2(50));
create table con_groups (content_group_id number, content_id number);
insert into con values (99);
insert into mat values (1, 99, 7, 'C:\foo.jpg');
insert into mat values (2, 99, 2, '\\server\xyz.mov');
insert into mat values (3, 99, 5, '\\server2\xyz.wav');
insert into con values (100);
insert into mat values (4, 100, 5, 'C:\bar.png');
insert into mat values (5, 100, 3, '\\server\xyz.mov');
insert into mat values (6, 100, 7, '\\server2\xyz.wav');
insert into con_groups values (10, 99);
insert into con_groups values (10, 100);
commit;
SELECT m.material_id,
(SELECT file_location
FROM (SELECT file_location
FROM mat
WHERE mat.content_id = m.content_id
ORDER BY resolution DESC) special_mats_for_this_content
WHERE rownum = 1) special_mat_file_location
FROM mat m
WHERE m.material_id IN (select material_id
from mat
inner join con on con.content_id = mat.content_id
inner join con_groups on con_groups.content_id = con.content_id
where con_groups.content_group_id = 10);
Please consider the number 10 at the end of the query to be a parameter. In other words this value is just hardcoded in this example; it would change depending on the input.
My question is: Why do I get the error
"M"."CONTENT_ID": invalid identifier
for the nested, correlated subquery? Is there some sort of nesting limit? This subquery needs to be run for every row in the result set because the results will change based on the content_id, which can be different for each row. How can I accomplish this with Oracle?
Not that I'm trying to start a SQL Server vs Oracle discussion, but I come from a SQL Server background and I'd like to point out that the following, equivalent query runs fine on SQL Server:
create table con ( content_id int);
create table mat ( material_id int, content_id int, resolution int, file_location varchar(50));
create table con_groups (content_group_id int, content_id int);
insert into con values (99);
insert into mat values (1, 99, 7, 'C:\foo.jpg');
insert into mat values (2, 99, 2, '\\server\xyz.mov');
insert into mat values (3, 99, 5, '\\server2\xyz.wav');
insert into con values (100);
insert into mat values (4, 100, 5, 'C:\bar.png');
insert into mat values (5, 100, 3, '\\server\xyz.mov');
insert into mat values (6, 100, 7, '\\server2\xyz.wav');
insert into con_groups values (10, 99);
insert into con_groups values (10, 100);
SELECT m.material_id,
(SELECT file_location
FROM (SELECT TOP 1 file_location
FROM mat
WHERE mat.content_id = m.content_id
ORDER BY resolution DESC) special_mats_for_this_content
) special_mat_file_location
FROM mat m
WHERE m.material_id IN (select material_id
from mat
inner join con on con.content_id = mat.content_id
inner join con_groups on con_groups.content_id = con.content_id
where con_groups.content_group_id = 10);
Can you please help me understand why I can do this in SQL Server but not Oracle 9i? If there is a nesting limit, how can I accomplish this in a single select query in Oracle without resorting to looping and/or temporary tables?
Recent versions of Oracle do not have a limit but most older versions of Oracle have a nesting limit of 1 level deep.
This works on all versions:
SELECT (
SELECT *
FROM dual dn
WHERE dn.dummy = do.dummy
)
FROM dual do
This query works in 12c and 18c but does not work in 10g and 11g. (However, there is at least one version of 10g that allowed this query. And there is a patch to enable this behavior in 11g.)
SELECT (
SELECT *
FROM (
SELECT *
FROM dual dn
WHERE dn.dummy = do.dummy
)
WHERE rownum = 1
)
FROM dual do
If necessary, you can work around this limitation with window functions (which you can use in SQL Server too):
SELECT *
FROM (
SELECT m.material_id, ROW_NUMBER() OVER (PARTITION BY content_id ORDER BY resolution DESC) AS rn
FROM mat m
WHERE m.material_id IN
(
SELECT mat.material_id
FROM mat
JOIN con
ON con.content_id = mat.content_id
JOIN con_groups
ON con_groups.content_id = con.content_id
WHERE con_groups.content_group_id = 10
)
)
WHERE rn = 1
@Quassnoi This was the case in Oracle 9. From Oracle 10 onward...
From Oracle Database SQL Reference 10g Release 1 (10.1)
Oracle performs a correlated subquery when a nested subquery references a column from a table referred to in a parent statement any number of levels above the subquery.
From Oracle9i SQL Reference Release 2 (9.2)
Oracle performs a correlated subquery when the subquery references a column from a table referred to in the parent statement.
A subquery in the WHERE clause of a SELECT statement is also called a nested subquery. You can nest up to 255 levels of subqueries in the WHERE clause.
I don't think it works if you have something like select * from (select * from (select * from (....))).
Just select * from TableName alias where colName = (select * from SomeTable where someCol = (select * from SomeTable x where x.id = alias.col)).
Check out http://forums.oracle.com/forums/thread.jspa?threadID=378604
Quassnoi answered my question about nesting, and made a great call by suggesting window analytic functions. Here is the exact query that I need:
SELECT m.material_id, m.content_id,
(SELECT max(file_location) keep (dense_rank first order by resolution desc)
FROM mat
WHERE mat.content_id = m.content_id) special_mat_file_location
FROM mat m
WHERE m.material_id IN (select material_id
from mat
inner join con on con.content_id = mat.content_id
inner join con_groups on con_groups.content_id = con.content_id
where con_groups.content_group_id = 10);
Thanks!