How do I write a subquery using a JOIN in SQL - sql

I have the following code to retrieve and display the first and last names of writers who are also editors (ED_ID = WRT_ID).
SELECT Writers.WRT_LastName AS LastName, Writers.WRT_FirstName AS FirstName
FROM Writers INNER JOIN Editors ON Editors.ED_ID = Writers.WRT_ID;
It produces the results I want, but how would I write it using a subquery and produce the same results?
I am using Access 2013.
Let me know if I need to provide more info.

Presumably, you don't want duplicate results. I point that out because the subquery is going to be different from the join if the Editors table has duplicates.
Typical ways of writing this use IN or EXISTS. I tend to go with the latter, although the two are usually pretty equivalent performance-wise (on the other hand, NOT EXISTS is preferable to NOT IN semantically).
SELECT Writers.WRT_LastName AS LastName, Writers.WRT_FirstName AS FirstName
FROM Writers
WHERE EXISTS (SELECT 1 FROM Editors WHERE Editors.ED_ID = Writers.WRT_ID);
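For the complementary case (writers who are not editors), a sketch of the NOT EXISTS form, assuming the same tables and columns as above:
SELECT Writers.WRT_LastName AS LastName, Writers.WRT_FirstName AS FirstName
FROM Writers
WHERE NOT EXISTS (SELECT 1 FROM Editors WHERE Editors.ED_ID = Writers.WRT_ID);
Unlike NOT IN, this still behaves as expected if Editors.ED_ID contains NULLs.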

You just need to put the join condition into a subquery in the WHERE clause:
SELECT WRT_LastName as LastName, WRT_FirstName as FirstName
FROM WRITERS
WHERE WRT_ID in (SELECT ED_ID FROM EDITORS)

Related

How exactly is the value of count(*) determined in BigQuery?

I am joining a table of about 70,000 rows with a slightly bigger second table using INNER JOIN EACH. Now count(a.business_column) and count(*) give different results: the former correctly reports back ~70,000, while the latter gives ~200,000. But this only happens when I select count(*) alone; when I select them together, they give the same result (~70,000). How is this possible?
select
count(*)
/*,count(a.business_column)*/
from table_a a
inner join each table_b b
on b.key_column = a.business_column
UPDATE: For a step by step explanation on how this works, see BigQuery flattens when using field with same name as repeated field instead.
To answer the title question: COUNT(*) in BigQuery is always accurate.
The caveat is that in SQL COUNT(*) and COUNT(column) have semantically different meanings - and the sample query can be interpreted in different ways.
See: http://www.xaprb.com/blog/2009/04/08/the-dangerous-subtleties-of-left-join-and-count-in-sql/
There they have this sample query:
select user.userid, count(email.subject)
from user
inner join email on user.userid = email.userid
group by user.userid;
That query turns out to be ambiguous, and the article author changes it to a more explicit one, adding this comment:
But what if that’s not what the author of the query meant? There’s no way to really know. There are several possible intended meanings for the query, and there are several different ways to write the query to express those meanings more clearly. But the original query is ambiguous, for a few reasons. And everyone who reads this query afterwards will end up guessing what the original author meant. “I think I can safely change this to…”
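As a side note on the general point above (independent of BigQuery's repeated-field handling), COUNT(column) skips NULLs while COUNT(*) counts every row, so the two can legitimately differ. A minimal sketch using a hypothetical orders table:
-- shipped_date may be NULL for unshipped orders
SELECT COUNT(*) AS all_rows,
       COUNT(shipped_date) AS shipped_rows
FROM orders;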
COUNT(*) counts the most repeated field in your query; if you want to count full records, use COUNT(0).

In an EXISTS can my JOIN ON use a value from the original select

I have an order system. Users can be attached to different orders as different types of user. They can download documents associated with an order, but documents are only given to certain types of users on the order. I'm having trouble writing the query to check a user's permission to view a document and select the info about the document.
I have the following tables and (applicable) fields:
Docs: DocNo, FileNo
DocAccess: DocNo, UserTypeWithAccess
FileUsers: FileNo, UserType, UserNo
I have the following query:
SELECT Docs.*
FROM Docs
WHERE DocNo = 1000
AND EXISTS (
SELECT * FROM DocAccess
LEFT JOIN FileUsers
ON FileUsers.UserType = DocAccess.UserTypeWithAccess
AND FileUsers.FileNo = Docs.FileNo /* Errors here */
WHERE DocAccess.UserNo = 2000 )
The trouble is that inside the EXISTS subquery, Docs (at Docs.FileNo) is not recognized as a valid table. If I move the second ON condition to the WHERE clause it works, but I would rather limit the initial join than filter rows out after the fact.
I can get around this a couple ways, but this seems like it would be best. Anything I'm missing here? Or is it simply not allowed?
I think this is a limitation of your database engine. In most databases, Docs would be in scope for the entire subquery, including both the ON and WHERE clauses.
However, you do not need to worry about where you put that particular condition. SQL is a descriptive language, not a procedural one: its purpose is to describe the output, and the SQL engine, parser, and compiler should choose the most efficient execution path. That is not always true in practice, but here you can simply move the condition to the WHERE clause and not worry about it.
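A sketch of that suggestion, keeping the table and column references exactly as in the question but moving the correlation on Docs.FileNo from the ON clause into the subquery's WHERE clause:
SELECT Docs.*
FROM Docs
WHERE DocNo = 1000
AND EXISTS (
SELECT * FROM DocAccess
LEFT JOIN FileUsers
ON FileUsers.UserType = DocAccess.UserTypeWithAccess
WHERE FileUsers.FileNo = Docs.FileNo
AND DocAccess.UserNo = 2000 )
Note that with the correlation in the WHERE clause, rows where FileUsers did not match are filtered out anyway, so the LEFT JOIN effectively behaves like an INNER JOIN here.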
I am not clear on why you need to join with FileUsers at all in your subquery.
What is the purpose and idea of the query (in plain English)?
In any case, if you do need to join with FileUsers, then I suggest using an inner join and moving the second filter to the WHERE clause. I don't think you can reference an outer-query column in a JOIN condition inside a subquery - at least I've never seen it used this way before. I believe you can only correlate through the WHERE clause.
You have to use aliases to get this working:
SELECT
doc.*
FROM
Docs doc
WHERE
doc.DocNo = 1000
AND EXISTS (
SELECT
*
FROM
DocAccess acc
LEFT OUTER JOIN
FileUsers usr
ON
usr.UserType = acc.UserTypeWithAccess
AND usr.FileNo = doc.FileNo
WHERE
acc.UserNo = 2000
)
This also makes it more clear which table each field belongs to (think about using the same table twice or more in the same query with different aliases).
If you would like to limit the output to one row, you can use TOP 1:
SELECT TOP 1
doc.*
FROM
Docs doc
INNER JOIN
FileUsers usr
ON
usr.FileNo = doc.FileNo
INNER JOIN
DocAccess acc
ON
acc.UserTypeWithAccess = usr.UserType
WHERE
doc.DocNo = 1000
AND acc.UserNo = 2000
Of course, the second query works a bit differently than the first one (both JOINs are INNER). Depending on your data model you might even leave the TOP 1 out of that query.

Subqueries vs Inner joins - Which one executes faster?

I hope it's ok to make a posting like this.
I have been using SQL for quite some time, and people at work have been using two different ways to return the same rows from a database.
For example:
SELECT Name
FROM Employees
WHERE DepartmentID IN (SELECT DepartmentID
                       FROM Departments
                       WHERE Department LIKE '%Engineering')

SELECT Employees.Name
FROM Departments
INNER JOIN Employees ON Departments.DepartmentID = Employees.DepartmentID
WHERE Departments.Department LIKE '%Engineering'
Both return the same data. People have been telling me that using subqueries is the best way to do it.
My question is this:
Which of these 2 will execute faster? My guess would be the one with the inner join but I may be wrong.
Thanks.
In this case, both are equivalent. However, for the one with the INNER JOIN, if there is more than one department with the same DepartmentID, employees with that DepartmentID will be returned multiple times.
SQL is a declarative language, which means that the language is not supposed to say how the query should be performed, only what result should be found. It is up to the DBMS to work out how to perform it.
A decent SQL database will probably optimize them to both do the same or similar things.
To check if they are doing the same thing, run EXPLAIN on the query.
If they have the same steps, obviously they will take the same amount of time to run. Otherwise, you will see in what ways the database is treating them differently. How the DBMS optimizes a query is implementation dependent, so the best way to check is to use EXPLAIN.
Note: EXPLAIN is an SQL command, just like SELECT etc. See http://www.sql.org/sql-database/postgresql/manual/sql-explain.html.
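For example, a minimal sketch (the exact syntax and output vary by engine; this assumes an engine such as MySQL or PostgreSQL that accepts EXPLAIN directly in front of the query):
EXPLAIN
SELECT Name
FROM Employees
WHERE DepartmentID IN (SELECT DepartmentID
                       FROM Departments
                       WHERE Department LIKE '%Engineering');
Run the same thing against the INNER JOIN version and compare the two plans.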
Assuming they execute with the same steps, write the query in whichever way makes the intention clearest to you.
These two queries are not equivalent. The equivalent to the "IN" query is the following:
SELECT Name
FROM Employees e join
(SELECT distinct DepartmentID
FROM Departments
WHERE Department LIKE '%Engineering'
) d
on e.DepartmentID = d.DepartmentId
In other words, the "IN" version can never return duplicate rows. The join version can return duplicate rows. In this case, I don't think it would because departmentid is probably unique in the table.
The next question is about optimization. SQL engines really should optimize the two versions equivalently; if they do, the choice comes down to readability, and the "IN" version is arguably easier to read.
However, there is one big exception to this. MySQL is definitely suboptimal when it comes to IN. When using that database engine, it is much better to use the join version.
By the way, there are other ways to do this, such as:
select name
from employees e
where exists (select 1 from Departments d
where e.Departmentid = d.DepartmentId and
d.Department LIKE '%Engineering'
)
Which works best... you need to measure on whatever database you are using.
The query optimizers in most modern RDBMS will probably internally work with them the same way, so there won't be a performance difference.
Older RDBMSs might show differences, but this depends on the RDBMS itself.

Why is selecting specified columns, and all, wrong in Oracle SQL?

Say I have a select statement that goes..
select * from animals
That gives a query result with all the columns in the table.
Now, say the 42nd column of the table animals is is_parent, and I want to return it in my results just after gender so I can see it more easily, but I also want all the other columns.
select is_parent, * from animals
This returns ORA-00936: missing expression.
The same statement will work fine in Sybase, and I know that you need to add a table alias to the animals table to get it to work (select is_parent, a.* from animals a), but why does Oracle need a table alias to be able to work out the select?
Actually, it's easy to solve the original problem. You just have to qualify the *.
select is_parent, animals.* from animals;
should work just fine. Aliases for the table names also work.
There is no merit in doing this in production code. We should explicitly name the columns we want rather than using the SELECT * construct.
As for ad hoc querying, get yourself an IDE - SQL Developer, TOAD, PL/SQL Developer, etc - which allows us to manipulate queries and result sets without needing extensions to SQL.
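As far as explicitly naming columns goes, a sketch for the animals example (gender and is_parent come from the question; the other column names are hypothetical):
select gender, is_parent, species, date_of_birth
from animals;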
Good question, I've often wondered this myself but have then accepted it as one of those things...
A similar problem is this:
sql>select geometrie.SDO_GTYPE from ngg_basiscomponent
ORA-00904: "GEOMETRIE"."SDO_GTYPE": invalid identifier
where geometrie is a column of type mdsys.sdo_geometry.
Add an alias and the thing works.
sql>select a.geometrie.SDO_GTYPE from ngg_basiscomponent a;
Lots of good answers so far on why select * shouldn't be used, and they're all perfectly correct. However, I don't think any of them answers the original question of why this particular syntax fails.
Sadly, I think the reason is... "because it doesn't".
I don't think it's anything to do with single-table vs. multi-table queries:
This works fine:
select *
from
person p inner join user u on u.person_id = p.person_id
But this fails:
select p.person_id, *
from
person p inner join user u on u.person_id = p.person_id
While this works:
select p.person_id, p.*, u.*
from
person p inner join user u on u.person_id = p.person_id
It might be some historical compatibility thing with 20-year-old legacy code.
Another for the "but why!!!" bucket, along with "why can't you group by an alias?"
The use case for the alias.* format is as follows
select parent.*, child.col
from parent join child on parent.parent_id = child.parent_id
That is, selecting all the columns from one table in a join, plus (optionally) one or more columns from other tables.
The fact that you can use it to select the same column twice is just a side-effect. There is no real point to selecting the same column twice and I don't think laziness is a real justification.
Select * in the real world is only dangerous when referring to columns by index number after retrieval rather than by name. The bigger problem is inefficiency when not all columns are required in the result set (network traffic, CPU and memory load).
Of course, if you're adding columns from other tables (as is the case in this example), it can be dangerous, as those tables may over time gain columns with matching names; select *, x in that case would fail if a column x is added to a table that previously didn't have it.
why does Oracle need a table alias to be able to work out the select
Teradata requires the same. As both are quite old (maybe better to call them mature :-) DBMSes, this might be for historical reasons.
My usual explanation is: an unqualified * means everything/all columns and the parser/optimizer is simply confused because you request more than everything.

SELECT with ORs including table joins

I've got a database with three tables: Books (with book details, PK is CopyID), Keywords (list of keywords, PK is ID) and KeywordsLink which is the many-many link table between Books and Keywords with the fields ID, BookID and KeywordID.
I'm trying to make an advanced search form in my app where you can search on various criteria. At the moment I have it working with Title, Author and Publisher (all from the Book table). It produces SQL like:
SELECT * FROM Books WHERE Title Like '%Software%' OR Author LIKE '%Spolsky%';
I want to extend this search to also search using tags - basically to add another OR clause to search the tags. I've tried to do this by doing the following
SELECT *
FROM Books, Keywords, Keywordslink
WHERE Title LIKE '%Joel%'
OR (Name LIKE '%good%' AND BookID=Books.CopyID AND KeywordID=Keywords.ID)
I thought using the brackets might separate the second part into its own kind of clause, so the join would only be evaluated in that part, but it doesn't seem to be so. All it gives me is a long list of multiple copies of the one book that satisfies the Title LIKE '%Joel%' bit.
Is there a way of doing this using pure SQL, or would I have to use two SQL statements and combine them in my app (removing duplicates in the process)?
I'm using MySQL at the moment if that matters, but the app uses ODBC and I'm hoping to make it DB agnostic (might even use SQLite eventually or have it so the user can choose what DB to use).
You need to join the 3 tables together, which gives you a tabular result set. You can then check any columns you like, and make sure you get distinct results (i.e. no duplicates).
Like this:
select distinct b.*
from books b
left join keywordslink kl on kl.bookid = b.copyid
left join keywords k on kl.keywordid = k.id
where b.title like '%assd%'
or k.name like '%asdsad%'
You should also try to avoid starting your LIKE values with a percent sign (%), as this means SQL Server can't use an index on that column and has to perform a full (and slow) table scan. Without the leading %, it becomes a "starts with" query, which can use an index.
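To illustrate the LIKE point with the Books table from the question (a sketch; whether an index is actually used depends on the engine and the data):
-- leading wildcard: an index on Title generally cannot be used
SELECT * FROM Books WHERE Title LIKE '%Software%';
-- "starts with" query: an index on Title can be used
SELECT * FROM Books WHERE Title LIKE 'Software%';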
Maybe consider the full-text search options in SQL Server, also.
What you've done here is made a cartesian result set by having the tables joined with the commas but not having any join criteria. Switch your statements to use outer join statements and that should allow you to reference the keywords. I don't know your schema, but maybe something like this would work:
SELECT
*
FROM
Books
LEFT OUTER JOIN KeywordsLink ON KeywordsLink.BookID = Books.CopyID
LEFT OUTER JOIN Keywords ON Keywords.ID = KeywordsLink.KeywordID
WHERE Books.Title LIKE '%JOEL%'
OR Keywords.Name LIKE '%GOOD%'
Use UNION.
(SELECT Books.* FROM <first kind of search>)
UNION
(SELECT Books.* FROM <second kind of search>)
The point is that you could write two (or more) simple and efficient queries instead of one complicated query that tries to do everything at once.
If number of resulting rows is low, then UNION will have very little overhead (and you can use faster UNION ALL if you don't have duplicates or don't care about them).
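A sketch of what that could look like with the tables from the question (search values taken from the earlier examples):
(SELECT Books.* FROM Books
 WHERE Books.Title LIKE '%Joel%')
UNION
(SELECT Books.* FROM Books
 INNER JOIN KeywordsLink ON KeywordsLink.BookID = Books.CopyID
 INNER JOIN Keywords ON Keywords.ID = KeywordsLink.KeywordID
 WHERE Keywords.Name LIKE '%good%');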
SELECT * FROM books WHERE title LIKE'%Joel%' OR bookid IN
(SELECT bookid FROM keywordslink WHERE keywordid IN
(SELECT id FROM keywords WHERE name LIKE '%good%'))
Beware that older versions of MySQL didn't like subselects. I think they've fixed that.
You must also limit the product of the join by specifying something like
Books.FK1 = Keywords.FK1 and
Books.FK2 = Keywordslink.FK2 and
Keywords.FK3 = Keywordslink.FK3
But I don't know your exact data model, so your solution may be slightly different.
I'm not aware of any way to accomplish a "conditional join" in SQL. I think you'll be best served with executing the two statements separately and combining them in the application. This approach is also more likely to stay DB-agnostic.
It looks like Neil Barnwell has covered the answer that I would have given, but I'll add one thing...
Books can have more than one author. If your data model is really designed as your query implies, you might want to consider changing it to accommodate that fact.
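If you do go that route, a sketch of one way to model it, mirroring the existing Keywords/KeywordsLink pattern (table and column names here are hypothetical):
CREATE TABLE Authors (ID INT PRIMARY KEY, Name VARCHAR(100));
CREATE TABLE AuthorsLink (ID INT PRIMARY KEY, BookID INT, AuthorID INT);
Searches on author would then join Books to Authors through AuthorsLink, just as the keyword search joins through KeywordsLink.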