Select count(*) returns 0 but select * returns 2 rows - sql

This is very similar to this question, but with a full minimal example.
I have a simple select query (from a non-empty table) with only left joins. The last left join happens to be with an empty table.
The query returns 2 non-null rows as it should, but simply changing it to a count(*) query makes it return 0 as the count of rows.
The same SQL works properly on both MySQL and MSSQL (after fixing the PK syntax).
Full (re-runnable if the DROP statements are uncommented) example:
-- DROP TABLE first;
-- DROP TABLE second;
-- DROP TABLE empty;
CREATE TABLE first (
    pk int,
    fk int
);
ALTER TABLE first
    ADD CONSTRAINT PK_first PRIMARY KEY (pk);
CREATE TABLE second (
    pk int
);
ALTER TABLE second
    ADD CONSTRAINT PK_second PRIMARY KEY (pk);
CREATE TABLE empty (
    pk int
);
ALTER TABLE first ADD CONSTRAINT FK_first FOREIGN KEY (fk)
    REFERENCES second (pk) ENABLE;
INSERT INTO second (pk) VALUES (5);
INSERT INTO first (pk, fk) VALUES (1, 5);
INSERT INTO first (pk, fk) VALUES (2, 5);
SELECT COUNT(*)
FROM first
LEFT OUTER JOIN second
    ON (first.fk = second.pk)
LEFT OUTER JOIN empty
    ON (1 = 1);
The last query returns 0 on my machine, but changing the count(*) to just * makes it return 2 rows.
Can anyone reproduce this? My db_version is 11.2.0.2.
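For reference, here is the non-aggregated version that does return the two rows (the only change is the select list):
SELECT *
FROM first
LEFT OUTER JOIN second
    ON (first.fk = second.pk)
LEFT OUTER JOIN empty
    ON (1 = 1);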
Explain plan seems to see the 2 rows that should be returned:
----------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 13 | 3 (0)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 13 | | |
| 2 | MERGE JOIN CARTESIAN| | 2 | 26 | 3 (0)| 00:00:01 |
| 3 | VIEW | | 1 | | 2 (0)| 00:00:01 |
| 4 | TABLE ACCESS FULL | EMPTY | 1 | | 2 (0)| 00:00:01 |
| 5 | BUFFER SORT | | 2 | 26 | 3 (0)| 00:00:01 |
| 6 | INDEX FULL SCAN | PK_FIRST | 2 | 26 | 1 (0)| 00:00:01 |
----------------------------------------------------------------------------------
Note
-----
- dynamic sampling used for this statement (level=2)
I don't know much about dynamic sampling, but if I run alter session set OPTIMIZER_DYNAMIC_SAMPLING=0;, the plan shows 82 rows in each step.
Removing the primary keys fixes the problem on Oracle, but that is hardly a proper solution.
Removing the join to the empty table also fixes it, but since it is an outer join with a tautological filter, it should be a no-op.
Is this actually the intended behavior on Oracle for some reason? Or is my server just bugged?
Both MSSQL and MySQL return 2 as the count.
Edit: Round 2
It was enough to add 2 more tables, and the bug also shows up in 11.2.0.4. Can anyone reproduce it on more recent Oracle versions?
An online fiddle here.
CREATE TABLE first (
    pk int,
    fk int
);
ALTER TABLE first
    ADD CONSTRAINT PK_first PRIMARY KEY (pk);
CREATE TABLE second (
    pk int,
    fk int
);
ALTER TABLE second
    ADD CONSTRAINT PK_second PRIMARY KEY (pk);
CREATE TABLE third (
    pk int,
    fk int
);
ALTER TABLE third
    ADD CONSTRAINT PK_third PRIMARY KEY (pk);
CREATE TABLE fourth (
    pk int
);
ALTER TABLE fourth
    ADD CONSTRAINT PK_fourth PRIMARY KEY (pk);
CREATE TABLE empty (
    pk int
);
ALTER TABLE first ADD CONSTRAINT FK_first FOREIGN KEY (fk)
    REFERENCES second (pk) ENABLE;
ALTER TABLE second ADD CONSTRAINT FK_second FOREIGN KEY (fk)
    REFERENCES third (pk) ENABLE;
ALTER TABLE third ADD CONSTRAINT FK_third FOREIGN KEY (fk)
    REFERENCES fourth (pk) ENABLE;
INSERT INTO fourth (pk) VALUES (50);
INSERT INTO third (pk, fk) VALUES (10, 50);
INSERT INTO third (pk, fk) VALUES (11, 50);
INSERT INTO second (pk, fk) VALUES (5, 10);
INSERT INTO second (pk, fk) VALUES (6, 10);
INSERT INTO first (pk, fk) VALUES (1, 5);
INSERT INTO first (pk, fk) VALUES (2, 5);
SELECT COUNT(*)
FROM first
LEFT OUTER JOIN second
    ON (first.fk = second.pk)
LEFT OUTER JOIN third
    ON (first.pk = third.pk)
LEFT OUTER JOIN fourth
    ON (third.fk = fourth.pk)
LEFT OUTER JOIN empty
    ON (1 = 1);
Anyway, the consensus seems to be that this is a bug in obsolete Oracle releases.

11.2.0.2 is a very old version (already EOL) and it looks like it has never even been patched.
The obvious workaround for your bug is the hint no_query_transformation; try:
SELECT --+ no_query_transformation
    COUNT(*)
FROM first
LEFT OUTER JOIN second
    ON (first.fk = second.pk)
LEFT OUTER JOIN empty
    ON (1 = 1);
Update and addition: you can also simply disable join elimination using the hint NO_ELIMINATE_JOIN:
http://sqlfiddle.com/#!4/9cf338/10
SELECT --+ NO_ELIMINATE_JOIN(second)
    COUNT(*)
FROM first
LEFT OUTER JOIN second
    ON (first.fk = second.pk)
LEFT OUTER JOIN empty e
    ON (1 = 1);
or the hidden parameter _optimizer_join_elimination_enabled:
http://sqlfiddle.com/#!4/9cf338/10
SELECT --+ opt_param('_optimizer_join_elimination_enabled' 'false')
    COUNT(*)
FROM first
LEFT OUTER JOIN second
    ON (first.fk = second.pk)
LEFT OUTER JOIN third
    ON (first.pk = third.pk)
LEFT OUTER JOIN fourth
    ON (third.fk = fourth.pk)
LEFT OUTER JOIN empty
    ON (1 = 1);
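To verify that a workaround actually changed the plan (i.e. that the join to second is no longer eliminated), one can inspect the plan; a sketch using the standard DBMS_XPLAN package, shown here with the first workaround on the two-table query:
EXPLAIN PLAN FOR
SELECT --+ no_query_transformation
    COUNT(*)
FROM first
LEFT OUTER JOIN second
    ON (first.fk = second.pk)
LEFT OUTER JOIN empty
    ON (1 = 1);
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);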

Related

SQL N:M query merging results by condition flag in intermediate table

[First of all, if this is a duplicate, sorry; I couldn't find an answer for this, as this is a strange workaround for an ORM limitation, and I'm clearly a newbie at SQL]
Domain requirements:
A brigade must be composed of one user (the commissar) and, optionally, one and only one assistant (1:1)
A user can only be part of one brigade (1:1)
CREATE TABLE Users
(
    id SERIAL PRIMARY KEY,
    username VARCHAR(100) NOT NULL UNIQUE,
    password VARCHAR(100) NOT NULL
);
CREATE TABLE Brigades
(
    id SERIAL PRIMARY KEY,
    name VARCHAR(100) NOT NULL
);
-- N:M relationship with a flag inside which determines whether that user is a commissar
CREATE TABLE Brigade_User
(
    brigade_id INT NOT NULL REFERENCES Brigades(id)
        ON DELETE CASCADE
        ON UPDATE CASCADE,
    user_id INT NOT NULL REFERENCES Users(id)
        ON DELETE CASCADE
        ON UPDATE CASCADE,
    is_commissar BOOLEAN NOT NULL,
    PRIMARY KEY(brigade_id, user_id)
);
Ideally, as the relations are 1:1, the Brigade_User intermediate table could be dropped and a Brigades table with two foreign keys could be created instead (this is not supported by the Diesel Rust ORM, so I think I'm stuck with the first approach):
CREATE TABLE Brigades
(
    id SERIAL PRIMARY KEY,
    name VARCHAR(100) NOT NULL,
    -- 1:1
    commisar_id INT NOT NULL REFERENCES Users(id)
        ON DELETE CASCADE
        ON UPDATE CASCADE,
    -- 1:1, optional, hence nullable
    assistant_id INT REFERENCES Users(id)
        ON DELETE CASCADE
        ON UPDATE CASCADE
);
An example...
> SELECT * FROM brigade_user LEFT JOIN brigades ON brigade_user.brigade_id = brigades.id;
brigade_id | user_id | is_commissar | id | name
------------+---------+--------------+----+------------------
1 | 1 | t | 1 | Patrulla gatuna
1 | 2 | f | 1 | Patrulla gatuna
2 | 3 | t | 2 | Patrulla perruna
2 | 4 | f | 2 | Patrulla perruna
3 | 6 | t | 3 | Patrulla canina
3 | 5 | f | 3 | Patrulla canina
(6 rows)
Is it possible to make a query which returns a table like this?
brigade_id | commissar_id | assistant_id | name
-----------+--------------+--------------+--------------------
1 | 1 | 2 | Patrulla gatuna
2 | 3 | 4 | Patrulla perruna
3 | 6 | 5 | Patrulla canina
See that each pair of rows has been merged into one (remember, a brigade is composed of one commissar and, optionally, one assistant) depending on the flag.
Could this model be improved (bearing in mind the limitation on multiple foreign keys referencing the same table, discussed here)?
Try the following:
with cte as
(
    SELECT A.brigade_id, A.user_id, A.is_commissar, B.name
    FROM brigade_user A
    LEFT JOIN brigades B ON A.brigade_id = B.id
)
select C1.brigade_id,
       C1.user_id as commissar_id,
       C2.user_id as assistant_id,
       C1.name
from cte C1
left join cte C2
    on C1.brigade_id = C2.brigade_id
    and C1.user_id <> C2.user_id
where C1.is_commissar = true
See a demo from here.
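An alternative sketch using conditional aggregation instead of a self-join (PostgreSQL's FILTER clause; it relies on the stated rule that each brigade has at most one commissar and one assistant):
select b.id as brigade_id,
       -- at most one row per brigade has each flag value, so MAX picks it out
       max(bu.user_id) filter (where bu.is_commissar) as commissar_id,
       max(bu.user_id) filter (where not bu.is_commissar) as assistant_id,
       b.name
from brigades b
left join brigade_user bu on bu.brigade_id = b.id
group by b.id, b.name;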

Error "duplicate key value violates unique constraint" while updating multiple rows

I created a table in PostgreSQL and Oracle as
CREATE TABLE temp (
    seqnr smallint NOT NULL,
    defn_id int NOT NULL,
    attr_id int NOT NULL,
    input CHAR(50) NOT NULL,
    CONSTRAINT pk_id PRIMARY KEY (defn_id, attr_id, seqnr)
);
This temp table has (defn_id, attr_id, seqnr) as a whole as its primary key!
Then I inserted the record in the temp table as
INSERT INTO temp (seqnr, defn_id, attr_id, input) VALUES (1, 100, 100, 'test1');
INSERT INTO temp (seqnr, defn_id, attr_id, input) VALUES (2, 100, 100, 'test2');
INSERT INTO temp (seqnr, defn_id, attr_id, input) VALUES (3, 100, 100, 'test3');
INSERT INTO temp (seqnr, defn_id, attr_id, input) VALUES (4, 100, 100, 'test4');
INSERT INTO temp (seqnr, defn_id, attr_id, input) VALUES (5, 100, 100, 'test5');
in both Oracle and Postgres!
The table now contains:
seqnr | defn_id | attr_id | input
------+---------+---------+-------
    1 |     100 |     100 | test1
    2 |     100 |     100 | test2
    3 |     100 |     100 | test3
    4 |     100 |     100 | test4
    5 |     100 |     100 | test5
When I run the command:
UPDATE temp SET seqnr=seqnr+1
WHERE defn_id = 100 AND attr_id = 100 AND seqnr >= 1;
In Oracle it updates 5 rows and the output is:
seqnr | defn_id | attr_id | input
------+---------+---------+-------
    2 |     100 |     100 | test1
    3 |     100 |     100 | test2
    4 |     100 |     100 | test3
    5 |     100 |     100 | test4
    6 |     100 |     100 | test5
But PostgreSQL raises an error:
ERROR: duplicate key value violates unique constraint "pk_id"
DETAIL: Key (defn_id, attr_id, seqnr)=(100, 100, 2) already exists.
Why does this happen and how can I replicate the same result in Postgres as Oracle?
Or how can the same result be achieved in Postgres without any errors?
UNIQUE and PRIMARY KEY constraints are checked immediately (for each row) unless they are defined DEFERRABLE, which is the solution you demand.
ALTER TABLE temp
DROP CONSTRAINT pk_id
, ADD CONSTRAINT pk_id PRIMARY KEY (defn_id, attr_id, seqnr) DEFERRABLE
;
Then your UPDATE just works.
db<>fiddle here
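If you prefer the constraint to stay checked immediately by default and only want to defer it when needed, the check can be deferred per transaction with SET CONSTRAINTS; a minimal sketch (this still requires the constraint to be declared DEFERRABLE, as above):
BEGIN;
SET CONSTRAINTS pk_id DEFERRED;  -- check only at commit, within this transaction
UPDATE temp SET seqnr = seqnr + 1
WHERE defn_id = 100 AND attr_id = 100 AND seqnr >= 1;
COMMIT;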
This comes at a cost, though. The manual:
Note that deferrable constraints cannot be used as conflict
arbitrators in an INSERT statement that includes an ON CONFLICT DO UPDATE clause.
And for FOREIGN KEY constraints:
The referenced columns must be the columns of a non-deferrable unique
or primary key constraint in the referenced table.
And:
When a UNIQUE or PRIMARY KEY constraint is not deferrable,
PostgreSQL checks for uniqueness immediately whenever a row is
inserted or modified. The SQL standard says that uniqueness should be
enforced only at the end of the statement; this makes a difference
when, for example, a single command updates multiple key values. To
obtain standard-compliant behavior, declare the constraint as
DEFERRABLE but not deferred (i.e., INITIALLY IMMEDIATE). Be aware
that this can be significantly slower than immediate uniqueness
checking.
See:
Constraint defined DEFERRABLE INITIALLY IMMEDIATE is still DEFERRED?
I would avoid a DEFERRABLE PK if at all possible. Maybe you can work around the demonstrated problem? This usually works:
UPDATE temp t
SET seqnr = t.seqnr + 1
FROM (
SELECT defn_id, attr_id, seqnr
FROM temp
WHERE defn_id = 100 AND attr_id = 100 AND seqnr >= 1
ORDER BY defn_id, attr_id, seqnr DESC
) o
WHERE (t.defn_id, t.attr_id, t.seqnr)
= (o.defn_id, o.attr_id, o.seqnr);
db<>fiddle here
But there are no guarantees as ORDER BY is not specified for UPDATE in Postgres.
Related:
UPDATE with ORDER BY

Making combinations of attributes unique in PostgreSQL

There is an option in PostgreSQL where we can define a constraint such that multiple attributes of a table are unique together:
UNIQUE (A, B, C)
Is it possible to take attributes from multiple tables and make their combined values unique in some way?
Edit:
Table 1: List of Book
Attributes: ID, Title, Year, Publisher
Table 2: List of Author
Attributes: Name, ID
Table 3: Written By: Relation between Book and Author
Attributes: Book_ID, Author_ID
Now I have a situation where I don't want (Title, Year, Publisher, Authors) to get repeated in my entire database.
There are 3 solutions to this problem:
1. You add a column "authorID" to the table "book" as a foreign key, and then add the UNIQUE constraint to "book". Alternatively, a foreign key on the two columns (bookID, authorID) could reference the table bookAuthor.
2. You create a trigger on insert on the table "book" which checks whether the combination exists, and does not insert if it does. You will find a working example of this option below.
Whilst working on this option I realised that the JOIN to WrittenBy must be done on Title and not ID; otherwise we could record the same book as many times as we like just by using a new ID. The problem with using the title is that the slightest change in spelling or punctuation means that it is treated as a new title.
In the example the 3rd insert fails because the book already exists. In the 4th I have left 2 spaces in "Tom  Sawyer" and it is accepted as a different title.
Also, as we use a join to find the author, the real effect of our rule is exactly the same as if we had a UNIQUE constraint on the table books over the columns Title, Year and Publisher. This means that all that I have coded is a waste of time.
We thus decide, after coding it, that this option is not effective.
3. We could create a fourth table with the 4 columns and a UNIQUE constraint on all 4 (a minimal sketch appears at the end of this answer). This seems a heavy solution compared to option 1.
CREATE TABLE Book (
    ID int primary key,
    Title varchar(25),
    Year int,
    Publisher varchar(10)
);
CREATE TABLE Author (
    ID int primary key,
    Name varchar(10)
);
CREATE TABLE WrittenBy (
    Book_ID int primary key,
    Titlew varchar(25),
    Author_ID int
);
CREATE FUNCTION book_insert_trigger_function()
RETURNS TRIGGER
LANGUAGE PLPGSQL
AS $$
DECLARE
    authID INTEGER;
    coun INTEGER;
BEGIN
    IF pg_trigger_depth() <> 1 THEN
        RETURN NEW;
    END IF;
    SELECT MAX(Author_ID) INTO authID
    FROM WrittenBy w
    WHERE w.Titlew = NEW.Title;
    SELECT COUNT(*) INTO coun
    FROM Book b
    LEFT JOIN WrittenBy w ON b.Title = w.Titlew
    WHERE NEW.year = b.year
      AND NEW.title = b.title
      AND NEW.publisher = b.publisher
      AND authID = COALESCE(w.Author_ID, authID);
    IF coun > 0 THEN
        RETURN null; -- this means that we do not insert
    ELSE
        RETURN NEW;
    END IF;
END;
$$;
CREATE TRIGGER book_insert_trigger
BEFORE INSERT
ON Book
FOR EACH ROW
EXECUTE PROCEDURE book_insert_trigger_function();
INSERT INTO WrittenBy VALUES
(1,'Tom Sawyer',1),
(2,'Huckleberry Finn',1);
INSERT INTO Book VALUES (1,'Tom Sawyer',1950,'Classics');
INSERT INTO Book VALUES (2,'Huckleberry Finn',1950,'Classics');
INSERT INTO Book VALUES (3,'Tom Sawyer',1950,'Classics');
INSERT INTO Book VALUES (3,'Tom  Sawyer',1950,'Classics');
SELECT *
FROM Book b
LEFT JOIN WrittenBy w on w.Titlew = b.Title
LEFT JOIN Author a on w.author_ID = a.ID;
>
> id | title | year | publisher | book_id | titlew | author_id | id | name
> -: | :--------------- | ---: | :-------- | ------: | :--------------- | --------: | ---: | :---
> 1 | Tom Sawyer | 1950 | Classics | 1 | Tom Sawyer | 1 | null | null
> 2 | Huckleberry Finn | 1950 | Classics | 2 | Huckleberry Finn | 1 | null | null
> 3 | Tom  Sawyer | 1950 | Classics | null | null | null | null | null
>
db<>fiddle here
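For completeness, a minimal sketch of the fourth-table option (option 3 above); the table and constraint names are hypothetical:
CREATE TABLE BookAuthorUniq (
    Title varchar(25),
    Year int,
    Publisher varchar(10),
    Author_ID int,
    -- the whole combination must be unique
    CONSTRAINT uq_book_author UNIQUE (Title, Year, Publisher, Author_ID)
);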

Result of query as column value

I've got three tables:
Lessons:
CREATE TABLE lessons (
    id SERIAL PRIMARY KEY,
    title text NOT NULL,
    description text NOT NULL,
    vocab_count integer NOT NULL
);
+----+------------+------------------+-------------+
| id | title | description | vocab_count |
+----+------------+------------------+-------------+
| 1 | lesson_one | this is a lesson | 3 |
| 2 | lesson_two | another lesson | 2 |
+----+------------+------------------+-------------+
Lesson_vocabulary:
CREATE TABLE lesson_vocabulary (
    lesson_id integer REFERENCES lessons(id),
    vocabulary_id integer REFERENCES vocabulary(id)
);
+-----------+---------------+
| lesson_id | vocabulary_id |
+-----------+---------------+
| 1 | 1 |
| 1 | 2 |
| 1 | 3 |
| 2 | 2 |
| 2 | 4 |
+-----------+---------------+
Vocabulary:
CREATE TABLE vocabulary (
    id integer PRIMARY KEY,
    hiragana text NOT NULL,
    reading text NOT NULL,
    meaning text[] NOT NULL
);
Each lesson contains multiple vocabulary, and each vocabulary can be included in multiple lessons.
How can I get the vocab_count column of the lessons table to be calculated and updated whenever I add more rows to the lesson_vocabulary table? Is this possible, and how would I go about doing this?
Thanks
You can use SQL triggers to serve your purpose. This would be similar to a MySQL after-insert trigger which updates another table's column.
The trigger would look somewhat like this. I am using Oracle syntax, but there would just be minor tweaks for any other implementation.
CREATE TRIGGER vocab_trigger
AFTER INSERT ON lesson_vocabulary
-- statement-level trigger (no FOR EACH ROW): in Oracle a row-level trigger
-- could not query lesson_vocabulary itself without raising ORA-04091
-- (mutating table)
BEGIN
    FOR lesson_cur IN (SELECT lesson_id, COUNT(vocabulary_id) voc_cnt
                       FROM lesson_vocabulary
                       GROUP BY lesson_id) LOOP
        UPDATE lessons
        SET vocab_count = lesson_cur.voc_cnt
        WHERE id = lesson_cur.lesson_id;
    END LOOP;
END;
It's better to create a view that calculates that (and get rid of the column in the lessons table):
select l.*, lv.vocab_count
from lessons l
left join (
select lesson_id, count(*)
from lesson_vocabulary
group by lesson_id
) as lv(lesson_id, vocab_count) on l.id = lv.lesson_id
If you really want to update the lessons table each time the lesson_vocabulary changes, you can run an UPDATE statement like this in a trigger:
update lessons l
set vocab_count = t.cnt
from (
select lesson_id, count(*) as cnt
from lesson_vocabulary
group by lesson_id
) t
where t.lesson_id = l.id;
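A sketch of wiring that statement into a trigger (PostgreSQL; the function and trigger names are made up, and only INSERT is handled, as in the question):
CREATE FUNCTION lesson_vocab_count_sync() RETURNS trigger
LANGUAGE plpgsql
AS $$
BEGIN
    -- recompute the counts once per statement, not once per row
    UPDATE lessons l
    SET vocab_count = t.cnt
    FROM (
        SELECT lesson_id, count(*) AS cnt
        FROM lesson_vocabulary
        GROUP BY lesson_id
    ) t
    WHERE t.lesson_id = l.id;
    RETURN NULL; -- AFTER trigger: the return value is ignored
END;
$$;
CREATE TRIGGER lesson_vocab_count_sync_trg
AFTER INSERT ON lesson_vocabulary
FOR EACH STATEMENT
EXECUTE PROCEDURE lesson_vocab_count_sync();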
I would recommend using a query for this information:
select l.*,
       (select count(*)
        from lesson_vocabulary lv
        where lv.lesson_id = l.id
       ) as vocabulary_cnt
from lessons l;
With an index on lesson_vocabulary(lesson_id), this should be quite fast.
I recommend this over an update, because the data remains correct.
I recommend this over a trigger, because it is simpler.
I recommend this over a subquery with aggregation because it should be faster, particularly if you are filtering on the lessons table.
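A sketch of the supporting index mentioned above (the index name is made up):
CREATE INDEX lesson_vocabulary_lesson_id_idx
    ON lesson_vocabulary (lesson_id);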

SQL Query 2 tables null results

I was asked this question in an interview:
From the 2 tables below, write a query to pull customers with no sales orders.
How many ways are there to write this query, and which would have the best performance?
Table 1: Customer - CustomerID
Table 2: SalesOrder - OrderID, CustomerID, OrderDate
Query:
SELECT *
FROM Customer C
RIGHT OUTER JOIN SalesOrder SO ON C.CustomerID = SO.CustomerID
WHERE SO.OrderID = NULL
Is my query correct and are there other ways to write the query and get the same results?
Answering for MySQL instead of SQL Server, because you tagged the question with SQL Server only later; I figured that, since this is an interview question, it wouldn't matter which DBMS it is about. Note though, that the queries I wrote are standard SQL; they should run in every RDBMS out there. How each RDBMS handles those queries is another issue, though.
I wrote this little procedure for you, to have a test case. It creates the tables customers and orders like you specified, and I added primary keys and foreign keys, like one usually would. No other indexes, as every column worth indexing here is already a primary key. 250 customers are created, 100 of them made an order (though, out of convenience, none of them twice or more). A dump of the data follows; I posted the script just in case you want to play around a little by increasing the numbers.
delimiter $$
create procedure fill_table()
begin
    create table customers(customerId int primary key) engine=innodb;
    -- fill customers with ids 1..250, using a user variable as loop counter
    set @x = 1;
    while (@x <= 250) do
        insert into customers values(@x);
        set @x := @x + 1;
    end while;
    create table orders(orderId int auto_increment primary key,
        customerId int,
        orderDate timestamp,
        foreign key fk_customer (customerId) references customers(customerId)
    ) engine=innodb;
    -- give a random 100 of the 250 customers one order each
    insert into orders(customerId, orderDate)
    select
        customerId,
        now() - interval customerId day
    from customers
    order by rand()
    limit 100;
end $$
delimiter ;
call fill_table();
For me, this resulted in this:
CREATE TABLE `customers` (
`customerId` int(11) NOT NULL,
PRIMARY KEY (`customerId`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
INSERT INTO `customers` VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10),(11),(12),(13),(14),(15),(16),(17),(18),(19),(20),(21),(22),(23),(24),(25),(26),(27),(28),(29),(30),(31),(32),(33),(34),(35),(36),(37),(38),(39),(40),(41),(42),(43),(44),(45),(46),(47),(48),(49),(50),(51),(52),(53),(54),(55),(56),(57),(58),(59),(60),(61),(62),(63),(64),(65),(66),(67),(68),(69),(70),(71),(72),(73),(74),(75),(76),(77),(78),(79),(80),(81),(82),(83),(84),(85),(86),(87),(88),(89),(90),(91),(92),(93),(94),(95),(96),(97),(98),(99),(100),(101),(102),(103),(104),(105),(106),(107),(108),(109),(110),(111),(112),(113),(114),(115),(116),(117),(118),(119),(120),(121),(122),(123),(124),(125),(126),(127),(128),(129),(130),(131),(132),(133),(134),(135),(136),(137),(138),(139),(140),(141),(142),(143),(144),(145),(146),(147),(148),(149),(150),(151),(152),(153),(154),(155),(156),(157),(158),(159),(160),(161),(162),(163),(164),(165),(166),(167),(168),(169),(170),(171),(172),(173),(174),(175),(176),(177),(178),(179),(180),(181),(182),(183),(184),(185),(186),(187),(188),(189),(190),(191),(192),(193),(194),(195),(196),(197),(198),(199),(200),(201),(202),(203),(204),(205),(206),(207),(208),(209),(210),(211),(212),(213),(214),(215),(216),(217),(218),(219),(220),(221),(222),(223),(224),(225),(226),(227),(228),(229),(230),(231),(232),(233),(234),(235),(236),(237),(238),(239),(240),(241),(242),(243),(244),(245),(246),(247),(248),(249),(250);
CREATE TABLE `orders` (
`orderId` int(11) NOT NULL AUTO_INCREMENT,
`customerId` int(11) DEFAULT NULL,
`orderDate` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
PRIMARY KEY (`orderId`),
KEY `fk_customer` (`customerId`),
CONSTRAINT `orders_ibfk_1` FOREIGN KEY (`customerId`) REFERENCES `customers` (`customerId`)
) ENGINE=InnoDB AUTO_INCREMENT=128 DEFAULT CHARSET=utf8;
INSERT INTO `orders` VALUES (1,247,'2013-06-24 19:50:07'),(2,217,'2013-07-24 19:50:07'),(3,8,'2014-02-18 20:50:07'),(4,40,'2014-01-17 20:50:07'),(5,52,'2014-01-05 20:50:07'),(6,80,'2013-12-08 20:50:07'),(7,169,'2013-09-10 19:50:07'),(8,135,'2013-10-14 19:50:07'),(9,115,'2013-11-03 20:50:07'),(10,225,'2013-07-16 19:50:07'),(11,112,'2013-11-06 20:50:07'),(12,243,'2013-06-28 19:50:07'),(13,158,'2013-09-21 19:50:07'),(14,24,'2014-02-02 20:50:07'),(15,214,'2013-07-27 19:50:07'),(16,25,'2014-02-01 20:50:07'),(17,245,'2013-06-26 19:50:07'),(18,182,'2013-08-28 19:50:07'),(19,166,'2013-09-13 19:50:07'),(20,69,'2013-12-19 20:50:07'),(21,85,'2013-12-03 20:50:07'),(22,44,'2014-01-13 20:50:07'),(23,103,'2013-11-15 20:50:07'),(24,19,'2014-02-07 20:50:07'),(25,33,'2014-01-24 20:50:07'),(26,102,'2013-11-16 20:50:07'),(27,41,'2014-01-16 20:50:07'),(28,94,'2013-11-24 20:50:07'),(29,43,'2014-01-14 20:50:07'),(30,150,'2013-09-29 19:50:07'),(31,218,'2013-07-23 19:50:07'),(32,131,'2013-10-18 19:50:07'),(33,77,'2013-12-11 20:50:07'),(34,2,'2014-02-24 20:50:07'),(35,45,'2014-01-12 20:50:07'),(36,230,'2013-07-11 19:50:07'),(37,101,'2013-11-17 20:50:07'),(38,31,'2014-01-26 20:50:07'),(39,56,'2014-01-01 20:50:07'),(40,176,'2013-09-03 19:50:07'),(41,223,'2013-07-18 19:50:07'),(42,145,'2013-10-04 19:50:07'),(43,26,'2014-01-31 20:50:07'),(44,62,'2013-12-26 20:50:07'),(45,195,'2013-08-15 19:50:07'),(46,153,'2013-09-26 19:50:07'),(47,179,'2013-08-31 19:50:07'),(48,104,'2013-11-14 20:50:07'),(49,7,'2014-02-19 20:50:07'),(50,209,'2013-08-01 19:50:07'),(51,86,'2013-12-02 20:50:07'),(52,110,'2013-11-08 20:50:07'),(53,204,'2013-08-06 19:50:07'),(54,187,'2013-08-23 19:50:07'),(55,114,'2013-11-04 20:50:07'),(56,38,'2014-01-19 20:50:07'),(57,236,'2013-07-05 19:50:07'),(58,79,'2013-12-09 20:50:07'),(59,96,'2013-11-22 20:50:07'),(60,37,'2014-01-20 20:50:07'),(61,207,'2013-08-03 19:50:07'),(62,22,'2014-02-04 20:50:07'),(63,120,'2013-10-29 20:50:07'),(64,200,'2013-08-10 19:50:07'),(65,51,'2014-01-06 20:50:07'),(66,181,'2013-08-29 19:50:07'),(67,4,'2014-02-22 20:50:07'),(68,123,'2013-10-26 19:50:07'),(69,108,'2013-11-10 20:50:07'),(70,55,'2014-01-02 20:50:07'),(71,76,'2013-12-12 20:50:07'),(72,6,'2014-02-20 20:50:07'),(73,18,'2014-02-08 20:50:07'),(74,211,'2013-07-30 19:50:07'),(75,53,'2014-01-04 20:50:07'),(76,216,'2013-07-25 19:50:07'),(77,32,'2014-01-25 20:50:07'),(78,74,'2013-12-14 20:50:07'),(79,138,'2013-10-11 19:50:07'),(80,197,'2013-08-13 19:50:07'),(81,221,'2013-07-20 19:50:07'),(82,118,'2013-10-31 20:50:07'),(83,61,'2013-12-27 20:50:07'),(84,28,'2014-01-29 20:50:07'),(85,16,'2014-02-10 20:50:07'),(86,39,'2014-01-18 20:50:07'),(87,3,'2014-02-23 20:50:07'),(88,46,'2014-01-11 20:50:07'),(89,189,'2013-08-21 19:50:07'),(90,59,'2013-12-29 20:50:07'),(91,249,'2013-06-22 19:50:07'),(92,127,'2013-10-22 19:50:07'),(93,47,'2014-01-10 20:50:07'),(94,178,'2013-09-01 19:50:07'),(95,141,'2013-10-08 19:50:07'),(96,188,'2013-08-22 19:50:07'),(97,220,'2013-07-21 19:50:07'),(98,15,'2014-02-11 20:50:07'),(99,175,'2013-09-04 19:50:07'),(100,206,'2013-08-04 19:50:07');
Okay, now to the queries. Three ways came to my mind; I omitted the right join that MDiesel did, because it's actually just another way of writing a left join. It was invented for lazy SQL developers who don't want to switch table names, but instead just rewrite one word.
Anyway, first query:
select
c.*
from
customers c
left join orders o on c.customerId = o.customerId
where o.customerId is null;
Results in an execution plan like this:
+----+-------------+-------+-------+---------------+-------------+---------+------------------+------+--------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+-------+---------------+-------------+---------+------------------+------+--------------------------+
| 1 | SIMPLE | c | index | NULL | PRIMARY | 4 | NULL | 250 | Using index |
| 1 | SIMPLE | o | ref | fk_customer | fk_customer | 5 | wtf.c.customerId | 1 | Using where; Using index |
+----+-------------+-------+-------+---------------+-------------+---------+------------------+------+--------------------------+
Second query:
select
c.*
from
customers c
where c.customerId not in (select distinct customerId from orders);
Results in an execution plan like this:
+----+--------------------+--------+----------------+---------------+-------------+---------+------+------+--------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+--------------------+--------+----------------+---------------+-------------+---------+------+------+--------------------------+
| 1 | PRIMARY | c | index | NULL | PRIMARY | 4 | NULL | 250 | Using where; Using index |
| 2 | DEPENDENT SUBQUERY | orders | index_subquery | fk_customer | fk_customer | 5 | func | 2 | Using index |
+----+--------------------+--------+----------------+---------------+-------------+---------+------+------+--------------------------+
Third query:
select
c.*
from
customers c
where not exists (select 1 from orders o where o.customerId = c.customerId);
Results in an execution plan like this:
+----+--------------------+-------+-------+---------------+-------------+---------+------------------+------+--------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+--------------------+-------+-------+---------------+-------------+---------+------------------+------+--------------------------+
| 1 | PRIMARY | c | index | NULL | PRIMARY | 4 | NULL | 250 | Using where; Using index |
| 2 | DEPENDENT SUBQUERY | o | ref | fk_customer | fk_customer | 5 | wtf.c.customerId | 1 | Using where; Using index |
+----+--------------------+-------+-------+---------------+-------------+---------+------------------+------+--------------------------+
We can see in all execution plans that the customers table is read as a whole, but from the index (the implicit one, as the only column is the primary key). This may change when you select other columns from the table that are not in an index.
The first one seems to be the best. For each row in customers only one row in orders is read. The id column suggests that MySQL can do this in one step, as only indexes are involved.
The second query seems to be the worst (though all 3 queries shouldn't perform too badly). For each row in customers the subquery is executed (the select_type column tells us this).
The third query is not much different in that it uses a dependent subquery, but it should perform better than the second query. Explaining the small differences would lead too far now. If you're interested, here's the manual page that explains what each column and its values mean: EXPLAIN output
Finally: I'd say that the first query will perform best, but as always, in the end one has to measure, measure and measure again.
I can think of two other ways to write this query:
SELECT C.*
FROM Customer C
LEFT OUTER JOIN SalesOrder SO ON C.CustomerID = SO.CustomerID
WHERE SO.CustomerID IS NULL
SELECT C.*
FROM Customer C
WHERE C.CustomerID NOT IN (SELECT CustomerID FROM SalesOrder)
The solutions involving outer joins will perform better than a solution using NOT IN.
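One caveat on NOT IN: if SalesOrder.CustomerID is nullable and the subquery returns a NULL, NOT IN yields no rows at all. A NOT EXISTS variant (a sketch using the same table names) avoids that pitfall:
SELECT C.*
FROM Customer C
WHERE NOT EXISTS (SELECT 1
                  FROM SalesOrder SO
                  WHERE SO.CustomerID = C.CustomerID)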