Why do WHERE and HAVING exist as separate clauses in SQL?

I understand the distinction between WHERE and HAVING in a SQL query, but I don't see why they are separate clauses. Couldn't they be combined into a single clause that could handle both aggregated and non-aggregated data?

Here's the rule. If a condition refers to an aggregate function, put that condition in the HAVING clause. Otherwise, use the WHERE clause.
Here's another rule of thumb: HAVING is almost always paired with GROUP BY (though, strictly speaking, most databases also accept HAVING without GROUP BY, treating the whole result as a single group).
The main difference is that WHERE cannot be applied to grouped items (such as SUM(number)) whereas HAVING can. The reason is that WHERE is evaluated before the grouping and HAVING after the grouping is done.
Another difference is that a WHERE condition can refer only to columns of the underlying tables, while a HAVING condition can refer to both columns and select-list aliases (at least in MySQL).
Here's the difference:
SELECT `value` v FROM `table` WHERE `v` > 5;
-- Error #1054 - Unknown column 'v' in 'where clause'
SELECT `value` v FROM `table` HAVING `v` > 5; -- returns 5 rows
This is because the WHERE clause filters data before the select list is evaluated, while the HAVING clause filters it afterwards.
So putting conditions in the WHERE clause is more efficient if the table has many rows.
Try EXPLAIN to see the key difference:
EXPLAIN SELECT `value` v FROM `table` WHERE `value` > 5;
+----+-------------+-------+-------+---------------+-------+---------+------+------+--------------------------+
| id | select_type | table | type  | possible_keys | key   | key_len | ref  | rows | Extra                    |
+----+-------------+-------+-------+---------------+-------+---------+------+------+--------------------------+
|  1 | SIMPLE      | table | range | value         | value | 4       | NULL |    5 | Using where; Using index |
+----+-------------+-------+-------+---------------+-------+---------+------+------+--------------------------+

EXPLAIN SELECT `value` v FROM `table` HAVING `value` > 5;
+----+-------------+-------+-------+---------------+-------+---------+------+------+-------------+
| id | select_type | table | type  | possible_keys | key   | key_len | ref  | rows | Extra       |
+----+-------------+-------+-------+---------------+-------+---------+------+------+-------------+
|  1 | SIMPLE      | table | index | NULL          | value | 4       | NULL |   10 | Using index |
+----+-------------+-------+-------+---------------+-------+---------+------+------+-------------+
You can see that both queries use the index, but the number of rows examined differs: 5 with WHERE versus 10 with HAVING.
So both clauses are needed, especially when a query involves grouping plus additional filters on the aggregated results.
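To make that concrete, here is a minimal sketch (the orders table and its columns are hypothetical, not from the question) where each clause does the job only it can do:

-- WHERE prunes rows before grouping; HAVING prunes groups after aggregation.
SELECT customer_id, SUM(amount) AS total
FROM orders
WHERE status = 'shipped'     -- row-level filter, applied before GROUP BY
GROUP BY customer_id
HAVING SUM(amount) > 1000;   -- group-level filter, applied after aggregation

Pushing the status = 'shipped' test into WHERE means fewer rows ever reach the aggregation step, which is exactly the efficiency argument above.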

This question seems to illustrate a misunderstanding. WHERE and HAVING are each missing up to half of the information necessary to fully process a query.
Consider the following SQL:
drop table if exists foo;

create table foo (
    ID  int,
    bar int
);

insert into foo values (1, 1);

select now() as d, bar as b
from foo
where bar = 1 and d <= now()
having bar = 1 and ID = 1
;
In the where clause, d is not available because the selected items have not been processed to create it yet.
In the having clause, ID has been discarded because it was not selected. In aggregate queries, ID may not even have meaning in the context of multiple rows combined into one; it may also be meaningless when joining different tables into a single result.
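A version of that query that actually runs (a sketch relying on MySQL's extension of letting HAVING reference select-list aliases) moves each condition to the clause that can see it:

select now() as d, bar as b
from foo
where bar = 1 and ID = 1   -- base columns: visible before the select list exists
having d <= now();         -- alias d: only visible after the select list is built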

Could it be done? Sure, but on the back-end it'd do the same as it does now, because you have to aggregate something before you can filter based on that aggregation. Ultimately that's the reason, it's a logical separation of different processes. Why waste resources aggregating records you could have filtered with a WHERE?

The question could only be fully answered by the designers, since it asks about intent. But the implication is that both clauses do the same thing, only against aggregated vs. non-aggregated data. That's not true. "The HAVING clause is typically used together with the GROUP BY clause to filter the results of aggregate values. However, HAVING can be specified without GROUP BY."
As I understand it, the important thing is that "The HAVING clause specifies additional filters that are applied after the WHERE clause filters."
http://technet.microsoft.com/en-us/library/ms179270(v=sql.105).aspx
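For example, a minimal sketch of HAVING without GROUP BY (the table name is hypothetical): the entire result set is treated as a single group:

SELECT COUNT(*) AS order_count
FROM orders
HAVING COUNT(*) > 100;   -- no GROUP BY: the whole table is one group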

Related

SQL Query with part of the key possibly being NULL

I've been working on a SQL query which needs to pull a value with a two-column key, where one of the columns may be null. And if it's null, I want to pick that value only if there is no row with the specific key.
So.
CUSTOM | PLAN | COST
-------+------+-----
VENDCO | LMNK |   50
VENDCO | null |   25
BALLCO | null |   10
I'm trying to run a query that will pull this into one field, i.e., the value of VENDCO at 50 and the value of BALLCO at 10, ignoring the VENDCO row with 25. This would be part of a joined subquery, so I can't use the actual keys of VENDCO/BALLCO etc. Essentially, pick the cost value with the plan if it exists, but the one where it's null if the plan is not there.
It might also be worthwhile to point out that if I "select * from table where PLAN is null" I don't get results -- I have to select where PLAN=''. I'm not sure if that indicates anything weird about the data.
Hope I'm making myself clear.
I think that not exists should do what you want:
select t.*
from mytable t
where plan is not null
   or not exists (
        select 1
        from mytable t1
        where t1.custom = t.custom and t1.plan is not null
      );
Basically this gives priority to rows where plan is not null in groups sharing the same custom.
Demo on DB Fiddle:
CUSTOM | PLAN | COST
:----- | :--- | ---:
VENDCO | LMNK | 50
BALLCO | null | 10
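Side note: since the question mentions that WHERE PLAN IS NULL returns nothing while PLAN = '' does, the blanks may actually be empty strings rather than NULLs. In that case, one untested variation of the same idea is to normalize them with NULLIF first:

select t.*
from mytable t
where nullif(t.plan, '') is not null            -- treat '' the same as NULL
   or not exists (
        select 1
        from mytable t1
        where t1.custom = t.custom
          and nullif(t1.plan, '') is not null
      );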

Self-Join with Natural Join

What is the difference between
select * from degreeprogram NATURAL JOIN degreeprogram ;
and
select * from degreeprogram d1 NATURAL JOIN degreeprogram d2;
in oracle?
I expected them to return the same result set; however, they do not. The second query does what I expect: it joins the two relations using the same-named attributes and so returns the same tuples as stored in degreeprogram. However, the first query is confusing to me: each tuple occurs several times in the result set. What join condition is used here?
Thank you
NATURAL JOIN means join the two tables based on all columns having the same name in both tables.
I imagine that for each column in your table, Oracle is internally writing a condition like:
degreeprogram.column1 = degreeprogram.column1
(which you would not be able to write yourself due to ORA-00918 column ambiguously defined error)
And then, I imagine, Oracle is optimizing that away to just
degreeprogram.column1 is not null
So, you're not exactly getting a CROSS JOIN of your table with itself -- only a CROSS JOIN of those rows having no null columns.
UPDATE: Since this was the selected answer, I will just add from Thorsten Kettner's answer that this behavior is probably a bug on Oracle's part. In 18c, Oracle behaves properly and returns an ORA-00918 error when you try to NATURAL JOIN a table to itself.
The difference between those two statements is that the second explicitly defines a self join on the table, whereas with the first the optimizer is trying to figure out what you really want. On my database, the first statement performs a Cartesian merge join and is not optimized at all, while the second statement has a better explain plan, using a single full table access with index scanning.
I'd call this a bug. This query:
select * from degreeprogram d1 NATURAL JOIN degreeprogram d2;
translates to
select col1, col2, ... -- all columns
from degreeprogram d1
join degreeprogram d2 using (col1, col2, ...)
and gives you all rows from the table where all columns are not null (because using(col) never matches nulls).
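A quick sketch (not from the original post) that shows the null behaviour of using():

-- The row containing NULL silently drops out of the self-join.
create table t (col1 integer);
insert into t values (1);
insert into t values (null);
select count(*) from t d1 join t d2 using (col1);  -- returns 1, not 2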
This query, however:
select * from degreeprogram NATURAL JOIN degreeprogram;
is invalid according to standard SQL, because every table must have a unique name or alias in a query. Oracle lets this pass, but in doing so it should still do something to keep the two table instances apart (e.g. internally create an alias for them). It obviously doesn't, and it multiplies the result by the number of rows in the table. A bug.
A so-called natural join instructs the database to
Find all column names common to both tables (in this case, degreeprogram and degreeprogram, which of course have the same columns.)
Generate a join condition for each pair of matching column names, in the form table1.column1 = table2.column1 (in this case, there will be one for every column in degreeprogram.)
Therefore a query like this
select count(*) from demo natural join demo;
will be transformed into
select count(*) from demo, demo where demo.x = demo.x;
I checked this by creating a table with one column and two rows:
create table demo (x integer);
insert into demo values (1);
insert into demo values (2);
commit;
and then tracing the session:
SQL> alter session set tracefile_identifier='demo_trace';
Session altered.
SQL> alter session set events 'trace [SQL_Compiler.*]';
Session altered.
SQL> select /* nj test */ count(*) from demo natural join demo;
COUNT(*)
----------
4
1 row selected.
SQL> alter session set events 'trace [SQL_Compiler.*] off';
Session altered.
Then in twelve_ora_6196_demo_trace.trc I found this line:
Final query after transformations:******* UNPARSED QUERY IS *******
SELECT COUNT(*) "COUNT(*)" FROM "WILLIAM"."DEMO" "DEMO","WILLIAM"."DEMO" "DEMO" WHERE "DEMO"."X"="DEMO"."X"
and a few lines later:
try to generate single-table filter predicates from ORs for query block SEL$58A6D7F6 (#0)
finally: "DEMO"."X" IS NOT NULL
(This is merely an optimisation on top of the generated query above, as column X is nullable but the join allows the optimiser to infer that only non-null values are required. It doesn't replace the joins.)
Hence the execution plan:
-----------------------------------------------------------------------
| Id | Operation              | Name | Rows | Bytes | Cost | Time     |
-----------------------------------------------------------------------
|  0 | SELECT STATEMENT       |      |      |       |    7 |          |
|  1 |  SORT AGGREGATE        |      |    1 |    13 |      |          |
|  2 |   MERGE JOIN CARTESIAN |      |    4 |    52 |    7 | 00:00:01 |
|  3 |    TABLE ACCESS FULL   | DEMO |    2 |    26 |    3 | 00:00:01 |
|  4 |    BUFFER SORT         |      |    2 |       |    4 | 00:00:01 |
|  5 |     TABLE ACCESS FULL  | DEMO |    2 |       |    2 | 00:00:01 |
-----------------------------------------------------------------------
Query Block Name / Object Alias(identified by operation id):
------------------------------------------------------------
1 - SEL$58A6D7F6
3 - SEL$58A6D7F6 / DEMO_0001#SEL$1
5 - SEL$58A6D7F6 / DEMO_0002#SEL$1
------------------------------------------------------------
Predicate Information:
----------------------
3 - filter("DEMO"."X" IS NOT NULL)
Alternatively, let's see what dbms_utility.expand_sql_text does with it. I'm not quite sure what to make of this given the trace file above, but it shows a similar expansion taking place:
SQL> var result varchar2(1000)
SQL> exec dbms_utility.expand_sql_text('select count(*) from demo natural join demo', :result)
PL/SQL procedure successfully completed.
RESULT
----------------------------------------------------------------------------------------------------------------------------------
SELECT COUNT(*) "COUNT(*)" FROM (SELECT "A2"."X" "X" FROM "WILLIAM"."DEMO" "A3","WILLIAM"."DEMO" "A2" WHERE "A2"."X"="A2"."X") "A1"
Lesson: NATURAL JOIN is evil. Everybody knows this.

Access SQL unique records with latest date including null dates from single table

I have a table with the following sample structure:
Identifier | Latitude | Longitude | ...many columns... | DateWhenStatusObserved | ID |
-----------+----------+-----------+--------------------+------------------------+----+
2823DC012  | 28.76285 | 23.70195  | ...                | 1994/10/28             |  1 |
2823DC012  | 28.76285 | 23.70195  | ...                | 1995/04/05             |  2 |
2822DD030  | 28.76147 | 22.98270  | ...                | NULL                   |  3 |
...
There are many more columns, but these columns do not have to be evaluated, all columns should just be returned from the query.
I would like the SQL query to return only unique records for the Identifier column, with the latest date per unique Identifier. Unfortunately, there are also records where the date is NULL in the DateWhenStatusObserved column, and in many instances the only record for an Identifier (geosite) has a NULL date.
There are already many answers for similar SQL questions such as:
How can I include null values in a MIN or MAX?
SELECT only rows with either the MAX date or NULL
http://bytes.com/topic/access/answers/719627-create-query-evaluate-max-date-recognizing-null-high-value
These are, however, not specific about how exactly one uses an IIF statement with an aggregate Max function to let the NULL-date records pass through while keeping the identifier (geosite) records unique.
I only get non-NULL max-date records returned using a subquery and a combination of Max(IIF()). I finally got a reasonable result from a basic subquery without joins that relied on WHERE clauses, but I get duplicate identifier records from the NULL dates, because I have to use OR instead of AND to get any rows returned.
Here is one of my attempts returning only non-NULL max date records:
SELECT BasicInfoTable.*
FROM Basic_information_WUA AS BasicInfoTable
INNER JOIN
(
    SELECT Identifier,
           MAX(IIF(DateWhenStatusObserved IS NULL, 0, DateWhenStatusObserved)) AS MaxDate
    FROM Basic_information_WUA
    GROUP BY Identifier
) AS Table2
ON BasicInfoTable.Identifier = Table2.Identifier
AND BasicInfoTable.DateWhenStatusObserved = Table2.MaxDate;
So why is this not working for the NULL date cases?
I would appreciate any help with finding the near-optimal query for this problem.
Thanks
You need to apply similar (IS NULL) logic to BasicInfoTable.DateWhenStatusObserved = Table2.MaxDate. The subquery replaces NULL dates with 0, but the outer column still contains NULL, and a NULL never compares equal to anything, so those rows are dropped by the join.
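A sketch of that fix in Access SQL (untested, reusing the same IIF(..., 0, ...) placeholder the question already introduced) applies the identical expression on both sides of the comparison. Access can be picky about expressions inside JOIN ... ON, so the date comparison is moved to the WHERE clause here:

SELECT BasicInfoTable.*
FROM Basic_information_WUA AS BasicInfoTable
INNER JOIN
(
    SELECT Identifier,
           MAX(IIF(DateWhenStatusObserved IS NULL, 0, DateWhenStatusObserved)) AS MaxDate
    FROM Basic_information_WUA
    GROUP BY Identifier
) AS Table2
ON BasicInfoTable.Identifier = Table2.Identifier
WHERE IIF(BasicInfoTable.DateWhenStatusObserved IS NULL, 0, BasicInfoTable.DateWhenStatusObserved) = Table2.MaxDate;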

How can I order the rows inside a table by a column (but not the `SELECT`'s response)?

For example, I want to order a table like this
Foo | Bar
---------
1 | a
5 | d
2 | c
1 | b
2 | a
to this:
Foo | Bar
---------
1 | a
1 | b
2 | a
2 | c
5 | d
(ordered by Foo column)
That's because I only want to select the Bars that have a given Foo, and if the table is already ordered I guess the rows will be faster to select because I won't have to use ORDER BY.
And if it's possible, once sorting by columns Foo, I want to sort the rows which have the same Foo by Bar column.
Of course, if I INSERT or UPDATE to table, it should remain ordered.
In SQL, tables are inherently unordered. This is a very important characteristic of databases. For instance, you can delete a row in the middle of a table, and when a new row is inserted, it uses up the space occupied by the deleted row. This is more efficient than just appending rows to the end of the data.
In other words, the ORDER BY clause is used basically for output purposes only. Okay, I can think of two other situations . . . with LIMIT (or a related clause) and with window functions (which SQLite did not support at the time of writing).
In any case, ordering the data also would not matter for a query such as this:
select bar
from t
where foo = $FOO
The SQL engine does not "know" that the table is ordered. So, it will start at the beginning of the table and do the comparison for each row.
The way to make this more efficient is by building an index on foo. Then you will be able to get the efficiencies that you want.
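In SQLite, for instance, a minimal sketch would be (the table name t comes from the query above; the index name is made up):

-- A composite index on (foo, bar) lets the engine jump straight to the
-- matching foo values and return them already sorted by bar.
CREATE INDEX idx_t_foo_bar ON t (foo, bar);
SELECT bar FROM t WHERE foo = 1 ORDER BY bar;

With the index in place, the ORDER BY costs essentially nothing, which also covers the secondary sort on Bar the question asks about.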

How can I optimize this query?

I have the following query:
SELECT `masters_tp`.*, `masters_cp`.`cp` as cp, `masters_cp`.`punti` as punti
FROM (`masters_tp`)
LEFT JOIN `masters_cp` ON `masters_cp`.`nickname` = `masters_tp`.`nickname`
WHERE `masters_tp`.`stake` = 'report_A'
AND `masters_cp`.`stake` = 'report_A'
ORDER BY `masters_tp`.`tp` DESC, `masters_cp`.`punti` DESC
LIMIT 400;
Is there something wrong with this query that could affect the server memory?
Here is the output of EXPLAIN
+----+-------------+------------+------+---------------+------+---------+------+-------+----------------------------------------------+
| id | select_type | table      | type | possible_keys | key  | key_len | ref  | rows  | Extra                                        |
+----+-------------+------------+------+---------------+------+---------+------+-------+----------------------------------------------+
|  1 | SIMPLE      | masters_cp | ALL  | NULL          | NULL | NULL    | NULL |  8943 | Using where; Using temporary; Using filesort |
|  1 | SIMPLE      | masters_tp | ALL  | NULL          | NULL | NULL    | NULL | 12693 | Using where                                  |
+----+-------------+------------+------+---------------+------+---------+------+-------+----------------------------------------------+
Run the same query prefixed with EXPLAIN and add the output to your question - this will show what indexes you are using and the number of rows being analyzed.
You can see from your EXPLAIN that no indexes are being used, and it's having to look at thousands of rows to get your result. Try adding an index on the columns used to perform the join, e.g. nickname and stake:
ALTER TABLE masters_tp ADD INDEX(nickname),ADD INDEX(stake);
ALTER TABLE masters_cp ADD INDEX(nickname),ADD INDEX(stake);
(I've assumed the columns might have duplicated values, if not, use UNIQUE rather than INDEX). See the MySQL manual for more information.
Replace the "masters_tp.* " bit by explicitly naming only the fields from that table you actually need. Even if you need them all, name them all.
There's actually no reason to do a left join here. You're using your filters to whisk away any leftiness of the join. Try this:
SELECT
`masters_tp`.*,
`masters_cp`.`cp` as cp,
`masters_cp`.`punti` as punti
FROM
`masters_tp`
INNER JOIN `masters_cp` ON
`masters_tp`.`stake` = `masters_cp`.`stake`
and `masters_tp`.`nickname` = `masters_cp`.`nickname`
WHERE
`masters_tp`.`stake` = 'report_A'
ORDER BY
`masters_tp`.`tp` DESC,
`masters_cp`.`punti` DESC
LIMIT 400;
Inner joins tend to be faster than left joins. The query can limit the number of rows that have to be joined using the predicates (a.k.a. the WHERE clause). This means the database is potentially handling a lot fewer rows, which obviously speeds things up.
Additionally, make sure you have a non-clustered index on stake and nickname (in that order).
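In MySQL terms that would be a composite index, something like (the index names are made up):

ALTER TABLE masters_tp ADD INDEX idx_stake_nickname (stake, nickname);
ALTER TABLE masters_cp ADD INDEX idx_stake_nickname (stake, nickname);

A composite index on (stake, nickname) lets the engine filter on stake and complete the join on nickname from the same index.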
It is a simple query. I think everything is OK with it. You can try adding indexes on the 'stake' fields or making the limit lower.