Let's say I have a huge SELECT on a certain table. One value for a column is calculated with complex logic and it's called ColumnA. Now, for another column, I need to take the value from ColumnA and add some other static value to it.
Sample SQL:
select table.id, table.number, complex stuff [ColumnA], [ColumnA] + 10 .. from table ...
The [ColumnA] + 10 is what I'm looking for. The complex stuff is a huge CASE/WHEN block.
Ideas?
If you want to reference a value that's computed in the SELECT clause, you need to move the existing query into a sub-SELECT:
SELECT
    /* Other columns */,
    ColumnA,
    ColumnA + 10 as ColumnB
FROM
    (select table.id, table.number, complex stuff [ColumnA].. from table ...
    ) t
You have to introduce an alias for this table (in the above, t, after the closing bracket) even if you're not going to use it.
(Equivalently - assuming you're using SQL Server 2005 or later - you can move your existing query into a CTE):
;WITH PartialResults as (
select table.id, table.number, complex stuff [ColumnA].. from table ...
)
SELECT /* other columns */, ColumnA, ColumnA+10 as ColumnB from PartialResults
CTEs tend to look cleaner if you've got multiple levels of partial computations being done, i.e. if you now have a calculation that depends on ColumnB to include in your query.
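For instance, here's a minimal sketch of stacking CTEs, with a hypothetical dbo.MyTable and a trivial CASE expression standing in for the real complex logic:
;WITH PartialResults AS (
    SELECT t.id, t.number,
           CASE WHEN t.number > 100 THEN t.number * 2 ELSE t.number END AS ColumnA -- stand-in for the huge CASE/WHEN block
    FROM dbo.MyTable t
),
Step2 AS (
    SELECT id, number, ColumnA, ColumnA + 10 AS ColumnB
    FROM PartialResults
)
SELECT id, number, ColumnA, ColumnB, ColumnB * 5 AS ColumnC -- a further calculation that depends on ColumnB
FROM Step2;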
Unfortunately, in SQL Server 2016:
SELECT 3 AS a, 6/a AS b;
Error: Invalid column name: 'a'.
You could solve this with a subquery and column aliases.
Here's an example:
SELECT MaxId + 10
FROM (SELECT Max(t.Id) As MaxId
      FROM SomeTable t) As SomeTableMaxId
You could:
Do the + 10 in the client code
Write a scalar-valued function to encapsulate the logic for the complex stuff; it may be optimized into a single call (see the sketch after this list).
Copy the complex-stuff logic into the other column's expression; it should get optimized down to a single evaluation.
Use a sub-select to apply the additional calculation
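A minimal sketch of the scalar-function option, assuming a hypothetical dbo.ComplexStuff function wrapping the CASE block and a hypothetical dbo.MyTable:
CREATE FUNCTION dbo.ComplexStuff (@number int)
RETURNS int
AS
BEGIN
    -- stand-in for the real CASE/WHEN block
    RETURN CASE WHEN @number > 100 THEN @number * 2 ELSE @number END;
END;
GO
SELECT t.id,
       t.number,
       dbo.ComplexStuff(t.number)      AS ColumnA,
       dbo.ComplexStuff(t.number) + 10 AS ColumnB
FROM dbo.MyTable t;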
One convenient option to reuse scalar expressions in a query is to use APPLY (or LATERAL in standard SQL):
SELECT
    table.id,
    table.number,
    [ColumnA],
    [ColumnA] + 10
FROM
    table
    CROSS APPLY (SELECT complex stuff [ColumnA]) t
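As a concrete illustration (hypothetical dbo.MyTable, with a simple CASE expression standing in for the complex block):
SELECT t.id,
       t.number,
       x.ColumnA,
       x.ColumnA + 10 AS ColumnB
FROM dbo.MyTable t
CROSS APPLY (SELECT CASE WHEN t.number > 100 THEN t.number * 2 ELSE t.number END) AS x(ColumnA);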
Related
As we all know, the ORDER BY clause is processed after the SELECT clause, so a column alias in the SELECT clause can be used.
However, I find that I can’t use the aliased column in a calculation in the ORDER BY clause.
WITH data AS(
SELECT *
FROM (VALUES
('apple'),
('banana'),
('cherry'),
('date')
) AS x(item)
)
SELECT item AS s
FROM data
-- ORDER BY s; -- OK
-- ORDER BY item + ''; -- OK
ORDER BY s + ''; -- Fails
I know there are alternative ways of doing this particular query, and I know that this is a trivial calculation, but I’m interested in why the column alias doesn’t work when in a calculation.
I have tested in PostgreSQL, MariaDB, SQLite and Oracle, and it works as expected. SQL Server appears to be the odd one out.
The documentation clearly states that:
The column names referenced in the ORDER BY clause must correspond to
either a column or column alias in the select list or to a column
defined in a table specified in the FROM clause without any
ambiguities. If the ORDER BY clause references a column alias from
the select list, the column alias must be used standalone, and not as
a part of some expression in ORDER BY clause:
Technically speaking, your query should work, since the ORDER BY clause is logically evaluated after the SELECT clause and should therefore have access to all expressions declared in the SELECT clause. But without access to the SQL specs I cannot comment on whether it is a limitation of SQL Server or the other RDBMSs implementing it as a bonus feature.
Anyway, you can use CROSS APPLY as a trick: it is part of the FROM clause, so the expressions should be available in all subsequent clauses:
SELECT item
FROM data
CROSS APPLY (SELECT item + '') AS CA(item_for_sort)
ORDER BY item_for_sort
It is simply due to the way expressions are evaluated. A more illustrative example:
;WITH data AS
(
SELECT * FROM (VALUES('apple'),('banana')) AS sq(item)
)
SELECT item AS s
FROM data
ORDER BY CASE WHEN 1 = 1 THEN s END;
This returns the same Invalid column name error. The CASE expression (and the concatenation of s + '' in the simpler case) is evaluated before the alias in the select list is resolved.
One workaround for your simpler case is to append the empty string in the select list:
SELECT
item + '' AS s
...
ORDER BY s;
There are more complex ways, like using a derived table or CTE:
;WITH data AS
(
SELECT * FROM (VALUES('apple'),('banana')) AS sq(item)
),
step2 AS
(
SELECT item AS s FROM data
)
SELECT s FROM step2 ORDER BY s+'';
This is just the way that SQL Server works, and I think you could say "well SQL Server is bad because of this" but SQL Server could also say "what the heck is this use case?" :-)
Most online documentation or tutorials discussing OUTER|CROSS APPLY describe something like:
SELECT columns
FROM table OUTER|CROSS APPLY (SELECT … FROM …);
The subquery is normally a full SELECT … FROM … query.
I must have read somewhere that the subquery doesn’t need a FROM in which case the columns appear to come from the main query:
SELECT columns
FROM table OUTER|CROSS APPLY (SELECT … );
because I have used it routinely as a method to pre-calculate columns.
The question is: what is really happening if the FROM is omitted from the subquery? Is it shorthand for something else? I found that it does not mean the same as selecting FROM the main table.
I have a sample here: http://sqlfiddle.com/#!18/0188f7/4/1
First consider
SELECT o.name, o.type
FROM sys.objects o
Now consider
SELECT o.name, (SELECT o.type) AS type
FROM sys.objects o
A SELECT without a FROM is as though you were selecting from an imaginary single-row table. The above doesn't change the results; the scalar subquery just acts as a correlated subquery and uses the value from the outer query.
APPLY behaves in the same way. References to columns from the outer query are just passed in as correlated parameters. So this is the same as
SELECT o.name, ca.type
FROM sys.objects o
CROSS APPLY (SELECT o.type) AS ca
But APPLY in general is more capable than a scalar subquery in the SELECT, in that it can expand a row out into multiple rows or remove rows from the result.
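For instance, a couple of hedged illustrations against sys.objects: the first expands every outer row into two rows, the second drops rows entirely, neither of which a scalar subquery in the SELECT list can do:
-- expand: each object row becomes two rows
SELECT o.name, v.n
FROM sys.objects o
CROSS APPLY (VALUES (1), (2)) AS v(n);

-- remove: an empty right-hand rowset eliminates the outer row under CROSS APPLY
SELECT o.name
FROM sys.objects o
CROSS APPLY (SELECT 1 AS x WHERE o.type = 'U') AS f;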
What you have mentioned is not a subquery; it is a separate table expression. Whether or not you use a FROM clause in the right-hand expression is not a problem.
If you use a FROM clause in the right-hand table expression, then that clause provides the source of the data for the right-hand table expression.
If you don't use a FROM clause in the right-hand expression, your source of data comes from the left table expression.
First we will see what the APPLY operator is. Reference: BOL.
Using APPLY
Both the left and right operands of the APPLY operator are table
expressions. The main difference between these operands is that the
right_table_source can use a table-valued function that takes a column
from the left_table_source as one of the arguments of the function.
The left_table_source can include table-valued functions, but it
cannot contain arguments that are columns from the right_table_source.
The APPLY operator works in the following way to produce the table
source for the FROM clause:
1. Evaluates right_table_source against each row of the left_table_source to produce rowsets. The values in the right_table_source depend on left_table_source. right_table_source can be represented approximately this way: TVF(left_table_source.row), where TVF is a table-valued function.
2. Combines the result sets that are produced for each row in the evaluation of right_table_source with the left_table_source by performing a UNION ALL operation.
The list of columns produced by the result of the APPLY operator is
the set of columns from the left_table_source that is combined with
the list of columns from the right_table_source.
Based on the way you use the APPLY operator, it will behave like a correlated subquery or like a CROSS JOIN.
Using values of the left table expression in the right table expression:
-- without FROM (similar to Correlated Subquery)
SELECT id, data, value
FROM test OUTER APPLY(SELECT data*10 AS value) AS sq;
Not using values of the left table expression in the right table expression:
-- FROM table (Similar to cross join)
SELECT id, data, value
FROM test OUTER APPLY(SELECT data*10 AS value FROM test) AS sq;
Omitting the FROM clause is not specific to CROSS/OUTER APPLY; any valid SQL SELECT statement can omit it. By not using FROM you have no source for your data, so you can't specify columns from such a source. Rather, you can only select values that already exist, be that constants defined in the statement itself or, in some cases (e.g. subqueries), columns referenced from other parts of the query.
This is simpler to understand if you're familiar with Oracle's Dual table; a table with 1 row. In MS SQL that table would look like this:
-- Ref: https://blog.sqlauthority.com/2010/07/20/sql-server-select-from-dual-dual-equivalent/
CREATE TABLE DUAL
(
DUMMY VARCHAR(1) NOT NULL
, CONSTRAINT CHK_ColumnD_DocExc CHECK (DUMMY = 'X') -- ensure this column can only hold the value X
, CONSTRAINT PK_DUAL PRIMARY KEY (DUMMY) -- ensure we can only have unique values... combined with the above means we can only ever have 1 row
)
GO
INSERT INTO DUAL (DUMMY)
VALUES ('X')
GO
You can then do select 1 one, 'something else' two from dual. You're not really using dual; just ensuring that you have a table which will always return exactly 1 row.
Now, anywhere in SQL that you omit a FROM clause, consider that statement as if it said FROM DUAL; it has the same meaning, only SQL allows this more shorthand approach.
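For example, with the DUAL table above in place, these two statements return the same single row:
SELECT 1 AS one, 'something else' AS two;           -- no FROM clause
SELECT 1 AS one, 'something else' AS two FROM DUAL; -- same result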
Update
You mention in the comments that you don't see how you can reference columns from the original statement when in a subquery (e.g. of the kind you may see when using APPLY). The code below shows this without the APPLY scenario. Admittedly the demo code here isn't something you'd ever use (since you could just put WHERE Something LIKE '%o%' on the original statement without needing the subquery/IN statement), but for illustrative purposes it shows exactly the same sort of scenario as your APPLY scenario; i.e. the statement is just returning the value of Something for the current row.
declare @someTable table (
    Id bigint not null identity(1,1)
    , Something nvarchar(32) not null
)
insert @someTable (Something) values ('one'), ('two'), ('three')
select *
from @someTable x
where x.Something in
(
    -- this subquery references the Something column from above, but doesn't have a FROM clause
    -- note: there is only 1 value of Something at a time here, not all 3 values at once;
    -- it's the same single value of Something as we have before the IN keyword above
    select Something
    where Something like '%o%'
)
I have a table-valued function in SQL Server which returns multiple rows and a single column, such as below:
1
2
3
I use the syntax select * from dbo.function to use the values returned by this function in the WHERE clause of my queries.
Now, apart from the values returned by the function, I want to put certain hard-coded values in that WHERE clause.
For example :
Select * from dbo.table where ID in (Select * from dbo.function + **I want to add some more values here**)
So that if the function returns
1
2
3
I want to add, let's say,
4
5
in that list such that final query becomes as follows :
select * from dbo.table where ID in (1,2,3,4,5)
Use or:
Select *
from dbo.table
where ID in (Select * from dbo.function) or
ID in (4, 5)
Although you could mangle the subquery using union all, the above makes the query easier to follow (in my opinion). Also, in the event that "function" is really a table, it is easier for the optimizer to recognize appropriate indexes.
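For comparison, the UNION ALL variant mentioned above would look something like this (keeping the question's placeholder names):
Select *
from dbo.table
where ID in (Select * from dbo.function
             union all
             select 4
             union all
             select 5)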
I would like to query a DB2 table and get all the results of a query in addition to all of the rows returned by the select statement in a separate column.
E.g., if the table contains columns 'id' and 'user_id', assuming 100 rows, the result of the query would appear in this format: (id) | (user_id) | 100.
I do not wish to use a 'group by' clause in the query. (Just in case you are confused about what I am asking.) Also, I could not find an example here: http://mysite.verizon.net/Graeme_Birchall/cookbook/DB2V97CK.PDF.
Also, if there is a more efficient way of getting both these results (values + count), I would welcome any ideas. My environment uses Zend Framework 1.x, which does not have an ODBC adapter for DB2 (see issue http://framework.zend.com/issues/browse/ZF-905).
If I understand what you are asking for, then the answer should be
select t.*, g.tally
from mytable t,
(select count(*) as tally
from mytable
) as g;
If this is not what you want, then please give an actual example of desired output, supposing there are 3 to 5 records, so that we can see exactly what you want.
You would use window/analytic functions for this:
select t.*, count(*) over() as NumRows
from table t;
This will work for whatever kind of query you have.
I have a SQL query, that returns a set of rows:
SELECT id, name FROM users where group = 2
I need to also include a column that has an incrementing integer value, so the first row needs to have a 1 in the counter column, the second a 2, the third a 3, etc.
The query shown here is just a simplified example, in reality the query could be arbitrarily complex, with several joins and nested queries.
I know this could be achieved using a temporary table with an autonumber field, but is there a way of doing it within the query itself ?
For starters, something along the lines of:
SELECT my_first_column, my_second_column,
ROW_NUMBER() OVER (ORDER BY my_order_column) AS Row_Counter
FROM my_table
However, it's important to note that the ROW_NUMBER() OVER (ORDER BY ...) construct only determines the values of Row_Counter, it doesn't guarantee the ordering of the results.
Unless the SELECT itself has an explicit ORDER BY clause, the results could be returned in any order, dependent on how SQL Server decides to optimise the query. (See this article for more info.)
The only way to guarantee that the results will always be returned in Row_Counter order is to apply exactly the same ordering to both the SELECT and the ROW_NUMBER():
SELECT my_first_column, my_second_column,
ROW_NUMBER() OVER (ORDER BY my_order_column) AS Row_Counter
FROM my_table
ORDER BY my_order_column -- exact copy of the ordering used for Row_Counter
The above pattern will always return results in the correct order and works well for simple queries, but what about an "arbitrarily complex" query with perhaps dozens of expressions in the ORDER BY clause? In those situations I prefer something like this instead:
SELECT t.*
FROM
(
SELECT my_first_column, my_second_column,
ROW_NUMBER() OVER (ORDER BY ...) AS Row_Counter -- complex ordering
FROM my_table
) AS t
ORDER BY t.Row_Counter
Using a nested query means that there's no need to duplicate the complicated ORDER BY clause, which means less clutter and easier maintenance. The outer ORDER BY t.Row_Counter also makes the intent of the query much clearer to your fellow developers.
In SQL Server 2005 and up, you can use the ROW_NUMBER() function, which has options for the sort order and the groups over which the counts are done (and reset).
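For example, a small sketch of the partitioned form, assuming a hypothetical group_id column; Row_Counter restarts at 1 within each group:
SELECT my_first_column, my_second_column,
       ROW_NUMBER() OVER (PARTITION BY group_id ORDER BY my_order_column) AS Row_Counter
FROM my_table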
The simplest way is to use a variable row counter. However, it would be two actual SQL commands: one to set the variable, and then the query, as follows:
SET @n=0;
SELECT @n:=@n+1, a.* FROM tablename a
Your query can be as complex as you like with joins etc. I usually make this a stored procedure. You can have all kinds of fun with the variable, even use it to calculate against field values. The key is the :=
Here's a different approach.
If you have several tables of data that are not joinable, or you for some reason don't want to count all the rows at the same time but still want them to be part of the same row count, you can create a table that does the job for you.
Example:
create table #test (
rowcounter int identity,
invoicenumber varchar(30)
)
insert into #test(invoicenumber) select [column] from [Table1]
insert into #test(invoicenumber) select [column] from [Table2]
insert into #test(invoicenumber) select [column] from [Table3]
select * from #test
drop table #test