Is it possible to use a bitAnd() condition in ColdFusion QoQ SQL?
I have checked Adobe's documentation on QoQ (http://help.adobe.com/en_US/ColdFusion/9.0/Developing/WSc3ff6d0ea77859461172e0811cbec0e4fd-7ff0.html). It doesn't say anything about bitwise functions, but past experience tells me that the ColdFusion documentation isn't always complete.
QoQ SQL:
SELECT *
FROM srcTable
WHERE bitAnd(member_type_bit,2) = 2
This throws the error:
Query Of Queries syntax error. Encountered "bitAnd ( member_type_bit
,. Incorrect conditional expression, Expected one of
[like|null|between|in|comparison] condition,
Is it just not supported in QoQ or do I need to use a different syntax?
No, there's no bitAnd() function in the SQL dialect that QoQ uses.
You'll need to do it row by row, i.e. loop over the recordset and build a new recordset with only the rows you want. Or push this back to the database and do it there (if possible).
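If the data behind the QoQ comes from a real database query, a hedged sketch of the database-side version, reusing the srcTable and member_type_bit names from the question (the & operator is SQL Server's bitwise AND; Oracle and DB2 would spell the test BITAND(member_type_bit, 2) = 2 instead):
-- Database-side bitwise test (SQL Server syntax assumed)
SELECT *
FROM srcTable
WHERE member_type_bit & 2 = 2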
For future reference, the entirety of what QoQ supports is listed here:
http://help.adobe.com/en_US/ColdFusion/9.0/Developing/WSc3ff6d0ea77859461172e0811cbec0e4fd-7ff0.html
That's all of it.
I was researching packages from the DB when I saw the following query:
SELECT ALL TABLE1.CODE, nvl(TABLE1.EXPLANATION, '') as Explanation
FROM TABLE1;
I couldn't find out what the usage of ALL in a SELECT statement is; I know that ALL in a WHERE clause means the comparison must hold for every value of the subquery (an AND over its results).
Could you please clear this out for me?
As the documentation specifies:
ALL
Specify ALL if you want the database to return all rows selected, including all copies of duplicates. The default is ALL.
I don't recall off-hand if any other databases support SELECT ALL. I have never seen it used in the real world.
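A small sketch against a hypothetical table t with a code column, just to show the (non-)effect of ALL; the first two statements are interchangeable because ALL is the default:
SELECT ALL code FROM t;      -- every selected row, duplicates included
SELECT code FROM t;          -- identical result: ALL is implied
SELECT DISTINCT code FROM t; -- the counterpart: duplicate rows removed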
A bit late, but I would like to remark on something on this topic:
Yes, it's right that ALL is the default for select statements, but the keyword also turns up in aggregate functions, e.g. "count", where the contrast worth knowing is:
count (all <col>)
ignores NULL values (it is the same as count (<col>), because ALL is the default there too), whereas
count (*)
counts every row, NULLs included, and count (distinct <col>) additionally removes duplicates.
[Remark: ORACLE SQL, see https://www.oracletutorial.com/oracle-aggregate-functions/oracle-count/]
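To make the difference concrete, here is a sketch against a hypothetical table t whose column col holds the values 1, 1, 2 and NULL:
SELECT COUNT(*)            FROM t; -- 4: counts rows, NULLs included
SELECT COUNT(col)          FROM t; -- 3: NULLs ignored
SELECT COUNT(ALL col)      FROM t; -- 3: same as COUNT(col), because ALL is the default
SELECT COUNT(DISTINCT col) FROM t; -- 2: NULLs ignored and duplicates removed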
Hi, is there a way to programmatically detect whether a query string is in the legacy or the standard (SQL-2011) syntax? I know the former uses [project:dataset.table] for table references while the latter uses `project.dataset.table`, but this doesn't seem very bulletproof.
There's no way to tell just from the query text in all cases, which is why BigQuery has the "Use Legacy SQL" checkbox in the UI and the use_legacy_sql option for the query API. For example, consider this query:
SELECT *
FROM (SELECT 1 AS x), (SELECT 2), (SELECT 3);
The results are very different despite the query being valid in both dialects: legacy SQL treats the commas in the FROM clause as UNION ALL, whereas standard SQL treats them as a CROSS JOIN.
Standard SQL queries can still contain [], too, such as for array literals.
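For instance, the following is valid standard SQL even though it contains square brackets; here they form an array literal rather than a legacy-style table reference:
SELECT x
FROM UNNEST([1, 2, 3]) AS x;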
Assuming the query is syntactically correct and expected to actually work, you can do a dry run using both options (legacy and standard) and see which one fails and which does not. Based on the result you can potentially derive the answer.
I suppose I have always naively assumed that scalar functions in the select part of a SQL query will only get applied to the rows that meet all the criteria of the where clause.
Today I was debugging some code from a vendor and had that assumption challenged. The only reason I can think of for this code failing is that the Substring() function is getting called on data that should have been filtered out by the WHERE clause. It appears that the substring call is being applied before the filtering happens, and so the query is failing.
Here is an example of what I mean. Let's say we have two tables, each with 2 columns, having 2 rows and 1 row respectively. The first column in each is just an id. NAME is just a string, and NAME_LENGTH tells us how many characters are in the name with the same ID. Note that only names with more than one character have a corresponding row in the LONG_NAMES table.
NAMES: ID, NAME
1, "Peter"
2, "X"
LONG_NAMES: ID, NAME_LENGTH
1, 5
If I want a query to print each name with the last 3 letters cut off, I might first try something like this (assuming SQL Server syntax for now):
SELECT substring(NAME,1,len(NAME)-3)
FROM NAMES;
I would soon find out that this would give me an error, because when it reaches "X" it will try to use a negative length in the substring call, and it will fail.
The way my vendor decided to solve this was by filtering out rows where the strings were too short for the len - 3 query to work. He did it by joining to another table:
SELECT substring(NAMES.NAME,1,len(NAMES.NAME)-3)
FROM NAMES
INNER JOIN LONG_NAMES
ON NAMES.ID = LONG_NAMES.ID;
At first glance, this query looks like it might work. The join condition will eliminate any rows that have NAME fields short enough for the substring call to fail.
However, from what I can observe, SQL Server will sometimes try to calculate the substring expression for everything in the table, and then apply the join to filter out rows. Is this supposed to happen this way? Is there a documented order of operations where I can find out when certain things will happen? Is it specific to a particular database engine or part of the SQL standard? If I decided to include some predicate on my NAMES table to filter out short names (like len(NAME) > 3), could SQL Server also choose to apply that after trying to apply the substring? If so, it seems the only safe way to do a substring would be to wrap it in a "case when" construct in the select, as sketched below.
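For what it's worth, the guarded version I have in mind looks roughly like this (SQL Server syntax against the sample NAMES table; returning NULL for the short names is just a placeholder choice):
SELECT CASE
           WHEN LEN(NAME) > 3 THEN SUBSTRING(NAME, 1, LEN(NAME) - 3)
           ELSE NULL -- name too short to trim; nothing sensible to return
       END AS trimmed_name
FROM NAMES;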
Martin gave this link that pretty much explains what is going on: the query optimizer has free rein to reorder things however it likes. I am including this as an answer so I can accept something. Martin, if you create an answer with your link in it, I will gladly accept that instead of this one.
I do want to leave my question here because I think it is a tricky one to search for, and my particular phrasing of the issue may be easier for someone else to find in the future.
EDIT: As more responses have come in, I am again confused. It does not seem clear yet when exactly the optimizer is allowed to evaluate things in the SELECT clause. I guess I'll have to go find the SQL standard myself and see if I can make sense of it.
Joe Celko, who helped write early SQL standards, has posted something similar to this several times in various USENET newsgroups. (I'm skipping over the clauses that don't apply to your SELECT statement.) He usually said something like "This is how statements are supposed to act like they work". In other words, SQL implementations should behave exactly as if they did these steps, without actually being required to do each of these steps.
1. Build a working table from all of the table constructors in the FROM clause.
2. Remove from the working table those rows that do not satisfy the WHERE clause.
3. Construct the expressions in the SELECT clause against the working table.
So, following this, no SQL dbms should act like it evaluates functions in the SELECT clause before it acts like it applies the WHERE clause.
In a recent posting, Joe expands the steps to include CTEs.
CJ Date and Hugh Darwen say essentially the same thing in chapter 11 ("Table Expressions") of their book A Guide to the SQL Standard. They also note that this chapter corresponds to the "Query Specification" section (sections?) in the SQL standards.
You are thinking about something called the query execution plan. It's based on query optimization rules, indexes, temporary buffers and execution-time statistics. If you are using SQL Server Management Studio, there is a toolbar above the query editor where you can look at the estimated execution plan; it shows how your query will be rearranged to gain some speed. So if the engine just used your NAMES table and it is in the buffer, it might first try to run the subquery on your data and then join it with the other table.
My company has just started using LINQ and I still am having a little trouble with the abstractness (if that's a word) of the LINQ command and the SQL. My question is:
Dim query = (From o In data.Addresses _
Select o.Name).Count
In the above, in my mind, the SQL returns all rows and then a count is done on the number of rows in the IQueryable result, so I would be better off with
Dim lstring = Aggregate o In data.Addresses _
Into Count()
Or am I overthinking the way LINQ works? I'm using VB Express at home, so I can't see the actual SQL that is being sent to the database (I think), as I don't have access to SQL Profiler.
As mentioned, these are functionally equivalent; one just uses query syntax.
As mentioned in my comment, if you evaluate the following as a VB statement in LINQPad:
Dim lstring = Aggregate o In Test _
Into Count()
You get this in the generated SQL output window:
SELECT COUNT(*) AS [value]
FROM [Test] AS [t0]
Which is the same as the following VB LINQ expression as evaluated:
(From o In Test _
Select o.Symbol).Count
You get the exact same result.
I'm not familiar with Visual Basic, but based on
http://msdn.microsoft.com/en-us/library/bb546138.aspx
Those two approaches are the same. One uses method syntax and the other uses query syntax.
You can find out for sure by using SQL Profiler as the queries run.
PS - The "point" of LINQ is you can easily do query operations without leaving code/VB-land.
An important thing here is that the code you give will work with a wide variety of data sources. It will hopefully do so in a very efficient way, though that can't be fully guaranteed. It certainly will be done in an efficient way with a SQL source (being converted into a SELECT COUNT(*) SQL query). It will be done efficiently if the source is an in-memory collection (it gets converted to calling the Count property). It isn't done very efficiently if the source is an enumerable that is not a collection (in this case it reads everything and counts as it goes), but in that case there really isn't a more efficient way of doing it.
In each case it has done the same conceptual operation, in the most efficient manner possible, without you having to worry about the details. No big deal with counting, but a bigger deal in more complex cases.
To a certain extent, you are right when you say "in my mind, the SQL is returning all rows and the does a count on the number rows". Conceptually that is what is happening in that query, but the implementation may differ. Compare with how the real query in SQL may not match the literal interpretation of the SQL command, to allow the most efficient approach to be picked.
I think you are missing the point: LINQ to SQL uses late binding, so the search is done only when you need it. When you say you need the count, that is when a query is created.
Before that, LINQ to SQL creates expression trees that will be "translated" into SQL when you need them:
http://weblogs.asp.net/scottgu/archive/2007/05/19/using-linq-to-sql-part-1.aspx
http://msdn.microsoft.com/en-us/netframework/aa904594.aspx
For how to debug this, see Scott Guthrie's LINQ to SQL debug visualizer:
http://weblogs.asp.net/scottgu/archive/2007/07/31/linq-to-sql-debug-visualizer.aspx
I've seen some people use EXISTS (SELECT 1 FROM ...) rather than EXISTS (SELECT id FROM ...) as an optimization: rather than looking up and returning a value, SQL Server can simply return the literal it was given.
Is SELECT 1 always faster? Would selecting a value from the table require work that selecting a literal would avoid?
In SQL Server, it does not make a difference whether you use SELECT 1 or SELECT * within EXISTS. You are not actually returning the contents of the rows; you are only testing whether the set determined by the WHERE clause is non-empty. Try running the queries side by side with SET STATISTICS IO ON and you can prove that the approaches are equivalent. Personally, I prefer SELECT * within EXISTS.
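A quick way to see that for yourself, sketched against a hypothetical table t:
SET STATISTICS IO ON;

SELECT CASE WHEN EXISTS (SELECT 1 FROM t) THEN 1 ELSE 0 END;
SELECT CASE WHEN EXISTS (SELECT * FROM t) THEN 1 ELSE 0 END;
-- The logical reads reported for the two statements should match.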
For google's sake, I'll update this question with the same answer as this one (Subquery using Exists 1 or Exists *) since (currently) an incorrect answer is marked as accepted. Note the SQL standard actually says that EXISTS via * is identical to a constant.
No. This has been covered a bazillion times. SQL Server is smart and knows it is being used for an EXISTS, and returns NO DATA to the system.
Quoth Microsoft:
http://technet.microsoft.com/en-us/library/ms189259.aspx?ppud=4
The select list of a subquery introduced by EXISTS almost always consists of an asterisk (*). There is no reason to list column names because you are just testing whether rows that meet the conditions specified in the subquery exist.
Also, don't believe me? Try running the following:
SELECT whatever
FROM yourtable
WHERE EXISTS( SELECT 1/0
              FROM someothertable
              WHERE a_valid_clause )
If it was actually doing something with the SELECT list, it would throw a div by zero error. It doesn't.
EDIT: Note, the SQL Standard actually talks about this.
ANSI SQL 1992 Standard, pg 191 http://www.contrib.andrew.cmu.edu/~shadow/sql/sql1992.txt
3) Case:
a) If the <select list> "*" is simply contained in a <subquery> that is immediately contained in an <exists predicate>, then the <select list> is equivalent to a <value expression> that is an arbitrary <literal>.
When you use SELECT 1, you clearly show (to whoever is reading your code later) that you are testing whether the record exists. Even if there is no performance gain (which is to be discussed), there is gain in code readability and maintainability.
Yes, because when you select a literal it does not need to read from disk (or even from cache).
It doesn't matter what you select in an EXISTS clause. Most people do SELECT *, and SQL Server automatically picks the best index.
As someone pointed out, SQL Server ignores the column selection list in EXISTS, so it doesn't matter. I personally tend to use "SELECT null ..." to indicate that the value is not used at all.
If you look at the execution plan for
select COUNT(1) from master..spt_values
and look at the stream aggregate you will see that it calculates
Scalar Operator(Count(*))
So the 1 actually gets converted to *
However, I have read somewhere in the "Inside SQL Server" series of books that * might incur a very slight overhead for checking column permissions. Unfortunately, the book didn't go into any more detail than that, as I recall.
Select 1 should be better to use in your example. Select * gets all the metadata associated with the objects before runtime, which adds overhead during the compilation of the query. Though you may not see differences when running both types of queries in your execution plan.