Why is a query under an IF statement that is false running? - sql

I have an application that uses a lot of string interpolation for SQL queries. I know it is a SQL injection risk; both the customer and we are aware of it, and it is hopefully something we can focus on in the next big refactor. I mention it to explain the {Root Container.property} placeholders, which come from a GUI.
I have this query:
IF ({Root Container.UserSelectedProduct} = 1)
BEGIN
    DECLARE @TestNumbers {Root Container.SQLProductType};
    INSERT INTO @TestNumbers SELECT * FROM {Root Container.DBTable};
    SELECT *
    FROM {Root Container.SQLProductFunction} (@TestNumbers)
    WHERE [ID] = {Root Container.Level};
END
ELSE
    SELECT 0
Before a user selects a product, it looks like this:
IF (0=1)
BEGIN
    DECLARE @TestNumbers myDataType;
    INSERT INTO @TestNumbers SELECT * FROM [MySchema].[TheWrongTable];
    SELECT * FROM [dbo].[myfunction] (@TestNumbers)
    WHERE [ID] = 1;
END
ELSE
    SELECT 0
Which is giving me the error:
Column name or number of supplied values does not match table definition.
I am aware of why this error shows up: the table I am selecting from is not made for that data type.
However, why is it even attempting to run the first IF branch when the condition is IF (0=1)? Why isn't that part simply skipped so that only the SELECT 0 runs? I would have thought that is how it was supposed to work, but I keep getting the error about the column name/number not matching the table definition. When the user does select a product, so the condition becomes IF (1=1) and I have the appropriate table/function/data type, it all works smoothly. I just don't understand why it throws the error beforehand, when the condition is false. Why does this happen, and how can I get my intended behavior, i.e. that nothing inside the BEGIN/END under my first IF statement runs unless the expression is true?

T-SQL is not interpreted line by line; the whole batch must make sense regardless of what the runtime conditions are (it doesn't even do short-circuiting, in fact). Your code is invalid, and it doesn't matter that it's unreachable - T-SQL isn't going to ignore a piece of invalid code just because it could be eliminated; that kind of silent elimination is a common source of bugs elsewhere (e.g. in C++, where it's common with templates).
Just make sure you still get valid SQL for the case where no product is selected; plug in a placeholder table (or a helper table) if you have to.
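One way to keep the whole batch valid, sketched with the question's own placeholders (a sketch, not a drop-in fix): move the product-dependent statements into a dynamic-SQL string, so the outer batch always compiles and the string itself is only compiled if the branch actually runs.
IF ({Root Container.UserSelectedProduct} = 1)
BEGIN
    -- The contents of the string are not compiled until EXEC runs,
    -- i.e. only when a product has actually been selected.
    EXEC sp_executesql N'
        DECLARE @TestNumbers {Root Container.SQLProductType};
        INSERT INTO @TestNumbers SELECT * FROM {Root Container.DBTable};
        SELECT * FROM {Root Container.SQLProductFunction} (@TestNumbers)
        WHERE [ID] = {Root Container.Level};';
END
ELSE
    SELECT 0;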

The answer is simple: SQL code is fully compiled by the server before being executed, so this is basically a compile error. It's a bit like trying to compile the following in C#
if(someBoolWhichIsFalse)
intValue = "hello";
It's simply not valid.
Your code has not even been executed when the error appears; it fails while the batch is still being parsed and bound. Nothing is being skipped - the whole batch just has to be valid code, irrespective of runtime conditions.
This happens in every scope: on every call to a procedure or ad-hoc batch, that code must be compilable.
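A small repro of that behaviour (a sketch; the names are made up). Once the temp table exists, any later batch that references it is validated in full, so even an unreachable branch fails:
CREATE TABLE #demo (GoodColumn INT);
GO
-- Fails with "Invalid column name 'BadColumn'", even though the branch can never run.
IF 1 = 0
    SELECT BadColumn FROM #demo;
GO
DROP TABLE #demo;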

Related

create table statement resulted in "Failed: Warning: execution completed with warning" but no errors

My SQL code ran and seemed to work, but it gave me a warning, and I cannot find the details of that warning.
I tried
select * from user_errors
select * from dba_errors
select * from all_errors
When I did show errors, it said "No Errors."
I saw this in more than one place, and each time it was when the table being created was large, or at least was created from a large or complex select. For now, I'd like to learn where else these errors are shown. Maybe I can ignore it, but I would like to see what Oracle's concern was before doing so.
The SQL that fails is a CREATE TABLE AS SELECT statement; notably, I think, the select could contain millions of rows. I understand Oracle only allows DDL statements outside of PL/SQL (BEGIN/END) blocks, for a reason that I do not understand. Nonetheless, I'd like to know whether I'd get the same warning outside of PL/SQL as I would inside a PL/SQL block.
So, to be clear, the statement that failed was something like:
CREATE TABLE USAGE_COUNT AS (
    SELECT * FROM SOME_REALLY_LARGE_TABLE_1 T1
    LEFT JOIN SOME_OTHER_MEDIUM_SIZED_TABLE_2 T2
        ON T1.ID = T2.ID
);
I really wish I could share the exact SQL, but I don't know how much I could get away with and I want to keep my job. But in this case, I'm not asking for specifics on how to fix the error or warning, only specifics on how to get the actual warning.
So to be clear, the statement that causes the warning is executed completely outside of any PL/SQL, and is directly invoked using SQLDeveloper in the worksheet. I do, however, have PL/SQL that I'm also writing, and I'd like to know where the error would show up had it occurred there as well.
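As a side note on the PL/SQL part of the question, and only as a sketch (reusing the made-up table names above): DDL inside a PL/SQL block has to go through dynamic SQL, and a failure there surfaces as an ordinary exception in the block rather than as a worksheet warning.
BEGIN
    -- DDL is only allowed inside PL/SQL via dynamic SQL.
    EXECUTE IMMEDIATE '
        CREATE TABLE USAGE_COUNT AS (
            SELECT * FROM SOME_REALLY_LARGE_TABLE_1 T1
            LEFT JOIN SOME_OTHER_MEDIUM_SIZED_TABLE_2 T2
                ON T1.ID = T2.ID
        )';
EXCEPTION
    WHEN OTHERS THEN
        -- Inside PL/SQL the failure is a real exception, so SQLERRM has the details.
        DBMS_OUTPUT.PUT_LINE(SQLERRM);
        RAISE;
END;
/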

Detect if SQL statement is correct

Question: Is there any way to detect if an SQL statement is syntactically correct?
Explanation:
I have a very complex application which, at some point, needs very specific (and different) processing for different cases.
The solution was to have a table with a record for each condition and the SQL command that is to be executed.
That table is not accessible to normal users, only to system admins who define those cases when a new special case occurs. So far, new records have been added directly to the table.
However, from time to time there were typos, and the SQL was malformed, causing issues.
What I want to accomplish is to create a UI for managing that module, where admins can type the SQL command and have it validated before saving.
My idea was to simply run the statement inside a try/catch block and capture the result (the exception, if any), but I'm wondering if there is a less intrusive approach.
Any suggestion on this validation?
Thanks
PS: I'm aware of the risk of SQL injection here, but it doesn't apply - the people who have access to this are strictly controlled, and they are DBAs or developers - so the risk of SQL injection here is the same as the risk of having access to Enterprise Manager.
You can use SET PARSEONLY ON at the top of the query. Keep in mind that this will only check if the query is syntactically correct, and will not catch things like misspelled tables, insufficient permissions, etc.
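For example (a sketch; the statement and names are placeholders): with PARSEONLY on, the batch is checked for syntax but nothing is executed.
SET PARSEONLY ON;
GO
-- Only parsed, never executed: the deliberate syntax error below is reported,
-- but a misspelled table or column name would not be caught.
SELECT CustomerId, Amount FROM dbo.Orders WHERE;
GO
SET PARSEONLY OFF;
GO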
Looking at the page here, you can modify the stored procedure to take a parameter:
CREATE PROC TestValid @stmt NVARCHAR(MAX)
AS
BEGIN
    IF EXISTS (
        SELECT 1 FROM sys.dm_exec_describe_first_result_set(@stmt, NULL, 0)
        WHERE error_message IS NOT NULL
          AND error_number IS NOT NULL
          AND error_severity IS NOT NULL
          AND error_state IS NOT NULL
          AND error_type IS NOT NULL
          AND error_type_desc IS NOT NULL )
    BEGIN
        -- The row with column_ordinal = 0 carries the error details, if any.
        SELECT error_message
        FROM sys.dm_exec_describe_first_result_set(@stmt, NULL, 0)
        WHERE column_ordinal = 0
    END
END
GO
This will return an error if one exists and nothing otherwise.
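Usage might look like this (the statement is just an example); it returns the error message when the statement cannot be parsed or bound, and nothing otherwise:
EXEC TestValid @stmt = N'SELECT name, no_such_column FROM sys.objects';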

Error in unreachable SQL Code

The following T-SQL fails:
IF OBJECT_ID('FDSCorp.XLFILES') IS NOT NULL
BEGIN
DELETE FROM FDSCorp.XLFILES;
INSERT INTO FDSCorp.XLFILES
SELECT DISTINCT * FROM dbo.XLFILES;
END
ELSE
exec sp_changeobjectowner XLFILES, FDSCorp;
Error:
The image data type cannot be selected as DISTINCT because it is not comparable.
Yes, XLFILES has an image column, but in this case FDSCorp.XLFILES doesn't exist, so that DISTINCT code would never get to run.
This code is generated for each table in the database, and I know that this section of the code will never be run on a table where it could fail due to the DISTINCT issue.
I really don't want to overcomplicate the code by checking for types that DISTINCT can't be used with, when that scenario could never happen in a real situation.
Is there some way I can bypass this check?
The only way to avoid the error is for you to prevent the server from "seeing" the code you don't want it to compile. Each batch is compiled entirely (including every statement, ignoring control flow) before execution starts:
IF OBJECT_ID('FDSCorp.XLFILES') IS NOT NULL
BEGIN
DELETE FROM FDSCorp.XLFILES;
exec sp_executesql N'INSERT INTO FDSCorp.XLFILES
SELECT DISTINCT * FROM dbo.XLFILES;';
END
ELSE
exec sp_changeobjectowner XLFILES, FDSCorp;
Now, when this batch is compiled, it won't attempt to compile the INSERT, since so far as this batch is concerned, it's just a string literal.

Why doesn't this PL/SQL procedure work?

I have a cursor which returns two values: one which I will use (and therefore will assign to an out variable) and another which I've only had returned to make the ROWNUM thing work.
If I run the cursor as a query, it works as expected. But if I execute the procedure, the out variable comes back empty. Is my approach somehow not supported? I mean, returning two values but only using one of them?
Here is my procedure code. (Don't dwell too much on the query itself; I know it's a bit ugly, but it works. It was the only way I found to return the second-to-last row.)
procedure retorna_infos_tabela_164(i_nip in varchar,
                                   o_CODSDPANTERIOR out number) is
  cursor c_tabela_164 is
    select *
      from (select CODSDP, ROWNUM rn
              from (select NRONIP, CODTIPOMOV, CODSDP
                      from TB164_HISTORICOMOVIMENTACOES
                     where NRONIP = i_nip
                       and CODTIPOMOV = 'S1'
                     order by DTHMOV desc))
     where rn = 2;
  v_temp_nr number;
begin
  open c_tabela_164;
  fetch c_tabela_164 into o_CODSDPANTERIOR, v_temp_nr;
  close c_tabela_164;
end retorna_infos_tabela_164;
EDIT: The way I tried to check this procedure was by calling dbms_output.put_line(o_CODSDPANTERIOR), which didn't work. Then I googled a little and saw that I should TO_CHAR() my variable first and then output it. That didn't work either.
There's no problem with passing a number to DBMS_OUTPUT.PUT_LINE. Oracle will silently convert other built-in types to VARCHAR2 using the default format. You only need to use TO_CHAR if you want to control the format used -- which is often a good idea, but not generally necessary.
One possibility, though, is that you are not seeing the output because you have not enabled it. If you are running your test in SQL*Plus, make sure you SET SERVEROUTPUT ON before running code that includes DBMS_OUTPUT calls. If you are using some other client, consult its documentation for the proper way to enable DBMS_OUTPUT. (You can of course test whether it's enabled by adding another call to output a string literal.)
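For instance, in SQL*Plus a quick test might look like this (a sketch; '12345' is a made-up NIP, and it assumes the procedure can be called directly):
SET SERVEROUTPUT ON
DECLARE
  v_codsdp NUMBER;
BEGIN
  retorna_infos_tabela_164('12345', v_codsdp);
  DBMS_OUTPUT.PUT_LINE('CODSDPANTERIOR = ' || v_codsdp);
END;
/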
There's nothing inherently wrong with the technique you're using to populate the out parameter. However, it's not necessary to return two columns from the cursor; your select * could simply be select CODSDP. You seem to be under the misconception that any column referenced in the predicates has to be in the select list, but that's not the case. In your innermost query, the select list does not need to include NRONIP or CODTIPOMOV, because they are not referenced in the outer blocks; the WHERE clause in that query can reference any column in the table, regardless of whether it is in the select list.
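In other words, the cursor could be trimmed to something like this (same logic, one column, so the fetch would only need o_CODSDPANTERIOR):
cursor c_tabela_164 is
  select CODSDP
    from (select CODSDP, ROWNUM rn
            from (select CODSDP
                    from TB164_HISTORICOMOVIMENTACOES
                   where NRONIP = i_nip
                     and CODTIPOMOV = 'S1'
                   order by DTHMOV desc))
   where rn = 2;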
So, my first guess is that you simply don't have server output enabled. The only other possibility I can think of right now is that you're running your query and the procedure in two different sessions, and one of them has an uncommitted transaction against the table, so they are actually seeing different data.
If those suggestions don't seem to be the problem, I'd suggest you run your tests of the standalone query and the procedure in a single SQL*Plus session, then copy and paste the entire session here, so we can see exactly what you're doing.
I'm sorry I've had you guys take the time to answer me when the answer was something to do with the tool I'm using. I hope all you guys have learnt something.
The query does work, for me at least; I've not come across any edge cases where it doesn't, but I haven't tested it exhaustively.
The problem was that TOAD, the tool I'm using to run the procedures, sometimes populates the procedures with the parameters I tell it to but sometimes it doesn't. The issue here was that I was trying to execute the procedure with no parameters, yielding no results...
Lesson Learnt: double check the generated procedure code when you run a Procedure using Right Click > Run Procedure on TOAD version 9.

Select Fails With Nonexistent Columns

Executing the following statement with SQL Server 2005 (My tests are through SSMS) results in success upon first execution and failure upon subsequent executions.
IF OBJECT_ID('tempdb..#test') IS NULL
CREATE TABLE #test ( GoodColumn INT )
IF 1 = 0
SELECT BadColumn
FROM #test
What this means is that something is comparing the columns I am accessing in my select statement against the columns that exist on the table when the script is "compiled". For my purposes this is undesirable behavior. My question is whether there is anything that can be done so that this code executes successfully on every run; or, if that is not possible, perhaps someone could explain why the demonstrated behavior is desirable. The only solutions I currently have are to wrap the select with EXEC or to select *, but I don't like either of those solutions.
Thanks
If you put:
IF OBJECT_ID('tempdb..#test') IS NOT NULL
DROP TABLE #test
GO
At the start, then the problem will go away, because the batch will be parsed before the #test table exists (so the check against its columns is deferred).
What you're asking is for the system to recognise that "1=0" will always evaluate to false. If it were ever true (which could potentially be the case for most real-life conditions), then you'd probably want to know that you were about to run something that would cause failure.
If you drop the temporary table and then create a stored procedure that does the same:
CREATE PROC dbo.test
AS
BEGIN
IF OBJECT_ID('tempdb..#test') IS NULL
CREATE TABLE #test ( GoodColumn INT )
IF 1 = 0
SELECT BadColumn
FROM #test
END
Then this will happily be created, and you can run it as many times as you like.
Rob
Whether or not this behaviour is "desirable" from a programmer's point of view is debatable of course -- it basically comes down to the difference between statically typed and dynamically typed languages. From a performance point of view, it's desirable because SQL Server needs complete information in order to compile and optimize the execution plan (and also cache execution plans).
In a word, T-SQL is not an interpreted or dynamically typed language, and so you cannot write code like this. Your options are either to use EXEC, or to use another language and embed the SQL queries within it.
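For completeness, the EXEC route might look like this (a sketch): the statement lives inside a string, so it is not compiled with the batch and would only be compiled if the branch were actually reached.
IF 1 = 0
    EXEC (N'SELECT BadColumn FROM #test');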
This problem is also visible in these situations:
IF 1 = 1
select dummy = GETDATE() into #tmp
ELSE
select dummy = GETDATE() into #tmp
Although the second statement is never executed, an error still occurs when the batch is compiled.
It seems the query engine's first-level validation ignores all conditional statements.
You say you have problems with subsequent requests, and that is because the object already exists. It is recommended that you drop your temporary tables as soon as you are done with them.
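For example (a sketch, mirroring the guard used earlier):
IF OBJECT_ID('tempdb..#tmp') IS NOT NULL
    DROP TABLE #tmp;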
Read more about temporary table performance at:
SQL Server performance.com