Empty SET statement (with no SET option assignments) is allowed - sql

The Microsoft documentation on the differences between compatibility level 80 and level 90 says that under compatibility level 80, an "Empty SET statement (with no SET option assignments) is allowed."
What is an empty SET statement (with no SET option assignments)?
Could you please give me an example to clarify this?

This seems to be very old behavior and the only thing I have found is from some German forum:
In the list of SQL Server 2000 features deprecated in SQL Server 2005,
compatibility level 80 has the entry "Empty SET statement (with no SET
option assignments) is allowed," while under compatibility level 90 an
"Empty SET clause" is no longer allowed.
But what is an empty SET statement?
My first thought was a SET statement without any further assignment.
But that idea seemed dubious to me, because what would be the point of
a SET statement that assigns nothing?
Only after some research did I find something. It is in fact possible
in SQL Server 2000 to place a bare SET in a stored procedure, without
any assignment.
There is also a somewhat strange syntax that this makes possible, for
whatever reason:
DECLARE @CustomerId AS nchar(5)
SET SELECT @CustomerId = 'ALFKI'
SELECT * FROM dbo.Customers WHERE CustomerId = @CustomerId
I do not even want to think about the sense or nonsense of this SET
SELECT construct. The fact is that SQL Server parses it as two
statements (SET and SELECT), and a SET statement without an assignment
is no longer possible in SQL Server 2005.
As long as nothing like this exists in your code, you are safe to change the compatibility level.
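A minimal sketch of what this looks like in practice (the database name is hypothetical; under compatibility level 90 or later the bare SET raises a syntax error):

```sql
-- Under compatibility level 80 (SQL Server 2000 behavior) this parses:
-- the lone SET is an "empty SET statement" with no option assignment.
EXEC sp_dbcmptlevel 'MyTestDb', 80;   -- MyTestDb is a hypothetical database
GO
DECLARE @CustomerId nchar(5);
SET                                   -- empty SET: no option, no assignment
SELECT @CustomerId = 'ALFKI';         -- parsed as a separate SELECT statement
SELECT @CustomerId;
GO
```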

Related

How to use SET OPTION within a DB2 stored procedure

I read (and tried) that I cannot use WITH UR in DB2 stored procedures, and I am told that I can use SET OPTION to achieve the same thing. However, when I implement it in my stored procedure, it fails to compile (I moved its location around; same error). My questions are:
Can I really not use WITH UR after SELECT statements within a procedure?
Why is my stored procedure failing to compile with the error message
below?
Here is a simplified version of my code:
CREATE OR REPLACE PROCEDURE MySchema.MySampleProcedure()
DYNAMIC RESULT SETS 1
LANGUAGE SQL
SET OPTION COMMIT=*CHG
BEGIN
DECLARE GLOBAL TEMPORARY TABLE TEMP_TABLE AS (
SELECT 'testValue' as "Col Name"
) WITH DATA
BEGIN
DECLARE exitCursor CURSOR WITH RETURN FOR
SELECT *
FROM SESSION.TEMP_TABLE;
OPEN exitCursor;
END;
END
#
Error Message:
SQL0104N An unexpected token "SET OPTION COMMIT=*CHG" was found
following " LANGUAGE SQL
Here is code/error when I use WITH UR
CREATE OR REPLACE PROCEDURE MySchema.MySampleProcedure()
LANGUAGE SQL
DYNAMIC RESULT SETS 1
--#SET TERMINATOR #
BEGIN
DECLARE GLOBAL TEMPORARY TABLE TEMP_TABLE AS (
SELECT UTI AS "Trade ID" FROM XYZ WITH UR
) WITH DATA;
BEGIN
DECLARE exitCursor CURSOR WITH RETURN FOR
SELECT *
FROM SESSION.TEMP_TABLE;
OPEN exitCursor;
END;
END
#
Line 9 is the line with the DECLARE GLOBAL TEMPORARY ... statement.
DB21034E The command was processed as an SQL statement because it was
not a valid Command Line Processor command. During SQL processing it
returned: SQL0109N The statement or command was not processed because
the following clause is not supported in the context where it is
used: "WITH ISOLATION USE AND KEEP". LINE NUMBER=9. SQLSTATE=42601
Specifying the isolation level:
For static SQL:
If an isolation-clause is specified in the statement, the value of that clause is used.
If an isolation-clause is not specified in the statement, the isolation level that was specified for the package when the package was bound to the database is used.
You need to bind the routine's package with UR, since your DECLARE GTT statement is static. Before CREATE OR REPLACE, run the following in the same session:
CALL SET_ROUTINE_OPTS('ISOLATION UR')
P.S.: If you want to run your routine more than once in the same session without an error, add the WITH REPLACE option to the DECLARE.
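A sketch of the full sequence, using the question's names (the extra GTT options shown are assumptions; check the clause order against your Db2-LUW version):

```sql
-- Run in the same session, before creating the routine: the routine's
-- package will then be bound with uncommitted-read (UR) isolation.
CALL SYSPROC.SET_ROUTINE_OPTS('ISOLATION UR');

CREATE OR REPLACE PROCEDURE MySchema.MySampleProcedure()
DYNAMIC RESULT SETS 1
LANGUAGE SQL
BEGIN
  -- WITH REPLACE lets the procedure be called repeatedly in one session
  DECLARE GLOBAL TEMPORARY TABLE TEMP_TABLE AS (
    SELECT 'testValue' AS "Col Name"
  ) WITH DATA ON COMMIT PRESERVE ROWS WITH REPLACE;
  BEGIN
    DECLARE exitCursor CURSOR WITH RETURN FOR
      SELECT * FROM SESSION.TEMP_TABLE;
    OPEN exitCursor;
  END;
END
```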
If your Db2 server runs on Linux/Unix/Windows (Db2-LUW), then there is no such statement as SET OPTION COMMIT=*CHG, so Db2 will throw an exception for that invalid syntax.
It is important to use only the Db2 Knowledge Centre matching your Db2 platform and version. Don't use Db2 for z/OS documentation for Db2-LUW development: the syntax and functionality differ per platform and per version.
A Db2-LUW SQL PL procedure can use with ur in its internal queries, so if you are getting an error, something else is wrong. You have to use with ur with the correct syntax, however, i.e. in a statement that supports this clause. In your example you get the error because the clause is not valid in the depicted context. You can achieve the desired result in other ways, one of them being to populate the table in a separate statement from the declaration (e.g. insert into session.temp_table("Trade ID") select uti from xyz with ur;); other ways are also possible.
One reason to use the online Db2 Knowledge Centre documentation is that it includes sample programs, including sample SQL PL procedures, which are also available in source form in the samples directory of your Db2-LUW server installation, in addition to being available on GitHub. It is wise to study these and get them working for you.
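A sketch of that alternative (the column type is an assumption, since the question does not show the definition of XYZ):

```sql
-- Declare the temp table empty, then populate it in a separate INSERT
-- whose subselect may legally carry WITH UR.
DECLARE GLOBAL TEMPORARY TABLE TEMP_TABLE (
  "Trade ID" VARCHAR(100)            -- assumed type for XYZ.UTI
) ON COMMIT PRESERVE ROWS WITH REPLACE;

INSERT INTO SESSION.TEMP_TABLE ("Trade ID")
  SELECT UTI FROM XYZ WITH UR;
```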

Query against a view under master database is much slower than query directly under specific database

I am not sure whether there exists a general answer before I give more details.
For example: I have a view named vw_View
I tried the following two queries to get the result:
Under master database select * From [test].[dbo].[vw_View]
Under test database select * From [dbo].[vw_View]
Could anyone tell me why the same query is much slower when run from the master database than from the other databases? I even tried the others by:
Use [db] --any other database that is not the master database
select * From [test].[dbo].[vw_View]
I have checked the actual execution plan; the join order differs, but why would it change, since I have already specified [test].[dbo].[vw_View] when running under master?
Just out of curiosity; thanks in advance.
Note: this might not be the answer, but it was too much text for a comment anyway...
One thing that we hear about a lot is developers complaining about a slow-running procedure that only runs slow when called from the application but runs fine when executed from SSMS.
More often than not this is due to different execution settings depending on where the procedure is called from. To check whether those settings differ, I usually use SQL Profiler.
In your case you can open two windows in SSMS, one in the context of the master database and the other in the context of the user database, and run SQL Profiler. The very first event Profiler captures will have Event Class = Existing Connections and Text Data = -- network protocol: LPC...
This record shows you all the default settings for each session where you are executing the commands. The settings will look something like:
-- network protocol: LPC
set quoted_identifier on
set arithabort off
set numeric_roundabort off
set ansi_warnings on
set ansi_padding on
set ansi_nulls on
set concat_null_yields_null on
set cursor_close_on_commit off
set implicit_transactions off
set language us_english
set dateformat mdy
set datefirst 7
set transaction isolation level read committed
Now compare the settings of the two sessions and see what the differences are.
The Profiler also has a SPID column which will help you identify which window is which. I am pretty sure the answer is somewhere around there.
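If Profiler isn't handy, a sketch of the same comparison (assuming SQL Server 2005 or later) via a dynamic management view; the SPIDs below are hypothetical:

```sql
-- Compare the SET options of the two query windows; find each window's
-- SPID in its SSMS status bar and substitute it below.
SELECT session_id, quoted_identifier, arithabort, ansi_nulls,
       ansi_padding, ansi_warnings, concat_null_yields_null,
       transaction_isolation_level, [language], date_format, date_first
FROM sys.dm_exec_sessions
WHERE session_id IN (55, 56);        -- hypothetical SPIDs
```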
I have the same issue: executing a view from master runs seemingly forever, but executing the same view under any other user database on the server takes only 8 seconds.
I have an environment where we just migrated to SQL Server 2017, and all the other databases have a compatibility level of 2008 (or 2012).
So I did a few tests:
If I create a new DB with the default 2017 compatibility level and run the query, it executes indefinitely.
If I change the compatibility level to 2008 and reconnect: 8 sec.
If I change the compatibility level back to 2017: long-running again.
And the final thing we noticed about the query itself: it uses the CHARINDEX function, and if I comment that out, the query executes in the same 8 sec under both compatibility levels.
So... it looks like we have an issue with CHARINDEX execution on a legacy database under the 2017 compatibility level context.
The solution (if you can call it that...) is to execute legacy queries under the same legacy execution context.
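If it is the new cardinality estimator that regresses the CHARINDEX plan (a common cause of exactly this pattern after a compatibility-level jump), a less drastic sketch than lowering the whole database's compatibility level is to pin only the affected query to the legacy estimator (SQL Server 2016 SP1 and later; view and column names here are hypothetical):

```sql
-- Keep the database at the new compatibility level, but ask the
-- optimizer to use the pre-2014 cardinality estimator for this query.
SELECT SomeColumn
FROM dbo.vw_LegacyView                -- hypothetical view
WHERE CHARINDEX('abc', SomeColumn) > 0
OPTION (USE HINT('FORCE_LEGACY_CARDINALITY_ESTIMATION'));
```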

SSIS Execute sql task

I have created an Execute SQL Task in my SSIS package.
I am getting the error: "INSERT failed because the following SET options have incorrect settings: 'ARITHABORT'. Verify that SET options are correct for use with indexed views and/or indexes on computed columns and/or filtered indexes and/or query notifications."
But when I execute the same statement directly in SQL Server Management Studio, it doesn't give any error.
Please let me know if you have come across this kind of issue.
Thanks
SET ARITHABORT, in conjunction with SET ANSI_WARNINGS, controls how divide-by-zero and overflow errors are handled.
If you want to ignore overflow and divide-by-zero errors, put this in front of your batch:
SET ARITHABORT OFF
SET ANSI_WARNINGS OFF
Note, however, that inserts into tables covered by indexed views, indexes on computed columns, or filtered indexes (as in your error message) require SET ARITHABORT ON; and if your database compatibility level is 80 or earlier, SET ARITHABORT must be on.
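Since the error complains about an indexed-view insert (which requires ARITHABORT ON, and SSMS sets it ON by default while many client connections do not), a minimal sketch of the usual fix is to set the options explicitly at the top of the Execute SQL Task's statement; the table and column here are hypothetical:

```sql
-- Match the SET options SSMS uses by default so the indexed-view
-- insert succeeds from the SSIS connection as well.
SET ARITHABORT ON;
SET ANSI_WARNINGS ON;

INSERT INTO dbo.TargetTable (Col1)   -- hypothetical target
VALUES (1);
```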

Exceeding maximum number of variable declarations in script on SQL Server 2005

An application I am currently working on generates a SQL script to populate a database. A single transaction in the script looks like this (note: I have changed the table/variable names ;-)
USE MyDatabase
BEGIN TRANSACTION
SET XACT_ABORT ON
DECLARE @Foo int
SET IDENTITY_INSERT Table1 ON
INSERT INTO Table1 [...]
SET IDENTITY_INSERT Table1 OFF
SET IDENTITY_INSERT Table2 ON
INSERT INTO Table2 [...]
SET IDENTITY_INSERT Table2 OFF
INSERT INTO Table3 [...]
-- Here I reference @Foo
SET @Foo = dbo.SomeStoredProcedure()
-- Use @Foo in some query
COMMIT TRANSACTION
GO
SET NOCOUNT ON
The script then generates n of these transactions, which are executed on SQL Server 2005 to populate the database with n records.
The problem I am seeing is with the declaration of the @Foo variable shown above. When running the script, once we reach 65535 records, I get the following error:
The variable name '@Foo' has already been declared.
Variable names must be unique within a query batch or stored procedure.
I think this is a misleading error message, because everything is fine until I hit 65535, and the significance of this number (2^16 - 1) leads me to believe I am hitting some sort of script limitation.
I have tried declaring the @Foo variable once, at the top of the script, and re-using it within each transaction, but this doesn't work, as each transaction appears to have its own scope.
Would creating an extra level of scope (i.e. an inner transaction) and declaring the variable within the deeper scope help address this issue?
Any other recommendations about the best way to fix this issue?
It looks like you've missed the GO batch delimiter, since I have scripts with many more lines than that. Check your scripting solution.
Try the GO delimiter. Without it, every generated DECLARE lands in one enormous batch, and the 65535 (2^16 - 1) ceiling you are hitting looks like an internal per-batch limit on variable declarations.
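A sketch of the fix: end each generated transaction with GO, so every batch gets a fresh variable scope (T-SQL variables do not survive a batch boundary); Table3 and its column are stand-ins from the question's redacted script:

```sql
BEGIN TRANSACTION
SET XACT_ABORT ON
DECLARE @Foo int
SET @Foo = 1
INSERT INTO Table3 (SomeColumn) VALUES (@Foo)  -- hypothetical column
COMMIT TRANSACTION
GO  -- ends the batch: @Foo goes out of scope, so the next generated
    -- transaction can safely declare it again
```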

Prepare a syntactically invalid query

I want to check the syntax of an SQL query. I thought to do it by preparing the query, with the DbCommand.Prepare method.
Unfortunately, I get no error or exception.
For example: SELECT * FORM table
Is there a way to check the syntax without executing the query?
To make it perfect, it has to work on SQL Server, Oracle, and IBM DB2.
For SQL Server, you can use SET FMTONLY and/or SET NOEXEC
set fmtonly on
go
SELECT * FORM table
go
set fmtonly off
Generally only the database you're using is going to know whether a given query is valid or not. One standard and portable trick is to add a WHERE clause that guarantees nothing will be done, then execute the query; for example execute SELECT * FORM table WHERE 1=0 and see what happens.