A command starting with SELECT cannot modify the database.
Is the above statement always true, or are there exceptions?
To put it another way: can we create subqueries which include an UPDATE command?
I don't know of any RDBMS that has AFTER | INSTEAD OF SELECT triggers implemented, but that would be the situation where a SELECT could indirectly modify a database.
There could also be auditing set up on your server that tracks SELECT statements. For example, in Oracle you have the DBMS_FGA package, which you could use to essentially create an ON SELECT trigger by creating a policy without the audit_condition parameter. This will cause an event to fire on every SELECT, and a procedure that modifies the database to be executed. I don't know how transactions behave in this case, but I think a rollback doesn't affect auditing; otherwise it would be simple to cheat it :).
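As a rough illustration, registering such a policy might look something like this (Oracle PL/SQL syntax; the USR schema, ACCOUNTS table, ON_SELECT_DEMO policy and LOG_SELECT_HANDLER procedure names are all made up for the example):
-- Hypothetical example: run USR.LOG_SELECT_HANDLER on every SELECT against USR.ACCOUNTS.
-- Leaving audit_condition NULL means there is no filter, so the handler fires for each SELECT.
BEGIN
  DBMS_FGA.ADD_POLICY(
    object_schema   => 'USR',
    object_name     => 'ACCOUNTS',
    policy_name     => 'ON_SELECT_DEMO',
    audit_condition => NULL,
    handler_schema  => 'USR',
    handler_module  => 'LOG_SELECT_HANDLER',   -- a procedure that may modify the database
    enable          => TRUE,
    statement_types => 'SELECT');
END;
/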
Another example (SQL Server):
SELECT * FROM
OPENQUERY(servername, 'EXEC uspGetRows')
The uspGetRows procedure can do a bunch of other stuff in addition to returning rows.
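For instance, the body of such a procedure could look roughly like this (a hypothetical sketch; the dbo.QueryLog and dbo.SomeTable names are invented for the example):
-- Hypothetical uspGetRows: writes an audit row (a modification), then returns data.
CREATE PROCEDURE uspGetRows
AS
BEGIN
    -- side effect that runs even though the caller only issued a SELECT ... FROM OPENQUERY
    INSERT INTO dbo.QueryLog (QueriedAt) VALUES (GETDATE());

    -- the result set consumed by OPENQUERY
    SELECT * FROM dbo.SomeTable;
END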
I figure you rather had in mind explicit modifications, through INSERT/UPDATE/DELETE statements "merged" with a SELECT, but I have never heard of anything like that. So those are just two examples of situations similar to the INSERT..INTO from the comments.
In short, I'm managing a bunch of versioned SQL scripts where one requirement is that they need to be sort of backwards compatible, in that the same scripts can be executed multiple times while still guaranteeing the same end result for the latest version. Basically, say we are on version 3. We need to be able to run the scripts for versions 1, 2 and 3 over and over, without errors, and still guarantee that the end result is the same complete version 3.
Now this is easy in normal scenarios (just check whether the column / table / type is right and create / modify it if not), but how do you deal with, for instance, a trigger that's way over 8000 characters long and can't be executed as dynamic SQL? As version 2 is installed, the triggers are dropped and, at the end, new ones are created to match v2's data model. But if v3 removed one of the columns referred to by the v2 trigger, that trigger will now fail.
I can't make any kind of IF check to see whether our log has v3 scripts, or whether the data model doesn't match the requirements. I'd hate to make others do manual labor for something I'm sure can be automated one way or another. So is there any nice gimmick, trick or just something I missed that could help?
Thanks for the help. :)
but how do you deal with for instance a trigger that's way over 8000
characters long and can't be executed as dynamic SQL?
It can be executed using sp_executesql, for which the size of the SQL statement is limited only by available database server memory.
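For example, building the statement in an nvarchar(max) variable avoids the 8000-character limit of varchar variables (a minimal sketch; the dbo.trMyTable_Audit trigger and dbo.MyTable table are placeholders):
DECLARE @sql nvarchar(max);
SET @sql = N'';

-- Append the definition in pieces; because @sql is nvarchar(max),
-- the final statement can grow far beyond 8000 characters.
SET @sql = @sql + N'CREATE TRIGGER dbo.trMyTable_Audit ON dbo.MyTable AFTER UPDATE AS ';
SET @sql = @sql + N'BEGIN SET NOCOUNT ON; /* ...the rest of the very long trigger body... */ END';

EXEC sp_executesql @sql;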
You need to check whether the object exists, and create it if you need it or drop it otherwise.
if object_id(N'your_table_name','U') is null
CREATE TABLE
...
GO
/* add column */
if not exists (select * from sys.columns
where object_id=object_id('TableName','U') and name='ColumnName')
ALTER TABLE TableName
ADD ColumnName INT /* use the appropriate data type */
GO
/* creating Stored Procedure */
if object_id(N'ProcedureName','P') is null
EXEC sp_executesql N'CREATE PROCEDURE ProcedureName AS print 1'
GO
ALTER PROCEDURE ProcedureName
AS
/*your actual code here*/
GO
/* and so on */
You can see the object types for the object_id function here.
I'm wondering if it's possible to execute a stored procedure in an UPDATE statement in T-SQL.
I want to execute a stored procedure that will set the CategoryID for the Number table, passing in the number from the row the UPDATE statement is currently on.
So something like:
UPDATE [TelephoneNumberManagement].[dbo].[Number]
SET [CategoryID] = exec goldennumbers2 [Number];
No.
You could do this if it were a function:
UPDATE [TelephoneNumberManagement].[dbo].[Number]
SET [CategoryID] = dbo.goldennumbers2([Number]);
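Assuming goldennumbers2 only computes a category from the number, it could be rewritten as a scalar function along these lines (a sketch only, since I don't know what your procedure actually does; the parameter type and the categorisation logic are placeholders):
CREATE FUNCTION dbo.goldennumbers2 (@Number varchar(20))
RETURNS int
AS
BEGIN
    DECLARE @CategoryID int;

    -- placeholder logic: numbers ending in repeated digits count as "golden"
    IF RIGHT(@Number, 3) IN ('000', '111', '222', '333', '444', '555', '666', '777', '888', '999')
        SET @CategoryID = 1;
    ELSE
        SET @CategoryID = 2;

    RETURN @CategoryID;
END
Note that a scalar user-defined function has to be called with at least a two-part name, which is why the UPDATE above uses dbo.goldennumbers2 rather than just goldennumbers2.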
Just keep in mind that a function can't have side-effects. If you're trying to run a bunch of DML statements in that procedure you should:
A) Use a trigger (a sketch follows below), if you have dependencies in other tables that need to be kept in sync with your Number table. Even so you might want to...
B) Rethink your design. This feels like trying to mix set-based and iterative programming practices. There's almost certainly a more pure solution.
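For illustration, if the goal is simply to keep CategoryID populated automatically, the trigger route might look roughly like this (a hypothetical sketch; the trigger name is invented, it assumes Number uniquely identifies a row, and it reuses the dbo.goldennumbers2 function sketched above):
-- Run this in the TelephoneNumberManagement database.
CREATE TRIGGER dbo.trNumber_SetCategory
ON dbo.Number
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- Recompute the category only for the rows that were just inserted or updated.
    -- Recursive trigger firing is off by default, so the UPDATE below won't re-fire this trigger.
    UPDATE n
    SET    CategoryID = dbo.goldennumbers2(n.Number)
    FROM   dbo.Number AS n
    JOIN   inserted   AS i ON i.Number = n.Number;
END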
Not really; there are some options, like user-defined functions. Triggers might be able to do what you want, depending on what you're trying to do and why.
What exactly does your goldennumbers2 procedure do?
Executing the following statement with SQL Server 2005 (My tests are through SSMS) results in success upon first execution and failure upon subsequent executions.
IF OBJECT_ID('tempdb..#test') IS NULL
CREATE TABLE #test ( GoodColumn INT )
IF 1 = 0
SELECT BadColumn
FROM #test
What this means is that something is comparing the columns I am accessing in my SELECT statement against the columns that exist on a table when the script is "compiled". For my purposes this is undesirable functionality. My question is whether there is anything that can be done so that this code would execute successfully on every run, or, if that is not possible, perhaps someone could explain why the demonstrated functionality is desirable. The only solutions I currently have are to wrap the SELECT with EXEC or to use SELECT *, but I don't like either of those solutions.
Thanks
If you put:
IF OBJECT_ID('tempdb..#test') IS NOT NULL
DROP TABLE #test
GO
At the start, then the problem will go away, as the batch will get parsed before the #test table exists.
What you're asking is for the system to recognise that "1=0" will always evaluate to false. If it were ever true (which could potentially be the case for most real-life conditions), then you'd probably want to know that you were about to run something that would cause failure.
If you drop the temporary table and then create a stored procedure that does the same:
CREATE PROC dbo.test
AS
BEGIN
IF OBJECT_ID('tempdb..#test') IS NULL
CREATE TABLE #test ( GoodColumn INT )
IF 1 = 0
SELECT BadColumn
FROM #test
END
Then this will happily be created, and you can run it as many times as you like.
Rob
Whether or not this behaviour is "desirable" from a programmer's point of view is debatable of course -- it basically comes down to the difference between statically typed and dynamically typed languages. From a performance point of view, it's desirable because SQL Server needs complete information in order to compile and optimize the execution plan (and also cache execution plans).
In a word, T-SQL is not an interpreted or dynamically typed language, and so you cannot write code like this. Your options are either to use EXEC, or to use another language and embed the SQL queries within it.
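For completeness, the EXEC workaround mentioned in the question looks like this: the reference to BadColumn lives inside a string, so it is only resolved if that branch actually executes (same #test example as above):
IF OBJECT_ID('tempdb..#test') IS NULL
    CREATE TABLE #test ( GoodColumn INT )

IF 1 = 0
    -- the column name is only checked when the string is executed, never at batch compile time
    EXEC ('SELECT BadColumn FROM #test')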
This problem is also visible in these situations:
IF 1 = 1
select dummy = GETDATE() into #tmp
ELSE
select dummy = GETDATE() into #tmp
Although the second statement is never executed, the same error occurs.
It seems the query engine's first-level validation ignores all conditional statements.
You say you have problems with subsequent requests, and that is because the object already exists. It is recommended that you drop your temporary tables as soon as possible when you are done with them.
Read more about temporary table performance at:
SQL Server performance.com
When it comes to creating stored procedures, views, functions, etc., is it better to do a DROP...CREATE or an ALTER on the object?
I've seen numerous "standards" documents stating to do a DROP...CREATE, but I've seen numerous comments and arguments advocating for the ALTER method.
The ALTER method preserves security, while I've heard that the DROP...CREATE method forces a recompile of the entire SP the first time it's executed, instead of just a statement-level recompile.
Can someone please tell me if there are other advantages / disadvantages to using one over the other?
ALTER will also force a recompile of the entire procedure. Statement-level recompile applies to statements inside procedures, e.g. a single SELECT, that are recompiled because the underlying tables change, without any change to the procedure. It wouldn't even be possible to selectively recompile just certain statements on ALTER PROCEDURE; in order to understand what changed in the SQL text after an ALTER PROCEDURE, the server would have to ... compile it.
For all objects ALTER is always better because it preserves all security, all extended properties, all dependencies and all constraints.
This is how we do it:
if object_id('YourSP') is null
exec ('create procedure dbo.YourSP as select 1')
go
alter procedure dbo.YourSP
as
...
The code creates a "stub" stored procedure if it doesn't exist yet, otherwise it does an alter. In this way any existing permissions on the procedure are preserved, even if you execute the script repeatedly.
Starting with SQL Server 2016 SP1, you now have the option to use CREATE OR ALTER syntax for stored procedures, functions, triggers, and views. See CREATE OR ALTER – another great language enhancement in SQL Server 2016 SP1 on the SQL Server Database Engine Blog. For example:
CREATE OR ALTER PROCEDURE dbo.MyProc
AS
BEGIN
SELECT * FROM dbo.MyTable
END;
Altering is generally better. If you drop and create, you can lose the permissions associated with that object.
If you have a function/stored proc that is called very frequently from a website for example, it can cause problems.
The stored proc will be dropped for a few milliseconds/seconds, and during that time, all queries will fail.
If you do an alter, you don't have this problem.
The templates for newly created stored procs are usually of this form:
IF EXISTS (SELECT * FROM sysobjects WHERE type = 'P' AND name = '<name>')
BEGIN
DROP PROCEDURE <name>
END
GO
CREATE PROCEDURE <name>
......
However, the opposite is better, imo:
If the storedproc/function/etc doesn't exist, create it with a dummy select statement. Then, the alter will always work - it will never be dropped.
We have a stored proc for that, so our stored procs/functions usually look like this:
EXEC Utils.pAssureExistance 'Schema.pStoredProc'
GO
ALTER PROCEDURE Schema.pStoredProc
...
and we use the same stored proc for functions:
EXEC Utils.pAssureExistance 'Schema.fFunction'
GO
ALTER FUNCTION Schema.fFunction
...
In Utils.pAssureExistance we do an IF and look at the first character after the ".": if it's an "f", we create a dummy function; if it's a "p", we create a dummy stored proc.
Be careful though, if you create a dummy scalar function, and your ALTER is on a table-valued function, the ALTER FUNCTION will fail, saying it's not compatible.
Again, Utils.pAssureExistance can be handy, with an additional optional parameter:
EXEC Utils.pAssureExistance 'Schema.fFunction', 'TableValuedFunction'
will create a dummy table-valued function.
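For reference, a helper along those lines might look roughly like this (a simplified sketch of the idea rather than our exact code; it assumes two-part names such as 'Schema.pStoredProc' and that the Utils schema already exists):
CREATE PROCEDURE Utils.pAssureExistance
    @ObjectName sysname,                  -- e.g. 'Schema.pStoredProc' or 'Schema.fFunction'
    @ObjectType varchar(30) = NULL        -- e.g. 'TableValuedFunction' to force a table-valued stub
AS
BEGIN
    SET NOCOUNT ON;

    -- nothing to do if the object is already there: the ALTER that follows will work
    IF OBJECT_ID(@ObjectName) IS NOT NULL
        RETURN;

    -- the first character after the '.' decides which kind of stub to create
    DECLARE @FirstChar char(1);
    SET @FirstChar = SUBSTRING(@ObjectName, CHARINDEX('.', @ObjectName) + 1, 1);

    IF @ObjectType = 'TableValuedFunction'
        EXEC ('CREATE FUNCTION ' + @ObjectName + ' () RETURNS TABLE AS RETURN (SELECT 1 AS Dummy)');
    ELSE IF @FirstChar = 'f'
        EXEC ('CREATE FUNCTION ' + @ObjectName + ' () RETURNS int AS BEGIN RETURN 1 END');
    ELSE IF @FirstChar = 'p'
        EXEC ('CREATE PROCEDURE ' + @ObjectName + ' AS SELECT 1');
END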
Additionally, I might be wrong, but I think if you do a DROP PROCEDURE while a query is currently using the stored proc, it will fail.
However, an ALTER PROCEDURE will wait for all queries to stop using the stored proc, and then alter it. If the queries are "locking" the stored proc for too long (say a couple of seconds), the ALTER will stop waiting for the lock and alter the stored proc anyway: the queries using the stored proc will probably fail at that point.
DROP generally loses permissions AND any extended properties.
On some UDFs, ALTER will also lose extended properties (definitely on SQL Server 2005 multi-statement table-valued functions).
I typically do not DROP and CREATE unless I'm also recreating those things (or know I want to lose them).
I don't know if it's possible to make such a blanket comment and say "ALTER is better". I think it all depends on the situation. If you require this sort of granular permissioning down to the procedure level, you probably should handle this in a separate procedure. There are benefits to having to drop and recreate: it cleans out the existing security and resets it to what's predictable.
I've always preferred using drop/recreate. I've also found it easier to store them in source control, instead of doing ... if exists do alter and if not exists do create.
With that said... if you know what you're doing... I don't think it matters too much.
If you perform a DROP and then use a CREATE, you have almost the same effect as using an ALTER VIEW statement. The problem is that you need to entirely re-establish your permissions on who can and can't use the view. ALTER retains any dependency information and any permissions that have been set.
You've asked a question specifically relating to DB objects that do not contain any data, and theoretically should not be changed that often.
It's likely you may need to edit these objects, but not every 5 minutes. Because of this I think you've already hit the nail on the head - permissions.
Short answer: not really an issue, so long as permissions are not an issue.
We used to use ALTER while we were working in development, either creating new functionality or modifying existing functionality. When we were done with our development and testing we would then do a drop and create. This modifies the date/time stamp on the procs so you can sort them by date/time.
It also allowed us to see what was bundled by date for each deliverable we sent out.
An add with a drop-if-exists is better because, if you have multiple environments, when you move the script to QA or test or prod you don't know whether the script already exists in that environment. By adding a drop (if it already exists) and then an add, you will be covered regardless of whether it exists or not. You then have to reapply permissions, but that's better than hearing that your install script errored out.
From a usability point of view, a drop and create is better than an alter. Alter will fail in a database that doesn't contain that object, but having an IF EXISTS DROP and then a CREATE will work both in a database where the object already exists and in one where it doesn't. In Oracle and PostgreSQL you normally create functions and procedures with the statement CREATE OR REPLACE, which does the same as a SQL Server IF EXISTS DROP followed by a CREATE. It would be nice if SQL Server picked up this small but very handy syntax.
This is how I would do it. Put all this in one script for a given object.
IF EXISTS ( SELECT 1
FROM information_schema.routines
WHERE routine_schema = 'dbo'
AND routine_name = '<PROCNAME>'
AND routine_type = 'PROCEDURE' )
BEGIN
DROP PROCEDURE <PROCNAME>
END
GO
CREATE PROCEDURE <PROCNAME>
AS
BEGIN
END
GO
GRANT EXECUTE ON <PROCNAME> TO <ROLE>
GO