How to Manage SQL Source Code?

I am in charge of a database.
It has around 126 sprocs, some 20 views, and some UDFs. There are also some tables that hold fixed configuration data for our various applications.
I have been using one big text file containing IF EXISTS ... DELETE GO CREATE PROCEDURE ... for all the sprocs, UDFs, views, and all the insert/update statements for the configuration data.
In the course of time, new sprocs were added, or existing sprocs were changed.
The biggest mistake (as far as I am aware) I have made with this BIG single text file is that I added the code for new/changed sprocs at the beginning of the file, but forgot to remove the earlier code for those same sprocs. Let's illustrate this:
Say my BIG script (version 1) contains the scripts to create these sprocs and views:
sp 1
sp 2
sp 3
view 1
view 2
The database's version table gets updated to version 1.
Now there is some change in sp 2. So the version 2 of the BIG script is now:
sp2 --> (newly added)
sp1
sp2
sp3
view 1
view 2
So, obviously, running version 2 of the BIG script is not going to update my sp 2.
I realised this rather late, with 100+ sprocs already in the file.
Remedial Action:
I have created a folder structure. One subfolder for each sproc/view.
I have gone through the latest version of the BIG script from the beginning and placed the code for every object into its respective folder. Some scripts are repeated more than once in the BIG script. If there is more than one block of code for a specific sproc, I put the earlier version into another subfolder called "old" within that sproc's folder. Luckily I have always documented every change I made to each sproc/view: I write down the date, a version number, and a description of the changes as a comment in the sproc's code. This has helped me a lot in figuring out the latest version of the code when there is more than one block for the same sproc.
I have created a DOS batch process to concatenate all the individual scripts into the BIG script. I tried using .NET StreamReader/StreamWriter, but that mangled the encoding and the "£" sign, so I am sticking with the DOS batch for the time being.
Is there any way I can improve the whole process?
At the moment I am after some way to document the versioning of the BIG script along with the versions of its individual sprocs (a rough sketch of a table that could hold this follows the example below). For example, I would like to be able to record:
Big Script (version 1) contains
sp 1 version 1
sp 2 version 1
sp 3 version 3
view 1 version 1
view 2 version 1
Big script (version 2) has
sp 1 version 1
sp 2 version 2
sp 3 version 3
view 1 version 1
view 2 version 1
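For illustration only, I imagine something like the hypothetical manifest table below could hold this mapping (the table and column names are just placeholders):
-- Records which object versions make up each version of the BIG script
CREATE TABLE dbo.BigScriptManifest (
    BigScriptVersion int NOT NULL,
    ObjectName sysname NOT NULL,   -- e.g. 'sp 2' or 'view 1'
    ObjectVersion int NOT NULL,
    PRIMARY KEY (BigScriptVersion, ObjectName)
);
-- e.g. BIG script version 2 contains sp 2 at version 2
INSERT INTO dbo.BigScriptManifest (BigScriptVersion, ObjectName, ObjectVersion) VALUES (2, 'sp 2', 2);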
Any feedback is welcomed.

Have you looked at Visual Studio Team System Database Edition (now folded into Developer Edition)?
One of the things it will do is allow you to maintain the SQL to build the whole database, and then apply only the changes needed to bring a target database to the new state. I believe that it will also, given a reference database, create a script to bring a database matching the reference schema up to the current model (e.g. to deploy to production without developers having access to production).

The way we do it is to have separate files for tables, stored procedures, views etc., and store them in their own directories as well. For execution we just have a script which executes all the files. It's definitely a lot easier to read than having one huge file.
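For illustration, assuming the files are run through sqlcmd (or SSMS in SQLCMD mode), the runner can be little more than a list of includes; the server, database and paths below are made up:
-- master.sql: run with something like "sqlcmd -S MyServer -d MyDatabase -i master.sql"
:r .\Tables\MyTable.sql
:r .\Views\MyView.sql
:r .\StoredProcedures\MyProc.sql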
To update each table for example, we use this template:
if not exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[MyTable]') and OBJECTPROPERTY(id, N'IsUserTable') = 1)
begin
    CREATE TABLE [dbo].[MyTable](
        [ID] [int] NOT NULL,
        [Name] [varchar](255) NULL
    ) ON [PRIMARY]
end
else
begin
    -- MyTable.Name
    IF (SELECT COL_LENGTH('MyTable','Name')) IS NULL
    BEGIN
        ALTER TABLE MyTable ADD [Name] [varchar](255) NULL
        PRINT 'MyTable.Name CREATED.'
    END
    -- etc
end

When I had to handle a handful of SQL tables, procedures and triggers, I did the following:
All files under version control (CVS at that time, but look at SVN or Bazaar, for example)
One file per object, named after the object
A makefile stating the dependencies between files
It was an Oracle project, and every time you change a table you have to recompile its triggers.
My triggers also used several modules, so they had to be recompiled whenever their dependent modules were updated...
The makefile avoids the "big file" approach: you don't have to execute ALL your code for every change.
Under Windows you can download "NMAKE.exe" to use makefiles.
HTH

Please see my answer to a similar question, which may help:
Database schema updates
Some additional points:
When we make a Release, e.g. for Version 2, we concatenate together all the Sprocs that have a modified date more recent than the previous Release.
We are careful to add at least one blank line to the bottom of each Sproc script, and to start each Sproc script with a comment - otherwise concatenation can yield "GOCREATE NextSproc" - which is a bore!
When we run the concatenated script we sometimes find that we get conflicts - e.g. calling sub-Sprocs that don't already exist. We duplicate the code for such Sprocs at the bottom of the script - so they are recreated a second time - to ensure that SQL Server's dependency table is correct. (i.e. we sort this out at the QA stage for the Release)
Also, we put a GRANT permissions statement at the bottom of each Sproc script, so that when we Drop / Create an SProc we re-Grant the permissions. However, if your Permissions are allocated to each user, or are differently assigned on each server, then it may be better to use ALTER rather than CREATE - but that is a problem if the SProc does not already exist, so then it is best to do:
IF OBJECT_ID('dbo.MySproc', 'P') IS NULL
    EXEC ('CREATE PROCEDURE dbo.MySproc AS SELECT ''STUB''')
GO
GRANT EXECUTE ON dbo.MySproc TO ... -- grant your usual Execute permissions here
and then that stub is immediately replaced by the real ALTER PROCEDURE statement.
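For example, once the stub (or an earlier version) exists, the real body follows as an ALTER; the body below is only an illustration:
ALTER PROCEDURE dbo.MySproc
AS
    SELECT 'real implementation goes here' -- replace with the actual Sproc body
GO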


Pre-execute a query when any Stored Procedure is called

Our enterprise's database is 20+ years old and full of junk, so we're planning to start deleting tables and Stored Procedures. The problem is that we don't know exactly which of them are unused, so we thought about doing some research to spot them.
I tried this answer's solution, but I think the numbers it returns only cover the queries currently in the plan cache.
I have an idea of how to do it, but I don't know if it's possible:
- Create a system table with 3 columns: Stored Procedure name, number of executions, and date of last call (a rough sketch of such a table is shown just below)
- The tricky part: every time a Stored Procedure is executed, perform a query to insert/update that table.
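For illustration only, the table I have in mind might look something like this (the name and columns are placeholders):
-- Hypothetical usage-tracking table
CREATE TABLE dbo.StoredProcedureUsage (
    ProcedureName sysname NOT NULL PRIMARY KEY,
    ExecutionCount int NOT NULL DEFAULT 0,
    LastCall datetime NULL
);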
To avoid having to modify ALL our Stored Procedures (there are easily 600+), I thought of adding a Database Trigger, but it turns out triggers can only be attached to tables, not Stored Procedures.
My question is, is there any way to pre-execute a query when ANY Stored Procedure is called?
EDIT: Our Database is a SQL Server
I'm aware that I asked this question a while ago, but I'll post what I've found, so anyone who stumbles upon it can use it.
When the question was asked, my goal was to retrieve the number of times all Stored Procedures were executed, to try to get rid of the unused ones.
While this is not perfect, as it doesn't show the date of last execution, I found this query, which retrieves all Stored Procedures in all databases and displays the number of times each has been executed since its plan was cached:
SELECT
    Db_name(st.dbid)                      [Base de Datos],
    Object_schema_name(st.objectid, dbid) [Schema],
    Object_name(st.objectid, dbid)        [USP],
    Max(cp.usecounts)                     [Total Ejecuciones]
FROM
    sys.dm_exec_cached_plans cp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
WHERE
    Db_name(st.dbid) IS NOT NULL
    AND cp.objtype = 'proc'
GROUP BY
    cp.plan_handle,
    Db_name(st.dbid),
    Object_schema_name(objectid, st.dbid),
    Object_name(objectid, st.dbid)
ORDER BY
    Max(cp.usecounts)
I found this script on this webpage (it's in Spanish). It also has 2 more useful scripts on similar topics.
I used this script (subsequently improved)
https://chocosmith.wordpress.com/2012/12/07/tsql-recompile-all-views-and-stored-proceedures-and-check-for-error/#more-571
To run through all of your objects and find the ones that are no longer valid.
If you want I will post my enhanced version which fixes a few things.
Then create a new schema (I call mine recycle) and move those invalid objects in there.
Now run it again.
You may end up moving out a whole bunch of non-functional objects.
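A minimal sketch of that quarantine step, assuming a hypothetical procedure name:
-- One-time setup: the quarantine schema ("recycle" is just my naming convention)
CREATE SCHEMA recycle;
GO
-- Move a suspect object out of dbo; it can be transferred back the same way if something breaks
ALTER SCHEMA recycle TRANSFER dbo.usp_SomeOldProcedure;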

How to effectively version stored procedures?

I am part of a database development team working for a big e-shop. We are using MS SQL 2016 and ASP.NET. SQL Server is used by clients from 10+ IIS servers using connection pooling (we have approx. 7-10k batch/sec) in the production environment, and we use 18 DEV/TESTING IIS servers (but only one DEV database, because of its multi-TB size).
We are developing new functionality that forces us to change existing stored procedures quite often.
Deploying a change to the production environment involves both modifying the application on IIS and changing the database procedures. When deploying, we always update 5 IIS servers, then 5 more, and so on. In the meantime, both the old and the new application versions exist on the IIS servers. These versions must coexist for some time while using procedures in the same database. At the database level, we solve this by keeping several versions of a procedure: the old app version calls EXEC dbo.GetProduct, the new app version uses dbo.GetProduct_v2. Once the new version of the application is deployed to all IIS servers, everyone is using dbo.GetProduct_v2. During the next deployment, the situation is reversed and dbo.GetProduct will contain the new version. The situation is similar in the development and testing environment.
I fully realize that this solution is not ideal and I would like to be inspired.
We are considering separating the data part and the logic part. One database would contain the data tables; other databases would contain only procedures and other program objects. When deploying a new version, we would simply deploy a new version of the entire database containing the logic and would not need to create versions of individual procedures. Procedures in the logic database would query the data database (roughly as in the sketch below).
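For illustration only (the database, table and column names are made up), such a procedure would reach the data database through three-part names:
-- Lives in the "logic" database, reads from the "data" database
CREATE PROCEDURE dbo.GetProduct
    @ProductID int
AS
    SELECT p.ProductID, p.Name
    FROM DataDb.dbo.Products AS p
    WHERE p.ProductID = @ProductID;
GO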
However, the disadvantage of this solution is that we could not use natively compiled procedures, which we plan to adopt next year, because they do not support querying other databases.
Another option is using one database and separate procedure versions in different schemas...
If you have any ideas or pros/cons, or you know of tools that can help us manage/deploy/use multiple proc versions, please comment.
Thank you so much
Edit: We are using TFS and Git, but this does not solve the versioning of procedures in the SQL database. My main question is how to deal with the need to manage multiple versions of IIS applications using multiple versions of the procedures in the database.
Versioning is easy with SSDT or SQL Compare and source control. So are deployments.
Your problem is not versioning.
You need two different stored procedures with the same name, probably same parameters but different code and maybe different results. It's more achievable in, say, .net code because you can use overloading to a point.
Your problem is phased deployments using different code:
Two versions of the same proc must co-exist.
In your case, I would consider using synonyms to mask the actual stored procedure name.
So you have these stored procedures.
dbo.GetProduct_v20170926 (last release)
dbo.GetProduct_v20171012 (this release)
dbo.GetProduct_v20171025 (next release)
dbo.GetProduct_v20171113 (one after)
Then you have
CREATE SYNONYM dbo.GetProductBlue FOR dbo.GetProduct_v20171012;
CREATE SYNONYM dbo.GetProductGreen FOR dbo.GetProduct_v20170926;
Your phased IIS deployments refer to one of the synonyms.
Next release...
DROP SYNONYM dbo.GetProductBlue;
CREATE SYNONYM dbo.GetProductBlue FOR dbo.GetProduct_v20171025;
then
DROP SYNONYM dbo.GetProductGreen;
CREATE SYNONYM dbo.GetProductGreen FOR dbo.GetProduct_v20171113;
Using a different schema is the same result but you'd end up with
- Blue.GetProduct
- Green.GetProduct
Or code your release date into the schema name.
- Codev20171025.GetProduct
- Codev20171113.GetProduct
You'd have the same problem even if you had another set of IIS servers and kept one code base on each set of servers:
Based on the blue/green deployment model
A couple assumptions.
You have a version number in your IIS code somewhere - perhaps an App.config or Web.config file and that version number can be referenced in your .NET code
Your goal is not to change your IIS .NET SP names on every release but have it call the correct version of the SP in the DB
All versions of the SP take the same parameters
Different version of the SP can return different results
Ultimately there is no way around having multiple versions of the stored procedure in the DB. The idea is to abstract that away, as much as possible, from IIS (I am assuming).
Based on the above, I am thinking you could add another parameter to your SP which accepts a version number (which you would likely get from Web.config in IIS).
Then your stored proc dbo.GetProduct becomes a "controller" or "routing" stored procedure whose sole purpose is to take the version number and pass the remaining parameters to the appropriate underlying SP.
So you would have 1 SP per version (use whatever naming convention you wish). And dbo.GetProduct would call the appropriate one based on the version number passed in. An example is below.
create proc dbo.GetProduct_v1 @Param1 int, @Param2 int
as
begin
    --do whatever is needed for v1
    select 1
end
go

create proc dbo.GetProduct_v2 @Param1 int, @Param2 int
as
begin
    --do whatever is needed for v2
    select 2
end
go

create proc dbo.GetProduct @VersionNumber int, @Param1 int, @Param2 int
as
begin
    if @VersionNumber = 1
    begin
        exec dbo.GetProduct_v1 @Param1, @Param2
    end
    if @VersionNumber = 2
    begin
        exec dbo.GetProduct_v2 @Param1, @Param2
    end
end
Another thought would be to dynamically build your SP name in IIS (based on the version number in Web.config) instead of hard coding the SP name.

SQL - How to: IF then GO (execute new query) in the same script without dynamic SQL?

In short, I'm managing a bunch of versioned SQL Scripts where one requirement is that they need to be sort of backwards compatible in that the same scripts can be executed multiple times, still guaranteeing the same end result for the latest version. Basically, say we are on version 3. We need to be able to run scripts for versions 1, 2 and 3 over and over, without errors, and still guarantee that the end result is the same complete version 3.
Now this is easy in normal scenarios (just check whether the column / table / type is right and create / modify it if not), but how do you deal with, for instance, a trigger that's way over 8000 characters long and can't be executed as dynamic SQL? When version 2 is installed, the triggers are dropped and, at the end, new ones are created to match v2's data model. But if v3 removed one of the columns referred to by the v2 trigger, that trigger will now fail.
I can't make any kind of IF check to see whether our log already has the v3 scripts, or whether the data model doesn't match the requirements. I'd hate to make others do manual labor for something I'm sure can be automated one way or another. So is there any nice gimmick, trick, or something I missed that could help?
Thanks for the help. :)
but how do you deal with for instance a trigger that's way over 8000
characters long and can't be executed as dynamic SQL?
It can be executed using sp_executesql, for which the size of the SQL statement is limited only by available database server memory.
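A minimal sketch of the idea (the trigger and table names are made up; the point is that the variable is nvarchar(max), so the body is not limited to 8000 characters):
DECLARE @sql nvarchar(max);
SET @sql = N'CREATE TRIGGER dbo.trMyTable_AfterUpdate ON dbo.MyTable AFTER UPDATE AS
BEGIN
    PRINT ''trigger body goes here'';  -- a real body can be far longer than 8000 characters
END';
EXEC sp_executesql @sql;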
You need to check whether the object exists, and create it if needed or drop it otherwise.
if object_id(N'your_table_name', 'U') is null
    CREATE TABLE
    ...
GO

/* add column */
if not exists (select * from sys.columns
               where object_id = object_id('TableName', 'U') and name = 'ColumnName')
    ALTER TABLE TableName
        ADD ColumnName
GO

/* creating Stored Procedure */
if object_id(N'ProcedureName', 'P') is null
    EXEC sp_executesql N'CREATE PROCEDURE ProcedureName AS print 1'
GO

ALTER PROCEDURE ProcedureName
AS
    /* your actual code here */
GO

/* and so on */
You can see the object types for the object_id function here.

Problem with SQL Server client DB upgrade script

SQL Server 2005, Win7, VS2008. I have to upgrade a database from an old version of the product to a newer one. I'd like to have one script that either creates a new database or upgrades an old database to the new state. I am trying to do the following (SQL script below) and get this error (when running on a machine with no database):
Database 'MyDatabase' does not exist. Make sure that the name is
entered correctly.
The question is:
How can I specify the database name in the upgrade part?
Is there a better way to write the create/upgrade script?
SQL code:
USE [master]

-- DB upgrade part
if exists (select name from sysdatabases where name = 'MyDatabase')
BEGIN
    IF (<Some checks that DB is new>)
    BEGIN
        raiserror('MyDatabase database already exists and no upgrade required', 20, -1) with log
    END
    ELSE
    BEGIN
        USE [MyDatabase]
        -- create some new tables
        -- alter existing tables
        raiserror('MyDatabase database upgraded successfully', 20, -1) with log
    END
END

-- DB creating part
CREATE DATABASE [MyDatabase];
-- create new tables
-- create new tables
You don't usually want to explicitly specify the database name in a script. Rather, supply it externally, or pre-process the SQL to replace a $$DATABASENAME$$ token with the name of an actual database (see the sketch below).
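A minimal sketch of that idea using sqlcmd scripting variables; the variable and file names are made up, and the script must be run with sqlcmd or SSMS in SQLCMD mode:
-- Invoke with something like: sqlcmd -i create_or_upgrade.sql -v DatabaseName="MyDatabase"
IF DB_ID(N'$(DatabaseName)') IS NULL
    CREATE DATABASE [$(DatabaseName)];
GO
USE [$(DatabaseName)];
GO
-- create / alter objects here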
You're not going to be able to include the USE [MyDatabase] in your script since, if the database doesn't exist, the query won't parse.
Instead, what you can do is keep 2 separate scripts, one for an upgrade and one for a new database. Then you can call the scripts within the IF branches through xp_cmdshell and dynamic SQL. The following link has some examples that you can follow:
http://abhijitmore.wordpress.com/2011/06/21/how-to-execute-sql-using-t-sql/
PowerShell may make this task easier as well, but I don't have any direct experience using it.

Using table just after creating it: object does not exist

I have a script in T-SQL that goes like this:
create table TableName (...)
SET IDENTITY_INSERT TableName ON
And on second line I get error:
Cannot find the object "TableName" because it does not exist or you do not have permissions.
I execute it from Management Studio 2005. When I put "GO" between these two lines, it works. But what I would like to accomplish is not to use "GO", because I would like to place this code in my application when it is finished.
So my question is how to make this work without using "GO" so that I can run it programmatically from my C# application.
Without using GO, programmatically, you would need to make 2 separate database calls.
Run the two scripts one after the other - using two calls from your application.
You should only run the second once the first has successfully run anyway, so you could run the first script and on success run the second script. The table has to have been created before you can use it, which is why you need the GO in management studio.
From the BOL: "SQL Server utilities interpret GO as a signal that they should send the current batch of Transact-SQL statements to SQL Server". Therefore, as Jose Basilio already pointed out, you have to make separate database calls.
If this can help: I was faced with the same problem, and I had to write a little (very basic) parser to split every single script into a bunch of mini-scripts, which are sent one at a time to the database.
Something even better than tpdi's temp table is a table variable. They run lightning fast and are dropped automatically once out of scope.
This is how you make one:
declare @TableName table (ColumnName int, ColumnName2 nvarchar(50))
Then to insert you just do this:
insert into @TableName (ColumnName, ColumnName2)
select 1, 'A'
Consider writing a stored proc that creates a temporary table and does whatever it needs to with that. If you create a real table, your app won't be able to run the script more than once, unless it also drops the table -- in which case, you have exactly the functionality of a temp table.
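For illustration, a minimal sketch of that approach (the procedure name and columns are made up):
CREATE PROCEDURE dbo.DoWorkWithTempTable
AS
BEGIN
    -- The temp table exists only for the duration of this procedure call
    CREATE TABLE #Work (ID int NOT NULL, Name varchar(255) NULL);

    INSERT INTO #Work (ID, Name) VALUES (1, 'A');

    SELECT ID, Name FROM #Work;
    -- #Work is dropped automatically when the procedure finishes
END
GO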