How to effectively version stored procedures? - sql

I am part of a database development team working for a big e-shop. We are using MS SQL 2016 and ASP.NET. The SQL Server is used by clients from 10+ IIS servers using connection pooling (approx. 7-10k batches/sec) in the production environment, and we have 18 DEV/TESTING IIS servers (but only one DEV database because of its multi-TB size).
We are developing new functionality that forces us to make changes to existing stored procedures quite often.
A production deployment involves both a change to the application on IIS and a change to the database procedures. During deployment, the application is rolled out to 5 IIS servers, then to 5 more, and so on. In the meantime, both the old and the new versions exist on IIS servers and must coexist for some time while using the procedures in the database at the same time. At the database level, we handle this by keeping several versions of a procedure: the old app version calls EXEC dbo.GetProduct, the new app version uses dbo.GetProduct_v2. After the new version of the application is deployed to all IIS servers, everyone is using dbo.GetProduct_v2. During the next deployment the situation is reversed, and dbo.GetProduct will contain the new version. The situation is similar in the development and testing environments.
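To make the current approach concrete, here is a minimal sketch; the parameter and procedure bodies are placeholders only:
-- called by the application build that is being phased out
CREATE PROCEDURE dbo.GetProduct @ProductId int
AS
    SELECT 1 AS OldLogic;
GO
-- called by the application build that is being rolled out
CREATE PROCEDURE dbo.GetProduct_v2 @ProductId int
AS
    SELECT 2 AS NewLogic;
GO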
I fully realize that this solution is not ideal, and I would welcome some inspiration.
We are considering separating the data part and the logic part. One database would hold the data tables; other databases would contain only procedures and other program objects. When deploying a new version, we would simply deploy a new version of the entire database containing the logic and would not need to create versioned procedure names. Procedures in the logic database would query the database with the data.
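A minimal sketch of what we have in mind; the database and object names here are only illustrative:
-- EshopLogic_v2 contains only procedures; EshopData contains only tables
USE EshopLogic_v2;
GO
CREATE PROCEDURE dbo.GetProduct
    @ProductId int
AS
BEGIN
    -- cross-database query into the data database
    SELECT p.ProductId, p.Name, p.Price
    FROM EshopData.dbo.Product AS p
    WHERE p.ProductId = @ProductId;
END
GO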
However, the disadvantage of this solution is that it rules out the natively compiled procedures we plan to use next year, because they do not support querying other databases.
Another option is using one database and separating procedure versions into different schemas...
If you have any ideas, pros/cons, or you know of tools that can help us manage/deploy/use multiple procedure versions, please comment.
Thank you so much
Edit: We are using TFS and Git, but this does not solve the versioning of procedures in the SQL database. My main question is how to deal with the need for multiple versions of the IIS application to use multiple versions of the procedures in the database.

Versioning is easy with SSDT or SQL Compare and source control. So are deployments.
Your problem is not versioning.
You need two different stored procedures with the same name, probably the same parameters, but different code and maybe different results. That's more achievable in, say, .NET code because you can use overloading, to a point.
Your problem is phased deployments using different code:
Two versions of the same proc must co-exist.
In your case, I would consider using synonyms to mask the actual stored procedure name.
So you have these stored procedures.
dbo.GetProduct_v20170926 (last release)
dbo.GetProduct_v20171012 (this release)
dbo.GetProduct_v20171025 (next release)
dbo.GetProduct_v20171113 (one after)
Then you have
CREATE SYNONYM dbo.GetProductBlue FOR dbo.GetProduct_v20171012;
CREATE SYNONYM dbo.GetProductGreen FOR dbo.GetProduct_v20170926;
Your phased IIS deployments refer to one of the synonyms.
Next release...
DROP SYNONYM dbo.GetProductBlue;
CREATE SYNONYM dbo.GetProductBlue FOR dbo.GetProduct_v20171025;
then
DROP SYNONYM dbo.GetProductGreen;
CREATE SYNONYM dbo.GetProductGreen FOR dbo.GetProduct_v20171113;
Using a different schema gives the same result, but you'd end up with
- Blue.GetProduct
- Green.GetProduct
Or code your release date into the schema name.
- Codev20171025.GetProduct
- Codev20171113.GetProduct
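For the schema-based variant, a minimal sketch using the Blue/Green naming (the parameter and procedure body are placeholders only):
CREATE SCHEMA Blue;
GO
CREATE PROCEDURE Blue.GetProduct @ProductId int
AS
BEGIN
    -- this release's logic
    SELECT 1;
END
GO
-- the next release is deployed as Green.GetProduct (or a dated schema),
-- and each batch of IIS servers is pointed at Blue or Green as it is upgraded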
You'd have the same problem even if you had another set of IIS servers and kept one code base on each set of servers:
Based on the blue/green deployment model

A couple of assumptions:
You have a version number somewhere in your IIS code - perhaps in an App.config or Web.config file - and that version number can be referenced from your .NET code.
Your goal is not to change the SP names in your IIS .NET code on every release, but to have it call the correct version of the SP in the DB.
All versions of the SP take the same parameters.
Different versions of the SP can return different results.
Ultimately there is no way around having multiple versions of the stored procedure in the DB. The idea is to abstract that away from IIS (I am assuming) as much as possible.
Based on the above, I am thinking you could add another parameter to your SP that accepts a version number (which you would likely get from Web.config in IIS).
Then your stored proc dbo.GetProduct becomes a "controller" or "routing" stored procedure whose sole purpose is to take the version number and pass the remaining parameters to the appropriate underlying SP.
So you would have one SP per version (use whatever naming convention you wish), and dbo.GetProduct would call the appropriate one based on the version number passed in. An example is below.
create proc dbo.GetProduct_v1 @Param1 int, @Param2 int
as
begin
    --do whatever is needed for v1
    select 1
end
go
create proc dbo.GetProduct_v2 @Param1 int, @Param2 int
as
begin
    --do whatever is needed for v2
    select 2
end
go
create proc dbo.GetProduct @VersionNumber int, @Param1 int, @Param2 int
as
begin
    if @VersionNumber = 1
    begin
        exec dbo.GetProduct_v1 @Param1, @Param2
    end
    if @VersionNumber = 2
    begin
        exec dbo.GetProduct_v2 @Param1, @Param2
    end
end
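For example, a server still on the old application build and one on the new build would call it like this (parameter values are only illustrative):
exec dbo.GetProduct @VersionNumber = 1, @Param1 = 10, @Param2 = 20
exec dbo.GetProduct @VersionNumber = 2, @Param1 = 10, @Param2 = 20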
Another thought would be to build the SP name dynamically in IIS (based on the version number in Web.config) instead of hard-coding the SP name.

Related

SQL Server runs very slowly when called from a .NET application

I have a SQL call to a stored procedure in my ASP.NET application using NHibernate:
GetNamedQuery("MyProc")
.SetString("param1", value1)
.SetString("param2", value2)
...
SQL Server 2005 is used here. It runs well in our testing environment; the call takes about 2 seconds to complete. But when we moved it to a new server it started to take a very long time, and I get a timeout exception in my application.
However, I caught the calls in SQL Server Profiler and found that this one runs for 30 seconds. But when I copied the same query and simply ran it on the server, it completed in 2 seconds.
So the question is: what can affect queries coming from a .NET application?
Hands down, the most complete solutions to this type of problem are found here, one of the best-written pieces on the subject. If you're passing parameters into a stored procedure from an external application, one quick hack that works 80% of the time is to localize the parameters in the procedure:
CREATE PROCEDURE sp_Test
    @VarOne INT, @VarTwo INT
AS
BEGIN
    DECLARE @VOne INT, @VTwo INT
    SET @VOne = @VarOne
    SET @VTwo = @VarTwo
    /* Rest of code only uses @VOne and @VTwo for parameters */
END
This assumes, though, that you have parameters in your application that the stored procedure needs (which it looks like you do from the brief snippet of code you've posted). Otherwise, the provided link also covers some other oversights, and I highly recommend it to anyone troubleshooting performance from an external application.

SQL Server Synonyms and Concurrency Safety With Dynamic Table Names

I am working with some commercial schemas that have a set of similar tables which differ only in their language suffix, e.g.:
Products_en
Products_fr
Products_de
I also have several stored procedures that I use to access these tables for some administrative functions, and I have opted to use synonyms, since there is a lot of code and writing everything as dynamic SQL is just painful:
declare @lang varchar(50) = 'en'

if object_id('dbo.ProductsTable', 'SN') is not null
    drop synonym dbo.ProductsTable
exec('create synonym dbo.ProductsTable for dbo.Products_' + @lang)

/* Call the synonym table */
select top 10 * from dbo.ProductsTable
update ProductsTable set a = 'b'
My question is: how does SQL Server treat synonyms when it comes to concurrent access? My fear is that one procedure could start, then a second could come along and change the table the synonym points to halfway through, causing major issues. I could wrap everything in BEGIN TRAN and COMMIT TRAN, which should theoretically remove the risk of two processes changing a synonym; however, the documentation is scarce on this matter and I cannot get a definitive answer.
Just to note: although this system is concurrent, it is not high traffic, so the performance hit of using synonyms/transactions is not really an issue here.
Thanks for any suggestions.
Your fear is correct. Synonyms are not intended to be used this way. Wrapping it in a transaction (I am not sure what isolation level would be required) might solve the issue, but only by effectively making the system single-user.
If I were dealing with this, I would probably have gone with dynamic SQL, because I am familiar with it. However, having thought about it, I wonder if schemas could solve your problem.
If you created a schema for each language and then had a table called Products in each schema, your stored proc could reference an unqualified table name, and SQL Server should resolve the reference to the table in the default schema of the current user. You would then need either to change which account your application authenticates as to determine which schema it uses, or to use EXECUTE AS in a stored proc to decide which schema is the default.
I haven't tested this schema idea; I may not have thought of everything, and I don't know enough about your application to know whether it is actually workable in your case. Let us know if you decide to try it.

Stored Procedure stopped working

I have a stored procedure that I'm positive has no errors, but I recently deleted the table it references and imported a backup with exactly the same name and the same column settings (including identity) as the previous one, and now it doesn't work.
Is there any reason that deleting the table but importing a new one would break the stored procedure?
BTW: Running Microsoft SQL Server Express Edition w/ IIS.
You can try to recompile the stored procedure with:
exec sp_recompile YourProblemTableNameHere
This will mark all procedures that use the YourProblemTableNameHere table for recompilation. But that is just a guess based on the very limited info given.

How to Manage SQL Source Code?

I am in charge of a database.
It has around 126 sprocs, some 20 views, and some UDFs. There are also some tables that hold fixed configuration data for our various applications.
I have been using one big text file that contains an IF EXISTS ... DROP, GO, CREATE PROCEDURE ... block for all the sprocs, UDFs and views, plus all the inserts/updates for the configuration scripts.
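For reference, each such block follows roughly this drop-and-recreate pattern (the sproc name and body here are only illustrative):
IF OBJECT_ID(N'dbo.MySproc', N'P') IS NOT NULL
    DROP PROCEDURE dbo.MySproc
GO
CREATE PROCEDURE dbo.MySproc
AS
    SELECT 1
GO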
In the course of time, new sprocs were added, or existing sprocs were changed.
The biggest mistake (as far as I am aware) I have made in building this BIG single text file was to put the code for new/changed sprocs at the beginning of the file. However, I forgot to remove the previous code for those new/changed sprocs. Let me illustrate:
Say my BIG script (version 1) contains scripts to create these sprocs and views:
sp 1
sp 2
sp 3
view 1
view 2
The database's version table gets updated to version 1.
Now there is some change in sp 2, so version 2 of the BIG script is now:
sp 2 --> (newly added)
sp 1
sp 2
sp 3
view 1
view 2
So, obviously, running version 2 of the BIG script is not going to update my sp 2, because the older definition further down the file overwrites the new one.
I was rather late in realising this, with 100+ sprocs already in the file.
Remedial Action:
I have created a folder structure, with one subfolder for each sproc/view.
I have gone through the latest version of the BIG script from the beginning and placed the code for all scripts into their respective folders. Some scripts are repeated more than once in the BIG script. If there is more than one block of code creating a specific sproc, I put the earlier versions into another subfolder called "old" within that sproc's folder. Luckily I have always documented all the changes I made to sprocs/views etc. - I write down the date, a version number and a description of the changes as a comment in the sproc's code. This has helped me a lot in figuring out the latest version of the code for a sproc when there is more than one block of code for it.
I have created a DOS batch process to concatenate all the individual scripts into my BIG script. I tried using a .NET StreamReader/StreamWriter, but it messes up the encoding and the "£" sign, so I am sticking with the DOS batch process for the time being.
Is there any way I can improve the whole process?
At the moment I am after some way to document the versioning of the BIG script along with the versions of its individual sprocs. For example, I would like to be able to document:
Big Script (version 1) contains
sp 1 version 1
sp 2 version 1
sp 3 version 3
view 1 version 1
view 2 version 1
Big script (version 2) has
sp 1 version 1
sp 2 version 2
sp 3 version 3
view 1 version 1
view 2 version 1
Any feedback is welcomed.
Have you looked at Visual Studio Team System Database Edition (now folded into the Developer Edition)?
One of the things it will do is let you maintain the SQL to build the whole database, and then apply only the changes needed to bring the target to the new state. I believe that, given a reference database, it will also create a script to bring a database matching the reference schema up to the current model (e.g. to deploy to production without developers having access to production).
The way we do it is to have separate files for tables, stored procedures, views, etc., and to store them in their own directories as well. For execution we just have a script that executes all the files. It's definitely a lot easier to read than having one huge file.
To update each table, for example, we use this template:
if not exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[MyTable]') and OBJECTPROPERTY(id, N'IsUserTable') = 1)
begin
    CREATE TABLE [dbo].[MyTable](
        [ID] [int] NOT NULL,
        [Name] [varchar](255) NULL
    ) ON [PRIMARY]
end
else begin
    -- MyTable.Name
    IF (SELECT COL_LENGTH('MyTable','Name')) IS NULL
    BEGIN
        ALTER TABLE MyTable ADD [Name] [varchar](255) NULL
        PRINT 'MyTable.Name CREATED.'
    END
    --etc
end
When I had to handle a handful of SQL tables, procedures and triggers, I did the following:
All files under version control (CVS at that time, but look at SVN or Bazaar, for example)
One file per object, named after the object
A makefile stating the dependencies between the files
It was an Oracle project, and every time you change a table you have to recompile its triggers.
My triggers also used several modules, so they had to be recompiled as well whenever their dependent modules were updated...
The makefile avoids the "big file" approach: you don't have to execute ALL your code for every change.
Under Windows you can download NMAKE.exe to use makefiles.
HTH
Please see my answer to a similar question, which may help:
Database schema updates
Some additional points:
When we make a Release, e.g. for Version 2, we concatenate together all the Sprocs that have a modified date more recent than the previous Release.
We are careful to add at least one blank line at the bottom of each Sproc script, and to start each Sproc script with a comment - otherwise concatenation can yield "GOCREATE NextSproc" - which is a bore!
When we run the concatenated script we sometimes find conflicts - e.g. calls to sub-Sprocs that don't exist yet. We duplicate the code for such Sprocs at the bottom of the script - so they are created a second time - to ensure that SQL Server's dependency table is correct. (i.e. we sort this out at the QA stage for the Release.)
Also, we put a GRANT permissions statement at the bottom of each Sproc script, so that when we Drop/Create an SProc we re-Grant the permissions. However, if your permissions are allocated per user, or are assigned differently on each server, it may be better to use ALTER rather than CREATE - but that is a problem if the SProc does not already exist, so then it is best to do:
IF NOT EXISTS ...
    CREATE PROCEDURE MySproc AS SELECT 'STUB'
GRANT EXECUTE permissions
and then that Stub is immediately replaced by the real ALTER Sproc statement.
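Concretely, the stub-then-ALTER pattern might look like this (the sproc name, body, and GRANT target are only illustrative):
IF OBJECT_ID(N'dbo.MySproc', N'P') IS NULL
    EXEC ('CREATE PROCEDURE dbo.MySproc AS SELECT ''STUB''')
GO
GRANT EXECUTE ON dbo.MySproc TO MyAppRole  -- whatever role/user you normally grant to
GO
ALTER PROCEDURE dbo.MySproc
AS
    SELECT 1  -- the real body goes here
GO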

Cross-database queries with different DB names in different environments?

How would you handle cross-database queries in different environments? For example, db1-development and db2-development versus db1-production and db2-production.
If I want to do a cross-database query in development from db2 to db1, I could use the fully qualified name, [db1-development].[schema].[table]. But how do I maintain the queries and stored procedures between the different environments? [db1-development].[schema].[table] will not work in production because the database names are different.
I can see search-and-replace as a possible solution, but I am hoping there is a more elegant way to solve this problem. If there are DB-specific solutions: I am using SQL Server 2005.
Why are the database names different between dev and prod? It would obviously be easiest if they were the same.
If it's a single shared table, you could create a view over it - which only requires that you change that view when moving to production.
Otherwise, you'll want to create a SYNONYM for the objects and make sure you always reference the synonym. You'll still need to change the SYNONYM creation scripts per environment, but that can be done in a build script fairly easily, I think.
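A rough sketch of the synonym approach (the table name is only illustrative; the FOR clause is the one environment-specific part that the build script rewrites):
-- development
CREATE SYNONYM dbo.RemoteCustomers FOR [db1-development].[dbo].[Customers];
-- the production build script creates the same synonym pointing at [db1-production] instead,
-- so procedures can always reference dbo.RemoteCustomers regardless of environment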
For this reason, it's not practical to use different names for development and production databases. Using the same DB name in development, production and, optionally, acceptance/QA environments makes your SQL code much easier to maintain.
However, if you really have to, you could get creative with views and dynamic SQL. For example, you put the actual data retrieval query inside a view and then select from it like this:
declare @environment varchar(10)
set @environment = 'db-dev' -- input parameter, comes from app layer

declare @sql varchar(8000)
set @sql = 'select * from [' + @environment + '].dbo.[view]'
execute(@sql)
But it's far from pretty...