We have multiple instances (application + database) of the application running (Dev, Test, Prod).
Our implementation process currently looks like this:
multiple database developers work on the same database (application is developed by a separate team)
the database development process consists of adding/altering database objects and configuring application objects
the application objects are stored in the database tables (we call them kernel tables) as data
the database is connected to an instance of the application; developers have no way to run their own instance of the application on their local machines (I know this is not the best approach, but currently we cannot do it differently for various reasons)
We'd like to figure out an approach and tools (preferably free, or at least not very expensive)
to do versioning at the feature level, since usually one developer works on one feature. In the end, a feature is usually some database objects (tables, views, stored procedures) plus the data inserted into the kernel (application) tables. The developer does not control the data entered by the application.
The kernel tables usually have an auto-incremented primary key, plus dependencies such as foreign keys to other kernel tables.
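(To illustrate the kind of script we would like to version per feature, here is a rough sketch; the table and column names are made up. Because the identity values differ between environments, the sketch inserts by a natural key and resolves the foreign key to the parent kernel table by its code rather than by its surrogate ID.)

    -- Illustrative only: hypothetical kernel tables and columns.
    IF NOT EXISTS (SELECT 1 FROM dbo.KernelMenuItem WHERE ItemCode = 'INVOICE_PRINT')
    BEGIN
        INSERT INTO dbo.KernelMenuItem (MenuId, ItemCode, Caption)
        SELECT m.MenuId, 'INVOICE_PRINT', 'Print invoice'
        FROM dbo.KernelMenu AS m
        WHERE m.MenuCode = 'INVOICING';
    END;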
The ultimate goal is to have a continuous integration process in place, meaning an automated process that runs some tests and promotes our features to the other environments.
We are using both SQL Server 2014 and Azure SQL Databases.
Do you have any experience or ideas on how to handle such a situation?
Thanks
You can use Visual Studio Community https://www.visualstudio.com/vs/community/ with SSDT (SQL Server Data Tools) https://learn.microsoft.com/en-us/sql/ssdt/download-sql-server-data-tools-ssdt
You can then create a SQL Server Database Project, which can handle all your database objects. In addition, you can use Git tooling (GitHub or VSTS, https://www.visualstudio.com/) to store and share your code (privately).
With these tools you can manage both Azure SQL Database and SQL Server databases.
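In a database project each object lives in its own .sql file as a declarative CREATE statement, and the deployment tooling works out the object order. A minimal, purely illustrative example of such a file (the table and file names are made up):

    -- dbo/Tables/KernelObject.sql (hypothetical)
    CREATE TABLE [dbo].[KernelObject]
    (
        [KernelObjectId] INT IDENTITY(1,1) NOT NULL,
        [Name]           NVARCHAR(128)     NOT NULL,
        CONSTRAINT [PK_KernelObject] PRIMARY KEY CLUSTERED ([KernelObjectId]),
        CONSTRAINT [UQ_KernelObject_Name] UNIQUE ([Name])
    );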
Here is a complete guide for these tools: https://msdn.microsoft.com/en-us/library/xee70aty.aspx
Related
Today I maintain a project that has a really messy DB that needs a lot of refactoring and has to be published to clients' machines.
I know that I could add a SQL Server Database Project that contains just the scripts of the database and creates a .dacpac file, which would let me change clients' databases automatically.
I also know that I could just add an .mdf file to the App_Data (or even a Solution_Data) folder and have my database there. I suppose the LocalDB that already exists would allow me to start up my solution without SQL Server.
And lastly, I know that Entity Framework exists with its own migrations. But I don't want to use it, because I can't add and change indexes with its migrations and I don't have enough flexibility when I need to describe difficult migration scenarios.
My goals:
Generate migration scripts for clients' DBs automatically.
Make my solution self-contained, so that any new programmer who joins the project doesn't even need to install SQL Server on his machine.
Be able to update the local (development) database in 1-2 clicks.
Be able to move back through the history of DB changes (I have a TFS server).
Be able to have a clean DB (containing only dictionary/lookup tables) in the solution, with an up-to-date DB schema.
Additionally, I want to be able to update my DB model (EF or .dbml) automatically, or at least in a very easy way.
So what I want to ask:
What are the strengths and weaknesses of these two approaches if I want to achieve my goals?
Could it be that I should use some combination of these tools?
Or is there another existing tool from MS that I don't know about?
Is there a way to update my DAL model from this DB?
What are the strengths and weaknesses of these two approaches if I want to achieve my goals?
Using a database project allows you to version control all of the database objects. You can publish to various database instances and roll out changes incrementally, rather than having to drop and recreate the database, thus preserving data. These changes can be in the form of a dacpac, a SQL script, or done right through the VS interface. You gain a lot of control over deployments using pre- and post-deployment scripts and publishing profiles. Developers will be required to install SQL Server (the developer/express edition is usually good enough).
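For example, reference or lookup data is typically kept in sync through a post-deployment script that is re-run on every publish. A minimal sketch, with a made-up table and values:

    -- Hypothetical Script.PostDeployment.sql: idempotent lookup-data sync.
    MERGE dbo.OrderStatus AS target
    USING (VALUES
        (1, 'Open'),
        (2, 'Shipped'),
        (3, 'Closed')
    ) AS source (StatusId, StatusName)
        ON target.StatusId = source.StatusId
    WHEN MATCHED AND target.StatusName <> source.StatusName THEN
        UPDATE SET StatusName = source.StatusName
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (StatusId, StatusName) VALUES (source.StatusId, source.StatusName);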
LocalDB is a little easier to work with -- you can make your changes directly in the database without having to publish. LocalDB doesn't have a built-in publish process for pushing changes to other instances. No SQL Server installation required.
Use a database project if you need version control for your database objects, if you have multiple users concurrently making changes, or if you have multiple applications that use the same database. Use LocalDB if none of those conditions apply or for small apps that require their own standalone database.
Could it be that I should use some combination of these tools?
Yes. According to Kevin's comment below, "If the Database Project is set as your startup project, hitting F5 will automatically deploy it to LocalDB. You don't even need a publish profile in this case."
Or is there another existing tool from MS that I don't know about?
Entity Framework's Code First approach comes close.
Is there a way to update my DAL model from this DB?
Entity Framework's POCO generator works well, but if you make changes to your DAL classes, those changes get lost the next time you run the generator.
There is a new tool called SqlSharpener which can generate classes from the SQL files in a database project. I have not used it so I cannot vouch for it but it looks promising.
One way to generate client scripts for DB changes is to use a database modeling tool like ERwin, which has a free community edition. The best way to meet your database version control requirement with easy script generation is Redgate SQL Source Control. Using the Redgate tool you will meet the first five goals mentioned. Moreover, you can then update the EF model with a single click after changing the DB schema (i.e. the database-first approach), as required in goal 6.
I do not recommend using LocalDB at all. It always causes issues with source control, like "DB file is in use and can't commit...". In addition, the developers on the project will not have a common set of up-to-date data to work on, unless one developer adds test data to the database and asks the others to get the latest version and overwrite their own databases, or generates an update script with the previously mentioned tool and asks every developer to run it on his LocalDB.
The best way in your situation is to use a SQL Server instance on the network: a master version that all the developers use. Since you have version control on the database using the previously mentioned tool, you can roll back any buggy change on the database server.
If you think the Redgate tool is too expensive for your project's budget, a second approach is to generate a single SQL file from your database that contains all the database objects, and have the other developers update that SQL file in source control as they make changes. This can be done easily by using the schema compare tool in Visual Studio and appending the generated script to the SQL file in source control. With the EF database-first approach, you will not have to add many migration classes as in EF code-first.
A project I am working on stores each database object (tables, stored procedures, etc.) in its own file in source control (TFS). I am thinking about implementing a workflow, tied to TFS commits, that will build the database on a Windows Azure SQL Server VM instance and run tests for continuous integration.
How does one reconstruct the database from these individual files? Since there are dependencies to consider, among other things, is there a standard practice for constructing a database with the needed structure when the objects are stored in individual files?
I am thinking that file by file might not actually be a realistic way to do this. If that is the case, do some companies keep an empty database in the testing domain to be filled with data for CI purposes, and not drop the database during test tear-down?
Sounds like your team had a SQL Server Database Project at some point. If it's not there, you can create one, and include the individual script files in the appropriate folder in the SQL Server Database Project.
Then all you have to do is right click and deploy to whichever environment you want to deploy the database to.
Here's more: Creating a SQL Server Database Project in Visual Studio 2012.
I have recently implemented Continuous Integration for SQL Server using SSDT, Jenkins and Perforce.
This has proven to be a revelation to our firm, and our turnaround time for database changes is now extremely fast. The DBA team are on board with this new methodology, as their number of support issues has dramatically dropped.
I have implemented a database build server that is an empty shell; each time a database change is checked into Perforce, the whole database is deployed to this shell. Thus any breaking dependencies are picked up.
We use SONAR to report the quality of code for C# and Java. I would like to extend this not only to T-SQL but also generically to database platforms that have a supported JDBC driver.
To make the database review generic, and also allow the DBA Team to write the rules, I would like to write a plugin that will run SQL scripts (each a rule) against a database, and then report the results.
The idea is that a DBA team could write a suite of scripts (rules) that will give them confidence in the data model. Examples (the first of which is sketched after this list) include
list all tables that do not have a primary key
list all tables that do not have a unique index
list all tables that are not referenced by a foreign key constraint
list all roles that are not used
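(As a rough sketch of what such a rule script might look like, the first rule could be written against INFORMATION_SCHEMA to stay reasonably portable across platforms that expose those views; the exact catalog views available will vary per platform.)

    -- Rule: list all base tables that do not have a primary key.
    SELECT t.TABLE_SCHEMA, t.TABLE_NAME
    FROM INFORMATION_SCHEMA.TABLES AS t
    WHERE t.TABLE_TYPE = 'BASE TABLE'
      AND NOT EXISTS (
          SELECT 1
          FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS AS tc
          WHERE tc.TABLE_SCHEMA = t.TABLE_SCHEMA
            AND tc.TABLE_NAME = t.TABLE_NAME
            AND tc.CONSTRAINT_TYPE = 'PRIMARY KEY'
      );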
Ideas would be appreciated.
Dave.
I have been working with SQL Server for a couple of years. I have heard about SMO but I don't know anything about it. What are the benefits of using it? Should I learn and start using SMO in my SQL Server projects (mainly data warehouse development)? Why?
From Microsoft:
Overview (SMO)
SQL Server Management Objects (SMO) are objects designed for programmatic management of Microsoft SQL Server. You can use SMO to build customized SQL Server management applications. Although SQL Server Management Studio is a powerful and extensive application for managing SQL Server, there might be times when you would be better served by an SMO application. For example, the user applications that control the SQL Server management tasks might have to be simplified to meet the needs of new users and to reduce training costs. You might have to create customized SQL Server databases, or create an application for creating and monitoring the efficiency of indexes. An SMO application might also be used to include third-party hardware or software seamlessly into the database management application.
The SMO object model extends and supersedes the Distributed Management Objects (SQL-DMO) object model. Compared to SQL-DMO, SMO increases performance, control, and ease of use. Most SQL-DMO functionality is included in SMO, and there are various new classes that support new features in SQL Server. The object model is intuitive and uses SQL-DMO terminology, where it is possible, to help transfer your skills.
You can download SMO here:
Microsoft® SQL Server® 2008 R2 Feature Pack
And for getting started programming:
Creating SMO Programs
It depends on what you're trying to do. SMO is SQL Server Management Objects, a set of libraries for managing SQL Server programmatically. For example, if you're trying to build a clone of SQL Server Management Studio, then SMO is something you probably want to look into. Or, if you're trying to manipulate the structure of your database programmatically, that's the place to look.
Otherwise, I wouldn't bother.
I have used SMO to automatically script out object code and user permissions and add them to version control.
By doing this I can save privileges or object DDL as of a point in time for my auditing team or my own research or for cloning a server.
I also use it so I can quickly compare object code from specific dates without needing a snapshot / backup.
Recently I used SMO in a Disaster Recovery Project to script out all Server Permissions and System Database Object Permissions and run the script on the replacement server.
I recently created a POS/work order management application with a SQL database backend and the SMO library. SMO gave my application a lot of flexibility to control the database in terms of work order records, user records, and even my own set of user roles, helping me to differentiate SQL users when managing a specific database. So my take is that it all depends on the extent of your use of SQL and how much you may need to automate and control certain aspects of your SQL database.
We have a common problem of moving our development SQL 2005 database onto shared web servers at website hosting companies.
Ideally we would like a system that transfers the database structure and data as an exact replica.
This would be commonly achieved by restoring a backup. But because they are shared SQL servers, we cannot restore backups – we are not given access to the actual machine.
We could generate a script to create the database structure, but then we could not do a data transfer through the menu item Tasks/Import Data, because we might violate foreign key constraints as tables are imported in an order that conflicts with the database schema. Also, indexes might not be replicated if they are set to auto-generate.
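(For illustration, the constraint ordering problem can in principle be worked around by disabling foreign key checks around the import, roughly as in the sketch below, which relies on the undocumented sp_MSforeachtable procedure; it does not solve the index issue.)

    -- Sketch only: disable every foreign key check before the import,
    -- then re-enable (and re-validate) the constraints afterwards.
    EXEC sp_MSforeachtable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL';

    -- ... run Tasks/Import Data here ...

    EXEC sp_MSforeachtable 'ALTER TABLE ? WITH CHECK CHECK CONSTRAINT ALL';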
Thus we are left with a messy operation:
Create a script in SQL 2005 that generates the database in SQL 2000 format.
Run the script to create a SQL 2000 database in SQL 2000.
Create a script in SQL 2000 that generates the database structure WITHOUT indexes and foreign keys.
Run this script on the production server. You now have a database structure to upload data to.
Use SQL 2005 to transfer the data to the production server with Tasks/Import data.
Use SQL 2000 to generate a script that creates the database with indexes and keys.
Copy the commands that generate the indexes and foreign keys only. These are located after the table creation commands. Note: In SQL 2005, the indexes and foreign keys are generated as one and cannot be easily separated.
Run this script on the production database.
Voila! The database is uploaded with all data and keys/constraints in place. What a messy and error prone system.
Is there something better?
Scott Gu has written a few posts on this topic:
SQL Server Database Publishing Toolkit for Web Hosting
Generation scripts are fine for creating the database objects, but not for transporting database information; for example, client-specific databases where the developer is required to pre-populate some data.
One of the issues I've run into with this is the new MAX types in SQL Server 2005+ (nvarchar(max), varchar(max), etc.). Of course, this is worse when you are actually using SQL Server Express, which doesn't allow exporting other than by writing your own scripts to create the data.
I would recommend switching to a hosting company that allows you to FTP backup files and does NOT require you to use your own scripts. That's the whole point of SQL Server, right? To provide more tools that are friendlier to use. If the hosting company takes that away, you may as well move to MySQL for its ease of dumping information.
WebHost4Life is a lifesaver in this category. They offer FTP to the database server so you can upload your backup file, or your MDF and LDF files for attachment! I was so upset when I saw GoDaddy had a similar restriction to the one you mentioned. Their tool didn't tell me it was a bad import, and I couldn't figure out why my site was coming back with 500 errors.
One other note: I'm not sure which is considered more secure. I enabled external connections in GoDaddy and connected with Management Studio, and I was able to see every database on that server! I couldn't access them, but I now have that info. A double whammy is that GoDaddy requires that the user name for the DB be the same as the DB name! Now all you need to do is spam passwords against those hundreds of DBs!
WebHost4Life, on the other hand, shows only your specific database in Management Studio. And they let you pick your own DB name and user name, independent of each other. They only append the same unique ID to the end of the user and DB names in order to keep them from conflicting with others.
You should not rely on restoring backups for copying/transferring databases. You need to use scripts; trust me, you will get better at it.
I have used the RedGate Compare tools with shared hosting and it works well.
Database-generation scripts are messy, but they also have several advantages that ... well, make the pain more tolerable.
First, if you treat the DB scripts as real programming tasks in and of themselves, you can encapsulate the messiness. If you generate a script once (using a database tool), you can split the table structure aspects from the constraint aspects (keys, indices, etc.). Similarly, you can export the data once, but split it into "system" data that's not frequently changed but is necessary for correct operation (stuff like tax or shipping rates, etc.), 'test' data that's easily identifiable, and 'operational' data that needs to be moved from DB version Old to DB version New (last week's Orders).
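As an illustration of the "constraints as a separate file" idea, the structure scripts create bare tables, the data scripts load them, and a final constraints script (sketched here with made-up table, column, and constraint names) adds the keys and indexes afterwards:

    -- Hypothetical constraints/Orders.sql, run after the structure and data scripts.
    ALTER TABLE dbo.Orders WITH CHECK
        ADD CONSTRAINT FK_Orders_Customers
        FOREIGN KEY (CustomerId) REFERENCES dbo.Customers (CustomerId);

    CREATE NONCLUSTERED INDEX IX_Orders_CustomerId ON dbo.Orders (CustomerId);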
The first 3 minutes after you've accomplished that, things are wonderful: you can regenerate a new database with or without test data in a few minutes. Unfortunately, after 3 minutes, the databases are out of synch, at least in terms of data, if not quite as frequently in terms of structure.
I personally like to have each table's structure as a separate SQL file (and its constraints as a separate file in a separate directory, and its test data in one file, its system data in another, etc.). On the one hand, this means that several different files have to be touched when making a change, but on the other hand, it makes it much easier to see the granularity of what's been changed: it's all right there in the version control logs. (I could probably be convinced that many-files is a mistaken strategy...)
All of this is predicated on the assumption that you have some facility for actually running a complex script involving many files and are not just constrained to some Web-based control panel, which may be what you're describing when you say "we are not given access to the actual machine." I feel that you can't do custom software development and not have some kind of shell access on the server; the hosting business is competitive enough that you can certainly find a script-friendly host easily enough.
Check whether the web hosting company provides myLittleBackup.
This is definitely the easiest way to "install" a DB from the development server onto the shared SQL server.
Answer for SQL Server 2008 users.
I had the exact same issue as the OP, but I was using SQL Server 2008 and my shared hosting company is GoDaddy. Here's the solution for copying the DB plus its data to the GoDaddy database...
In Visual Studio 2010, go to Server Explorer (in VS Express, I think it's called Database Explorer). Right-click on the database and select Publish to Provider... this opens the Database Publishing Wizard... go through the wizard and it'll create an xxx.sql file on your local computer...
Open SQL Server Management Studio and connect to the GoDaddy database (you should have already created this via the GoDaddy control panel within their website) ...
Open Windows Explorer, find the xxx.sql file, and double-click it. The script should open in SSMS. Execute the script "within the proper database"... voila, done.