I am currently trying to gather requirements for what is actually needed to move code from dev to production. These will mainly be SQL stored procedures used to create reports in SSRS.
What are some things you do as a small shop (nothing major or complicated) when moving code from dev to prod?
Thanks
Back up everything you're going to replace.
Make sure that any modifications to underlying tables/views are migrated first
Use the "Generate Scripts" option in SSMS.
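The backup suggestion can be a very lightweight step. A minimal sketch, assuming a report procedure named dbo.usp_SalesReport and a backup path that are both illustrative, not from the original answers:

```sql
-- Save the current definition of the object you are about to replace,
-- so you can restore it if the deployment goes wrong.
SELECT OBJECT_DEFINITION(OBJECT_ID(N'dbo.usp_SalesReport')) AS CurrentDefinition;

-- Or take a full copy-only backup first (path is illustrative):
BACKUP DATABASE ReportDB
TO DISK = N'D:\Backups\ReportDB_PreDeploy.bak'
WITH COPY_ONLY, INIT;
```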
Mind you, it's a good idea to have two databases: one for storing data and one for storing stored procedures. This way, you can create a third database to deploy your new stored procedures and views to so you can test them using live data without breaking anything for your users. We refer to it as a "staging" database.
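The split can be sketched with three-part names; the database and object names here are illustrative, not from the original answer:

```sql
-- Procedures live in a "code" database and read from the data database
-- via three-part names, so a staging copy of the code database can be
-- pointed at live data without touching the procedures users run.
CREATE PROCEDURE dbo.usp_SalesReport
AS
BEGIN
    SET NOCOUNT ON;
    SELECT OrderID, OrderDate, TotalDue
    FROM SalesData.dbo.Orders          -- SalesData = the data database
    WHERE OrderDate >= DATEADD(DAY, -30, GETDATE());
END;
```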
Create a rollout script
Create a rollback script
Create Release note or deployment steps
Deploy the rollout in a preprod or UAT environment (following the release notes) and test
Deploy the rollback in a preprod or UAT environment (following the release notes) and test
Take a DB backup if necessary
Deploy
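A minimal rollout/rollback pair for a stored procedure might look like the sketch below (the object name is illustrative); making the rollout re-runnable keeps the preprod and prod runs identical:

```sql
-- rollout.sql: re-runnable on SQL Server 2016 SP1+ via CREATE OR ALTER
CREATE OR ALTER PROCEDURE dbo.usp_SalesReport
AS
BEGIN
    SET NOCOUNT ON;
    SELECT OrderID, OrderDate, TotalDue
    FROM dbo.Orders;
END;
GO

-- rollback.sql: restore the previous definition (saved in the backup step),
-- or drop the object if it is new in this release.
DROP PROCEDURE IF EXISTS dbo.usp_SalesReport;
GO
```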
We have three main servers we use for a project: development, preproduction and production.
We need the preproduction server to be in sync with the production server as much as possible, so every two weeks we plan to update everything in the database to match any changes made in production during that period.
For the database information, we're using linked servers to retrieve the latest data from production. The problem is how to do something similar for the stored procedures.
Is there a way to make it so that, if a change is made to a stored procedure on the production server, that change is also made to preproduction without doing it manually? It can be somewhat slow, since we would run it overnight.
Same thing with any changes made to a table structure in that time: is there any way to detect and apply them? We would like a script we could run every two weeks, but if that's not possible, we could look into other options.
SqlPackage is the native SQL Server tool to generate schema drift scripts, and to deploy database schema changes.
Note you should deploy your changes to non-prod environments first, not last.
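A sketch of the SqlPackage workflow; the server and database names are illustrative:

```bat
:: Extract the production schema into a .dacpac
SqlPackage /Action:Extract ^
    /SourceServerName:PRODSQL01 /SourceDatabaseName:Sales ^
    /TargetFile:Sales_prod.dacpac

:: Generate (but do not run) the script that would bring preprod in sync
SqlPackage /Action:Script ^
    /SourceFile:Sales_prod.dacpac ^
    /TargetServerName:PREPRODSQL01 /TargetDatabaseName:Sales ^
    /OutputPath:sync_preprod.sql
```

Reviewing the generated script before running it overnight also gives you the change log the question asks for.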
My requirement is to create delta scripts for an existing project, so we will be making a lot of changes and creating more tables in it.
We have Dev, QA, Stage and Production environments. I want to make the changes only in the dev environment, and the rest of the environments have to be taken care of automatically by the DACPAC using VSTS. All the scripts have to be re-runnable except the seed data.
I am able to add a table, but unable to add an ALTER TABLE statement to the database project in build mode. I don't want to import the full database. Can the DACPAC not accept ALTER statements?
Since I would have to add IF EXISTS checks in a post-deployment script, I don't want to use ALTER statements there. How can I achieve this?
I think you may have misunderstood what the SQL Server database project is/does. The project itself uses SQL scripts to build an in-memory model of what your database looks like. This model then compiles down to a DACPAC, which contains metadata describing what the database should look like. When you deploy the DACPAC, a change script is generated that transforms the database into a state matching the model described in the DACPAC.
The reason your ALTER TABLE doesn't work is that there is no table to alter. The project isn't a database, and it doesn't know how to represent your ALTER statements in its model. If you include them in a pre- or post-deploy script, the model will ignore them and, as you found out, they will mess with the deployments to the other environments.
The ideal way to deploy your database to your dev environment is using VSTS via CI/CD practices with the DACPAC. I'm not sure why you don't want to use a DACPAC to deploy to your dev environment, but if this is a hard-and-fast rule, then you can use schema compare in Visual Studio's SSDT to copy your changes locally to the target database.
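In other words, the project holds only the desired state, and the deployment engine works out the ALTER statements. A minimal sketch (the table and column names are illustrative):

```sql
-- In the database project you keep only the CREATE statement, edited in
-- place as the table evolves:
CREATE TABLE dbo.Customer
(
    CustomerID INT           NOT NULL PRIMARY KEY,
    Name       NVARCHAR(100) NOT NULL,
    Email      NVARCHAR(256) NULL   -- added in this release; no ALTER needed
);

-- At deploy time the DACPAC engine compares this model with the target
-- database and itself emits something like:
--   ALTER TABLE dbo.Customer ADD Email NVARCHAR(256) NULL;
```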
I'm working on a project as an outsourced developer where I don't have access to the testing and production servers, only the development environment.
To deploy changes, I have to create SQL scripts containing the changes to make on each server for the feature I wish to deploy.
Examples:
When I make each change to the database, I save the script to a folder, but sometimes this is not enough; for instance, I once sent a script to alter a view but forgot to include new tables that I had created for another feature.
Another situation would be changing a table via the SSMS GUI, forgetting to create a script with the changed or new columns, and later having to send a script to update the table in testing.
Since some features can be sent to testing and others straight to production (for example, queries to feed Excel files), it's hard to keep track of what I have to send to each environment.
Since the deployment team just executes the scripts I send them to update the database, how can I manage/keep track of changes to a SQL Server database without a compare tool?
[Edit]
The current tools that i use are SSMS, VS 2008 Professional and TFS 2008.
I can tell you how we at xSQL Software do this using our tools:
The deployment team has an automated process that takes a schema snapshot of the staging and production databases and dumps the snapshots nightly to a share that the development team has access to.
Every morning the developers have up-to-date schema snapshots of the production and staging databases available. They use our Schema Compare tool to compare the dev database with the staging/production snapshot and generate the change scripts.
Note: to take the schema snapshot you can either use the Schema Compare tool or our Schema Compare SDK.
I'd say you can keep structural copies of the test and production servers as additional development databases, and make a point of always applying a change there when you send something.
On these databases you can create triggers that capture all DDL events and log them to a table with GETDATE() attached. With that, you should be able to track changes pretty easily, and a simple compare will also be easier to apply.
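A minimal sketch of such a DDL trigger (the log table and trigger names are illustrative):

```sql
CREATE TABLE dbo.DdlLog
(
    EventTime  DATETIME      NOT NULL DEFAULT GETDATE(),
    EventType  NVARCHAR(100) NULL,
    ObjectName NVARCHAR(256) NULL,
    TSQL       NVARCHAR(MAX) NULL
);
GO

CREATE TRIGGER trgCaptureDdl
ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS   -- fires on CREATE/ALTER/DROP etc.
AS
BEGIN
    DECLARE @e XML = EVENTDATA();
    INSERT INTO dbo.DdlLog (EventType, ObjectName, TSQL)
    VALUES (
        @e.value('(/EVENT_INSTANCE/EventType)[1]', 'NVARCHAR(100)'),
        @e.value('(/EVENT_INSTANCE/ObjectName)[1]', 'NVARCHAR(256)'),
        @e.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'NVARCHAR(MAX)')
    );
END;
GO
```

Querying dbo.DdlLog ordered by EventTime then gives you the exact statements, in order, to send to each environment.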
Look into Liquibase, especially the SQL format, and see if that gives you what you want. I use it for our database and it's great.
You can store all your objects in separate scripts, but when you do a Liquibase "build" it will generate one SQL script with all your changes in it. The really important part is getting your Liquibase configuration to put the objects in the correct dependency order; for one example, tables get created before foreign key constraints.
http://www.liquibase.org/
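Liquibase's SQL format keeps each change in plain SQL with comment-based metadata. A small sketch (the author/id and object names are illustrative):

```sql
--liquibase formatted sql

--changeset alice:create-orders-table
CREATE TABLE dbo.Orders
(
    OrderID   INT      NOT NULL PRIMARY KEY,
    OrderDate DATETIME NOT NULL
);
--rollback DROP TABLE dbo.Orders;
```

Liquibase records applied changesets in its own tracking table, so running the same changelog against test and production applies only what each environment is missing.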
Is anything wrong if I create an ALTER script for the entire database in Analysis Services in SSMS on the development server and execute that script in SSMS on the production server, instead of deploying through BIDS?
No, and you actually should never use BIDS to deploy to prod. BIDS always overwrites the management settings (security and partitions) of the target server.
The best option is to use the Deployment Wizard. It enables you to generate an incremental deployment script that updates the cube and dimension structures, and you can customize how roles and partitions are handled. It takes as input the XML output files generated by building the SSAS project in BIDS, and you can run it in several modes:
Silent mode (/s): runs the utility in silent mode, without displaying any dialog boxes.
Answer file mode (/a): does not deploy; only modifies the input files.
Output mode (/o): no user interface is displayed; generates the XMLA script that would be sent to the deployment targets. Deployment does not occur.
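In output mode the wizard can be driven from the command line; the paths here are illustrative:

```bat
:: Generate the XMLA deployment script without deploying
Microsoft.AnalysisServices.Deployment.exe ^
    "C:\Build\MyCube.asdatabase" ^
    /o:"C:\Build\deploy.xmla"
```

The resulting deploy.xmla can then be reviewed and executed in SSMS on the production server, which matches the workflow the question describes.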
If you want a complete synchronization, you can use the "Synchronize Database Wizard". It pretty much clones a database. When the destination database already exists, it performs metadata synchronization and incremental data synchronization. When the destination database does not exist, a full deployment and data synchronization is done.
I think the main disadvantage of scripting the whole database is that everything may be reprocessed. Also, if another team or team member is responsible for deploying the script it may be a lot harder to review and understand if everything is rebuilt with each update.
I work for Red Gate, and we recently introduced a free tool called SSAS Compare to help manage this scenario. It helps you create a script containing just the changes you want to deploy.
Usually, throughout development of a project, I will deploy frequently just to make sure I won't have any problems in production.
Also, throughout development, I find myself changing the database's schema.
How can I easily update the database in production?
I have been dropping the old database and attaching the new one. Is there a faster way to update the deployment database?
Thanks
EDIT
What are some free tools for this?
Maintain a list of all the change scripts that you apply to your dev database and apply them to the Production database when deploying.
Alternatively, use a third party tool that can compare the two schemas and provide a changescript which you can then run.
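The bookkeeping for the first approach can be as simple as a version table that each change script checks; the table and script names below are illustrative:

```sql
-- Run once per database:
CREATE TABLE dbo.SchemaVersions
(
    ScriptName NVARCHAR(256) NOT NULL PRIMARY KEY,
    AppliedAt  DATETIME      NOT NULL DEFAULT GETDATE()
);
GO

-- Pattern for each change script, so it is safe to re-run anywhere:
IF NOT EXISTS (SELECT 1 FROM dbo.SchemaVersions
               WHERE ScriptName = N'0001_add_email.sql')
BEGIN
    ALTER TABLE dbo.Customer ADD Email NVARCHAR(256) NULL;
    INSERT INTO dbo.SchemaVersions (ScriptName)
    VALUES (N'0001_add_email.sql');
END;
GO
```

Running the whole script folder in order then brings any environment, dev or production, up to date without applying anything twice.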
I try to use tools like Red Gate SQL Compare, which will show you "diffs" between two versions and actually script out the components that are different. You can also make it a habit to script all of your database revisions so that you have an audit trail of the changes you've made and can apply them in a programmatic way when you are ready to deploy.
Your best bet is to implement your changes as a set of diff scripts. So rather than dropping a table and recreating it, you script it as an ALTER TABLE.
There are also tools out there that help you do this. If you keep a copy of the original and the new database, you can run a tool against the two which will generate SQL that will take you from one version to another.
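For example, adding a column as a diff script preserves the data that a drop-and-recreate would lose (the table and column names are illustrative):

```sql
-- Instead of DROP TABLE / CREATE TABLE, which discards existing rows:
ALTER TABLE dbo.Orders
    ADD ShippedDate DATETIME NULL;   -- nullable, so existing rows stay valid
```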
I personally like to keep full creation scripts updated, as well as maintaining an upgrade script, whenever I change the schema for a particular release. I have used Red Gate SQL Compare, and it is a very good tool, but I prefer to keep the scripts maintained.
Always write a script to make your schema changes. Place the script in a promotion folder so that when you promote your changes, the scripts are executed to change each environment.
Try DBSourceTools.
http://dbsourcetools.codeplex.com
It's open source, and will script an entire database (tables, views, procs and data) to disk, then allow you to re-create that database through a deployment target.
It's specifically designed to help developers get their databases under source code control.
The Generate Scripts wizard did exactly what I needed.
Migrator Dot Net is an awesome tool for versioning your database. It's hard to go back to manually keeping track of scripts and doing database comparisons after you've used migrations.
Visual Studio Database Edition is quite good at this. It keeps your entire schema in source scripts under source control along with the rest of your code. It can analyze your schema for dependencies when you make a change. It can run best-practices analysis. And it can generate a .dbschema file that is used by the deployment tool to upgrade your database to the current schema.
You can actually automate this with continuous integration and build drops straight to the test environment, staging environment, and even the production environment. What that means is that when you check in to the test branch, the build machine builds the product, runs the build validation tests (BVTs), and deploys it to your development server. When you reverse-integrate from the test branch to the main branch, the build machine builds the product, runs the BVTs, and deploys it to your staging/acceptance server. And when you integrate into the release branch, the build machine builds, tests, and finally deploys to production. Now, it's true that not many orgs are ready to go that far and let the continuous build process deploy automatically to the live production servers, and I reckon it is kind of radical thinking. But I'd say you should trust your automated BVTs and automated processes more than any manual test and deployment.