How to overwrite table structure and data from db1 to db2 - sql

I am developing a Grails application which uses several databases: all but one are read-only, and one serves as the app's "main" database. Additionally, there are multiple environments: dev, qa, and prod. qa is used for release testing and is identical to prod.
Before every round of release testing I need to overwrite the "main" qa database with the "main" prod database. I have nothing more than SQL-user access to the server running the MS SQL instance.
What I need is the magic that drops everything in the qa database, without dropping the database itself, and imports everything from the prod database. The databases contain a lot of foreign key constraints.
How to achieve the aforementioned?
P.S.
I did this on MySQL, but now we've migrated to MS SQL. My MySQL script went somewhat like this (pseudo):
SET foreign_key_checks = 0;
-- Drop all tables in the qa database..
SET foreign_key_checks = 1;
-- Import the prod dump into the qa database..

You shouldn't do this in straight T-SQL.
You really should use something like SMO Scripting in .NET to export objects in this way. There is NO clean way to do what you are asking in pure SQL code.
There are too many variables to account for if you plan to just build dynamic SQL from system tables, which is the only way to approach this in T-SQL.
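For what it's worth, here is a rough sketch of what that dynamic-SQL approach looks like, so you can judge for yourself. This assumes SQL Server 2005 or later, covers only plain tables and foreign keys, and should be tested on a throwaway copy first - it is not a complete solution:

DECLARE @sql NVARCHAR(MAX);

-- 1. Drop every foreign key constraint so the tables can be dropped in any order
SET @sql = N'';
SELECT @sql = @sql + N'ALTER TABLE '
            + QUOTENAME(OBJECT_SCHEMA_NAME(parent_object_id)) + N'.'
            + QUOTENAME(OBJECT_NAME(parent_object_id))
            + N' DROP CONSTRAINT ' + QUOTENAME(name) + N';'
FROM sys.foreign_keys;
EXEC sp_executesql @sql;

-- 2. Drop every table
SET @sql = N'';
SELECT @sql = @sql + N'DROP TABLE '
            + QUOTENAME(SCHEMA_NAME(schema_id)) + N'.'
            + QUOTENAME(name) + N';'
FROM sys.tables;
EXEC sp_executesql @sql;

Note that the SELECT @sql = @sql + ... concatenation trick works in practice but is not guaranteed behaviour, and views, schemas, and other object types would all need extra handling - which is exactly the kind of fragility the answer above warns about.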

I think the tool "xSQL Data Compare" matches your requirements exactly. You will need at least "sa" access to the qa DB, though.

Can I use with(index(xxx)) in my SQL with DB2

I'm used to being able to tell my SQL statement which index I'd like it to use in MSSQL, but it seems that doesn't work the same way in DB2.
This statement works for me in MSSQL but not in DB2:
SELECT ACT.COMPANY, ACT.ACCT_UNIT, ACT.ACTIVITY, ACT.ACTIVITY_GRP, ACT.ACCT_CATEGORY, ACT.TRAN_AMOUNT,
       ACT.DESCRIPTION AS ACT_DESCRIPTION,
       AP.VENDOR, AP.INVOICE, AP.PO_NUMBER,
       AC.DESCRIPTION AS AC_DESCRIPTION
FROM ACTRANS ACT WITH (INDEX(ATNSET12)),
     APDISTRIB AP WITH (INDEX(APDSET9)),
     ACACTIVITY AC WITH (INDEX(ACVSET1))
WHERE ACT.OBJ_ID = AP.ATN_OBJ_ID
  AND ACT.ACTIVITY = AC.ACTIVITY
  AND ACT.ACCT_CATEGORY != 'CAPEX'
Thank you!
Well, choosing an index, or any other way of accessing the data, should be the task of the database system, not the user. Data distribution, database technology, and available resources like memory and disk might change; your query should still work in an optimal way because the database system figures out an optimal access plan.
If you still believe this should be influenced, DB2 offers several features to do so: database configuration parameters or, better, session-specific environment settings, optimization profiles, different ways of maintaining statistics, ...
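For example, on DB2 for Linux/UNIX/Windows you can attach a statement-level optimization guideline to the query as a special comment, which is roughly the DB2 counterpart of MSSQL's WITH (INDEX(...)). This is a sketch only; the exact mechanism depends on your DB2 version and platform, and optimization profiles/guidelines must be enabled on the instance:

SELECT ACT.COMPANY, ACT.TRAN_AMOUNT, AP.VENDOR, AP.INVOICE
FROM ACTRANS ACT, APDISTRIB AP
WHERE ACT.OBJ_ID = AP.ATN_OBJ_ID
  AND ACT.ACCT_CATEGORY != 'CAPEX'
/* <OPTGUIDELINES>
     <IXSCAN TABLE='ACT' INDEX='ATNSET12'/>
   </OPTGUIDELINES> */;

The IXSCAN element asks the optimizer to use the named index for the exposed table name ACT; if the guideline cannot be applied, the statement still runs, just without it.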

Local vs Global temp tables - When to use what?

I have a report which, on execution, connects to the database with the my_report_user username. There can be many end users of the report, and each execution makes a new connection to the database as my_report_user (there is no connection pooling).
I have a result set which I think can be created just once (maybe on the first run of the report), so other report executions can reuse it. Basically, each report execution should check whether this result set (stored as a temp table) exists or not. If it does not exist, create it; otherwise just reuse what's available.
Should I use local temp tables (#) or global temp tables (##)?
Has anyone tried something like this? If so, please let me know what I should watch out for (almost-simultaneous report runs, etc.).
EDIT: I am using SQL Server 2005
Neither
If you want to cache result sets under your own control, then you cannot use temp tables of any kind. You should use ordinary user tables, stored either in tempdb, or even have your own result-set cache database.
Temp tables, both #local and ##shared, have a lifetime controlled by the connection(s). If your application disconnects, the temp table is deleted, and that does not work well with what you describe.
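To make the lifetime issue concrete, here is a minimal sketch (the table names are made up):

-- Connection 1:
CREATE TABLE #local_results (id INT);    -- visible only to this connection,
                                         -- dropped when it closes
CREATE TABLE ##shared_results (id INT);  -- visible to every connection, but dropped
                                         -- once the creating connection closes and
                                         -- no other session is still using it

Since your report connections come and go (no pooling), the ##shared table would vanish as soon as the creating run disconnects.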
The really difficult problem will be populating these cached result sets under concurrent runs without mixing things up (ending up with result sets containing duplicate items from concurrent report runs that each believed it was the 'first' run).
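If you do go the permanent-cache-table route, the usual way to avoid that duplicate 'first run' problem is to serialize the existence check and the population inside one transaction. A sketch - dbo.ReportCache and its columns are hypothetical placeholders for whatever your result set looks like:

CREATE TABLE dbo.ReportCache (
    CacheKey VARCHAR(100) NOT NULL PRIMARY KEY,
    Payload  INT          NOT NULL
);

DECLARE @CacheKey VARCHAR(100);
SET @CacheKey = 'monthly-sales';

BEGIN TRAN;
-- UPDLOCK + HOLDLOCK make concurrent 'first runs' queue up behind each other
IF NOT EXISTS (SELECT 1 FROM dbo.ReportCache WITH (UPDLOCK, HOLDLOCK)
               WHERE CacheKey = @CacheKey)
BEGIN
    INSERT INTO dbo.ReportCache (CacheKey, Payload)
    SELECT @CacheKey, 42;   -- stands in for the expensive report query
END
COMMIT;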
As a side note, SQL Server Reporting Services already does this out of the box. You can cache and share datasets, and you can cache and share reports; it already works and has been tested for you.
I find #temp tables can be useful in certain scenarios, but not as a best practice. I have yet to find a valid use for global ##temp tables, either in my own work, or in the work of anyone else who has written about them. The only case I can think of is BCP or other external process which needs to build a temporary data store and then retrieve it in some subsequent step. In that case I would prefer to use a permanent table with some kind of key and a background process to handle cleanup.
It sounds like you are getting into OLAP territory now. Reading up on data warehousing will definitely help you.

Move Data from Oracle to SQL Server

I would like to copy parts of an Oracle DB to a SQL Server DB. I need to move the data because the Oracle box is being decommissioned. I only need the data for reference purposes, so I don't need indexes, stored procedures, constraints, etc. All I need is the data.
I have a link to the Oracle DB in SQL Server. I have tested the following query, which seemed to work just fine:
SELECT *
INTO NewTableName
FROM linkedserver.OracleTable
I was wondering if there are any potential issues with using this approach?
Using SSIS (SQL Server Integration Services) may be a good alternative, especially if your table names are the same on both servers. Use the import wizard and it should create the destination tables for you and let you edit any mappings.
The only issue I see with that is that you will of course need to execute it for each and every table you need. Glad you are decommissioning the Oracle server :-). Otherwise, if you are not concerned with indexes or any of the existing sprocs, I don't see any issue with what you are doing.
The "SELECT ... INTO" approach could be very slow if the tables are large. Consider writing Pro*C in that case, or use FastReader: http://www.wisdomforce.com/products-FastReader.html
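One thing worth trying before reaching for external tools: OPENQUERY sends the whole statement to Oracle in one go instead of pulling rows through the linked server's row-by-row interface, which is often noticeably faster for big tables. A sketch - linkedserver and SCOTT.EMP are placeholders for your linked server name and Oracle table:

SELECT *
INTO NewTableName
FROM OPENQUERY(linkedserver, 'SELECT * FROM SCOTT.EMP');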
A faster and easier approach might be to use the Data Transformation Services, depending on the number of objects you're trying to copy over.

best way to produce a master script for SQL

I want to extract specific database tables and stored procedures into one master script. Do you know any software that can help me do this faster? I've tried using the SQL Database Publishing Tool, but it's not that efficient, since it gathers tables that I didn't select.
In SQL Server 2005, right click on the database, then select Tasks, and then select Generate Scripts.
Generating SQL Scripts in SQL Server 2005
As mentioned in that link, I'm fairly sure you have to generate the DROP and CREATE statements separately.
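If the stored-procedure half of the master script is the sticking point, you can also pull the definitions straight out of the catalog and paste them into one file. A sketch for SQL Server 2005 - the procedure names are placeholders for the ones you actually want:

SELECT m.definition
FROM sys.sql_modules AS m
JOIN sys.objects AS o ON o.object_id = m.object_id
WHERE o.type = 'P'                          -- stored procedures only
  AND o.name IN ('usp_MyProc1', 'usp_MyProc2');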
Try DBSourceTools. http://dbsourcetools.codeplex.com
It's open source, and specifically designed to script databases - tables, views, procs - to disk.
It also allows you to select which tables, views, db-objects to script.
I use Redgate SQL compare for this (by comparing to an empty DB), as well as for doing upgrades between all my DB versions (I save a copy of the DB for each released version, and then just do a compare between current and previous to get a change script for that version).
I have found that "Generate Scripts" does a bad job in some cases with dependencies - e.g., it will try to create a stored procedure that uses a table before the table is created, causing the script to fail. I'll accept that I'm possibly using it wrong, but SQL Compare "just works". The scripts it generates are also enclosed in a transaction, so if something fails, the whole change is rolled back. You don't end up with a half-populated or half-upgraded database.
Downside is that this is a commercial tool, but IMHO worth the money.

how to compare/validate sql schema

I'm looking for a way to validate the SQL schema on a production DB after updating an application version. If the application does not match the DB schema version, there should be a way to warn the user and list the changes needed.
Is there a tool or a framework (to use programmatically) with built-in features to do that?
Or is there some simple algorithm to run this comparison?
Update: Red Gate lists "from $395". Anything free? Or anything more foolproof than just keeping a version number?
Try this SQL.
- Run it against each database.
- Save the output to text files.
- Diff the text files.
/* get list of objects in the database */
SELECT name, type
FROM sysobjects
ORDER BY type, name;

/* get list of columns in each table / parameters for each stored procedure */
SELECT so.name, so.type, sc.name, sc.number, sc.colid,
       sc.status, sc.type, sc.length, sc.usertype, sc.scale
FROM sysobjects so
JOIN syscolumns sc ON so.id = sc.id
ORDER BY so.type, so.name, sc.name;

/* get definition of each stored procedure */
SELECT so.name, so.type, sc.number, sc.text
FROM sysobjects so
JOIN syscomments sc ON so.id = sc.id
ORDER BY so.type, so.name, sc.number;
I hope I can help - this is the article I suggest reading:
Compare SQL Server database schemas automatically
It describes how you can automate the SQL Server schema comparison and synchronization process using T-SQL, SSMS or a third party tool.
You can do it programmatically by looking in the data dictionary (sys.objects, sys.columns, etc.) of both databases and comparing them. However, there are also tools like Redgate SQL Compare Pro that do this for you. I have specified this as a part of the tooling for QA on data warehouse systems on a few occasions now, including the one I am currently working on. On my current gig this was no problem at all, as the DBAs here were already using it.
The basic methodology for using these tools is to maintain a reference script that builds the database and keep this in version control. Run the script into a scratch database and compare it with your target to see the differences. It will also generate patch scripts if you feel so inclined.
As far as I know there's nothing free that does this, unless you feel like writing your own. Redgate is cheap enough that it might as well be free. Even as a QA tool to prove that the production DB is not in the configuration it was meant to be, it will save you its purchase price after one incident.
You can now use my SQL Admin Studio for free to run a schema compare, a data compare, and sync the changes. It no longer requires a license key; download it from here: http://www.simego.com/Products/SQL-Admin-Studio
Also works against SQL Azure.
[UPDATE: Yes, I am the author of the above program; as it's now free, I just wanted to share it with the community]
If you are looking for a tool that can compare two databases and show you the differences, Red Gate makes SQL Compare.
You didn't mention which RDBMS you're using: if the INFORMATION_SCHEMA views are available in your RDBMS, and if you can reference both schemas from the same host, you can query the INFORMATION_SCHEMA views to identify differences in:
- tables
- columns
- column types
- constraints (e.g. primary keys, unique constraints, foreign keys, etc.)
I've written a set of queries for exactly this purpose on SQL Server for a past job - it worked well to identify differences. Many of the queries were using LEFT JOINs with IS NULL to check for the absence of expected items, others were comparing things like column types or constraint names.
It's a little tedious, but it's possible.
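For example, here is a minimal version of that LEFT JOIN / IS NULL pattern on SQL Server, listing columns that exist in db1 but are missing from db2 (db1 and db2 are placeholder database names on the same server):

SELECT a.TABLE_NAME, a.COLUMN_NAME
FROM db1.INFORMATION_SCHEMA.COLUMNS AS a
LEFT JOIN db2.INFORMATION_SCHEMA.COLUMNS AS b
       ON  b.TABLE_NAME  = a.TABLE_NAME
       AND b.COLUMN_NAME = a.COLUMN_NAME
WHERE b.COLUMN_NAME IS NULL        -- no match on the other side = missing
ORDER BY a.TABLE_NAME, a.COLUMN_NAME;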
I found this small and free tool that fits most of my needs.
http://www.wintestgear.com/products/MSSQLSchemaDiff/MSSQLSchemaDiff.html
It's very basic but it shows you the schema differences of two databases.
It doesn't have any fancy stuff like auto-generated scripts to make the differences go away, and it doesn't compare any data.
It's just a small, free utility that shows you schema differences :)
Make a table and store your version number in there. Just make sure you update it as necessary.
CREATE TABLE version (
    version VARCHAR(255) NOT NULL
);

INSERT INTO version VALUES ('v1.0');
You can then check that the version number stored in the database matches the application code during your app's setup, or wherever is convenient.
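For instance, the setup check could be as simple as this (the expected version string would really come from the application; it's shown here as a literal):

DECLARE @expected VARCHAR(255);
SET @expected = 'v1.0';

IF NOT EXISTS (SELECT 1 FROM version WHERE version = @expected)
    RAISERROR('Database schema version does not match the application.', 16, 1);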
SQL Compare by Red Gate.
Which RDBMS is this, and how complex are the potential changes?
Maybe this is just a matter of comparing row counts and index counts for each table; if you have trigger and stored procedure versions to worry about as well, then you need something more industrial.
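If it is SQL Server, the row-count half of that check can be read straight from the catalog without scanning the tables. A sketch - run it in each database and diff the output:

SELECT t.name AS table_name,
       SUM(p.rows) AS row_count
FROM sys.tables AS t
JOIN sys.partitions AS p
  ON  p.object_id = t.object_id
  AND p.index_id IN (0, 1)   -- heap or clustered index only, to avoid double counting
GROUP BY t.name
ORDER BY t.name;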
Try dbForge Data Compare for SQL Server. It can compare and sync any databases, even very large ones. Quick, easy, and it always delivers a correct result. Try it on your database and comment on the product.
We can recommend a reliable SQL comparison tool that offers three-times-faster comparison and synchronization of table data in your SQL Server databases: dbForge Data Compare for SQL Server.
Main advantages:
Speedier comparison and synchronization of large databases
Support of native SQL Server backups
Custom mapping of tables, columns, and schemas
Multiple options to tune your comparison and synchronization
Generating comparison and synchronization reports
Plus a free 30-day trial and risk-free purchase with a 30-day money-back guarantee.