I was wondering if someone could help me with creating a WHILE loop to iterate through multiple databases (100 databases) and drop/truncate the tables (around 60 tables in each database) within those databases. Thank you.
My task is to create an SSIS package to move the data from a source database to a target database. The data in the destination needs to be truncated as part of the process: whenever I run the package, the old data should be truncated and the new data inserted.
Kindly help.
Thank you.
Let me explain how I would go about this problem.
This assumes the source and destination databases and tables have similar schemas.
1) Create a table in a separate database that holds the details of all the source and destination databases.
2) Get the database details from that table using an Execute SQL Task, and use a Foreach Loop Container to loop through the logic: truncate the destination tables, then move the data from the source database to the destination (the source databases are processed one after another).
3) Use dynamic SQL and stored procedures to move the data from the source database to the destination database, or use a Data Flow Task instead if you don't want to use stored procedures or dynamic queries.
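As a minimal sketch of the truncation loop itself (assuming a hypothetical metadata table dbo.DatabaseList that holds the database names), dynamic SQL can truncate every user table in each listed database. Note that TRUNCATE TABLE fails on tables referenced by foreign keys; those would need DELETE instead.

    -- Loop over the databases listed in dbo.DatabaseList (hypothetical name)
    -- and truncate every user table in each one.
    DECLARE @db sysname, @sql nvarchar(max);

    DECLARE db_cursor CURSOR FOR
        SELECT DatabaseName FROM dbo.DatabaseList;

    OPEN db_cursor;
    FETCH NEXT FROM db_cursor INTO @db;

    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- Build one TRUNCATE statement per user table in the current database
        SET @sql = N'USE ' + QUOTENAME(@db) + N';
            DECLARE @stmt nvarchar(max) = N'''';
            SELECT @stmt += N''TRUNCATE TABLE ''
                          + QUOTENAME(s.name) + N''.'' + QUOTENAME(t.name) + N'';''
            FROM sys.tables t
            JOIN sys.schemas s ON s.schema_id = t.schema_id;
            EXEC sys.sp_executesql @stmt;';
        EXEC sys.sp_executesql @sql;
        FETCH NEXT FROM db_cursor INTO @db;
    END

    CLOSE db_cursor;
    DEALLOCATE db_cursor;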
Hope it helps you! :)
I have a table in SQL Server 2012. The table has a few columns. Now I need to find out which columns were affected and what DDL script has been executed against this table, or, if possible, what the latest ALTER statement executed on the table was.
Thanks in advance.
You cannot achieve this if you do not have a historical or archive table that stores this data (populated by a DDL trigger, sketched below) and are not using source control.
OR
You have to use a third-party log reader (if the log has not been shrunk), such as ApexSQL LOG.
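For capturing future changes, a database-scoped DDL trigger can log ALTER TABLE statements as they happen. A minimal sketch, assuming a hypothetical audit table dbo.DDLAudit:

    -- Audit table to receive one row per ALTER TABLE statement
    CREATE TABLE dbo.DDLAudit
    (
        EventTime   datetime2     NOT NULL DEFAULT SYSDATETIME(),
        LoginName   sysname       NOT NULL,
        ObjectName  sysname       NOT NULL,
        TsqlCommand nvarchar(max) NOT NULL
    );
    GO
    -- Fires on every ALTER TABLE in this database and records who ran what
    CREATE TRIGGER trg_AuditAlterTable
    ON DATABASE
    FOR ALTER_TABLE
    AS
    BEGIN
        DECLARE @e xml = EVENTDATA();
        INSERT dbo.DDLAudit (LoginName, ObjectName, TsqlCommand)
        VALUES (
            @e.value('(/EVENT_INSTANCE/LoginName)[1]', 'sysname'),
            @e.value('(/EVENT_INSTANCE/ObjectName)[1]', 'sysname'),
            @e.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'nvarchar(max)')
        );
    END;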
I have database1, which has more than 500 tables, and database2, which has the same number of tables with the same names. Some of the tables have different definitions; for example, the reports table in database1 has 9 columns while the reports table in database2 has 10.
I want to copy all the data from database1 to database2 so that it overwrites the existing data, adding columns where the structures do not match. I have tried the Import/Export Wizard in SQL Server 2008, but it gives an error at the last step, when copying rows. I don't have a screenshot of the error right now (it's on my office PC). It says something like "error inserting into the read-only column xyz", and sometimes VS_ISBROKEN. For the read-only column error, I enabled identity insert, but it did not help.
Please help me; this is an important opportunity for me at my office.
SSIS and SQL Server 2008 Wizards can be finicky tools.
If you get a "can't insert into column ABC", then it could be one of the following:
Inserting into a PK column -> when setting up the mappings, you need to indicate to overwrite the value
Inserting into a column with a smaller range -> for example from nvarchar(256) into nvarchar(50)
Inserting into a calculated column (pointed out by @Nick.McDermaid)
You could also get issues with referential integrity if your database uses this (most do).
If you're going to do this more often, then I suggest you build an SSIS package instead of using the wizard tooling. This way you will see warnings on all sorts of issues like the ones I've described above. You can then run your package on demand.
Another suggestion I would make is that you insert DB1 into "stage" tables in DB2. These tables should have no relational integrity and will allow you to break the process into the following steps:
Stage the data from DB1 into DB2
Produce reports/queries on issues pertinent to your database/rules
Merge the data from stage tables into target tables using SQL
That last step is where you can use MERGE statements, or simple inserts/updates depending on a key match. Using SQL here, in the local database, lets you use set-based logic to manage the overlap of the two sets and figure out what is new and what needs updating.
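For illustration, a minimal MERGE sketch, assuming hypothetical stage and target tables stage.Reports and dbo.Reports that share a key column ReportID:

    -- Update rows that already exist in the target, insert the new ones
    MERGE dbo.Reports AS tgt
    USING stage.Reports AS src
        ON tgt.ReportID = src.ReportID
    WHEN MATCHED THEN
        UPDATE SET tgt.Title  = src.Title,
                   tgt.Amount = src.Amount
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (ReportID, Title, Amount)
        VALUES (src.ReportID, src.Title, src.Amount);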
SSIS "can" do this, but you will not be able to do a bulk update using SSIS, whereas with SQL you can. SSIS would do what is known as RBAR (row by agonizing row), something slow and to be avoided.
I suggest you inform your seniors that this will take a little longer to ensure it is reliable and the results reportable. Then work step by step, reporting on each stage's completion.
Two more small suggestions:
Create _Archive tables for each of the stage tables and add a Tstamp column to each. Merge into these after the stage step; this will allow you to quickly see which rows were introduced into DB2 and when.
After the stage step and before the SQL merge step, create indexes on your stage tables. This will improve the merge performance.
Drop those indexes after each merge; this will improve bulk insert performance on the next load.
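A small hypothetical sketch of that create/drop cycle (index and table names are placeholders):

    -- Index the stage table's key column just before merging
    CREATE INDEX IX_Stage_Reports_ReportID ON stage.Reports (ReportID);

    -- ... run the MERGE shown earlier ...

    -- Drop the index so the next bulk load into the stage table stays fast
    DROP INDEX IX_Stage_Reports_ReportID ON stage.Reports;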
Basics of staging (in response to the question clarification):
Links:
http://www.codeproject.com/Articles/173918/How-to-Create-your-First-SQL-Server-Integration-Se
http://www.jasonstrate.com/tag/31daysssis/
http://blogs.msdn.com/b/andreasderuiter/archive/2012/12/05/designing-an-etl-process-with-ssis-two-approaches-to-extracting-and-transforming-data.aspx
Staging is the act of moving data from one place to another without any checks.
First you need to create the target tables; the schema should match the source tables.
Open up BIDS and create a new Project and in it a new SSIS package.
In the package, create a connection for the source server and another for the destination.
Then create a data flow step, in the step create a data source for each table you want to copy from.
Connect each source to a new data destination and set the appropriate connection and table.
When done, save and do a test run.
Before the data flow step, you might like to add an Execute SQL step that truncates all the target tables.
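A hypothetical example of what that Execute SQL step could run (table names are placeholders):

    -- One statement per target table; use DELETE instead where
    -- foreign keys prevent TRUNCATE.
    TRUNCATE TABLE dbo.TargetTable1;
    TRUNCATE TABLE dbo.TargetTable2;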
If you're open to using tools, then what about using something like Red Gate SQL Compare and Red Gate SQL Data Compare?
First, I would use SQL Compare to manage the schema differences: add the new columns you want to your destination database (database2) from the source (database1). Then, with Data Compare, match the contents of the tables; for any columns it can't match by name, you specify how to handle them. Then you can pick and choose what data you want to copy to your destination, and you'll see what data is new and what's different (you can delete data in the destination that's not in the source, or ignore it). You can either have the tool do the work or have it create a script to run when you want.
There's a 15 day trial if you want to experiment.
It seems like maybe you are looking for replication technology, as offered by SQL Server Replication.
Well, if I understood your requirement correctly, you need to make database2 a replica of database1. Why not take a full backup of database1 and restore it as database2? Your database2 will then be exactly what database1 was at the time of the backup.
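A minimal sketch of that backup/restore, assuming hypothetical file paths and logical file names (check yours with RESTORE FILELISTONLY):

    -- Back up database1, then restore the backup under the name database2
    BACKUP DATABASE database1
        TO DISK = N'C:\Backups\database1.bak' WITH INIT;

    -- The logical names 'database1' and 'database1_log' are assumptions;
    -- verify them with RESTORE FILELISTONLY against the .bak file
    RESTORE DATABASE database2
        FROM DISK = N'C:\Backups\database1.bak'
        WITH MOVE N'database1'     TO N'C:\Data\database2.mdf',
             MOVE N'database1_log' TO N'C:\Data\database2_log.ldf',
             REPLACE;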
How can I sync two databases and do a manual refresh of the entities in either database whenever I want?
Let's say I have two databases, DB1 (prod) and DB2 (dev). I want to update/insert only a few tables from the prod DB to the dev DB. How could I achieve this? Is this possible without a DB link, since I do not have privileges to create one?
If you only want to do a manual refresh, set up an import/export/Data Pump script to copy the data across, provided there is not too much data involved. If there is a large amount of data, you could write some PL/SQL as described above to move only the new/changed rows. This will be easier if your data has fields such as created_on/updated_on.
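As a minimal sketch (Oracle-flavored SQL; the table and column names are hypothetical, and prod_copy.orders stands for prod data already exported/imported into dev), a refresh keyed on updated_on might look like:

    -- Update changed rows and insert new ones, skipping untouched rows
    MERGE INTO dev.orders d
    USING prod_copy.orders p
       ON (d.order_id = p.order_id)
    WHEN MATCHED THEN UPDATE
         SET d.status = p.status, d.updated_on = p.updated_on
         WHERE p.updated_on > d.updated_on  -- only move changed rows
    WHEN NOT MATCHED THEN
         INSERT (order_id, status, updated_on)
         VALUES (p.order_id, p.status, p.updated_on);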
I am using SQL Server 2005 SSIS, and we use the Data Flow Task to move data from one table to another. This works well. Now we have another requirement: to update data in the same table using this approach.
Is it possible to use the same approach for the following:
We have a dataset from Table A based on a complex query
We update the results back to Table A
A normal UPDATE query is not an option because it takes a while to process, and we can't see the data movement like we can with a Data Flow Task.
Any guidance would be appreciated.
Thanks
Either:
write the results to a temporary table and do the update with a single Execute SQL Task after you have processed everything (see the sketch after this list)
break it down into smaller chunks based on SSIS variables and OFFSET, and use a FOR/FOREACH LOOP container
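A minimal sketch of the temp-table variant, with hypothetical table and column names (the data flow loads dbo.TableA_Updates, then one Execute SQL Task applies it back):

    -- Apply all staged changes to Table A in a single set-based UPDATE
    UPDATE a
    SET    a.Amount = u.Amount,
           a.Status = u.Status
    FROM   dbo.TableA         AS a
    JOIN   dbo.TableA_Updates AS u
           ON u.Id = a.Id;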
Alternatively, read the data with a data source in a Data Flow Task and use an OLE DB Command in the data flow to update the data in the same table. If there is no locking when you read and only row-level locking when you update, that should work.
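For reference, the OLE DB Command runs a parameterized statement once per row flowing through; a hypothetical example (the ? markers map to input columns):

    -- Executed once per input row: map Amount to the first ? and Id to the second
    UPDATE dbo.TableA SET Amount = ? WHERE Id = ?;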