Copying data from one table to another between two SQL Server instances

I have an application that uses a remote database connection as a source (SQL Server), but the database table is case sensitive, and I need it not to be. To solve this I am looking to keep a local copy of that table. Values are regularly added and removed from the source table, so I need the local table to be kept up to date, hourly. What is the simplest way to accomplish this?

Related

SSIS - ole db source/destination - only retrieve rows from source server WHERE EXISTS in table destination server

I am transferring 90 million rows from a source server to my staging area on the destination server.
From the staging area I transfer 20 million rows further up the ETL process by applying a WHERE EXISTS filter on IDs against a table located on the destination server.
Since that table exists only on the destination server and not on the source server, is it possible to apply the filter when I pull the rows directly from the source server, so that I only transfer 20 million rows from the source server to my destination server?
Besides creating a linked server on the Source Server, there are two pure SSIS approaches.
Create a temp staging table on the Destination server, copy all records from Source to this temp stage table, and then apply the WHERE EXISTS filter.
In the Data Flow, create a Lookup transformation whose lookup set gets the IDs from the table on the Destination server, then proceed with matched records only. For performance reasons, you may use the Lookup in either full cache or partial cache mode; only performance testing can tell which mode is better.
Hadi's recommendation of a Linked Server is fine and will work. The pure SSIS approach has the advantage that it makes no changes to the Source Server; all connection configuration lives inside SSIS. In some cases that can be beneficial. Its disadvantage is that performance can be worse than with a Linked Server.
If you need to transfer as few rows from the Source as possible, the simplest way is the Linked Server approach. Otherwise, you can create a table on the Source Server (it can even be a global temp ## table created in an SSIS package task) and copy the filter IDs into it from the Destination server. The temp table needs to be global (##) because it is filled in one task and used in subsequent tasks. Then filter the records with an EXISTS clause on the Source server, as sketched below.
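A minimal sketch of that global temp table variant; the names SourceTable, ##FilterIDs, and the ID column are illustrative assumptions, not from the original answer:

-- Task 1, on the Source connection: create the global temp table.
CREATE TABLE ##FilterIDs (ID int NOT NULL PRIMARY KEY);
-- Task 2: copy the filter IDs over from the Destination server into ##FilterIDs.
-- Task 3, the Data Flow source query on the Source server:
SELECT s.*
FROM dbo.SourceTable AS s
WHERE EXISTS (SELECT 1 FROM ##FilterIDs AS f WHERE f.ID = s.ID);

Note that a global temp table disappears when the session that created it closes, so the Source connection manager must keep a single session across tasks (RetainSameConnection=True).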
You can do that by creating a linked server on the source machine. Linked servers allow you to join tables across different instances (see the sketch after the links below):
How to create and configure a linked server in SQL Server Management Studio
Create Linked Servers (SQL Server Database Engine)
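For example, once a linked server pointing at the destination instance exists on the source machine (here named DESTSRV; all object names are illustrative), the filter can run entirely on the source:

-- Run on the Source server:
SELECT s.*
FROM dbo.SourceTable AS s
WHERE EXISTS (SELECT 1
              FROM DESTSRV.DestDb.dbo.FilterIDs AS f
              WHERE f.ID = s.ID);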

Getting data from different database on different server with one SQL Server query

Server1: Prod, hosting DB1
Server2: Dev, hosting DB2
Is there a way to query databases living on 2 different servers with the same SELECT query? I need to bring all the new rows from Prod to Dev, using a query
like the one below. I will be using SQL Server DTS (the import/export data utility) to do this.
Insert into Dev.db1.table1
Select *
from Prod.db1.table1
where table1.PK not in (Select table1.PK from Dev.db1.table1)
Creating a linked server is the only approach that I am aware of for this. If you are simply trying to add all new rows from prod to dev, why not just create a backup of that one particular table, pull it into the dev environment, and then write the query against the same server and database?
Granted, this is a one-time fix and a pain for recurring loads, but if it is a one-time thing then I would recommend doing that. Otherwise, make a linked server between the two.
To back up a single table in SQL Server, use the SQL Server Import and Export Wizard: select the prod database as your data source, select only the prod table as your source table, and create a new table in the dev environment as your destination table.
This should get you what you are looking for.
You say you're using DTS; the modern equivalent would be SSIS.
Typically you'd use a data flow task in an SSIS package to pull all the information from the live system into a staging table on the target, then load it from there. This is a pretty standard operation when data warehousing.
There are plenty of different approaches to save you from copying all the data across (e.g. use a timestamp, use rowversion, use Change Data Capture, make use of the fact your primary key only ever gets bigger, etc.). Or you could just do what you want with a lookup flow directly in SSIS; a sketch of the rowversion idea follows below.
The best approach will depend on many things: how much data you've got, what data transfer speed you have between the servers, your key types, etc.
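As a sketch of the rowversion idea; the SyncState bookkeeping table, the RowVer column, and the table names are all assumptions for illustration:

-- On the source server: pull only rows changed since the last load.
-- (Seed dbo.SyncState before the first run.)
DECLARE @LastLoaded binary(8);
SELECT @LastLoaded = MaxRowVer FROM dbo.SyncState WHERE TableName = 'table1';

SELECT *
FROM dbo.table1
WHERE RowVer > @LastLoaded;

-- After a successful load, save the new high-water mark:
UPDATE dbo.SyncState
SET MaxRowVer = (SELECT MAX(RowVer) FROM dbo.table1)
WHERE TableName = 'table1';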
When your servers are registered as linked servers, sit in the same Active Directory domain, and you use Windows Authentication, then all you need is an account which has the proper rights on all the databases!
You can then simply reference all tables like server.database.schema.table
For example:
insert into server1.db1.dbo.tblData1 (...)
select ... from server2.db2.dbo.tblData2;
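If you only want the rows missing from the target, as in the original NOT IN query, NOT EXISTS is the usual pattern; it avoids the surprises NOT IN causes when the subquery can return NULLs. A sketch, run on the Dev server with a linked server named PROD (the column names are illustrative):

INSERT INTO db1.dbo.table1 (PK, Col1)
SELECT p.PK, p.Col1
FROM PROD.db1.dbo.table1 AS p
WHERE NOT EXISTS (SELECT 1 FROM db1.dbo.table1 AS d WHERE d.PK = p.PK);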

How can I copy and overwrite data of tables from database1 to database2 in SQL Server

I have a database1 which has more than 500 tables, and I have a database2 which has the same number of tables with the same names. Some of the tables have different table definitions; for example, the table reports in database1 has 9 columns while the table reports in database2 has 10.
I want to copy all the data from database1 to database2 so that it overwrites the matching data and appends the columns where the structures do not match. I have tried the Import and Export Wizard in SQL Server 2008, but it gives an error at the last step, when copying the rows. I don't have a screenshot of the error right now (it is on my office PC). It says something like "error inserting into the read-only column xyz", and sometimes it says VS_ISBROKEN. For the read-only column error I enabled identity insert, but it did not help.
Please help me. This is an important opportunity for me at my office.
SSIS and the SQL Server 2008 wizards can be finicky tools.
If you get a "can't insert into column ABC" error, it could be one of the following:
Inserting into a PK column -> when setting up the mappings, you need to indicate that the existing value should be overwritten
Inserting into a column with a smaller range -> for example from nvarchar(256) into nvarchar(50)
Inserting into a computed column (pointed out by @Nick.McDermaid)
You could also get issues with referential integrity if your database uses this (most do).
If you're going to do this more often, then I suggest you build an SSIS package instead of using the wizard tooling. This way you will see warnings on all sorts of issues like the ones I've described above. You can then run your package on demand.
Another suggestion I would make is that you insert DB1's data into "stage" tables in DB2. These tables should have no referential integrity, which lets you break the process into several steps as follows.
Stage the data from DB1 into DB2
Produce reports/queries on issues pertinent to your database/rules
Merge the data from stage tables into target tables using SQL
That last step is where you can use MERGE statements, or simple inserts/updates depending on a key match; a sketch follows below. Using SQL here in the local database lets you use set-based operations to manage the overlap of the two sets and figure out what is new or needs to be updated.
SSIS "can" do this, but you will not be able to do a bulk update using SSIS, whereas with SQL you can. SSIS would process the rows RBAR (row by agonizing row), which is slow and to be avoided.
I suggest you inform your seniors that this will take a little longer to ensure it is reliable and the results reportable, then work step by step, reporting on each stage's completion.
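A minimal sketch of that merge step, with illustrative table and column names:

MERGE dbo.TargetTable AS t
USING dbo.StageTable AS s
    ON t.ID = s.ID
WHEN MATCHED THEN
    UPDATE SET t.Col1 = s.Col1, t.Col2 = s.Col2
WHEN NOT MATCHED BY TARGET THEN
    INSERT (ID, Col1, Col2)
    VALUES (s.ID, s.Col1, s.Col2);

The terminating semicolon is required for MERGE, and the match on the business key (here ID) is what decides update versus insert.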
Another two small suggestions:
Create _Archive copies of each of the stage tables and add a Tstamp column to each. Merge into these after the stage step; this lets you quickly see when each row was introduced into DB2.
After the stage step and before the SQL merge step, create indexes on your stage tables. This will improve the merge performance.
Drop those indexes after each merge; this will improve bulk insert performance (both are sketched below).
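Both suggestions might look like this, with hypothetical object names:

-- Archive what was staged, stamped with the load time:
INSERT INTO dbo.StageTable_Archive (ID, Col1, Tstamp)
SELECT ID, Col1, SYSDATETIME() FROM dbo.StageTable;

-- Index the join key before the merge, drop it afterwards:
CREATE INDEX IX_StageTable_ID ON dbo.StageTable (ID);
-- ... run the merge here ...
DROP INDEX IX_StageTable_ID ON dbo.StageTable;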
Basics of staging (in response to the question clarification):
Links:
http://www.codeproject.com/Articles/173918/How-to-Create-your-First-SQL-Server-Integration-Se
http://www.jasonstrate.com/tag/31daysssis/
http://blogs.msdn.com/b/andreasderuiter/archive/2012/12/05/designing-an-etl-process-with-ssis-two-approaches-to-extracting-and-transforming-data.aspx
Staging is the act of moving data from one place to another without any checks.
First you need to create the target tables; their schema should match the source tables.
Open up BIDS and create a new Project and in it a new SSIS package.
In the package, create a connection for the source server and another for the destination.
Then create a Data Flow task; in it, create a data source for each table you want to copy from.
Connect each source to a new data destination and set the appropriate connection and table.
When done, save and do a test run.
Before the Data Flow task, you might like to add an Execute SQL task that truncates all the target tables, as sketched below.
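One lightweight way to create a matching stage table and then empty it before each load, assuming a linked server named SRC and illustrative table names (note that SELECT INTO copies only the column definitions, not keys, indexes, or constraints):

-- One-time: clone the source table's column structure with no rows.
SELECT TOP (0) *
INTO dbo.Stage_Customers
FROM SRC.SourceDb.dbo.Customers;

-- In the truncate step, before each load:
TRUNCATE TABLE dbo.Stage_Customers;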
If you're open to using tools, then what about something like Red Gate SQL Compare and Red Gate SQL Data Compare?
First I would use SQL Compare to manage the schema differences and add the new columns you want to your destination database (database2) from the source (database1). Then with SQL Data Compare you match the contents of the tables; for any columns it can't match by name, you specify how they should be handled. Then you can pick and choose what data you want to copy to your destination. You'll see what data is new and what's different (you can delete data in the destination that's not in the source, or ignore it). You can either have the tool do the work or have it generate a script to run when you want.
There's a 15-day trial if you want to experiment.
It sounds like you may be looking for SQL Server Replication.
Well, if I understood your requirement correctly, you need to make database2 a replica of database1. Why not take a full backup of database1 and restore it as database2? Your database2 will then be exactly what database1 was at the time of the backup; a sketch follows below.
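A sketch of that route; the file paths and logical file names are illustrative (check yours with RESTORE FILELISTONLY):

BACKUP DATABASE database1
TO DISK = N'C:\Backups\database1.bak'
WITH INIT;

RESTORE DATABASE database2
FROM DISK = N'C:\Backups\database1.bak'
WITH MOVE N'database1' TO N'C:\Data\database2.mdf',
     MOVE N'database1_log' TO N'C:\Data\database2_log.ldf',
     REPLACE;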

Keeping temp tables in memory after switching servers?

How can I get #tempTable to stay in memory after switching servers? Is this possible?
Select *
into #tempTable
from dbo.table
I have data in server 1 that I want to compare with data in server 2, but I only have read-only access to server 2 (so I can't just move my data there), and the table in server 2 is too big to move to server 1. This is why I want to know how to keep a temp table in memory after connecting to a new server.
Any help would be appreciated, thanks.
What you wrote is possible, but it just creates the temporary table on the server you are connected to, where the source table lives.
You probably want:
select *
into #Server2Table
from server2.database.dbo.table
You can then use #Server2Table in additional queries on the same connection where you copied it (such as the same window in SSMS or the same job step or the same stored procedure). If you need it in a more permanent location, either use a global temporary table (starts with ##) or a "real" table.
This requires the ability to link servers, using something like:
EXEC sp_addlinkedserver @server = N'server2';
You would run this on server1. Perhaps your DBA will need to set it up.
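A typical minimal setup on server1 might look like this (assuming server2 is another SQL Server instance and your own credentials are valid there; the login mapping is an assumption):

EXEC sp_addlinkedserver @server = N'server2';
EXEC sp_addlinkedsrvlogin @rmtsrvname = N'server2', @useself = N'TRUE';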
I have found that queries often run faster when cross-server tables are loaded into a temporary table first. This is not because temporary tables are stored in memory (they live in tempdb); it is because the SQL optimizer has more information available about a table on the local server to take advantage of.
You could export the data from your query and import it into a database on your first server using the SQL Server Import/Export Wizard. It's a bit convoluted, but if this happens a lot you can use SSIS to automate the move, or simply tick the 'Save this package' box. Once you have the data on your first server you can do whatever you like with it.

Move selected data from one server to another sql server 2008

I need to move selected data from 800+ tables in one database to the same 800+ tables in another database on another server. The data I select is based on date fields in every table. So if I specify, for table 1, dates from 01/01/10 to 01/15/10, then only that data should be copied into the specified table in the other server's database.
I hope I am not confusing anyone. What is the easiest way to do this?
Look into SSIS. What you're talking about is very easy using it. Here is a page that talks about using variables in SSIS.
If this is a one-time job and the destination database is going to be a brand new one, I would restore a backup of the source database and then delete all the records outside of the date range I want in the new database.
If this is a one-time job and you need to move the data to an existing database, you can use the Export/Import Wizard in SQL Server Management Studio (not available in the Express edition). Right-click on the database, go to Tasks, and select Export Data. You can then use a query to select the data from the source table based on the date range.
You can also link the servers and just run an INSERT that selects from Server1.database.dbo.Table1 into Server2.database.dbo.Table2, as sketched below.
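For example, run on Server2 with a linked server named SERVER1 (the column names and the date column are illustrative); the half-open range below covers all of 01/15/10:

INSERT INTO database.dbo.Table2 (ID, Col1, CreatedDate)
SELECT ID, Col1, CreatedDate
FROM SERVER1.database.dbo.Table1
WHERE CreatedDate >= '20100101'
  AND CreatedDate < '20100116';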
If you will be moving data every day, I would recommend creating an SSIS package. You can use the Export Wizard and save the SSIS package at the end, then modify the package in Visual Studio.