Copying data between servers on two different machines - SQL

I found this question Copy table to a different database on a different SQL Server, which is close to what I want, but my two databases are on two different machines. I am only interested in backing up one or two tables, not the whole database. I tried a BCP export and BULK INSERT, but I consistently got an error on importing the date field (type mismatch or invalid character for the specified codepage). I gave up after the piece of the CSV file that was producing the error imported successfully into a new test table.
Now I would like something like this:
select INTO mycomputer\SQLEXPRESS\target_table from ReMOTECOMPUTER\SQLEXPRESS\source_table
or anything similar? Can I do that, and if so, what is the proper syntax? I tried but was not successful.

Have you looked at using linked servers? We had a somewhat similar data consistency issue and used a linked server setup to provide for triggered data propagation. Once you have the linked servers defined you can issue your statement pretty much as you have it listed in your question.
http://msdn.microsoft.com/en-us/library/ms188279.aspx
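As a rough sketch using the names from the question (the remote database name SourceDb and the login mapping are placeholders, since the question only gives server and table names), the linked-server setup and the copy could look something like this:
-- Run on mycomputer\SQLEXPRESS: register the remote instance as a linked server
EXEC sp_addlinkedserver
    @server = N'REMOTECOMPUTER\SQLEXPRESS',
    @srvproduct = N'SQL Server';
-- Map a local login to a remote SQL login (adjust to your own security setup)
EXEC sp_addlinkedsrvlogin
    @rmtsrvname = N'REMOTECOMPUTER\SQLEXPRESS',
    @useself = N'FALSE',
    @rmtuser = N'remote_user',
    @rmtpassword = N'remote_password';
-- Copy one table across; SourceDb is a placeholder for the remote database name
SELECT *
INTO dbo.target_table
FROM [REMOTECOMPUTER\SQLEXPRESS].SourceDb.dbo.source_table;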

Related

How do I copy data from one Azure database table to a different Azure database table and also convert data types?

I have to copy data from one table to another; the tables are held in two different databases within Azure. I did a quick search for answers to this, and whilst a query seems fairly straightforward, i.e.
INSERT INTO table1 (make, model, type, serial)
SELECT the_make, the_model, the_type, ref_no
FROM database2.dbo.table2
I encountered issues because I'm using Azure.
Msg 40515, Level 15, State 1, Line 16 Reference to database and/or
server name in 'database2.dbo.table2' is not supported in this version of
SQL Server.
The above issue led me to the Cross-Database Queries articles. My requirements are a little more complicated than some of the scenarios provided and I need some help in making it work.
I also need to convert some columns such as ref_no, which is a 'string', to an 'int' and then copy the value to the 'serial' column.
My question is, what is the best way to create a script for this that allows me to reference both databases without any errors, copy the data, and convert the columns at the same time? I tried the simple way of exporting the data and importing it, editing the mappings for the columns, but I found it wasn't very good and was causing problems all over the place.
Any guidance is appreciated on this.
You're getting this error because there's no linked server by default. You'll need to add one in order to access the secondary db server. Here's a link about how to do it:
https://www.sqlshack.com/create-linked-server-azure-sql-database/
In terms of the transformation, it depends on many factors, e.g. the number of rows, frequency, etc.
Usually the best alternative is to use an external tool (ETL) such as SSIS / Azure Data Factory, because you can schedule its execution and get the status of each run.
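If you want to stay in T-SQL rather than ETL, a minimal elastic-query sketch might look like the following; the data source name, credential, server address, column types and the TRY_CAST conversion are all assumptions based on the question, not tested against your schema:
-- Run in database1 (the destination): one-time setup to reach database2
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';
CREATE DATABASE SCOPED CREDENTIAL Db2Cred
    WITH IDENTITY = '<remote user>', SECRET = '<remote password>';
CREATE EXTERNAL DATA SOURCE Database2Src WITH (
    TYPE = RDBMS,
    LOCATION = '<yourserver>.database.windows.net',
    DATABASE_NAME = 'database2',
    CREDENTIAL = Db2Cred
);
-- External table mirroring database2.dbo.table2 (column types assumed)
CREATE EXTERNAL TABLE dbo.table2 (
    the_make  nvarchar(50),
    the_model nvarchar(50),
    the_type  nvarchar(50),
    ref_no    nvarchar(20)
) WITH (DATA_SOURCE = Database2Src);
-- Copy and convert in one statement; TRY_CAST returns NULL for values that won't convert
INSERT INTO dbo.table1 (make, model, type, serial)
SELECT the_make, the_model, the_type, TRY_CAST(ref_no AS int)
FROM dbo.table2;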

same query from remote server and on server, different results

I have a server called ERP-SERVER, and a server called SQLDEV-SERVER.
They both have a blob instance, but we never copy over the complete blob to the SQLDEV-SERVER as that would be too much data.
So when trying to access a file on our test server, it should first check whether that file exists on the SQLDEV-SERVER, and if not, check whether the file exists on the ERP-SERVER. This is where it goes wrong. This piece of SQL used to work, but somewhere along the way it broke. I have narrowed it down to the inter-database query simply returning completely different results.
So, for instance, I run this query on the ERP-SERVER instance in SQL Server Management Studio:
SELECT count(*)
FROM [erp-server].[Extranet_Blob].[dbo].[FileStorBlob]
This returns a count of 223221.
When I run the same query on the SQLDEV-SERVER instance in SQL Server Management Studio, it returns a count of 313.
It points to the same server and same database, yet gives a completely different count, which is why it is also not returning the files from the live environment when they are not found on the dev environment.
Any pointers as to where this problem could be situated?
Look very carefully at your linked server definition. When you are running the query on SQLDEV-SERVER, it is using that server's linked server definition named [erp-server], which does not necessarily point at the real ERP-SERVER. Is it possible that someone has fiddled with the definition?
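For example, running something like this on SQLDEV-SERVER shows where each linked server name actually points (only a few of the sys.servers columns shown):
-- Run on SQLDEV-SERVER: see what [erp-server] resolves to
SELECT name, product, provider, data_source, catalog
FROM sys.servers
WHERE is_linked = 1;
-- Or the older view of the same information
EXEC sp_helpserver;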

Excel query showing different result than SSMS query

I have had an odd error I cannot explain. Basically, I am running a query against my SQL database from Excel, and non-existent data pops up when it comes to one very particular order in my database.
Here is a simple query surrounding this order:
select * from OR200100 where OR200100.OR20001='0000793605'
Here is the output in Excel, and here is the same output in SSMS (screenshots not reproduced here).
What is happening here? How could the same query generate two different results?
Run SQL Server Profiler against the database if you can, then compare the output to the SQL query that you are running in SSMS.
OK, so it's SQL Server then; that's important, because different SQL products can have very different idiosyncrasies and controls.
The next things to check are these:
Is OR200100 a table or a view? If it's a view, then post its code.
Are you using the same login/account from both Excel and SSMS?
Are you sure that you are connecting to the same server and database? SSMS tells you what you are connected to, but client apps like Excel do not, and it is very common for this type of problem to be caused by the app connecting to a Dev or QA version of the database. One quick way to confirm what each client is actually connected to is shown below.
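A minimal check, assuming you can issue an arbitrary query from both Excel and SSMS, is to ask each connection who and where it is and compare the answers:
-- Run from both the Excel connection and the SSMS query window
SELECT @@SERVERNAME  AS server_name,
       DB_NAME()     AS database_name,
       SUSER_SNAME() AS login_name;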
So I had a very similar problem; my query was grouping by week numbers. What I found was that one of the queries had SET DATEFIRST 5 applied whilst the other didn't. I guess the key thing here is to make sure that, if you are using any SET operations in your SSMS queries, they are identical to those in the Excel query string.
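A quick way to compare the session settings from both sides is something like this (DBCC USEROPTIONS lists the SET options in effect for the current session):
-- Run from both the Excel connection and the SSMS query window
SELECT @@DATEFIRST AS datefirst_in_effect;
DBCC USEROPTIONS;   -- shows datefirst, ANSI settings, isolation level, etc.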

SSIS and MySQL - Table Name Delimiter Issue

I am trying to insert rows into a MySQL database from an Access database using SQL Server 2008 SSIS.
TITLE: Microsoft SQL Server Management Studio
------------------------------
ERROR [42000] [MySQL][ODBC 5.1 Driver][mysqld-5.0.51a-community-nt]You have
an error in your SQL syntax; check the manual that corresponds to your MySQL
server version for the right syntax to use near '"orders"' at line 1
The problem is with the delimiters. I am using the 5.1 ODBC driver, and I can connect to MySQL and select a table from the ADO.NET destination data source.
The MySql tables all show up delimited with double-quotes in the SSIS package editor:
"shipto addresses"
Removing the double quotes from the "Use a table or view" text box on the ADO.NET Destination Editor or replacing them with something else does not work if there is a space in the table name.
When SSIS puts the Insert query together, it retains the double quotes and adds single quotes.
The error above is shown when I click on "Preview" in the editor, and a similar error is thrown when I run the package (albeit then from the actual insert statement).
I don't seem to have control over this behavior. Any suggestions? Other package types where I can hand-code the SQL don't have this problem.
Sorry InnerJoin, I had to take the accepted answer away from you. I found a workaround here:
The solution is to reuse the connection for all tasks, and to turn ANSI quotes on for the connection before you do any inserts, with an Execute Sql task that runs the following:
SET sql_mode = 'STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION,ANSI_QUOTES';
Try using square brackets around the table names. That may help.
EDIT: If you can, I would create views (with no spaces) based on the Access tables, and use those to export. Even if it means building another Access database with linked tables, I think this is your best bet.
I've always struggled with using SSIS with MySQL directly. Even after installing the ODBC drivers, they just don't play well in data flows. I've always ended up creating linked ODBC connections between SQL Server and MySQL and then relying on linked server queries to bring over data. Instead of using an SSIS data flow task, I use an Execute SQL command, usually in the form of a stored procedure that executes an OPENQUERY.
One solution would be to load the data into a SQL Server database and use it as a staging environment before you load it into the MySQL database. I regularly move data between SQL Server 2008 and MySQL, and in the past I used to regularly move data between Access and SQL Server.
Another possible solution is to transform the incoming Access data before it loads into the MySQL database. That may give you a chance to clean up the column names and the actual data that's going through to MySQL.
Let me know if either of these work for you.
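For what it's worth, a rough sketch of that OPENQUERY pattern; the linked server name MYSQL_LINK, the staging table and the column names are all made up for illustration:
-- Assumes a linked server MYSQL_LINK defined over the MySQL ODBC driver
-- Read from MySQL (the pass-through query runs on MySQL, so backticks are fine)
SELECT *
FROM OPENQUERY(MYSQL_LINK, 'SELECT order_id, ship_name FROM `shipto addresses`');
-- Write to MySQL from a local staging table
INSERT OPENQUERY(MYSQL_LINK, 'SELECT order_id, ship_name FROM `shipto addresses`')
SELECT order_id, ship_name
FROM dbo.ShipToStaging;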
You can locate the configuration file my.ini at <<Drive>>:\ProgramData\MySQL\MySQL Server 5.6\my.ini and add "ANSI_QUOTES" to sql-mode.
e.g. sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION,ANSI_QUOTES". This should solve the issue while previewing in the SSIS editor.

How can I use transactions that span procedures chained across multiple servers?

I'm trying to test a proposition that one of our vendors presented to us for accessing their product database, and it involves queries and transactions that span multiple servers. I've never done this directly on the database before and, to be frank, I'm clueless, so I'm trying to mock up a proof that this works at least conceptually.
I've got two SQL Server 2005 servers. Let's for argument's sake call them Server1 and Server2 [hold your applause] each containing a dummy database. The dummy database on Server1 is called Source and that on Server2 is called Destination, just to keep things simple. The databases each hold a single table called Input and Output respectively, so the structure is quasi explained like so:
Server1.Source.dbo.Input
Server2.Destination.dbo.Output
I have a stored procedure on Server2 called WriteDataToOutput that receives a single varchar argument and writes its content to the Output table.
Now the trickiness starts:
I want to create a stored procedure on Server1.Source that calls the WriteDataToOutput stored procedure defined on Server2, which seems like the simple step.
I want this call to be part of a transaction, so that if the procedure that invokes it fails, the entire transaction is rolled back.
And here endeth my knowledge of what to do. Can anyone point me in the right direction? I tried this on two different databases on the same server, and it worked just fine, leading me to assume that it will work on different servers. The question is, how do I go about doing such a thing? Where do I start?
As others have noted, I agree that a linked server is the best way to go.
Here are a couple of pointers that snagged me the first time I dealt with linked servers:
If the linked server is an instance, make sure you bracket the name. For example [SERVERNAME\INSTANCENAME].
Use an alias for the table or view from the linked server or you will get a "multi-part identifier cannot be bound" error. Names are limited to four parts. For example, SERVER.DATABASE.dbo.TABLE.FIELD has five parts and will give an error; however, SELECT linked.FieldName FROM SERVER.DATABASE.dbo.TABLE AS linked will work fine.
You will want to link the servers:
http://msdn.microsoft.com/en-us/library/aa213778.aspx
For step 2 you need to have the Distributed Transaction Coordinator (MSDTC) running, and you also need to use SET XACT_ABORT ON to make sure it all rolls back.
You also need to enable RPC, which is turned off by default in 2005 and up.
There is a whole bunch of stuff that can bite you in the neck here.
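For instance, RPC can be switched on per linked server with sp_serveroption; the linked server name Server2 is just whatever you called it when you defined it:
-- 'rpc out' must be enabled to EXEC a stored procedure on the linked server
EXEC sp_serveroption @server = N'Server2', @optname = 'rpc out', @optvalue = 'true';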
MSDN says you can have transactions across linked servers if you use the command BEGIN DISTRIBUTED TRANSACTION.
I remember, though, that I had problems calling a stored procedure on a linked server, but I worked around it rather than solving it.
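Putting the pieces together, a minimal sketch using the names from the question (assuming a linked server called Server2, MSDTC running on both machines, and an Input column name that is purely illustrative):
-- Stored procedure on Server1.Source that writes locally and remotely in one transaction
CREATE PROCEDURE dbo.CopyInputToOutput
    @Value varchar(100)
AS
BEGIN
    SET XACT_ABORT ON;           -- any error rolls the whole distributed transaction back
    BEGIN DISTRIBUTED TRANSACTION;
    INSERT INTO dbo.Input (Value) VALUES (@Value);            -- local write (column name assumed)
    EXEC [Server2].Destination.dbo.WriteDataToOutput @Value;  -- remote write via the linked server
    COMMIT TRANSACTION;
END;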
Using linked servers, you can run stored procedures on either server within a single transaction using the DTC (Distributed Transaction Coordinator). You will definitely want to do some performance analysis: I have found that some SPs using links can drastically slow down database performance, especially if you try to join result sets from each of the two servers.
Set up a linked server, then you should be able to execute selects/inserts/updates across the servers. Something like:
INSERT INTO Server2.Destination.dbo.Output
SELECT * FROM Input
WHERE <Criteria>
This assumes you are running the query from Server1.Source, so you wouldn't need to fully qualify.