How do I copy data from one Azure database table to a different Azure database table and also convert data types? - sql

I have to copy data from one table to another; the tables are held in two different databases within Azure. I did a quick search for answers to this, and a query seems fairly straightforward, i.e.:
INSERT INTO table1 (make, model, type, serial)
SELECT the_make, the_model, the_type, ref_no
FROM database2.dbo.table2
I encountered issues because I'm using Azure.
Msg 40515, Level 15, State 1, Line 16 Reference to database and/or
server name in 'database2.dbo.table2' is not supported in this version of
SQL Server.
The above issue led me to the Cross-Database Queries articles. My requirements are a little more complicated than some of the scenarios provided and I need some help in making it work.
I also need to convert some columns, such as ref_no, which is a 'string', to an 'int', and then copy the value into the 'serial' column.
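For the conversion itself, TRY_CAST (available in Azure SQL Database and in SQL Server 2012 onward) is safer than a plain CAST, because values that cannot be converted come back as NULL instead of failing the whole statement. A minimal sketch using the column names from the query above:

SELECT the_make, the_model, the_type,
       TRY_CAST(ref_no AS int) AS serial  -- NULL when ref_no isn't a valid int
FROM dbo.table2
WHERE TRY_CAST(ref_no AS int) IS NOT NULL; -- optionally skip unconvertible rows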
My question is, what is the best way to create a script for this that allows me to reference both databases without any errors, and to copy the data and convert the columns at the same time? I tried the simple route of exporting the data and importing it, editing the column mappings, but I found it wasn't very good and was causing problems all over the place.
Any guidance is appreciated on this.

You're getting this error because there's no linked server by default. You'll need to add one in order to access the secondary database server. Here's a link about how to do it:
https://www.sqlshack.com/create-linked-server-azure-sql-database/
In terms of the transformation: it depends on many factors, e.g. the number of rows, frequency, etc.
Usually the best alternative is to use an external ETL tool such as SSIS or Azure Data Factory, because you can schedule its execution and get the status of each run.
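If you'd rather stay in T-SQL, the elastic query feature from the Cross-Database Queries articles mentioned above can make table2 visible inside the destination database, after which the copy and the conversion happen in one INSERT. A rough sketch, run in the destination database; the server name, credential name, login, and column types are placeholders to replace with your own:

-- One-time setup in the destination database
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password here>';

CREATE DATABASE SCOPED CREDENTIAL Db2Cred            -- hypothetical credential name
WITH IDENTITY = 'your_sql_login', SECRET = '<that login''s password>';

CREATE EXTERNAL DATA SOURCE Db2Source                -- hypothetical data source name
WITH (TYPE = RDBMS,
      LOCATION = 'yourserver.database.windows.net',  -- placeholder server
      DATABASE_NAME = 'database2',
      CREDENTIAL = Db2Cred);

-- Local "shadow" of the remote table; names/types must match database2's table2
CREATE EXTERNAL TABLE dbo.table2_remote
(the_make nvarchar(50), the_model nvarchar(50),
 the_type nvarchar(50), ref_no nvarchar(20))         -- assumed types
WITH (DATA_SOURCE = Db2Source, SCHEMA_NAME = 'dbo', OBJECT_NAME = 'table2');

-- Copy and convert in one statement
INSERT INTO table1 (make, model, type, serial)
SELECT the_make, the_model, the_type, TRY_CAST(ref_no AS int)
FROM dbo.table2_remote;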

Related

Running SQL via SQLWorkbench versus via Tableau Prep

I have developed some SQL that reads from a Redshift table, does some manipulation (especially LISTAGG on some fields), and then writes to another Redshift table.
When I run the SQL using SQLWorkbench it executes successfully. When I embed it in a Tableau Prep flow (as "Complex SQL") I get several of these errors: "System error: AqlProcessor evaluation failed: [Amazon][Support] (40550) Invalid character value for cast specification." Presumably these relate to my treatment of data types. What I don't understand is what is so different about the environment that it would cause different results like this. Is it because SQLWorkbench and Tableau Prep use different SQL interpreters? Or is my question too broad to even speculate about without going through the actual code?
Best guess is that Tableau, which has knowledge of the DDL, is adding some CAST() operations to the SQL. SQLWorkbench is simpler and is pushing the SQL to Redshift as written. This is based on there being no explicit CASTs in your SQL but an error message that identifies a CAST().
Look at stl_querytext for these two queries and see if they are being given to Redshift differently by the two benches. I suspect this will give you some clues to go on.
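Something along these lines should show the exact text each tool submitted; the query IDs below (12345, 12346) are placeholders you'd look up first in stl_query:

-- Find the two query IDs (most recent queries first)
SELECT query, starttime, SUBSTRING(querytxt, 1, 80) AS query_start
FROM stl_query
ORDER BY starttime DESC
LIMIT 20;

-- Reassemble the full SQL text for each (text is split into 200-char chunks)
SELECT query, sequence, text
FROM stl_querytext
WHERE query IN (12345, 12346)  -- placeholder IDs from the step above
ORDER BY query, sequence;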
If there are no differences in the SQL then the issue may be with user / connection differences and more info will likely be needed about the issue.

Is it possible to output a detailed query result file?

I am new to working with SQL and need to know if it is possible to produce a detailed query result file. I know you can get this file, but it only contains info like "1 row(s) affected"; I need detailed info like:
"added row ID,Name,Surname; 1, John, Adams".
This is not a built-in feature of SQL Server at this time. If you need this level of insight into your database changes, you could look at using temporal tables or implementing a custom logging solution (such as adding Modified/Created columns to the table so that you can query the data to see when things changed or were created).
It's hard to say what your options are without knowing the version of SQL Server you're using and what level of control you have over how the data is getting into the system, but these are at least a couple options.
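As an illustration of the temporal-table route (available from SQL Server 2016 onward), a minimal sketch; the table and history-table names here are hypothetical:

-- System-versioned (temporal) table: SQL Server keeps every prior row version
CREATE TABLE dbo.Person
(
    ID        int          NOT NULL PRIMARY KEY,
    Name      nvarchar(50) NOT NULL,
    Surname   nvarchar(50) NOT NULL,
    ValidFrom datetime2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo   datetime2 GENERATED ALWAYS AS ROW END   NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.PersonHistory));

-- Later: see what a row looked like at any point in time
SELECT ID, Name, Surname
FROM dbo.Person FOR SYSTEM_TIME AS OF '2023-01-01'
WHERE ID = 1;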

SSAS Multidimensional - Table Value Function as a Query for Partition

@GregGalloway was able to answer the question I should have asked. I am adding a more concise question here, while maintaining the original lengthy text.
How do I use a table valued function as the query for a partition, when the function is in separate database from my fact and referenced dimensions?
Overview: I am building an SSAS multidimensional cube that is built off of a single fact table in our application's data warehouse, and I want to use the result set from a table-valued function as my fact table's partition query. We are using SQL Server (and SSAS) 2014.
Condition: For each environment (Dev, Tst, Prd) there are two separate databases on the same server: one for the application data warehouse, [DW_App], the other for custom objects, [DW_Custom]. I cannot create any objects in [DW_App], but I have a lot of freedom in [DW_Custom].
Background info: I have not been able to find much information on using a TVF with partitions in this way. My thinking is that it will help streamline future development by giving me a single place to update the SQL if/when I modify the fact table.
So in testing out my crazy idea of using a TVF as the query for my partitions, I have run into a bit of a conundrum. I am able to use my TVF when I explicitly state the database in my FROM clause:
SELECT * FROM [DW_Custom].[dbo].[CubePartition](@StartDate, @EndDate)
However, that will not work, because the cube will be deployed in multiple environments before production, and it needs to point to a different database in each. So I tried adding a new data source, setting my partition query to point to the new data source, and then removing the database name, i.e.:
SELECT * FROM [dbo].[CubePartition](@StartDate, @EndDate)
I get an error that
The SQL syntax is not valid. The relational database returned the following error message: Deferred prepare could not be completed. Invalid object name 'dbo.CubePartition'
If I click through this error and the subsequent warnings that the cube will not be able to process if I continue, I am able to build and deploy the cube. However, I cannot process it, because I get an error that one of my dimensions does not exist.
Looking at the query that was generated, it is clear that it is querying my dimensions as well as my fact table, none of which exist inside [DW_Custom], which explains that error perfectly well.
So I guess 2 questions:
Is it possible to query another DB (on the same server) from inside of an SSAS partition query?
If not, is there any way I can use a variable as the database name in the query, and update that variable based on the project configuration (Dev, Tst, Prd)?
Bonus question: Is the reason that I cannot find much about doing it this way because it is an obviously bad idea that I am overlooking, and if so, why?
How about creating a second SSAS data source pointing to the DW_Custom database (or whatever it's called in the particular environment you're deploying to)? Then when you deploy from Dev to Prod, you need only change that connection string. When you create your partitions, specify the DW_Custom data source and then specify the query without the database name:
SELECT * FROM [dbo].[CubePartition](@StartDate, @EndDate)
As long as the query plan for that table-valued function is efficient compared to a plain SELECT statement, then I don't see a problem with that.
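For reference, the function in [DW_Custom] could look something like the sketch below; the fact-table and column names are hypothetical. An inline (single-statement) TVF is preferable here, since SQL Server can expand it into the surrounding query plan:

CREATE FUNCTION [dbo].[CubePartition] (@StartDate datetime, @EndDate datetime)
RETURNS TABLE
AS
RETURN
    -- A three-part name is fine here: the function runs inside DW_Custom on the
    -- same server, so it can reach the application warehouse directly.
    SELECT f.OrderKey, f.CustomerKey, f.OrderDate, f.Amount  -- hypothetical columns
    FROM [DW_App].[dbo].[FactOrders] AS f                    -- hypothetical fact table
    WHERE f.OrderDate >= @StartDate
      AND f.OrderDate <  @EndDate;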

Copying data between servers on two different machines - SQL

I found this question, Copy table to a different database on a different SQL Server, which is close to what I want, but my two databases are on two different machines. I am interested in backing up one or two tables, not the whole database. I tried a BCP export and BULK INSERT, but I consistently got errors on importing a date field (type mismatch, or invalid character for the specified codepage). I gave up after I successfully imported, into a new test table, the piece of the CSV file that was producing the error.
Now I would like something like this
select INTO mycomputer\SQLEXPRESS\target_table from ReMOTECOMPUTER\SQLEXPRESS\source_table
or anything similar? Can I do that, and if so, what is the proper syntax? I tried but was not successful.
Have you looked at using linked servers? We had a somewhat similar data consistency issue and used a linked server setup to provide for triggered data propagation. Once you have the linked servers defined you can issue your statement pretty much as you have it listed in your question.
http://msdn.microsoft.com/en-us/library/ms188279.aspx
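A minimal sketch of that setup, run on the local machine; the server, login, database, and table names are placeholders:

-- Register the remote instance as a linked server
EXEC sp_addlinkedserver
     @server = N'REMOTECOMPUTER\SQLEXPRESS',
     @srvproduct = N'SQL Server';

-- Map logins if Windows authentication won't pass through
EXEC sp_addlinkedsrvlogin
     @rmtsrvname = N'REMOTECOMPUTER\SQLEXPRESS',
     @useself = N'FALSE',
     @rmtuser = N'remote_login', @rmtpassword = N'<password>';

-- Pull the table across with a four-part name; SELECT ... INTO creates target_table
SELECT *
INTO dbo.target_table
FROM [REMOTECOMPUTER\SQLEXPRESS].[SourceDb].[dbo].[source_table];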

How do I use SQL to Drop a Column from a MS ACCESS Database if that column is a replication ID?

I had a notion to use a database column of type replication ID, but have since changed my approach and want to use this column for another purpose.
However, I'm unable to use SQL to drop the column to remove it from my database.
My SQL is:
ALTER TABLE foo_bar DROP COLUMN theFoo;
However, I get a "syntax error", and I'm assuming this has something to do with this column being a replication ID.
I'd rather not download the file and edit it directly using the MS Access application, but I'm not sure if that's my only recourse.
If you have access to the database in a command shell, Michael Kaplan's Replication System Removal Fields utility should do the trick. However, I've found that in some circumstances, it's unable to do the job. Also note that the utility will only work with a Jet 4 format database (MDB), not ACE format (ACCDB).
If all else fails, you can recreate the table structure and append the existing data to it. That can get messy if you have referential integrity defined, but it will get the job done, and most of it is likely scriptable (if not all of it possible using just DDL).
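A rough sketch of that recreate-and-append approach in Access SQL; col1 and col2 stand in for whatever columns your table actually has besides theFoo:

-- Make-table query copies everything except the replication-ID column
SELECT col1, col2 INTO foo_bar_new FROM foo_bar;  -- list every column except theFoo
DROP TABLE foo_bar;
-- Access SQL has no RENAME; rename foo_bar_new back to foo_bar in the UI or via DAO,
-- then recreate any indexes and relationships the old table had.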
Here is a link that may help you; I had a similar idea, but when browsing the web I found this:
AccessMonster - Replication-ID-Field-size
EDIT: Well, I don't have much time, but what I was thinking of first was whether you could alter the column to make it a different type (not a replication ID) and then drop it (two separate actions). But I have not tested this.
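In Access SQL that two-step idea would look something like this; as noted above it is untested, and the intermediate TEXT type is an arbitrary choice:

-- Step 1: convert the column to an ordinary type first
ALTER TABLE foo_bar ALTER COLUMN theFoo TEXT(50);
-- Step 2: then drop it
ALTER TABLE foo_bar DROP COLUMN theFoo;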