R: Copy/Clone full DB from SQL Server to local - sql

In R, I have a SQL Server connection to a database whose object tree shows several levels. From this answer I understand that these levels are catalogs (=databases), schemas, and tables. The following code:
library(odbc)
library(DBI)
library(RSQLite)

confull <- odbc::dbConnect(odbc::odbc(),
                           Driver = "SQL Server",
                           Server = "XXX")
odbcListObjects(confull)
Yields:
name type
1 DBAInfo catalog
2 InBluePrism catalog
3 master catalog
4 msdb catalog
5 tempdb catalog
Questions:
How can I extract the full structure tree of this database, not just the catalogs?
How can I programmatically save (clone) this whole database (including all tables, schemas, and catalogs) into a local SQLite database?
For the first question I have tried the following:
> all_schemas <- DBI::dbGetQuery(confull, "SELECT SCHEMA_NAME FROM INFORMATION_SCHEMA.SCHEMATA")
> all_schemas
SCHEMA_NAME
1 dbo
2 guest
3 INFORMATION_SCHEMA
4 sys
5 CCAutomation
6 XXXXXX\\xxxAdmin
7 XXXXXX\\z_swmon
8 NT AUTHORITY\\SYSTEM
9 XXXXXX\\z_Backup
10 db_owner
11 db_accessadmin
12 db_securityadmin
13 db_ddladmin
14 db_backupoperator
15 db_datareader
16 db_datawriter
17 db_denydatareader
18 db_denydatawriter
For the second question, I have tried:
to generate scripts in SQL Server, but I get an error, and moreover I would like to keep this programmatic;
to just save all the tables given by dbListTables(confull); however, I then lose the information about which catalogs and schemas these tables belong to.
EDIT: the following link also contains useful information
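One possible building block (a sketch of mine, not from the original post): INFORMATION_SCHEMA.TABLES lists every table in the current catalog together with its schema. Running the query once per catalog (e.g. via DBI::dbGetQuery) and then copying each table with DBI::dbWriteTable would let you encode the catalog and schema into the SQLite table name, since SQLite itself has no schemas:
-- Assumed query: list all user tables in the current catalog with
-- their schema, to be looped over from the client
SELECT TABLE_CATALOG, TABLE_SCHEMA, TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE'
ORDER BY TABLE_SCHEMA, TABLE_NAME;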

I don't know which version of SQL Server you have; I'm basing this on what I have, which is SQL Server 2008 and 2016.
For a command-line tool, you can download Export2SqlCE.zip.
Description:
SQL Server 2005/2008 command line utility to generate a SQL Compact or SQLite compatible script with schema and data (or schema only)
After downloading you can run it to extract the information using:
Export2SQLCE.exe "Data Source=(local);Initial Catalog=<your_database>;Integrated Security=True" your_database.sql sqlite

We can use the PowerBuilder pipeline of version 9/10/10.5, depending on your SQL Server version. The database and data are easily migrated; all you must know is how to create ODBC/database connections, which is a matter of a few clicks.
Use a Pipeline object by defining a standard class user object inherited from the built-in Pipeline object in the User Object painter. We can then access the Pipeline events by writing scripts that contain code for the events.
This is how we can execute a pipeline by writing the script.
The scenario in this case is that we want to pipeline a table from one database to another. So first we need at least two transaction objects, which we must declare at the top of the script. Since we have the default database connection SQLCA, we only have to declare one new transaction object, called SQLCAtarget, which represents the target database connection. Remember, in this case SQLCA will be the source database connection.
transaction SQLCAtarget // declare this as an instance variable

// source database connection (the default SQLCA)
SQLCA.DBMS = 'your source dbms name'
SQLCA.Database = 'your source database name'
SQLCA.LogId = 'your source database login id'
SQLCA.LogPass = 'your source database password'
SQLCA.ServerName = 'your source database server'
CONNECT USING SQLCA;

// target database connection
SQLCAtarget = CREATE transaction
SQLCAtarget.DBMS = 'your target dbms name'
SQLCAtarget.Database = 'your target database name'
SQLCAtarget.LogId = 'your target database login id'
SQLCAtarget.LogPass = 'your target database password'
SQLCAtarget.ServerName = 'your target database server'
CONNECT USING SQLCAtarget;
Next, we need to build a pipeline object by clicking the Pipeline painter in the main toolbar. Remember: use the MAIN TOOLBAR if we want to pipeline the data to ANOTHER DATABASE.
Set up the source database and the target database profile, choose the table(s), column(s) and criteria, then save it as pl_mypipeline.
To begin with, click on the Pipeline button in PowerBuilder.
Choose the source and target of the pipeline.
Set the table, columns and criteria of your pipeline.
Save your pipeline.
Create a window, then put one DataWindow object and one button object on it. We don't need to set a DataObject for the DataWindow; just keep it blank. Then put the script below in the Clicked event of the button object.
integer iReturn
pipeline myPipeline
myPipeline = CREATE pipeline
myPipeline.DataObject = "pl_mypipeline"
iReturn = myPipeline.Start(SQLCA, SQLCAtarget, dw_1)
// always disconnect your database connection
DISCONNECT USING SQLCA;
DISCONNECT USING SQLCAtarget;
iReturn should be 0 (zero) if the pipeline runs smoothly.

Related

Imported SQL Server Database Username

When importing a SQL Server database from another machine using a .BAK file through the restore option, the process appears to be successful and completes, with the database now in the list within SQL Server Management Studio. But there is one thing I need clarified, please.
While the Restore dialogue is importing the database, a row of fields is displayed, including "Name", "Component", "Type", etc. The last field is the username. Is this field simply showing the owner of the database where it originated, or does this value have any relevance in the imported database?
I looked in database_name >> Security >> Users in SSMS, but the user shown in the restore process is not listed.

Azure Machine Learning Write output to Azure SQL Database

I am using Azure Machine Learning to cluster data.
The input data is from an Azure SQL Database, and it works fine.
At the end of everything I want to write the output to a table in the same Azure SQL Database, but I get this error:
Error: Error 1000: AFx Library library exception:
Sql encountered an error: Login failed for user
Does anyone have any idea?
Thank you very much!
Please follow the instructions and examine the examples provided here to properly use the Export Data module to save data from ML to an Azure SQL Database.
How to Export Data to an Azure SQL Database
Add the Export Data module to your experiment. You can find this module in the Data Input and Output group in the experiment items list in Azure Machine Learning Studio.
Connect it to the module that produces the data that you want to export to Azure SQL DB.
For Data destination, select Azure SQL Database. This option supports Azure SQL Data Warehouse as well.
Set the following options specific to Azure SQL Database or Azure SQL Data Warehouse.
Database server name
Type the server name that is generated by Azure. Typically it has the form <generated_identifier>.database.windows.net.
Database name
Type the name of a database on the server you just specified. The database must already exist; Export Data cannot create it.
Server user account name
Type the user name of an account that has access permissions for the database.
Server user account password
Provide the password for the specified user account.
Comma-separated list of columns to be saved
Type the names of the columns in the experiment that you want to write to the database.
Data table name
Type the name of the table where data will be stored.
For Azure SQL Database, if the table does not exist, it will be created. For Azure SQL Data Warehouse, the table must already exist and have the correct schema, so be sure to create it in advance (see the sketch after these steps).
Comma-separated list of datatable columns
Type the names of the columns as you wish them to appear in the destination table. The columns should correspond in order with the column names that you list in Comma-separated list of columns to be saved.
If you are writing to Azure SQL Data Warehouse, the column names must match those already in the destination table schema.
Number of rows written per SQL Azure operation
Indicate how many rows should be written to the destination table in each batch. By default, the value is set to 50, which is the default batch size for Azure SQL Database. However, you should increase this value if you have a large number of rows to write.
TIP:
For Azure SQL Data Warehouse, we recommend that you set this value to 1. If you use a larger batch size, the size of the command string that is sent to Azure SQL Data Warehouse can exceed the allowed string length, causing an error.
If you don't want to write new results each time you run the experiment, select the Use cached results option. If there are no other changes to module parameters, the experiment will write the data the first time the module is run, and thereafter not perform writes.
However, a write will always be performed if any parameters have been changed in Export Data that would change the results.
Run the experiment.
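As a minimal sketch of the pre-creation mentioned under "Data table name" above (the table and column names here are purely hypothetical; they must match your experiment's output):
-- Destination table created in advance, as required by
-- Azure SQL Data Warehouse
CREATE TABLE dbo.ClusterResults (
    CustomerId INT,
    ClusterLabel INT
);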
Found the issue!
I needed to create a specific user with this SQL code:
CREATE USER AMLApplicationUser WITH PASSWORD = '************';
and then add the user to these roles on the database I want to write to:
ALTER ROLE db_datareader ADD MEMBER AMLApplicationUser;
ALTER ROLE db_datawriter ADD MEMBER AMLApplicationUser;
I guess the datawriter role alone would be enough, but I needed datareader too.
So, in conclusion, it seems the database admin role can be used to read data from AML, but not to write it.
Thank you for your help!

Azure SQL, Copy most of a database into an existing one (not new one) same server

I know I can clone DB into a new one with
CREATE DATABASE Database1_copy AS COPY OF Database1;
(https://learn.microsoft.com/en-us/azure/sql-database/sql-database-copy-transact-sql)
and this works flawlessly, except that in Azure the db properties are managed by the Azure portal, so I am trying to find a way to copy most of the schema/resources/data into an EXISTING DB.
It would be great to have:
CLONE DATABASE Database_test AS COPY OF Database_production
[Even though the first approach has been to "clone" the entire db, a few tables on the destination db should in fact be kept, so the better approach would be CLONE EVERYTHING EXCEPT ('table1','table2'). I actually plan to achieve this by scripting the few tables needed on the destination db and overwriting them after the import, but the best solution would be the other; a sketch follows below.]
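A hedged sketch of the workaround described above (all database and table names are hypothetical):
-- 1. Copy production into a brand-new database (run on master):
CREATE DATABASE Database_test_new AS COPY OF Database_production;
-- 2. In Database_test_new, drop the copied versions of the tables
--    whose destination-side contents must be preserved:
DROP TABLE dbo.table1;
DROP TABLE dbo.table2;
-- 3. Re-create and re-load those tables from the scripts taken from
--    the existing Database_test, then swap the databases by renaming.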
You can do this in several ways:
Through the Azure Portal
Open your database in the Azure Portal (https://portal.azure.com).
In the overview blade of your database, select the "copy" option.
Fill in the parameters, such as the server on which you would like the copy.
Using a sql server client and connecting to the server
Open your SQL Server blade in Azure
Select the "Firewall" option
Click on "Add client IP"
Connect to your database with your connection string and your favorite client, could be SSMS
Execute your sql query to clone the database in the same server
-- Copy a SQL database to the same server
-- Execute on the master database.
-- Start copying.
CREATE DATABASE Database1_copy AS COPY OF Database1;
https://learn.microsoft.com/en-us/azure/sql-database/sql-database-copy-transact-sql
The above SQL statement works perfectly fine as expected in Azure SQL Database.
Important Notes:
Log on to the master database (System Databases) using the server-level principal login or the login that created the database you want to copy.
Logins that are not the server-level principal must be members of the dbmanager role in order to copy databases; a sketch of granting that membership follows below.
Use an updated version of SQL Server Management Studio.
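A minimal sketch of granting dbmanager membership (the login and user names are hypothetical):
-- Run in the master database of the Azure SQL logical server
CREATE USER copy_user FOR LOGIN copy_login;
ALTER ROLE dbmanager ADD MEMBER copy_user;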

Export table data from one SQL Server to another

I have two SQL Servers (both 2005 version).
I want to migrate several tables from one to another.
I have tried:
On the source server I have right-clicked on the database and selected Tasks / Generate Scripts.
The problem is that under Table/View options there is no Script data option.
Then I used Script Table As/Create script to generate SQL files in order to create the tables on my destination server. But I still need all the data.
Then I tried using:
SELECT *
INTO [destination server].[destination database].[dbo].[destination table]
FROM [source server].[source database].[dbo].[source table]
But I get the error:
Object contains more than the maximum number of prefixes. Maximum is 2.
Can someone please point me to the right solution to my problem?
Try this:
create your table on the target server using your scripts from the Script Table As / Create Script step
on the target server, you can then issue a T-SQL statement:
INSERT INTO dbo.YourTableNameHere
SELECT *
FROM [SourceServer].[SourceDatabase].dbo.YourTableNameHere
This should work just fine.
Just to show yet another option (for SQL Server 2008 and above):
right-click on Database -> select 'Tasks' -> select 'Generate Scripts'
Select specific database objects you want to copy. Let's say one or more tables. Click Next
Click Advanced and scroll down to 'Types of Data to script' and choose 'Schema and Data'. Click OK
Choose where to save generated script and proceed by clicking Next
If you don't have permission to link servers, here are the steps to import a table from one server to another using Sql Server Import/Export Wizard:
Right click on the source database you want to copy from.
Select Tasks - Export Data.
Select Sql Server Native Client in the data source.
Select your authentication type (Sql Server or Windows authentication).
Select the source database.
Next, choose the Destination: Sql Server Native Client
Type in your Server Name (the server you want to copy the table to).
Select your authentication type (Sql Server or Windows authentication).
Select the destination database.
Select Copy data.
Select your table from the list.
Hit Next, Select Run immediately, or optionally, you can also save the package to a file or Sql Server if you want to run it later.
Finish
There is a script-data option in Tasks / Generate Scripts! I also missed it at the beginning! But you can generate INSERT scripts there (a very nice feature, in a very unintuitive place).
When you get to the "Set Scripting Options" step, go to the "Advanced" tab.
The steps are described here (the pictures speak for themselves, but the write-up is in Latvian).
Try using the SQL Server Import and Export Wizard (under Tasks -> Export Data).
It offers to create the tables in the destination database. Whereas, as you've seen, the scripting wizard can only create the table structure.
If the tables are already created using the scripts, then there is another way to copy the data: use the BCP command to copy all the data from your source server to your destination server.
To export the table data into a text file on the source server:
bcp <database name>.<schema name>.<table name> OUT C:\FILE.TXT -c -S <server_name[\instance_name]> -U <username> -P <password>
To import the table data from a text file on the target server:
bcp <database name>.<schema name>.<table name> IN C:\FILE.TXT -c -S <server_name[\instance_name]> -U <username> -P <password>
(With -c, the default tab field terminator is used; replace -U and -P with -T for Windows authentication.)
For copying data from source to destination:
use <DestinationDatabase>
select * into <DestinationTable> from <SourceDataBase>.dbo.<SourceTable>
Just for the kicks:
Since I wasn't able to create a linked server, and since just connecting to the production server was not enough to use INSERT INTO, I did the following:
created a backup of the production server database
restored the database on my test server
executed the INSERT INTO statements
It's a backdoor solution, but since I had problems, it worked for me.
Since I had created empty tables using SCRIPT TABLE AS / CREATE in order to transfer all the keys and indexes, I couldn't use SELECT INTO. SELECT INTO only works if the tables do not exist in the destination location, but it does not copy keys and indexes, so you have to do that manually. The downside of using an INSERT INTO statement is that you have to manually provide all the column names, plus it might give you some problems if some foreign key constraints fail. The contrast between the two is sketched below.
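To make the trade-off concrete, a minimal sketch (the database, table and column names are hypothetical):
-- SELECT INTO creates the target table on the fly (without keys or
-- indexes) and fails if the table already exists:
SELECT * INTO dbo.Customers FROM SourceDb.dbo.Customers;
-- INSERT INTO requires the target table to exist (e.g. created from
-- the scripted definition, keys and indexes included); an explicit
-- column list avoids surprises when the column order differs:
INSERT INTO dbo.Customers (CustomerId, CustomerName)
SELECT CustomerId, CustomerName FROM SourceDb.dbo.Customers;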
Thanks for all the answers; there are some great solutions, but I have decided to accept marc_s's answer.
You can't choose a source/destination server.
If the databases are on the same server and the columns of the table are equal (including order!), then you can do this:
INSERT INTO [destination database].[dbo].[destination table]
SELECT *
FROM [source database].[dbo].[source table]
If you want to do this once, you can backup/restore the source database, as sketched below.
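A minimal sketch of that one-off route (the file paths and logical file names are hypothetical; check yours with RESTORE FILELISTONLY):
-- On the source server:
BACKUP DATABASE SourceDb TO DISK = N'C:\Backup\SourceDb.bak';
-- Copy the .bak file to the destination server, then:
RESTORE DATABASE SourceDb FROM DISK = N'C:\Backup\SourceDb.bak'
    WITH MOVE 'SourceDb' TO N'C:\Data\SourceDb.mdf',
         MOVE 'SourceDb_log' TO N'C:\Data\SourceDb_log.ldf';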
If you need to do this more often I recommend you start a SSIS project where you define source database (there you can choose any connection on any server) and create a project where you move your data there.
See more information here: http://msdn.microsoft.com/en-us/library/ms169917%28v=sql.105%29.aspx
It can be done through "Import/Export Data..." in SQL Server Management Studio
This is somewhat of a workaround, but it worked for me, and I hope it works for others with this problem as well:
You can run the SELECT SQL query on the table that you want to export and save the result as .xls on your drive.
Now create the table you want to add the data to, with all the columns and indexes. This can be easily done with a right click on the actual table and selecting the Script Table As / CREATE To option.
Now you can right-click on the DB where you want to add your table and select Tasks > Import.
The Import/Export wizard opens; select Next. Select Microsoft Excel as the input data source, then browse to and select the .xls file you saved earlier.
Now select the destination server and also the destination table we created already.
Note: if there is an identity-based field in the destination table, you might want to remove the identity property, as this data will also be inserted; if you kept the identity property, the import process would error out. (An alternative is sketched below.)
Now hit Next and hit Finish; it will show you how many records are being imported and return success if no errors occur.
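Instead of removing the IDENTITY property, an alternative worth knowing (the table name is hypothetical) is to allow explicit identity values just for the load:
-- Allow explicit values in the identity column during the import
SET IDENTITY_INSERT dbo.TargetTable ON;
-- ... run the import/insert with an explicit column list ...
SET IDENTITY_INSERT dbo.TargetTable OFF;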
Yet another option, if you have it available: C# .NET. In particular, the Microsoft.SqlServer.Management.Smo namespace.
I use code similar to the following in a Script Component of one of my SSIS packages.
var tableToTransfer = "someTable";
var transferringTableSchema = "dbo";
var srvSource = new Server("sourceServer");
var dbSource = srvSource.Databases["sourceDB"];
var srvDestination = new Server("destinationServer");
var dbDestination = srvDestination.Databases["destinationDB"];
var xfr =
new Transfer(dbSource) {
DestinationServer = srvDestination.Name,
DestinationDatabase = dbDestination.Name,
CopyAllObjects = false,
DestinationLoginSecure = true,
DropDestinationObjectsFirst = true,
CopyData = true
};
xfr.Options.ContinueScriptingOnError = false;
xfr.Options.WithDependencies = false;
xfr.ObjectList.Add(dbSource.Tables[tableToTransfer,transferringTableSchema]);
xfr.TransferData();
I think I had to explicitly search for and add the Microsoft.SqlServer.Smo library to the references. But outside of that, this has been working out for me.
Update: The namespace and libraries were more complicated than I remembered.
For libraries, add references to:
Microsoft.SqlServer.Smo.dll
Microsoft.SqlServer.SmoExtended.dll
Microsoft.SqlServer.ConnectionInfo.dll
Microsoft.SqlServer.Management.Sdk.Sfc.dll
For the Namespaces, add:
Microsoft.SqlServer.Management.Common
Microsoft.SqlServer.Management.Smo

How to backup transaction log after database backup everyday in SQL server 2005

How can I back up the transaction log after the database backup every day in SQL Server 2005?
Why not just use SSIS to back up the log? It can take the backup and then copy it to where it needs to be.
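Whatever the scheduling mechanism, the underlying statements are just these (a minimal sketch; the database name and paths are hypothetical, and log backups require the FULL or BULK_LOGGED recovery model):
-- Full database backup, e.g. once a day:
BACKUP DATABASE MyDb TO DISK = N'C:\Backup\MyDb.bak';
-- Transaction log backup afterwards (and typically at shorter intervals):
BACKUP LOG MyDb TO DISK = N'C:\Backup\MyDb.trn';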
UPDATE:
You can look at this question; it talks about how to go from a SQL Server 2005 query to an Excel file:
http://www.experts-exchange.com/Microsoft/Development/MS-SQL-Server/DTS/Q_23090779.html
The useful answer is:
1. Create a stored procedure that will have the output you need to export to Excel.
2. In the DTS package, add a SQL connection and an Excel connection. The SQL connection should point to your server and db, and the Excel connection to your file. If it doesn't exist, just create one on the fly.
3. Create the transformation task between the SQL connection and the Excel connection.
4. Double-click on the arrow, and in the Transform Data Task Properties window, in the Source tab, instead of Table/View pick SQL query. In the panel below, type EXEC sprocname, where sprocname is the name of your procedure from step 1.
5. Click on the Destination tab; if the file/worksheet doesn't exist, a dialog window for its creation will open. Edit if you want and click OK.
6. In the Transformations tab, define your transformation by matching the columns.
7. Run.
If you want to run this automatically on an ongoing basis, you need to define a Dynamic Properties task where you edit the Excel connection to generate a name that includes a timestamp (you can use a SQL statement as well), and then in an ActiveX task create/copy the file from an existing structure file.
So:
Dynamic Properties Task ---> ActiveX task (copy from the structure file to the newly generated file) ---> SQL conn ---> Excel conn.