Azure SQL bacpac Fails on Special Characters in Stored Procedures - sql

Trying to move a database from an Azure GOV tenant to a standard Azure tenant. From what I can tell, the export is failing due to special characters in the stored procedures (slashes, dollar signs, etc.). These are properly escaped and work as standalone T-SQL scripts. I can drop all of the SPs, move the database, and then restore them, but there has to be a better way.
Has anyone else had an issue with special characters in the body of stored procedures? I am open to other ways of trying to move between tenants, but have come up empty-handed.

Adding proper parameter handling to your code would be a more robust way to handle this.
Use cmd.Parameters.AddWithValue(String parameterName, Object value) | SqlParameterCollection.Add Method
https://learn.microsoft.com/en-us/dotnet/api/system.data.sqlclient.sqlparametercollection.add?redirectedfrom=MSDN&view=dotnet-plat-ext-5.0#System_Data_SqlClient_SqlParameterCollection_Add_System_String_System_Object_
Special characters are much easier to handle as parameter values if you run the SQL by creating a SqlCommand object and adding parameters to it, rather than embedding them in the statement text.
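If you are driving the SQL from T-SQL itself rather than from .NET, the same parameterization idea can be sketched with sp_executesql; the statement, table, and parameter names below are hypothetical, and the point is only to keep the special characters inside parameter values instead of the statement text.
DECLARE @Body nvarchar(max) = N'text containing \ and $ characters';
EXEC sp_executesql
    N'UPDATE dbo.ProcedureCatalog SET Body = @Body WHERE ProcName = @Name',
    N'@Body nvarchar(max), @Name sysname',
    @Body = @Body,
    @Name = N'usp_Example';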
Alternative
To move resources, you can use the Azure portal, Azure PowerShell, the Azure CLI, or the REST API. Move resources to a new resource group or subscription | https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/move-resource-group-and-subscription
During the move operation, both the source and target resource groups are locked. Write and delete operations on those resource groups are blocked until the move completes; you can't add, update, or delete resources in a locked resource group. This does not mean the resources themselves are frozen. If you move an Azure SQL logical server and its databases to a different resource group or subscription, applications that use the databases experience no downtime: they can still connect to the databases and read and write to them. The lock can last up to four hours, but most moves complete in much less time.
You must ensure the following before moving resources across subscriptions:
Both the source and destination subscriptions must be in the same Azure Active Directory tenant (directory).
A single user account must have permission to create and delete resources in both subscriptions.
You must move all SQL databases on that server at the same time.
If the SQL server and the destination subscription are in different directories, you can move the SQL server to a temporary, trial subscription, move that subscription to the target directory (from the old portal), and then complete the move in the new portal using the target directory.

Related

What permissions are required on the source to copy a SQL Azure database?

I need to grant permissions to a remote development team so they can copy schema changes on a database to their local dev instances. I see many posts similar to this, but they seem to focus on what is required in the destination server, rather than rights to read everything necessary on the source.
Currently, the user is in the db_datareader role, and while they seem to be able to read a good portion of the table structure, configuration items such as defaults seem to be obscured, and stored procedure and view definitions don't seem to be available either.
I need the team to be able to copy from our Test/UAT instance, but I don't want them to be able to modify it. They should already have sa access to their local dev instances.
I need to grant permissions to a remote development team so they can copy schema changes on a database to their local dev instances.
I think you can do this using Azure SQL Database Data Sync.
Data Sync is useful in cases where data needs to be kept up-to-date across several Azure SQL databases or SQL Server databases. Here are the main use cases for Data Sync:
Hybrid Data Synchronization: With Data Sync, you can keep data synchronized between your on-premises databases and Azure SQL databases to enable hybrid applications. This capability may appeal to customers who are considering moving to the cloud and would like to put some of their application in Azure.
Distributed Applications: In many cases, it's beneficial to separate different workloads across different databases. For example, if you have a large production database, but you also need to run a reporting or analytics workload on this data, it's helpful to have a second database for this additional workload. This approach minimizes the performance impact on your production workload. You can use Data Sync to keep these two databases synchronized.
Globally Distributed Applications: Many businesses span several regions and even several countries/regions. To minimize network latency, it's best to have your data in a region close to you. With Data Sync, you can easily keep databases in regions around the world synchronized.
Data Sync is based around the concept of a Sync Group. A Sync Group is a group of databases that you want to synchronize.
A Sync Group has the following properties:
The Sync Schema describes which data is being synchronized.
The Sync Direction can be bi-directional or can flow in only one direction. That is, the Sync Direction can be Hub to Member, or Member to Hub, or both.
The Sync Interval describes how often synchronization occurs.
The Conflict Resolution Policy is a group level policy, which can be Hub wins or Member wins.
For more detail, please see Overview of SQL Data Sync.
With Data Sync, you can set your Azure SQL database as the hub database and the team's local dev instances as member databases, and set the Sync Direction to 'Hub to Member'.
Then you can sync the schema changes on a database to their local dev instances manually or automatically. Reference: Tutorial: Set up SQL Data Sync between Azure SQL Database and SQL Server on-premises
Hope this helps.
GRANT VIEW DEFINITION was what I needed.
Not sure how I didn't stumble on that in my searches, but there it is.
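For anyone who lands here later, a minimal sketch of that grant (the user name is hypothetical):
-- database-wide: lets the user read object definitions without being able to modify anything
GRANT VIEW DEFINITION TO [RemoteDevUser];
-- or scoped to a single object
GRANT VIEW DEFINITION ON OBJECT::dbo.usp_GetOrders TO [RemoteDevUser];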

Azure database copy feature pricing

Currently our production Azure SQL database is a P1. We'd like to replicate or copy this database as our QA database. Our QA database doesn't need to be anything more than an S1. Does anyone know if the act of copying a database costs money? If I wanted to run an Azure Function to copy the database every night to the same Azure SQL server, would it be costly? I know that in the Azure Function, after a successful copy, I have to lower it from a P1 to an S1. The Azure documentation about copying a database doesn't talk about pricing.
Another question: does anyone know if you can replicate a P1 Azure SQL database to an S1? That would be better than an Azure Function copy every night.
Thanks in advance
Does copying a database cost money?
Assuming you mean the "Copy" function inside the database blade or the "New-AzSqlDatabaseCopy" PowerShell command, I have done this several times and it does not result in additional costs. If you are copying via some sort of manual method via script, then it would simply consume DTUs while the copy process is running.
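If you do script it, a rough T-SQL sketch of the nightly copy plus the P1-to-S1 downgrade might look like this (run in master on the target logical server; the database and server names are hypothetical, and you should verify the edition/service-objective values against your own tiers):
DROP DATABASE IF EXISTS [QA_Db];                            -- remove yesterday's copy
CREATE DATABASE [QA_Db] AS COPY OF [prodserver].[Prod_Db];  -- the copy runs asynchronously
-- after the copy finishes, scale it down from P1 to S1
ALTER DATABASE [QA_Db] MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S1');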
Copying the database every night
Performing the copy every night using the built-in copy functions would not cause additional costs, but this wouldn't be the best way to accomplish what you want. Instead of doing that, why not set up replication using a sync group (as you hinted at), which is easy to set up and even easier to maintain. See my post here about how to do that.
Copying/Synching between SQL Service Levels
Lastly, unless the database exceeds the S1 maximum database size, there is no reason why you can't sync a P1 to an S1 in a sync group.

Creating SQL Windows login for External Domain

Problem
Is it somehow possible to create a Windows Authentication login for a SQL database without performing a check for the user at creation time?
Example
Consider ServerA that exists in our DomainA, and ServerB that exists in the customer's DomainB. Being separate companies, DomainA and DomainB never share resources. But, if we backup from ServerB and restore to ServerA, we are able to see the existing SQL logins for users from DomainB, and even modify and code against these logins. This is good, because we are able to develop the database schema on ServerA and then publish to ServerB.
But, if I want to add a new user for this database, and am working on ServerA in DomainA, the following command produces an error:
CREATE USER [DomainB\User];
Windows NT user or group 'DomainB\User' not found. Check the name again. (Microsoft SQL Server, Error: 15401)
This is bad, because we're no longer able to develop on ServerA using the same schema as ServerB.
Backstory
I'm attempting to bring our database-driven application's database schema into source control using a Visual Studio 2010 Database Project. It's important to me to make this work well enough to convince the boss not to continue using 60-GB database backups in a zip file as a means of 'Version Control' (especially since this is just for schema, and not a backup routine). VS2010 DB Projects use scripting to create/modify databases, and so they can't create WinNT users for an unknown domain. In order to get the boss's buy-off, we're going to have to be able to match the capabilities of restoring a backup, and that means being able to re-create users for domains that we don't have access to.
Using SQL Server 2008 in my case.
Note - DBProjects are best suited to managing and versioning your SCHEMA, not your data.
If you want to keep rolling backups of your SQL databases as a whole, then I'd recommend a decent backup strategy.
If you want to better manage your databases' evolving schemas, then using DBProjects may well be your best bet.
FWIW, if you reverse-engineer a DB into a DBProj, you could then run a script to replace DomainB\known-user with DomainA\known-user prior to deploying within DomainA, no?
No, because SQL Server needs to know the Windows SID (ugly GUID) of the user at the time the login is created.
Note that you can, however, create a SQL or Windows user with the same name and password as your remote SQL, machine, or domain user, and it will be able to log in.
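To illustrate the SID point: a Windows login's SID comes from the domain and can't be supplied by hand, but for a SQL-authenticated login you can pin the SID explicitly so it matches the source server. The name, password, and SID below are hypothetical; the SID would be copied from sys.server_principals on the source.
CREATE LOGIN [AppLogin]
    WITH PASSWORD = N'<strong password here>',
         SID = 0x52DBE64F8AD14A41B6A2A86C86D5A5D1;   -- 16-byte SID copied from the source server
CREATE USER [AppLogin] FOR LOGIN [AppLogin];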

SQL Server 2005: Replication, varbinary

Scenario
In our replication scheme we replicate a number of tables, including a photos table that contains binary image data. All other tables replicate as expected, but the photos table does not. I suspect this is because of the larger amount of data in the photos table or perhaps because the image data is a varbinary field. However, using smaller varbinary fields did not help.
Config Info
Here is some config information:
Each image could be anywhere from 65-120 KB
A revision and an approved copy are stored along with thumbnails, so a single row may approach ~800 KB
I once had trouble with the "max text repl size" configuration option, but I have set that to the maximum value using sp_configure and RECONFIGURE WITH OVERRIDE (see the sketch after this list)
Photos are filtered based on a “published” field, but so are other working tables
The databases are using the same local db server (in the development environment) and are configured for transactional replication
The replicated database uses a “push” subscription
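A minimal sketch of the "max text repl size" step mentioned in the list above (2147483647 is the maximum value on SQL Server 2005):
EXEC sp_configure 'max text repl size', 2147483647;
RECONFIGURE WITH OVERRIDE;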
Also, I noticed that sometimes regenerating the snapshot and reinitializing the subscription caused the images to replicate. Taking this into consideration, I configured the snapshot agent to regenerate the snapshot every minute or so for debugging purposes (obviously this is overkill for a production environment). However, this did not help things.
The Question
What is causing the photos table not to replicate while all the others replicate without a problem? Is there a way around this? If not, how would I go about debugging further?
Notes
I have used SQL Server Profiler to look for errors as well as the Replication Monitor. No errors exist. The operation just fails silently as far as I can tell.
I am using SQL Server 2005 with Service Pack 3 on Windows Server 2003 Service Pack 2.
[update]
I have found out the hard way that Philippe Grondier is absolutely right in his answer below. Images, videos and other binary files should not be stored in the database. IIS handles these files much more efficiently than I can.
I do not have a straight answer to your problem, as our standard policy has always been 'never store (picture) files in (database) fields'. Our solution, that applies not only to pictures but to any kind of file, or document, is now standard:
We have a "Document" table in our database, where document/file names and relative folders are stored (in order to get unique document/file names, we generate them from the primary key/uniqueidentifier value of the "Document" table); a sketch of such a table follows this list.
This "Document" table is replicated among our different subscribers, like all other tables.
We have a "document" folder and subfolders, available on each of our database servers.
Document folders are then replicated independently from the database, with file and folder replication software (Allway Sync is one option)
The main publisher's folders are fully accessible through FTP; a user trying to read a document that is not yet available on his local server is offered the option to download it from the main server through an FTP client (such as CoreFTP and its command-line options)
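A rough sketch of what such a "Document" table might look like (all column names here are hypothetical):
CREATE TABLE dbo.Document (
    DocumentId     uniqueidentifier NOT NULL
        CONSTRAINT PK_Document PRIMARY KEY
        CONSTRAINT DF_Document_Id DEFAULT NEWID(),
    OriginalName   nvarchar(260) NOT NULL,   -- file name as uploaded by the user
    RelativeFolder nvarchar(400) NOT NULL,   -- subfolder under the shared "document" root
    StoredFileName nvarchar(100) NOT NULL    -- unique on-disk name generated from DocumentId
);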
With an images table like that, have you considered moving that article to a one-way (or two-way, if you like) merge publication? That may alleviate some of your issues.

How do you upload SQL Server databases to shared hosting environments?

We have a common problem of moving our development SQL 2005 database onto shared web servers at website hosting companies.
Ideally we would like a system that transfers the database structure and data as an exact replica.
This would be commonly achieved by restoring a backup. But because they are shared SQL servers, we cannot restore backups – we are not given access to the actual machine.
We could generate a script to create the database structure, but then we could not do a data transfer through the menu item Tasks/Import Data, because we might violate foreign key constraints as tables are imported in an order that conflicts with the database schema. Also, indexes might not be replicated if they are set to auto-generate.
Thus we are left with a messy operation:
Create a script in SQL 2005 that generates the database in SQL 2000 format.
Run the script to create a SQL 2000 database in SQL 2000.
Create a script in SQL 2000 that generates the database structure WITHOUT indexes and foreign keys.
Run this script on the production server. You now have a database structure to upload data to.
Use SQL 2005 to transfer the data to the production server with Tasks/Import data.
Use SQL 2000 to generate a script that creates the database with indexes and keys.
Copy the commands that generate the indexes and foreign keys only. These are located after the table creation commands (a small sketch of the split follows these steps). Note: In SQL 2005, the indexes and foreign keys are generated together with the table creation and cannot be easily separated.
Run this script on the production database.
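For illustration, the split described above boils down to something like this (table names are hypothetical): the bare table creation runs before Tasks/Import Data, and the keys/indexes run only after the data is in.
-- script A: structure only, run before the data import
CREATE TABLE dbo.Orders (
    OrderID    int      NOT NULL,
    CustomerID int      NOT NULL,
    OrderDate  datetime NOT NULL
);
-- script B: keys and indexes, copied out separately and run after the import
ALTER TABLE dbo.Orders ADD CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderID);
ALTER TABLE dbo.Orders ADD CONSTRAINT FK_Orders_Customers
    FOREIGN KEY (CustomerID) REFERENCES dbo.Customers (CustomerID);
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate ON dbo.Orders (OrderDate);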
Voila! The database is uploaded with all data and keys/constraints in place. What a messy and error-prone system.
Is there something better?
Scott Gu has written a few posts on this topic:
SQL Server Database Publishing Toolkit for Web Hosting
Generation scripts are fine for creating the database objects, but not for transporting database information. For example, client-specific databases where the developer is required to pre-populate some data.
One of the issues I've run into with this is the new MAX types in SQL Server 2005+ (nvarchar(max), varchar(max), etc.). Of course, this is worse when you are actually using SQL Server Express, which doesn't allow for exporting other than creating your own scripts to create the data.
I would recommend switching to a hosting company that gives you the ability to FTP backup files and does NOT require you to use your own scripts. That's the whole point of SQL Server, right? To provide more tools that are friendlier to use. If the hosting company takes that away, you may as well move to MySQL for its ease in dumping information.
WebHost4Life is a lifesaver in this category. They offer FTP to the database server to upload your backup file or MDF and LDF files for attachment! I was so upset when I saw GoDaddy had a similar restriction to the one you mentioned. Their tool didn't tell me it was a bad import, and I couldn't figure out why my site was coming back with 500 errors.
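For context, the attach that the host performs with those uploaded MDF/LDF files is roughly the following (paths and names are hypothetical):
CREATE DATABASE [MySiteDb]
ON (FILENAME = N'D:\SQLData\MySiteDb.mdf'),
   (FILENAME = N'D:\SQLData\MySiteDb_log.ldf')
FOR ATTACH;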
One other note: I'm not sure which is considered more secure. I enabled external connections in GoDaddy and connected with Management Studio, and I was able to see every database on that server! I couldn't access them, but I now have that info. A double whammy is that GoDaddy requires that the user name for the DB be the same as the DB name! Now all you need to do is spam passwords against those hundreds of DBs!
Webhost4life, on the other hand, has only your specific database shown in Management Studio. And they let you pick your own DB name and user name, independent of each other. They only append the same unique id on the end of the user & db names in order to keep them from conflicting with others.
You should not rely on restoring backups for copying / transferring databases. You need to use scripts - trust me you will get better at it.
I have used the RedGate Compare tools with shared hosting and it works well.
Database-generation scripts are messy, but they also have several advantages that ... well, make the pain more tolerable.
First, if you treat the DB scripts as real programming tasks in and of themselves, you can encapsulate the messiness. If you generate a script once (using a database tool), you can split the table-structure aspects from the constraint aspects (keys, indices, etc.). Similarly, you can export the data once, but split it into "system" data that's not frequently changed but is necessary for correct operation (stuff like tax or shipping rates, etc.), "test" data that's easily identifiable, and "operational" data that needs to be moved from DB version Old to DB version New (last week's Orders).
For the first 3 minutes after you've accomplished that, things are wonderful: you can regenerate a new database, with or without test data, in a few minutes. Unfortunately, after 3 minutes the databases are out of sync, at least in terms of data, if not quite as frequently in terms of structure.
I personally like to have each table's structure as a separate SQL file (and its constraints as a separate file in a separate directory, its test data in one file, its system data in another, etc.). On the one hand, this means that several different files have to be touched when making a change, but on the other hand, it makes it much easier to see the granularity of what's been changed: it's all right there in the version control logs. (I could probably be convinced that many-files is a mistaken strategy...)
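As a small, purely hypothetical illustration of that per-table split, one table might be spread across files like these:
-- Tables/TaxRate.sql : structure only, one file per table
CREATE TABLE dbo.TaxRate (
    Region nvarchar(50)  NOT NULL PRIMARY KEY,
    Rate   decimal(5, 4) NOT NULL
);
-- Data/System/TaxRate.sql : "system" data that rarely changes but is required for correct operation
INSERT INTO dbo.TaxRate (Region, Rate) VALUES (N'EU', 0.2000);
-- Data/Test/TaxRate.sql : easily identifiable test rows, loaded only into dev builds
INSERT INTO dbo.TaxRate (Region, Rate) VALUES (N'TEST-REGION', 0.0000);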
All of this is predicated on the assumption that you have some facility for actually running a complex script involving many files and are not just constrained to some Web-based control panel, which may be what you're describing when you say "we are not given access to the actual machine." I feel that you can't do custom software development and not have some kind of shell access on the server; the hosting business is competitive enough that you can certainly find a script-friendly host easily enough.
Check whether the web hosting company provides myLittleBackup.
This is definitely the easiest way to "install" a db from the development server onto the shared SQL server.
Answer for SQL Server 2008 users.
I had the same exact issue as OP but I was using SQL Server 2008 and my shared hosting company is GoDaddy. Here's the solution to copy DB + the data to GoDaddy database...
In Visual Studio 2010, go to Server Explorer (in VS Express, I think it's called Database Explorer). Right-click on the database and select Publish to Provider ... this opens the Database Publishing Wizard ... go through the wizard and it'll create an xxx.sql file on your local computer ...
Open SQL Server Management Studio and connect to the GoDaddy database (you should have already created this via the GoDaddy control panel within their website) ...
Open Windows Explorer, find the xxx.sql file, and double-click it. The script should open up in SSMS. Execute the script "within the proper database" ... voila, done.