Currently our production Azure SQL database is a P1. We'd like to replicate or copy this database as our QA database. Our QA database doesn't need to be anything more than an S1. Does anyone know if the action of copying a database costs money? If I wanted to run an Azure Function to copy the database every night to the same Azure SQL server, would it be costly? I know that in the Azure Function, after a successful copy, I have to lower it from a P1 to an S1. The Azure documentation about copying a database doesn't talk about pricing.
Another question: does anyone know if you can replicate a P1 Azure SQL database to an S1? That would be better than an Azure Function copy every night.
Thanks in advance
Does copying a database cost money?
Assuming you mean the "Copy" function inside the database blade or the "New-AzSqlDatabaseCopy" PowerShell command, I have done this several times and it does not result in additional costs. If you are copying via some sort of manual method via script, then it would simply utilize DTUs while the copy process is occurring.
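If you do end up scripting it (for example from the nightly Azure Function mentioned in the question), the whole cycle can be expressed in plain T-SQL run against the master database of the destination server. A rough sketch, with placeholder server and database names:

-- The copy runs asynchronously; if last night's copy still exists, drop or rename it first.
CREATE DATABASE QaDb AS COPY OF ProdServer.ProdDb;

-- Once sys.databases shows QaDb as ONLINE, drop it from Premium P1 down to Standard S1.
ALTER DATABASE QaDb MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S1');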
Copying the database every night
Performing the copy every night using the built-in copy functions would not incur additional costs, but this wouldn't be the best way to accomplish what you want. Instead of doing that, why not set up replication using a sync group (as you hinted at), which is easy to set up and even easier to maintain? See my post here about how to do that.
Copying/Synching between SQL Service Levels
Lastly, unless the database exceeds the S1 storage size limit (250 GB), there is no reason why you can't sync a P1 to an S1 in a sync group.
Hello folks, first post on Stack; by the way, wonderful community that helps out a lot.
As mentioned in the title, what is the best way to copy such a large database? We've got a ~500 GB database and I'm currently moving it from a managed instance to an Azure single database using SSMS (copy via "Deploy Database to Microsoft Azure SQL Database"), and it's taking me 22 hours right now. I feel like I'm back in the early 2000s.
It's all in the same subscription and also in the same network configuration. As far as I know, the process is that SSMS creates a BACPAC file and then imports it into the single database, but 16 hours is just too long. So do you know any better option to do this quicker? Because I have a hell of a lot more, and partly larger, databases to copy.
Did you think about using ETL tools, such as Azure Data Factory? It has good performance for migrating big data; see the copy performance reference table in its documentation.
It supports Azure SQL Database and Azure SQL Managed Instance. See these tutorials:
Copy and transform data in Azure SQL Database by using Azure Data Factory
Copy and transform data in Azure SQL Managed Instance by using Azure Data Factory
It may cost some money but saves a lot of time. As we all know, time is money.
HTH.
I realize that Azure SQL Database does not support doing an insert/select from one db into another, even if they're on the same server. We receive data files from clients and we process and load them into a "load database". Once the load is complete, based upon various rules, we then move the data into a production database of which there are about 20, all clones of each other (the data only goes into one of the databases).
Looking for a solution that will allow us to move the data. There can be 500,000 records in a load file and so moving them one by one is not really feasible.
Have you tried Elastic Query? Here is the Getting Started guide for it. Currently you cannot perform remote writes, but you can always read data from remote tables.
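As a rough sketch of what that looks like (every name, credential and column below is a placeholder), the one-time setup runs in the production database and the nightly move becomes a single set-based statement:

-- One-time setup in the production database.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';

CREATE DATABASE SCOPED CREDENTIAL LoadDbCred
    WITH IDENTITY = 'loaduser', SECRET = '<password>';

CREATE EXTERNAL DATA SOURCE LoadDbSource WITH (
    TYPE = RDBMS,
    LOCATION = 'yourserver.database.windows.net',
    DATABASE_NAME = 'LoadDatabase',
    CREDENTIAL = LoadDbCred
);

-- Read-only external table mirroring the staging table in the load database.
CREATE EXTERNAL TABLE dbo.StagedRecords (
    RecordId INT,
    ClientId INT,
    Payload  NVARCHAR(200)
) WITH (DATA_SOURCE = LoadDbSource);

-- The move itself: one set-based INSERT...SELECT instead of 500,000 single-row inserts.
INSERT INTO dbo.ProductionRecords (RecordId, ClientId, Payload)
SELECT RecordId, ClientId, Payload
FROM dbo.StagedRecords;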
Hope this helps!
Silvia Doomra
I hope someone can give me advice or point me to some reading on this. I generate business reports for my team. We host a subscription website, so we need to track several things, sometimes on a daily basis. A lot of SQL queries are involved. The problem is that querying a large volume of information from the live database will slow down our website or cause timeouts.
My current solution requires me to run bcp scripts daily that copy new rows to a backup database (which I use purely for reports). Then I use an application I made to generate reports from there. The output is ultimately an Excel file or several (for the benefit of the business teams; it's easier for them to read). There are several problems with my temporary solution, though:
It only adds new rows; updates to previous rows are not copied.
It doesn't seem very efficient.
Is there another way to do this? My main concern is that the generation or the querying should not slow down our site.
I can think of three options for you, each of which could have various implementation methods. The first is Azure SQL Data Sync Services, the second is the AS COPY OF operation, and the third rides on top of a backup.
The Sync Services option is a good one if you need more real-time reporting capability, meaning you need to run your reports multiple times a day, at just about any time, and you need your data as close to real time as you can get it. Sync Services could have a performance impact on your primary database because it runs based off triggers, but with this option you can choose what to sync; in other words, you can replicate a filtered set of data, which minimizes the performance impact. Then you can run reports on the synced database. Another important shortcoming of this approach is that you would end up maintaining a sync service; if your primary database schema changes, you may need to recreate some or all of the sync configuration.
The second option, AS COPY OF, is simply a database copy operation which essentially gives you a clone of your primary database. Depending on the size of the database, this could take some time, so testing is key. However, if you are producing a morning report for yesterday's activities and having the very latest data is not as important, then you could run the AS COPY OF operation on a schedule after hours (or when the activity on your database is at its lowest) and run your report on the secondary database. You may need to build a small script, or use third-party tools, to help you automate this. There would be little to no performance impact on your primary database. In addition, the AS COPY OF operation provides transactional consistency, if this is important to you.
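If you script the nightly copy, a rough sketch of the progress check, run from the master database (the database name is a placeholder; point the reports at the copy only once it shows ONLINE):

-- In-flight copies report their progress here.
SELECT database_id, start_date, percent_complete, error_desc
FROM sys.dm_database_copies;

-- The copy is ready for reporting once it is ONLINE.
SELECT name, state_desc
FROM sys.databases
WHERE name = 'ReportingCopy';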
The third option could be to use a backup mechanism (such as Azure Export or the Azure backup tools) and restore the latest backup before running your reports. This has the advantage of leveraging your backup strategy without much additional effort.
This is not a traditional scale-up or scale-out question.
Please bear with me; first allow me to give an example:
I created a SQL Azure server and created a 1 GB database inside it, which costs $9.99 a month.
(It has a master database as well, 1 GB, but Microsoft doesn't charge us for that.)
OK, here is where my question comes in: what should I do when I need another 1 GB database for my application? Why do I need another 1 GB database? You may ask this because Azure can support databases up to 50 GB. My answer is distribution: I know the data will reach 50 GB eventually, so I designed the data model to distribute and spread the data across different databases.
For the sake of performance, which option should I use:
Create another database on the same server
Create another server and create a new database on it
Both options cost the same.
I guess option 2 would be better, wouldn't it?
I'm not sure there are strong (or any) performance implications; my understanding is that the consideration is mostly a management one, as some entities, mostly around security, are defined at the server level and some at the database level.
Behind the scenes the model is quite different anyway, and a multi-tenant one, so having a separate SQL Azure server does not actually mean you get a dedicated server per se. Theoretically, separate servers or separate databases may end up looking exactly the same.
My company at present has the following setup:
127 SQL Servers (2000 and 2005) and over 700 databases. We are in the process of server consolidation and are planning on having a master server / target servers setup to enable centralized administration. As part of this project, I have been given the responsibility of creating a script-based automated backup/maintenance solution.
Thanks to Ola Hallengren's script available here I have made a lot of progress.
Here is what I plan:
I have a database on the master server which holds details of SQL instances, databases and backup paths.
I am in the process of modifying Hallengren's script to read from this database and create jobs dynamically.
Users are allowed to define what kind of backup they want, how often and how long the backup needs to be kept.
I am also looking at having the ability to spread out jobs, so that I do not have too many jobs running at the same time.
My thought is to create tables containing the data that needs to be passed as parameters to sp_add_job, sp_add_jobstep and sp_add_jobschedule.
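For illustration, a stripped-down sketch of that idea; the table, column and job names are invented for the example, and the job step calls Hallengren's DatabaseBackup procedure:

-- Minimal configuration table driving job creation (illustrative only).
CREATE TABLE dbo.BackupJobConfig (
    ConfigId     INT IDENTITY(1,1) PRIMARY KEY,
    DatabaseName SYSNAME NOT NULL,
    BackupType   VARCHAR(4) NOT NULL,   -- 'FULL' or 'LOG'
    StartTime    INT NOT NULL           -- HHMMSS, e.g. 13000 for 01:30:00
);

DECLARE @DatabaseName SYSNAME,
        @StartTime    INT,
        @JobName      SYSNAME,
        @Command      NVARCHAR(4000);

-- One row = one job; the real script would loop over the whole table.
SELECT TOP 1 @DatabaseName = DatabaseName, @StartTime = StartTime
FROM dbo.BackupJobConfig
WHERE BackupType = 'FULL';

SET @JobName = N'Backup - FULL - ' + @DatabaseName;
SET @Command = N'EXECUTE dbo.DatabaseBackup @Databases = N''' + @DatabaseName
             + N''', @BackupType = ''FULL''';

EXEC msdb.dbo.sp_add_job         @job_name = @JobName;
EXEC msdb.dbo.sp_add_jobstep     @job_name = @JobName,
                                 @step_name = N'Run full backup',
                                 @subsystem = N'TSQL',
                                 @command = @Command;
EXEC msdb.dbo.sp_add_jobschedule @job_name = @JobName,
                                 @name = N'Nightly full backup',
                                 @freq_type = 4,          -- daily
                                 @freq_interval = 1,
                                 @active_start_time = @StartTime;
EXEC msdb.dbo.sp_add_jobserver   @job_name = @JobName;    -- target the local server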
Please share your experiences with the pitfalls and hurdles of this setup. All ideas welcome.
Thanks,
Raj
You might also consider the approach of creating a full backup job and a transaction log backup job on each server, which retrieve the databases from the master database and feed them into a procedure for backup. You could run the jobs every 5 minutes, and the procedure would have to work out what time it is and what type of backup is required.
I say this because things could get messy creating massive numbers of jobs in an automated manner, but maybe you have it worked out well. Be sure to create a 'primary key' with the job name if you take your original approach.
Spreading out jobs should be easy if you keep a record in the database and find available windows to put new jobs in. I've seen scripts for this.
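As a very rough sketch of that backup procedure (the time window and names are illustrative, and it assumes Hallengren's DatabaseBackup procedure underneath), the job would call it every 5 minutes for each database it pulls from your central table:

-- Decide from the clock whether a full or a transaction log backup is due.
CREATE PROCEDURE dbo.BackupIfDue
    @DatabaseName SYSNAME
AS
BEGIN
    -- One full backup per day in the 01:00-01:05 window; log backups the rest of the time.
    IF DATEPART(HOUR, GETDATE()) = 1 AND DATEPART(MINUTE, GETDATE()) < 5
        EXEC dbo.DatabaseBackup @Databases = @DatabaseName, @BackupType = 'FULL';
    ELSE
        EXEC dbo.DatabaseBackup @Databases = @DatabaseName, @BackupType = 'LOG';  -- FULL recovery model assumed
END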
You could also do this in an SSIS package that runs from your admin server. It would iterate over the databases and connect to the servers to perform the backups.
Check out the first 3 articles here: http://www.sqlmag.com/Authors/AuthorID/1089/1089.html
Moving to 2005, we were not happy with the Integration Services maintenance plans.
We wrote stored procedures to do full and tran backups as well as reindexing, aggregate management, archiving, etc.
A daily and an hourly job would step through master..sysdatabases and, in a TRY...CATCH block, apply the necessary maintenance. It would not be hard to read from any table and check user-defined conditions.
Unfortunately there was no way to tie the output to msdb..sysjobhistory, so we logged to a central table. The upside was that we had a lot more control over what was logged.
However, the central table is much simpler to read than going through 700 jobs.
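For anyone wanting the shape of that pattern, a rough sketch (the maintenance procedure and the central log table are made-up names here):

-- Step through the databases, wrap each one in TRY/CATCH, and record the outcome
-- in a central logging table instead of msdb..sysjobhistory.
DECLARE @DatabaseName SYSNAME;

DECLARE db_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT name FROM master.dbo.sysdatabases WHERE name <> 'tempdb';

OPEN db_cursor;
FETCH NEXT FROM db_cursor INTO @DatabaseName;

WHILE @@FETCH_STATUS = 0
BEGIN
    BEGIN TRY
        EXEC dbo.ApplyMaintenance @DatabaseName;   -- full/tran backups, reindex, archiving, etc.

        INSERT INTO CentralAdmin.dbo.MaintenanceLog (DatabaseName, RunDate, Outcome, Detail)
        VALUES (@DatabaseName, GETDATE(), 'Success', NULL);
    END TRY
    BEGIN CATCH
        INSERT INTO CentralAdmin.dbo.MaintenanceLog (DatabaseName, RunDate, Outcome, Detail)
        VALUES (@DatabaseName, GETDATE(), 'Failed', ERROR_MESSAGE());
    END CATCH

    FETCH NEXT FROM db_cursor INTO @DatabaseName;
END

CLOSE db_cursor;
DEALLOCATE db_cursor;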