We have a local database that retains data for 2 months. I'm replicating the local database to a cloud database using SQL transactional replication. The idea is that we want to keep a year's worth of data in the cloud. I disabled DELETE commands from being replicated, and this works great. However, if the replication gets reinitialized for any reason, or someone runs the Snapshot Agent again on the publisher, I will lose all the data in the cloud and end up with just the current image of my local database! What can I do from the subscriber side to stop this from happening? Is there a way to make the subscriber (the cloud database) ignore all forms of DELETE or re-initialization and just keep accumulating the data replicated from the local database?
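(For reference, disabling DELETE replication is typically done by setting the article's del_cmd property to NONE on the publisher; the sketch below uses made-up publication and table names, not my actual setup.)

    -- Hypothetical names: publication 'LocalPub', article/table 'Orders'.
    -- Do not replicate DELETE statements for this article.
    EXEC sp_changearticle
        @publication = N'LocalPub',
        @article     = N'Orders',
        @property    = N'del_cmd',
        @value       = N'NONE';

    -- Optionally, keep a reapplied snapshot from dropping or truncating the
    -- existing subscriber table (valid values: none | delete | drop | truncate).
    -- Changing these properties may require @force_invalidate_snapshot = 1.
    EXEC sp_changearticle
        @publication = N'LocalPub',
        @article     = N'Orders',
        @property    = N'pre_creation_cmd',
        @value       = N'none';

Note that pre_creation_cmd = none only stops the snapshot from dropping or truncating the table at the subscriber; the snapshot's bulk copy of current rows is still applied, so this alone does not make the subscriber immune to reinitialization.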
Copying a database in the Azure Portal is never ending.
Usually, when I copy a 250GB database, it completes in just under an hour.
Today, when I copy, it never seems to finish; it has been running for two to three hours now.
And in the server activity logs, the last entry just says an update occurred.
Any idea on how to see more progress, a percent complete, or any other way to see what might be locking it? Nothing of use can be seen in the activity log JSON.
You can use sys.dm_operation_status to track many operations, including database copy, in SQL Azure.
The documentation states:
To use this view, you must be connected to the master database. Use the sys.dm_operation_status view in the master database of the SQL Database server to track the status of the following operations performed on a SQL Database:
Below are the operations that can be tracked:
Create database
Copy database. Database Copy creates a record in this view on both the source and target servers.
Alter database
Change the performance level of a service tier
Change the service tier of a database, such as changing from Basic to Standard.
Setting up a Geo-Replication relationship
Terminating a Geo-Replication relationship
Restore database
Delete database
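For example, a quick status check from the master database could look like the sketch below (the column list is just a reasonable selection, not everything the view exposes):

    -- Run while connected to the master database of the SQL Azure server.
    SELECT  operation,
            major_resource_id   AS resource_name,  -- e.g. the database name
            state_desc,
            percent_complete,
            start_time,
            last_modify_time,
            error_desc
    FROM    sys.dm_operation_status
    ORDER BY last_modify_time DESC;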
You can also try sys.dm_database_copies in the master database for information about the copy status. It has a percent_complete field, and below is what the documentation has to say about it:
The percentage of bytes that have been copied. Values range from 0 to 100. SQL Database may automatically recover from some errors, such as failover, and restart the database copy. In this case, percent_complete would restart from 0.
Note:
This view only has information for the duration of the copy operation.
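A minimal query against this view might look like the following; the join to sys.databases is only there to get a readable database name and is my own addition, not something the documentation shows:

    -- Run in the master database while the copy is in progress.
    SELECT  d.name              AS database_name,
            c.partner_server,
            c.partner_database,
            c.percent_complete,
            c.start_date,
            c.modify_date,
            c.error_desc
    FROM    sys.dm_database_copies AS c
    JOIN    sys.databases          AS d
            ON d.database_id = c.database_id;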
I have a cloud-hosted database, maintained by a different company. Here is my scenario:
I would like to bring down the entire database, schema and data, and keep my local database updated in real time (SQL Server 2008 R2).
I cannot set up replication inside SQL Server; I do not have permissions on the cloud server.
I cannot use triggers.
I cannot use linked servers.
They would provide me a copy of the backup (nightly) and access to the transaction logs every 1 hour.
How can I use these to update my entire database?
Thanks in advance
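For illustration, a nightly full backup plus hourly log backups are usually applied as a restore chain along these lines; the database name and file paths below are made up.

    -- Once per night: restore the vendor's full backup, leaving the database
    -- able to accept further log restores.
    RESTORE DATABASE VendorDb
        FROM DISK = N'C:\Backups\VendorDb_full.bak'
        WITH NORECOVERY, REPLACE;

    -- Then, every hour, restore each log backup in sequence.
    RESTORE LOG VendorDb
        FROM DISK = N'C:\Backups\VendorDb_0100.trn'
        WITH NORECOVERY;
    -- Use WITH STANDBY = N'C:\Backups\VendorDb_undo.bak' instead of NORECOVERY
    -- if the local copy needs to be readable between restores.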
I have a SQL Server 2012 database which is currently used as both a transactional database and a reporting database. The application reads from and writes to this database, and the reports are also generated against the same database.
Due to some performance issues, I have decided to maintain two copies of the database. One will be the transactional database, which will be accessed by the application. The other database will be an exact copy of the transactional database and will only be used by the reporting service.
Following are the requirements:
The reporting database should be synchronized with the transactional database every hour. That is, the reporting database can have stale data for a maximum of 1 hour.
It must be a read-only database.
The main intention is NOT recovery or availability.
I am not sure which strategy, transaction log shipping, mirroring, or replication, will be best suited to my case. Also, if I do the sync operation more frequently (say, every 10 minutes), will there be any impact on the transactional database or the reporting service?
Thanks
I strongly recommend that you use a standby database in read-only (STANDBY) state, with a SQL Server Agent job scheduled every 15 minutes to: a) generate a new .trn log backup of the main database, and b) restore it into the standby one (your reports database). The only issue is that with this technique your sessions will be disconnected while the agent restores the .trn file. But if you can stop the restore job, run your reports, and then re-enable it, there is no problem. It seems to be exactly what you need. And if your reports are fast to run, they probably won't be disconnected; if I'm not mistaken, the restore job can also be configured either to wait for open sessions to finish or to close them. I can check that last point for you tomorrow if you don't find it.
As long as it is running in the same SQL Server instance, you don't have to worry about extra licensing.
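Roughly, the pair of commands the scheduled job runs would look like the sketch below; the database names, file names, and paths are placeholders, and it assumes the reports copy was initially restored from a full backup WITH STANDBY.

    -- Step a) on the main database: take a transaction log backup.
    BACKUP LOG MainDb
        TO DISK = N'D:\LogShip\MainDb_1200.trn';

    -- Step b) on the reports copy: restore it, keeping the database readable.
    RESTORE LOG MainDb_Reports
        FROM DISK = N'D:\LogShip\MainDb_1200.trn'
        WITH STANDBY = N'D:\LogShip\MainDb_Reports_undo.bak';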
I am using Microsoft Sync Framework to synchronize an Azure database with a local SQL Server 2008 database. Everything is working fine, but I have a small problem, as described below.
I am synchronizing in one direction only, i.e. from the Azure DB to the local DB. Inserts/updates/deletes on the Azure DB get synchronized to the local database. But I tried manually updating a record in the local DB using a normal UPDATE statement, and I also updated the same record with a corresponding new value in the Azure DB. Now the record in the local DB is not getting the updated value from the Azure DB. This problem happens only after updating a record manually in the local database.
Can anyone please help?
That's because you're now encountering a conflict. When the same row is updated on both ends, you end up with a conflict, and you need to tell Sync Framework how to resolve it (e.g., retain the local copy or overwrite it).
see: How to: Handle Data Conflicts and Errors for Database Synchronization
I have 2 Azure SQL databases, and I've created an SSIS job to transfer some data from one DB to the other.
The DB has millions of records.
The SSIS package is hosted on premises. If I execute the package on my PC, will it copy the data directly from one Azure DB to the other on the fly, or fetch the data from the first Azure DB down to my local machine and then upload it to the other Azure DB?
A round trip from Azure to local and back from local to Azure will be too costly if I have millions of records.
I am aware of Azure Data Sync, but my requirements call for SSIS to transfer particular data.
Also, does Azure Data Sync have an option to sync only particular tables?
Running the SSIS package on your local machine will cause the data to be moved to your machine before being sent out to the destination database.
When you configure a sync group in SQL Azure Data Sync you should be able to select which tables to synchronize.
I'm pretty sure SQL Azure Data Sync does have the option to select just the tables you need to transfer. However, I don't think there is an option to apply transformations to the data being transferred.
As for SSIS, I don't see how a transfer would be possible without the data first coming to your premises. You have two connections established: one connection to the first SQL Azure server, and the other connection to the second SQL Azure server. SSIS will pull the data from the first connection and then push it to the second one.
I would suggest exploring SQL Azure Data Sync, as it might be the best choice for your scenario. Any other option would require the data to first come on premises and then be transferred back to the cloud.
Well, there is a 3rd option. You create a simple worker based on ADO.NET and the SqlBulkCopy class, put that worker in a worker role in the cloud, and trigger it with a message in an Azure queue or similar. That would seem to be one of the best solutions, as you have total control over what is being copied. All the data stays within the Microsoft datacenter, which means:
Fast transfer
No bandwidth charges (as long as all three, the 2 SQL Azure servers and the 1 worker role, are deployed in the same affinity group)