I am synchronizing a SQL Server and a SQL CE database. Once the synchronization is over, I need to delete the sync-completed records from the SQL CE database (not from the server), so that I can keep the database size low. When I delete the completed records and their tracking rows from the SQL CE database, those deletes still show up in "UploadChangesTotal". How do I handle this situation?
You can intercept the upload in the ChangesSelected event and remove the delete rows from the change dataset so they don't get propagated to the server.
See: Manipulating the Change Dataset in Sync Fx
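A minimal sketch of that approach, assuming a SqlCeSyncProvider as the local provider (the table name is a placeholder; in the selected change dataset, deletes show up as rows in the Deleted state):

```csharp
using System.Data;
using Microsoft.Synchronization.Data;
using Microsoft.Synchronization.Data.SqlServerCe;

static class SyncDeleteFilter
{
    // Call this before Synchronize(); tableName is whichever table you purge locally.
    public static void SuppressUploadedDeletes(SqlCeSyncProvider localProvider, string tableName)
    {
        localProvider.ChangesSelected += (sender, e) =>
        {
            if (!e.Context.DataSet.Tables.Contains(tableName))
                return;

            DataTable changes = e.Context.DataSet.Tables[tableName];

            // Walk backwards so removing rows doesn't disturb the loop index.
            for (int i = changes.Rows.Count - 1; i >= 0; i--)
            {
                DataRow row = changes.Rows[i];

                // Drop the deletes from the batch selected for upload, so the
                // local clean-up never reaches the server.
                if (row.RowState == DataRowState.Deleted)
                {
                    changes.Rows.Remove(row);
                }
            }
        };
    }
}
```

Note that this strips every delete for that table out of the upload, which is what you want for local clean-up deletes; if real user deletes on the same table must still reach the server, you'd need some way to tell the two apart.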
Copying a database in the Azure Portal is never-ending.
Usually, when I copy a 250 GB database, it completes in just under an hour.
Today, the copy never seems to finish; it has been running for over two to three hours now.
And in the server activity logs, the last entry just says an update occurred.
Any idea how to see more progress, a percent complete, or any other way to see what might be locking it? Nothing of use can be seen in the activity log JSON.
You can use sys.dm_operation_status to track many operations, including copies, in SQL Azure.
The documentation states:
To use this view, you must be connected to the master database. Use the sys.dm_operation_status view in the master database of the SQL Database server to track the status of the following operations performed on a SQL Database:
Below are the operations that can be tracked:
Create database
Copy database. Database Copy creates a record in this view on both the source and target servers.
Alter database
Change the performance level of a service tier
Change the service tier of a database, such as changing from Basic to Standard.
Setting up a Geo-Replication relationship
Terminating a Geo-Replication relationship
Restore database
Delete database
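If you'd rather poll this from code than run the query in SSMS, a rough C# sketch along these lines should work (the connection string, credentials, and database name are placeholders; note that it connects to master):

```csharp
using System;
using System.Data.SqlClient;

class CopyStatusCheck
{
    static void Main()
    {
        // Must be run against the master database of the logical server (placeholder connection string).
        const string connectionString =
            "Server=tcp:yourserver.database.windows.net;Database=master;User ID=youruser;Password=yourpassword;Encrypt=True;";

        const string sql = @"
            SELECT operation, state_desc, percent_complete, error_desc, start_time, last_modify_time
            FROM   sys.dm_operation_status
            WHERE  major_resource_id = @dbName
            ORDER  BY start_time DESC;";

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@dbName", "MyCopiedDb"); // placeholder: name of the copy target
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine("{0} | {1} | {2}% | last modified {3}",
                        reader["operation"], reader["state_desc"],
                        reader["percent_complete"], reader["last_modify_time"]);
                }
            }
        }
    }
}
```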
You can also try sys.dm_database_copies in the master database for information about the copy status. This has a percent_complete field, and below is what the documentation has to say about it:
The percentage of bytes that have been copied. Values range from 0 to 100. SQL Database may automatically recover from some errors, such as failover, and restart the database copy. In this case, percent_complete would restart from 0.
Note: this view has information only for the duration of the copy operation.
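Same idea for sys.dm_database_copies, again run against master while the copy is in flight; the join to sys.databases to look the copy up by name is my assumption about how you'd filter it:

```csharp
using System;
using System.Data.SqlClient;

static class DatabaseCopyProgress
{
    // Returns percent_complete (0-100), or null once the copy has finished and
    // its row has disappeared from sys.dm_database_copies.
    public static double? GetPercentComplete(string masterConnectionString, string targetDbName)
    {
        const string sql = @"
            SELECT c.percent_complete
            FROM   sys.dm_database_copies AS c
            JOIN   sys.databases AS d ON d.database_id = c.database_id
            WHERE  d.name = @dbName;";

        using (var conn = new SqlConnection(masterConnectionString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@dbName", targetDbName);
            conn.Open();
            object result = cmd.ExecuteScalar();
            return (result == null || result == DBNull.Value) ? (double?)null : Convert.ToDouble(result);
        }
    }
}
```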
We have a local database that retains data for two months. I'm replicating the local database to a cloud database using SQL transactional replication. The idea is that we want to keep a year's worth of data in the cloud. I disabled the DELETE from being replicated, and this works great. However, if the replication gets reinitialized for any reason, or someone runs the Snapshot Agent again on the publisher, I will lose all the data in the cloud and get the current image of my local database! What can I do to stop this from happening from the subscriber side? Is there a way to make the subscriber (the cloud) ignore all forms of DELETE or re-initialization, and just keep accumulating replicated data from the local database?
I use SQL Azure Data Sync to sync my remote Azure database with my local SQL database. Data Sync creates some additional tables on the client and server, and also adds delete, insert, and update triggers to the existing tables.
What are these triggers for? Can I delete them? I don't think so.
The problem now is that I can't edit data on the server.
I get the error:
The target table 'dbo.Corporation' of the DML statement cannot have any
enabled triggers if the statement contains an OUTPUT clause without INTO clause.
The triggers are added by the Microsoft Sync Framework, which is what SQL Azure Data Sync uses under the hood. And no, you can't delete them, because SQL Azure Data Sync will stop working if you do. It is not that easy to modify tables after they are provisioned. If you are adding columns, check out this question. If it is something else, try searching for a solution tagged under Microsoft Sync Framework rather than SQL Azure.
I am writing code to migrate data from our live Access database to a new SQL Server database, which has a different, reorganized schema. This SQL Server database will be used with a new version of our application that is in development.
I've been writing migration code in C# that calls SQL Server and Access and transforms the data as required. The first time I migrated a table whose entries relate to newer entries of another table that I had not updated recently, it caused an error because the corresponding record in SQL Server could not be found.
So my SQL Server production table has data only up to 1/14/09, and I'm continuing to migrate more tables from Access. I want to write an update method that can figure out what is new in Access and hasn't yet been reflected in SQL Server.
My current idea is to write a query on the SQL Server side that does SELECT Max(RunDate) FROM ProductionRuns to give me the latest date in that column. On the Access side, I would write a query that does SELECT * FROM ProductionRuns WHERE RunDate > ?, where the parameter is that max date found in SQL Server, perform my translation step in code, and then insert the new data into SQL Server.
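Roughly, the update method I have in mind looks like this (just a sketch: the connection strings are placeholders and the transformation step is stubbed out):

```csharp
using System;
using System.Data;
using System.Data.OleDb;
using System.Data.SqlClient;

class ProductionRunsMigrator
{
    // Placeholder connection strings.
    const string SqlConnStr = "Server=myserver;Database=NewDb;Integrated Security=true;";
    const string AccessConnStr = @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\data\live.mdb;";

    public void MigrateNewProductionRuns()
    {
        DateTime lastMigrated;

        // 1. Latest RunDate already present in SQL Server.
        using (var sqlConn = new SqlConnection(SqlConnStr))
        using (var cmd = new SqlCommand("SELECT Max(RunDate) FROM ProductionRuns", sqlConn))
        {
            sqlConn.Open();
            object result = cmd.ExecuteScalar();
            lastMigrated = (result == null || result == DBNull.Value)
                ? DateTime.MinValue   // empty table: take everything
                : (DateTime)result;
        }

        // 2. Anything newer than that in Access.
        var newRows = new DataTable();
        using (var accessConn = new OleDbConnection(AccessConnStr))
        using (var cmd = new OleDbCommand("SELECT * FROM ProductionRuns WHERE RunDate > ?", accessConn))
        {
            cmd.Parameters.AddWithValue("?", lastMigrated); // OleDb parameters are positional
            new OleDbDataAdapter(cmd).Fill(newRows);        // Fill opens/closes the connection itself
        }

        // 3. Transform each row to the new schema and insert it into SQL Server.
        foreach (DataRow row in newRows.Rows)
        {
            TransformAndInsert(row);
        }
    }

    void TransformAndInsert(DataRow accessRow)
    {
        // Schema-specific translation and INSERT into the reorganized tables goes here.
    }
}
```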
What I'm wondering is: do I have the syntax right for getting the latest date in that SQL Server table? And is there a better way to do this kind of migration of a live database?
Edit: What I've done is make a copy of the current live database, which I can migrate without worrying about changes and use for testing during development; then I can migrate the latest data whenever the new database and application go live.
I personally would divide the process into two steps:
1. Create an exact copy of the Access DB in SQL Server and copy all the data into it.
2. Copy the data from this temporary SQL Server DB to your destination database.
That way you can write a set of SQL statements to accomplish the second step.
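For example, a step-2 statement could look something like this (a sketch run from C# like the rest of the migration code; the staging database name, tables, columns, and the de-duplication rule are all placeholders for your own schema):

```csharp
using System.Data.SqlClient;

static class StagingToDestination
{
    // Runs one step-2 transformation as set-based SQL against the destination database.
    public static void CopyProductionRuns(string destinationConnectionString)
    {
        const string sql = @"
            INSERT INTO dbo.ProductionRuns (RunDate, MachineId, Quantity)
            SELECT s.RunDate, m.MachineId, s.Quantity
            FROM   StagingAccessCopy.dbo.ProductionRuns AS s
            JOIN   dbo.Machines AS m ON m.LegacyName = s.MachineName   -- schema reorganization
            WHERE  NOT EXISTS (SELECT 1 FROM dbo.ProductionRuns AS d
                               WHERE d.RunDate = s.RunDate AND d.MachineId = m.MachineId);";

        using (var conn = new SqlConnection(destinationConnectionString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.CommandTimeout = 0; // a large set-based copy may take a while
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}
```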
Alternatively, use SSIS.
Generally, when you convert data to a new database that will take its place in production, you shut out all users of the database for a period of time, run the migration, and turn on the new database. This ensures no changes to the data are made while doing the conversion. Of course, I never would have done this using C# either; data migration is a database task and should have been done in SSIS (or DTS if you have an older version of SQL Server).
If the database you are converting to is just in development, I would create a backup of the Access database and load the data from there, both to test the data-loading process and to get the data in so you can do the application development. Then, when it is time to do the real load, you just close down the real database to users and load from it. If you are trying to keep both in sync while you develop, well, I wouldn't do that, but if you must, make a nightly backup of the file and load it first thing in the morning using your process.
You may want to look at investing in a tool like SQL Data Compare.
I believe it has support for Access databases too, and you can download a trial.
If you are happy with your C# code but it fails because of the constraints in your destination database, you can temporarily disable them and then re-enable them after you copy the whole lot.
I am assuming that your destination database is a brand new DB with no data, and that nobody else is using it while the transfer happens.
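A sketch of what that could look like (the table list is a placeholder; ALTER TABLE ... NOCHECK turns off foreign-key and check constraints, and WITH CHECK CHECK re-validates the existing rows when you turn them back on):

```csharp
using System.Data.SqlClient;

static class ConstraintToggler
{
    // Table names are examples; list whichever destination tables the copy touches.
    static readonly string[] Tables = { "dbo.ProductionRuns", "dbo.Machines" };

    public static void DisableConstraints(SqlConnection conn)
    {
        foreach (string table in Tables)
        {
            // Foreign keys and check constraints stop being enforced for this table.
            Execute(conn, "ALTER TABLE " + table + " NOCHECK CONSTRAINT ALL");
        }
    }

    public static void EnableConstraints(SqlConnection conn)
    {
        foreach (string table in Tables)
        {
            // WITH CHECK re-validates the rows copied in the meantime, so bad data surfaces here.
            Execute(conn, "ALTER TABLE " + table + " WITH CHECK CHECK CONSTRAINT ALL");
        }
    }

    static void Execute(SqlConnection conn, string sql)
    {
        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.ExecuteNonQuery();
        }
    }
}
```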
It sounds like you have two problems:
You're migrating data from one database to another.
You're changing your schema.
Doing either of these things is tricky if you are trying to migrate the data while people are using the data.
The simplest approach is to migrate the data based on a static copy of the data, and to queue updates to that data from the moment you capture the static copy. I don't know how easy this is in Access, but in SQL Server or Oracle you can use the transaction/redo logs for this, or a manual solution using triggers. The poor man's way of doing this is to create triggers on all the relevant tables that log the primary keys of the records that have changed. Then, after the old database is shut off, you can iterate over those keys, get those records from the old database, and put them into the new database. Just copy the whole record; if the record was deleted, delete it from the new database.
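A rough sketch of that catch-up pass, assuming the changed primary keys were captured into a hypothetical ChangedKeys table by whatever logging mechanism you end up with (table names and the transform/insert helpers are placeholders):

```csharp
using System;
using System.Data;
using System.Data.OleDb;
using System.Data.SqlClient;

static class ChangeCatchUp
{
    // "ChangedKeys" and "ProductionRuns" are placeholder names; CopyOrUpdate and
    // DeleteFromNew stand in for the schema-specific transform/insert logic.
    public static void ReplayLoggedChanges(OleDbConnection oldDb, SqlConnection newDb)
    {
        var keys = new DataTable();
        new OleDbDataAdapter("SELECT DISTINCT RecordId FROM ChangedKeys", oldDb).Fill(keys);

        foreach (DataRow keyRow in keys.Rows)
        {
            int id = Convert.ToInt32(keyRow["RecordId"]);

            // Fetch the current state of the record from the old database.
            var current = new DataTable();
            var cmd = new OleDbCommand("SELECT * FROM ProductionRuns WHERE Id = ?", oldDb);
            cmd.Parameters.AddWithValue("?", id);
            new OleDbDataAdapter(cmd).Fill(current);

            if (current.Rows.Count == 0)
                DeleteFromNew(newDb, id);             // the record was deleted in the old database
            else
                CopyOrUpdate(newDb, current.Rows[0]); // just copy/overwrite the whole record
        }
    }

    static void DeleteFromNew(SqlConnection newDb, int id) { /* DELETE in the new schema */ }
    static void CopyOrUpdate(SqlConnection newDb, DataRow row) { /* transform + upsert */ }
}
```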
Your problem is compounded by the fact that you can't simply copy the data; you have to transform it. This means you probably have to shut down both databases and re-migrate the records based on the change list. It will take a lot of planning to ensure you get things right, and I'd recommend writing a testing script that can validate that the resulting data is correct.
Also, I'd ensure that the code for the migration runs inside one of the databases if possible. Otherwise you are copying the data twice, and this will significantly harm performance.
We recently updated our WPF application to perform its data synchronisation (using the Sync Framework) within a single transaction against our SQL Server 2008 database.
Almost straight away this has somehow led to a row being locked in one of the tables, preventing all other users from syncing.
The thing is that the lock does not seem to be lifting and we are not sure how to resolve the situation.
Any feedback is appreciated.
The only way to release a row lock is to commit or rollback the transaction in which the lock was taken.
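So the fix on the client side is to make sure the transaction always ends, one way or the other. A generic ADO.NET sketch of that shape (not Sync Framework-specific):

```csharp
using System.Data.SqlClient;

static class SyncTransactionScope
{
    static void RunSyncInTransaction(string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();

            // If Commit is never reached (exception, crash, connection killed),
            // disposing the transaction/connection rolls it back and the row locks are released.
            using (SqlTransaction tx = conn.BeginTransaction())
            {
                try
                {
                    // ... apply the synchronisation changes on this connection/transaction ...
                    tx.Commit();
                }
                catch
                {
                    tx.Rollback();
                    throw;
                }
            }
        }
    }
}
```

If a transaction has already been left open, you will need to find the session that owns it (DBCC OPENTRAN or sys.dm_tran_locks can point you at it) and either get that client to commit or roll back, or kill the session.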