Should SQL Server replication stop if I delete one record in a replicated table on the subscriber end?
I remember having a replication setup where a delete on the subscriber would be overwritten from the publisher, effectively preventing you from deleting the data.
But in our new configuration, deleting one record broke the replication.
It depends. If you deleted a row at the subscriber that is subsequently updated at the publisher, replication will break when the update is propagated. Why is this? If you look at how the command is replicated, it calls a stored procedure with the PK column(s), a bit mask of what columns changed, and then the list of new values for columns that changed (I'm glossing over some detail, but you can look for yourself in the subscriber database; the procedures are all there and pretty accessible). Because it doesn't re-replicate the whole row again, if it doesn't find the row indicated by the PK, replication assumes (correctly) that the subscriber is no longer in sync with the publisher and stops. As far as I know, replication has never worked in the way you describe.
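To make that concrete, here is a simplified sketch of what one of those auto-generated update procedures does. The table, column and parameter names below are placeholders; the real procedures are named along the lines of sp_MSupd_<schema><table>, carry one parameter per column plus the PK value(s) and the change bitmap, and inspect the bitmap to decide which columns to set.

-- Simplified sketch of a replication-generated update procedure (names are illustrative).
CREATE PROCEDURE dbo.sp_MSupd_dboSomeTable
    @c1 int,             -- new value for the changed column
    @pkc1 int,           -- primary key of the row being updated
    @bitmap binary(1)    -- bit mask of which columns changed
AS
BEGIN
    UPDATE dbo.SomeTable
    SET SomeColumn = @c1
    WHERE Id = @pkc1;

    IF @@ROWCOUNT = 0
        -- The row indicated by the PK is missing at the subscriber:
        -- raise error 20598 ("the row was not found"), which is what stops the distribution agent.
        EXEC sp_MSreplraiserror 20598;
END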
TL;DR: you should treat the subscriber database(s) as read-only except by the replication process itself.
Normally when we reinitialise transactional replication, it drops the table on the subscriber and recreates it. I want to have a clustered PK on the source database, with a non-clustered PK on the same column on the destination plus a different clustered index. I understand I can achieve this by temporarily stopping the replication, making the changes, and enabling it again.
I'm more worried about the future: if we ever need to reinitialise, I don't want the table to be dropped and our different index strategy lost. I might be being blind, but I can't find a setting that allows the table structure on the subscriber to be kept on reinitialisation.
I found the answer. You set this in the article properties of the publication: for the action to take if the object already exists, choose to truncate all data in the existing object rather than dropping and recreating it.
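If you prefer to script it, the corresponding article property is pre_creation_cmd; something along these lines should work (publication and article names are placeholders, and changing the property invalidates the current snapshot):

-- Make reinitialisation truncate the existing subscriber table instead of dropping it.
EXEC sp_changearticle
    @publication = N'MyPublication',
    @article = N'MyTable',
    @property = N'pre_creation_cmd',
    @value = N'truncate',
    @force_invalidate_snapshot = 1;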
I'm trying to understand the ramifications of database replication (SQL Server or GoldenGate) for situations where the source database is completely repopulated every night. To clarify: all existing tables are dropped, and the database is then reloaded with new tables using the same names, along with all the data.
Based on my understanding that replication uses a transaction log, I would assume it will also repeat the process of dropping the tables instead of identifying the differences and just adding the new data. Is that correct?
You can set up the replication using Oracle GoldenGate so that it will do what you want it to do:
- the TRUNCATE TABLE command can be replicated, or it can be ignored
- the populating of the source table (INSERT/bulk operations) can be replicated, or it can be ignored
- if a row already exists on the target (meaning a row with the same PK exists) and you INSERT it on the source, you can either UPDATE the target, or DELETE the old row and INSERT the new one, or ignore it
Database replication is based on the redo (transaction) log. Only events that occur on the source database and are logged can be replicated, but the replication engine can apply additional transformations while it is replicating the changes.
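As a rough illustration, a Replicat parameter file for this kind of nightly-reload setup could look something like the sketch below; the group name, credential alias and schema names are placeholders, and the exact parameters you want depend on your GoldenGate version and requirements:

REPLICAT rnight
USERIDALIAS ogg_target
-- Ignore the TRUNCATEs issued by the nightly reload (use GETTRUNCATES to replicate them instead).
IGNORETRUNCATES
-- Resolve rows that already exist (or are missing) on the target instead of abending.
HANDLECOLLISIONS
MAP src_schema.*, TARGET tgt_schema.*;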
I am very new to Microsoft SQL Server and I am not so into databases.
Yesterday I made an error and deleted all the rows inside the wrong table (I should have deleted the records in another table).
So now it is very important to me to restore in some way all the deleted records in this table (only these records and not the whole DB, if it is possible in some way).
For completeness, the table is named dbo.VulnerabilityWorkaround and has the following fields:
Id: int not null (is the PK)
Description: varchar(max), not null
I think that SQL Server retains the information related to the deleted records in a log file (or in something like it, maybe a DB table... I don't know).
Can I in some way restore my original dbo.VulnerabilityWorkaround with a query or something like it?
There is the transaction log, but as far as I know whether it can be used depends on the backup strategy of the database instance, meaning you would have to fire up a restore operation.
Other than restoring a previous backup, I don't think you have many options.
Since you just need one table, it could be easier to restore the backup to a different server and then copy/move only the data you need using SSIS or Bulk Import/Export.
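A minimal sketch of that approach, assuming a full backup restored side by side under a different database name (all database names, logical file names and paths below are placeholders):

-- Restore the backup next to the live database.
RESTORE DATABASE MyDb_Restore
    FROM DISK = N'D:\Backups\MyDb_Full.bak'
    WITH MOVE N'MyDb' TO N'D:\Data\MyDb_Restore.mdf',
         MOVE N'MyDb_log' TO N'D:\Data\MyDb_Restore_log.ldf',
         RECOVERY;

-- Copy back only the rows that are missing from the live table
-- (wrap with SET IDENTITY_INSERT ... ON/OFF if Id is an identity column).
INSERT INTO MyDb.dbo.VulnerabilityWorkaround (Id, Description)
SELECT r.Id, r.Description
FROM MyDb_Restore.dbo.VulnerabilityWorkaround AS r
WHERE NOT EXISTS (
    SELECT 1
    FROM MyDb.dbo.VulnerabilityWorkaround AS t
    WHERE t.Id = r.Id
);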
Does anyone know of the correct stored procedure that validates the actual code in transactional replication? I have a view that is basically a select * from table1. I changed that view to select * from table2 on the publisher, and there is an error in the replication monitor (as there should be), but when I run the sp "sp_publication_validation" it validates.
sp_publication_validation does a row count and/or checksum check at the publisher and subscriber and compares the two results. If they compare as the same, the article is considered "good" and if not, it's considered "bad". Now, how does your view fit into your replication setup? Do you have the view published as an article? If so, what is the value for schema_option for it in sysarticles (at the publisher)? Also, what error are you getting in the replication monitor?
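For reference, a typical call looks something like this (run in the published database at the publisher; the publication name is a placeholder). Since it compares data rather than object definitions, matching counts/checksums will report the article as valid:

EXEC sp_publication_validation
    @publication = N'MyPublication',
    @rowcount_only = 2,   -- 2 = row count and binary checksum, 1 = row count only
    @full_or_fast = 2;    -- 2 = conditional fast count, falling back to a full count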
I would like to present my problem related to SQL Server 2005 bidirectional replication.
What do I need?
My team leader wants to solve one of our problems using bidirectional replication between two databases, each used by a different application. One application creates records in table A, and those changes should replicate to a copy of table A in the second database. When data on the second server is changed, those changes have to be propagated back to the first server.
I am trying to achieve bidirectional transactional replication between two databases on one server, which is running SQL Server 2005. I have managed to set this up using scripts: I established 2 publications and 2 read-only subscriptions with loopback detection. The distribution database is created and publishing is enabled on both databases; the Distributor and Publisher are up. We are using some rules to control which records will be replicated, so we need to call our custom stored procedures during replication. The articles are therefore set to use custom update, insert and delete stored procedures.
So far so good, but...
Everything works fine and changes are replicating, until updates are done on both tables simultaneously, or before the changes have been replicated (which takes about 3-6 seconds). Both records then end up with different values.
UPDATE db1.dbo.TestTable SET Col = 4 WHERE ID = 1
UPDATE db2.dbo.TestTable SET Col = 5 WHERE ID = 1
results in:
db1.dbo.TestTable COL = 5
db2.dbo.TestTable COL = 4
But we want last-change-wins replication. Please, is there a way to solve my problem? How can I ensure the same values in both records? Or is there an easier solution than this kind of replication?
I can provide sample replication script which I am using.
I am looking forward to your ideas,
Mirek
I think that adding a dateUpdated field to both tables could help. This way, in your replication code, a record would be updated only if the incoming dateUpdated is greater than the one already stored.
That dateUpdated field would obviously store the datetime when the original record was updated, not when the replication was performed.
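A minimal sketch of what that could look like in one of the custom update procedures the articles already call (the procedure, table and column names here are illustrative, not the ones your script generates):

-- Custom replication update proc: only apply the incoming change if it is newer (last change wins).
CREATE PROCEDURE dbo.usp_ReplUpd_TestTable
    @Col int,
    @DateUpdated datetime,
    @ID int    -- primary key of the replicated row
AS
BEGIN
    UPDATE dbo.TestTable
    SET Col = @Col,
        DateUpdated = @DateUpdated
    WHERE ID = @ID
      AND DateUpdated < @DateUpdated;   -- skip the update when the local row is newer

    -- Intentionally no error when zero rows qualify: an older incoming change is
    -- discarded instead of breaking the subscription.
END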