SQL Server Replication Error: The row was not found at the Subscriber when applying the replicated command

I got the following error in Activity Monitor,
The row was not found at the Subscriber when applying the replicated command. (Source: SQL Server, Error number: 20598)
After some research, I found that the error occurs because replication is trying to delete records that do not exist at the subscriber (and no longer exist at the publisher either), e.g.:
{CALL [dbo].[sp_MSdel_testtable] (241)}
I can manually insert a matching record at the subscriber and the replication will move on. The problem I have right now is that I don't know how many bad records there are. Is there any fast way to do this? I have already spent hours and inserted about 20 records.
Thank you

Re-initialize the subscription via Replication Monitor in SSMS, and generate a new snapshot during re-initialization. This should clear up the missing record issues.

Dude, I [literally] just solved this a few minutes ago. It pushed me about a fifty-cent cab ride short of crazy. The replicated database was part of a third-party solution to offload their app's reporting. As it turns out, one of their app servers was pointed at our reporting database rather than the live database - hence "row not found at subscriber", because they were inserting and deleting records in the subscriber table directly.
Use distribution.dbo.sp_helpsubscriptionerrors to find the xact_seqno, then you can use distribution.dbo.sp_browsereplcmds and this query
SELECT *
FROM distribution.dbo.MSarticles
WHERE article_id IN (
    SELECT article_id
    FROM MSrepl_commands
    WHERE xact_seqno = 0x...)  -- substitute the xact_seqno found via sp_helpsubscriptionerrors
to get more info. Use distribution.dbo.sp_setsubscriptionxactseqno if you want to get that single transaction "un-stuck" or just re-initialize.
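If you go the sp_setsubscriptionxactseqno route, the call looks roughly like this (a sketch; every name and value below is a placeholder):
EXEC sp_setsubscriptionxactseqno
    @publisher    = 'MyPublisher',     -- placeholder: your publisher server
    @publisher_db = 'MyPublishedDb',   -- placeholder: the published database
    @publication  = 'MyPublication',   -- placeholder: the publication name
    @xact_seqno   = 0x00099979000038D60001  -- the failing seqno from sp_helpsubscriptionerrors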
I ended up running SQL Profiler on both the publisher and subscriber, then keyed off the tables I found with the queries above. That's when I spotted the DML on the replicated table.
Good luck.

This query will give you all the pending commands for that article that still need to be sent from the distributor to the subscriber.
You will get multiple records with the same xact_seqno_start and xact_seqno_end but with different command IDs.
If you can verify they are all delete commands, you can take the primary key column value from each one and insert those rows manually at the subscriber server.
EXEC sp_browsereplcmds
    @article_id = 813,
    @command_id = 1,
    @xact_seqno_start = '0x00099979000038D60001',
    @xact_seqno_end = '0x00099979000038D60001',
    @publisher_database_id = 1
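If they do all turn out to be deletes for rows that never existed at the subscriber, the manual patch from the question - inserting a dummy row with the failing primary key so the delete can apply - looks like this (dbo.testtable and 241 come from the failing command in the question; the key column name is a guess):
-- Run at the Subscriber; replace "id" with your table's actual key column.
INSERT INTO dbo.testtable (id)
VALUES (241);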

Related

How to solve an S-IX deadlock without using snapshot isolation?

OK, I've got a weird problem: I have two server-side APIs. One selects data from table A; the other inserts a new record into table B using part of that data plus the PK from the first. Under normal usage they should not conflict with each other.
Somehow, on my SQL monitor I detected a caller hitting my select and insert functions less than 0.005 sec apart, which caused an S-IX deadlock. I searched the internet and found a solution that told me to enable snapshot isolation. But when I tried it on my test DB (total size about 223 MB), the ALTER DATABASE command showed no sign of finishing after an hour of execution, so running it on the production DB (whose data is even bigger) with that much downtime is intolerable.
So my question is: does anyone know another way to solve the S-IX deadlock (without lowering throughput)?
P.S.: My DB is SQL Server 2008
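For reference, these are the commands in question. The hang is typically the ALTER DATABASE waiting behind open transactions; for the READ_COMMITTED_SNAPSHOT variant, a termination clause can force the issue (a sketch only - MyDb is a placeholder, and rolling back open transactions on production still needs a maintenance window):
-- Allow SNAPSHOT isolation (waits for in-flight transactions to drain):
ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Make READ COMMITTED use row versioning; this needs exclusive access,
-- so roll back open transactions instead of waiting indefinitely:
ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;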

Oracle error: Application failover does not support non-single-SELECT statement

We are getting the following error using Oracle:
[Oracle JDBC Driver]Application failover does not support non-single-SELECT statement
The error occurs when we try to make a delete or insert over a large number of rows (tens of millions of rows).
I know that the script works, because it had been working for almost a year before these error messages started to pop up.
We know that no one changed any database configuration, so we figure the problem must be the volume of processed data (the row count keeps growing as time goes by...).
But we have never seen that kind of error before! What does it mean? It seems that a failover engine tries to recover from an error, but when Oracle is 'taken over' by this engine, it enters a more restricted state where some kinds of queries do not work (like Windows Safe Mode...).
Well, if this is what is happening, how can I get the real error message - the one that triggered the failover mechanism?
BTW, below is one of the deletes that triggers the error:
delete from odf_ca_rnv_av_snapshot_week
(we tried this one just to test the simplest delete we could think of... a truncate won't help us with the real deal :) )
Check this link.
The error seems to come not from Oracle or JDBC, but from Progress (the driver vendor): the failover can only recover SELECT statements, not DML.
You'll have to figure out why the failover occurs in the first place.
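If the failover is being triggered by the sheer duration of that one huge DELETE, a common mitigation (my assumption; the link above does not confirm the cause) is to delete in smaller committed batches so no single statement runs long enough to trip the failover, e.g. with the table from the question:
BEGIN
  LOOP
    -- Delete a bounded chunk per statement so each one finishes quickly
    DELETE FROM odf_ca_rnv_av_snapshot_week
     WHERE ROWNUM <= 100000;
    EXIT WHEN SQL%ROWCOUNT = 0;
    COMMIT;  -- keep every transaction short
  END LOOP;
  COMMIT;
END;
/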

Empty XML Columns during SQL Server replication

We have a merge replication setup on SQL Server that goes like this: 1 SQL server at the office, another SQL server traveling around the world. The publisher is the SQL server at the office.
In about 1% of cases, two of our tables with a column of XML data type (not bound to a schema) are replicated with rows containing empty XML columns. (This has only happened when data is sent from the "traveling server" back home, but then again, data changes more often there.) We only see this in the production environment (WAN replication).
Things I have verified:
The row is replicated, as the last modification date on the row is refreshed, but the XML column is empty. Of course it is not empty on the other SQL Server.
No conflicts are displayed in the replication conflicts UI.
It is not caused by the size of the data inside the XML column, as some of the values are very small.
Usually the problem occurs in batches (the XML column of 8-9 consecutive rows will be empty).
The problem occurs whether the row was inserted or updated. No pattern there.
The problem seems to occur when the connection is weaker, though this is pure speculation on my part. (We've seen it happen more often when the server was far away than when it was close by.)
Sorry if I have confused some things; I am not really a DBA, more of a dev with knowledge of SQL. But since the application using the database keeps getting blamed for the problems ("the XML column must not be empty!!"), I have taken it to heart to try and find the problem instead of just manually patching the data each time (what's the use of replication if you have to do that?).
If anyone could help out with this problem, or at least suggest some ways of being able to debug / investigate this it would be greatly appreciated.
I did search a lot on Google and I did find this: Hot Fix. But we do have the latest service pack, and the problem seems a bit different.
FYI: we have a replication setup locally here, but the problem never occurs. We will be trying a WAN simulator on it as well to see if that can help.
Thanks
Edit: a hotfix is now available for my issue: http://support.microsoft.com/kb/2591902
After logging this issue with Microsoft, we were able to reproduce the problem without a slow link ( Big thanks to the competent escalation engineer at Microsoft ). The repro is a bit different from our scenario, but highlights the timing issue we were getting perfectly.
1. Create 2 tables - one parent, one child (with a PK-FK relationship)
2. Insert 2 rows in the parent table
3. Set up replication - configure the merge agent to run ON DEMAND
4. Sync
Once all is replicated:
5. On the PUBLISHER: delete one row from the parent table
6. On the SUBSCRIBER: insert 2 rows of data that reference the parent id you deleted above
7. On the SUBSCRIBER: insert 5 rows of data that reference the parent id that will stay in the table
8. Sync; the merge agent will fail. Sync again; the merge agent will succeed.
Result: missing XML data on the publisher in the 5 rows.
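For reference, a minimal sketch of the schema behind steps 1-2 (the repro gives no DDL, so every name here is hypothetical):
CREATE TABLE dbo.Parent
(
    ParentId INT NOT NULL PRIMARY KEY,
    Payload  XML NULL
);

CREATE TABLE dbo.Child
(
    ChildId  INT IDENTITY(1,1) PRIMARY KEY,
    ParentId INT NOT NULL
        CONSTRAINT FK_Child_Parent REFERENCES dbo.Parent (ParentId),
    Payload  XML NULL
);

INSERT dbo.Parent (ParentId) VALUES (1), (2);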
It seems to be a bug in SQL Server 2005, 2008 and 2008 R2.
It will be addressed in a hotfix for 2008 and up (as SQL Server 2005 is no longer being altered).
Cheers.
You may want to start out by slapping a bandaid on this perplexing situation to buy some time to fully investigate and fix (or more likely get MS to fix it). SQL Data Compare is an excellent tool that might help.
Figured I'd put an update here, as this issue got me a few gray hairs and I am somewhat closer to a solution now.
I finally had some time to work on this and managed to reproduce the issue in our test environment, using a WAN simulator to slow down the link and inject some random packet loss (to best simulate the production environment, where the server is overseas on a really bad line).
After doing some SQL tracing and some verbose logging, here are my conclusions:
When replicating a row with an XML column, the process is done in two steps. First, an insert is done of the full row, but with an empty string for the XML column. Right after, an update is done, this time with the XML column containing the data. Since the link is slow, in some situations a foreign key violation occurred.
In this scenario, Table2 depends on Table1. After finishing replicating Table1, and while replicating Table2 (the enumeration of inserts/updates, which takes time on a slow link), some entries were added to Table1 and Table2. Some inserts on Table2 therefore failed, because the corresponding Table1 entries were not in the database yet and were only going to be replicated in the next batch. The next time replication occurred, there were no more foreign key violations; however, when it tried to insert the row that had previously failed in Table2 (the XML column row), the update part of it was missing (I could see that in SQL Profiler), and that is why the row ended up with an empty XML column after all was done.
Setting "Enforce for replication" to false on the foreign keys seems to address the problem, however I do still think that this whole process should work with the option set to true.
I logged a support call with Microsoft for this. I have sent the traces and logs to Microsoft and will see what they have to say.
I've read this article: http://msdn.microsoft.com/en-us/library/ms152529(v=SQL.90).aspx. But for me, setting this option to false is kind of a workaround, no?
What do you guys think?
PS: I hope this is clear; I tried to explain it as best I could. English is not my first language.

How to remove a tuple from an SQL table after a timeout?

I am faced with a peculiar requirement which is as follows:
A network-intensive operation is triggered on a server by multiple clients through a web interface. However, only one operation is allowed at a time, and hence an entry (tuple) is made in an SQL table to indicate that the operation is in progress. Once the operation is complete (irrespective of success or failure), the appropriate result is displayed back to the client(s), and the corresponding tuple is removed from the SQL table.
Since the operation is network-intensive, a scenario has to be introduced where the operation is "considered" cancelled after some timeout (10 minutes).
Is there ANY way the lifetime of a row in SQL can be associated with a timeout value, so that it is deleted after a certain time? My application is primarily written in Java 1.5 and EJB 3.0, using JPA/Hibernate to access an Oracle 10g DB engine.
Thanks in advance.
Regards,
Nagendra U M
I would suggest that you try using a timestamp column containing the start time of the task.
A before trigger can then be made to delete the old rows, if their tasks timed out, before a new one is inserted.
If you want to have multiple tasks with different timeouts, you can even add a column with the timeout in seconds; just code your trigger accordingly.
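A sketch of that idea on Oracle 10g (all object names are hypothetical; it is a statement-level trigger, which avoids the mutating-table restriction a row-level trigger would hit):
CREATE OR REPLACE TRIGGER trg_purge_timed_out
BEFORE INSERT ON operation_in_progress
BEGIN
  -- Treat anything older than 10 minutes as timed out and remove it
  DELETE FROM operation_in_progress
   WHERE start_time < SYSTIMESTAMP - INTERVAL '10' MINUTE;
END;
/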
I don't know whether Oracle has this kind of facility, but I don't think any DB engine has it natively.
If you want to do it at the DB level:
Have a datetime column, e.g. CreatedDate, in the table, holding the datetime when each record was created.
Write a procedure and put it in a scheduled job. This job will run every 10 minutes and remove records that are more than 10 minutes old. The query will be like this:
T-SQL: Please convert it according to your db engine.
DELETE FROM yourtable WHERE CreatedDate < DATEADD(mi, -10, GETDATE())
This will delete all records older than 10 minutes from the table.
This is just to give you an idea of a scheduled job. It is SQL Server; I don't know about Oracle.
step_by_step_guide_to_add_a_sql_job_in_sql_server_2005
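Since the asker is on Oracle 10g: the equivalent there would be a DBMS_SCHEDULER job rather than a SQL Server Agent job. A sketch, with hypothetical table and column names:
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'PURGE_STALE_OPERATIONS',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN
                          DELETE FROM operation_in_progress
                           WHERE created_date < SYSDATE - 10/1440; -- older than 10 minutes
                          COMMIT;
                        END;',
    repeat_interval => 'FREQ=MINUTELY; INTERVAL=10',
    enabled         => TRUE);
END;
/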
It sounds like you're implementing a mutex using the database; take a look at this question and see if it helps. Transactional access to a flag table should solve this for you, as long as you catch both success and failure states in your server code.

How to recover the old data from a table

I made an update statement on a table in SQL Server 2008 which updated the table with some wrong data.
I didn't have a backup of the DB.
It's some important dates that got updated.
Is there any way I can recover the old data from the table?
Thanks
SNA
Basically no, unless you want to use a commercial log reader and try to go through it with a fine-tooth comb. Having no backup of the database can be an 'update resume, leave town' scenario - harsh, but it just should not happen.
Andrew basically has called it. I just want to add a few ideas you can consider if you are desperate:
Are there any reports or printouts lying around? Perhaps you can reconstruct the data from there.
Was this data entered via a web application? If so, there is a remote chance you can find the original data in the web server logs, depending upon how the app was constructed, etc.
Does this app interface (pass data to) any other applications? They may have a buffered copy of data...
Can the data be derived from any other existing data? Is there an audit log table, or another date in your schema based on this one, from which you can reconstruct the original date?
Edit:
Some commenters are mentioning that it is a good idea to test your update/delete statements before running them. For this to become a habit, it helps if you have an easy method. I usually create my DELETE statements like this:
--delete --select *
from [User]
where UserID=27
To run the select in order to test your query, highlight everything from select onwards. To then run the delete if you are satisfied with the filter criteria, highlight everything from delete onwards. The two dashes in front of delete are so that if the query accidentally gets run, it will just crash due to invalid syntax.
You can use a similar construct for UPDATE statements, although it is not quite as clean.
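For what it's worth, here is one way the UPDATE variant can look (a sketch; the column and value are made up). Highlight from select onwards to test the filter; run the whole block to perform the update, since the select line stays commented out:
update [User]
   set Name = 'NewName'   -- hypothetical column/value
--select *
  from [User]
 where UserID = 27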
SQL Server keeps a log of every transaction, so you can recover your modified data from the log even without a backup.
SELECT [PAGE ID], [Slot ID], [AllocUnitId], [Transaction ID],
       [RowLog Contents 0], [RowLog Contents 1], [RowLog Contents 2],
       [RowLog Contents 3], [RowLog Contents 4], [Log Record]
FROM sys.fn_dblog(NULL, NULL)
WHERE AllocUnitId IN
      (SELECT [Allocation_unit_id]
       FROM sys.allocation_units allocunits
       INNER JOIN sys.partitions partitions
           ON (allocunits.type IN (1, 3) AND partitions.hobt_id = allocunits.container_id)
           OR (allocunits.type = 2 AND partitions.partition_id = allocunits.container_id)
       WHERE object_id = OBJECT_ID('dbo.student'))  -- 'dbo.student' is the example table
  AND Operation IN ('LOP_MODIFY_ROW', 'LOP_MODIFY_COLUMNS')
  AND [Context] IN ('LCX_HEAP', 'LCX_CLUSTERED')
Here is the article that explains, step by step, how to do it:
http://raresql.com/2012/02/01/how-to-recover-modified-records-from-sql-server-part-1/
Imran
Thanks for all the responses.
The problem was actually an accident - I missed highlighting the WHERE condition when running the update statement.
It was a quick 5-minute task - just changing the date to test one customer's data - so we didn't think of taking a backup.
Yes, of course you are right... this is a lesson.
From now on I will be careful to write my update statements in a transaction, or to test my update statements first.
Thanks once again for spending your time to give some insight rather than ignoring the question, since the only answer is "NO".
Thanks
SNA
Always take a backup before major UPDATE statements; even if it's not used, there's the peace of mind.
Especially with Red Gate's Object Level Restore, one can now restore an individual table/row given a backup file.
Good luck. I'd suggest finding an old copy elsewhere (DEV/QA, etc.)...
Isn't it possible to do a rollback on an UPDATE statement?
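Only if the UPDATE ran inside an explicit transaction that has not been committed yet. A minimal sketch (table and column names are hypothetical):
BEGIN TRANSACTION;

UPDATE dbo.Customer
   SET ImportantDate = '2010-01-01'
 WHERE CustomerID = 42;

-- Inspect the result before making it permanent:
SELECT CustomerID, ImportantDate
  FROM dbo.Customer
 WHERE CustomerID = 42;

COMMIT;       -- if the change looks right
-- ROLLBACK;  -- if it does not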
Late one but hopefully useful…
If the database is in full recovery mode then all transactions are logged in the transaction log and can be retrieved. The problem is that this is not natively supported, because this is not the main purpose of the transaction log.
Options are:
Commercial tools such as ApexSQL Log (more expensive, more options) or Quest Toad (less expensive, fewer options for this purpose; its main focus is SQL Server management).
Trying to do this yourself, as user1059637 pointed out. The problem with this approach is that it can't read transaction log backups and is more tedious.
It comes down to how much your data is worth to you in terms of time and $.