How to view transaction logs in SQL Server?

I am not sure if the transaction log is what I need or not.
My first problem is that I have a stored procedure that inserts some rows. I was checking out ELMAH and I see that some SQL exceptions happen. They are all the same error (a PK constraint was violated).
Other than that, ELMAH is not telling me much more, so I don't know which row caused the primary key constraint violation (I am guessing the same row was, for some reason, being added twice).
So I am not sure whether the transaction log would tell me what happened and what data was being inserted. I can't recreate this error; it always works for me.
My second problem is that, for some reason, when my page loads I get a row from the database that I don't think exists anymore (I have a hidden column with the PK in it). When I try to find this primary key, it does not exist in the database.
I am using MS SQL Server 2005.
Thanks

I don't think the transaction log will help you.
SQL Server has two modes for handling inserts that would violate a uniqueness constraint.
There is a setting: IGNORE_DUP_KEY. By default it is OFF. If you turn it ON, SQL Server will ignore duplicate rows and your INSERT statement will succeed.
You can read about it here:
http://msdn.microsoft.com/en-us/library/ms175132.aspx
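As a minimal sketch of the difference this option makes (the table and column names here are hypothetical):

CREATE TABLE dbo.Settings (
    ParamName varchar(50)  NOT NULL,
    Value     varchar(200) NULL,
    CONSTRAINT PK_Settings PRIMARY KEY (ParamName)
        WITH (IGNORE_DUP_KEY = ON)
);

INSERT INTO dbo.Settings (ParamName, Value) VALUES ('timeout', '30'); -- inserted
INSERT INTO dbo.Settings (ParamName, Value) VALUES ('timeout', '60'); -- skipped with
-- the warning "Duplicate key was ignored." instead of a PK violation error

With IGNORE_DUP_KEY = OFF (the default), the second INSERT would raise the primary key violation you are seeing in ELMAH.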
BTW, to view the transaction log, you can use this command:
SELECT * FROM fn_dblog(null, null)

You can inspect the log with the (undocumented) function fn_dblog(), but it won't tell you anything in the case of a duplicate key violation, because the violation happens before the row is inserted, so no log record is generated. It is true, though, that you'll see other operations from around the time of the error, and from those you can possibly reconstruct the actions that led to the error condition. Note that if the database is in the SIMPLE recovery model, the log gets reused, so you have likely lost track of anything that happened.
Have a look at the article How do checkpoints work and what gets logged for an example of fn_dblog() usage. Although it is on a different topic, it shows how the function works.
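For instance, here is a hedged sketch of narrowing the output down to insert operations (the column names are the ones commonly reported for this undocumented function and may vary between versions):

SELECT [Current LSN], Operation, [Transaction ID], AllocUnitName
FROM fn_dblog(NULL, NULL)
WHERE Operation = 'LOP_INSERT_ROWS';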

If you can repeat the error, I would suggest using SQL Server Profiler so you can see exactly what is going on.
If you are using ASP.NET to load the page, are you using any output caching or data caching that might be retaining the row that no longer exists in the DB?

Related

Merge replication - publisher missing data from subscriber

I have a database using SQL Server 2005 merge replication, and there has been data inserted into the subscriber that never went over to the publisher. I believe there was a conflict that happened more than 14 days ago, beyond the retention period, so I do not see it any more. Can I manually add the rows to the publisher? Any ideas, or directions to a good link, are appreciated. Thank you.
If the conflict occurred before the current retention period, I don't think there is any magic that will get it back. Can you drop the subscription and re-create it (synchronizing the deltas manually in the meantime)? Probably the safest action.
Before I answer this, please note that the following directions can be very dangerous and must be carried out with the utmost care. This solution works for me because the tables in question are only written to from one (1) subscriber and nowhere else. Basically what I did was to (a T-SQL sketch of these steps follows below):
1. Pause replication (I actually disabled the replication job for the subscriber I was working on and enabled it when done).
2. Set IDENTITY_INSERT for the table to ON (an auto-identity is used on the table).
3. Alter the table to NOCHECK CONSTRAINT the repl_identity_range_(some hex value here) constraint.
4. Disable the MSmerge_ins_(some hex value here) trigger for the table. (MAKE SURE TO ENABLE THIS WHEN COMPLETE!)
5. Insert the rows.
6. Set IDENTITY_INSERT to OFF.
7. Enable the MSmerge_ins_(some hex value here) trigger.
8. Alter the table to CHECK CONSTRAINT the repl_identity_range_(some hex value here) constraint.
You can find the name of the repl_identity_range constraint by running sp_help. I recommend using a tool such as Red Gate's Data Compare to validate once you are complete, just to make sure. Depending on your situation, you may have to manually insert the data at all the other subscribers as well. FYI: I had to do this on a production database without interrupting the end users. Please use caution.
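Here is a hedged T-SQL sketch of steps 2 through 8. All object names are hypothetical placeholders; look up the real repl_identity_range_* constraint and MSmerge_ins_* trigger names generated for your table (sp_help on the table shows the constraint):

SET IDENTITY_INSERT dbo.Orders ON;
ALTER TABLE dbo.Orders NOCHECK CONSTRAINT [repl_identity_range_0FA1B2C3];
ALTER TABLE dbo.Orders DISABLE TRIGGER [MSmerge_ins_0FA1B2C3];

-- insert the rows that never made it to the publisher
INSERT INTO dbo.Orders (OrderId, CustomerName)
VALUES (1001, 'Row recovered from subscriber');

ALTER TABLE dbo.Orders ENABLE TRIGGER [MSmerge_ins_0FA1B2C3];
ALTER TABLE dbo.Orders CHECK CONSTRAINT [repl_identity_range_0FA1B2C3];
SET IDENTITY_INSERT dbo.Orders OFF;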

Prevent update to non-existent rows

At work we have a table to hold settings which essentially contains the following columns:
PARAMNAME
VALUE
Most of the time new settings are added, but on rare occasions settings are removed. Unfortunately, this means that any script which previously updated a now-removed setting will continue to do so, even though the update results in "0 rows updated", leading to unexpected behaviour.
This situation was picked up recently by a regression test failure, but only after much investigation into why the data in the system was different.
So my question is: Is there a way to generate an error condition when an update results in zero rows updated?
Here are some options I have thought of, but none of them are really all that desirable:
A PL/SQL wrapper which notices the failed update and throws an exception.
- Not ideal, as it doesn't stop anyone (or a script) from manually doing an update.
A trigger on the table which throws an exception.
- Goes against our current policy of phasing out triggers.
- Requires updating the trigger every time a setting is removed, and maintaining a list of obsolete settings (if doing exclusion).
- Might run into mutating-table problems (if doing inclusion by querying which settings currently exist).
A PL/SQL wrapper seems like the best option to me. Triggers are a great thing to phase out, with the exception of generating sequences and inserting history records.
If you're concerned about someone manually updating rather than using the PL/SQL wrapper, just restrict the user role so that it does not have UPDATE privileges on the table but has EXECUTE privileges on the procedure.
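As a minimal sketch of that wrapper idea (the table and procedure names here are hypothetical; the essential part is checking SQL%ROWCOUNT after the UPDATE):

CREATE OR REPLACE PROCEDURE set_param (
    p_name  IN settings.paramname%TYPE,
    p_value IN settings.value%TYPE
) AS
BEGIN
    UPDATE settings
       SET value = p_value
     WHERE paramname = p_name;

    -- zero rows updated means the setting no longer exists: fail loudly
    IF SQL%ROWCOUNT = 0 THEN
        RAISE_APPLICATION_ERROR(-20001, 'No such setting: ' || p_name);
    END IF;
END set_param;
/

Combined with REVOKE UPDATE ON settings FROM app_role and GRANT EXECUTE ON set_param TO app_role, the wrapper becomes the only way for that role to change a value.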
Not really a solution but a method to organize things a bit:
Create a separate table with the parameter definitions and link to that table from the parameter value table. Make the reference to the parameter definition required (nulls not allowed).
Definition table PARAMS (ID, NAME)
Actual settings table PARAM_VALUES (PARAM_ID, VALUE)
(Changing your table structure is also a very effective way to provoke errors in scripts that have not been updated...)
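In Oracle DDL, that layout might look like the following sketch (the names follow the description above; the column sizes are guesses):

CREATE TABLE params (
    id   NUMBER       PRIMARY KEY,
    name VARCHAR2(50) NOT NULL UNIQUE
);

CREATE TABLE param_values (
    param_id NUMBER NOT NULL REFERENCES params (id),
    value    VARCHAR2(200)
);

Removing a setting now means deleting its row from params, and the foreign key will block that while values still reference it, so a removal has to be explicit rather than silent.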
Maybe you can use the MERGE statement.
Here is a link for it:
http://www.oracle-developer.net/display.php?id=203
The MERGE statement allows you to combine an insert and an update in the same query, so if the desired row does not exist you can insert a record into a buffer table to indicate that the row does not exist; otherwise you update the required record.
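Here is a hedged sketch of such a MERGE in Oracle syntax, reusing the PARAMNAME/VALUE table from the question (the literal values are placeholders):

MERGE INTO settings s
USING (SELECT 'timeout' AS paramname, '30' AS value FROM dual) src
   ON (s.paramname = src.paramname)
WHEN MATCHED THEN
    UPDATE SET s.value = src.value
WHEN NOT MATCHED THEN
    INSERT (paramname, value)
    VALUES (src.paramname, src.value);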
Hope it helps

SQL continue executing queries after duplicate key violation

I have a situation where I want to insert a row if it doesn't exist, and not to insert it if it already does. I tried creating SQL queries that prevented this from happening (see here), but I was told a solution is to create constraints and catch the exception when they're violated.
I have constraints in place already. My question is - how can I catch the exception and continue executing more queries? If my code looks like this:
cur = transaction.cursor()
# execute some queries that succeed
try:
    cur.execute(fooquery, bardata)  # this query might fail, but that's OK
except psycopg2.IntegrityError:
    pass
cur.execute(fooquery2, bardata2)
Then I get an error on the second execute:
psycopg2.InternalError: current transaction is aborted, commands ignored until end of transaction block
How can I tell the computer that I want it to keep executing queries? I don't want to transaction.commit(), because I might want to roll back the entire transaction (the queries that succeeded before).
I think what you could do is use a SAVEPOINT before trying to execute the statement that could cause the violation. If the violation happens, you can roll back to the SAVEPOINT but keep your original transaction.
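In plain SQL the pattern looks like this sketch (the table is hypothetical; from psycopg2 you would issue the same SAVEPOINT / ROLLBACK TO SAVEPOINT statements through cur.execute()):

BEGIN;
INSERT INTO target_table VALUES (1, 'ok');        -- succeeds

SAVEPOINT before_risky_insert;
INSERT INTO target_table VALUES (1, 'duplicate'); -- violates the PK
ROLLBACK TO SAVEPOINT before_risky_insert;        -- clears only the failed part

INSERT INTO target_table VALUES (2, 'also ok');   -- transaction continues
COMMIT;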
Here's another thread which may be helpful:
Continuing a transaction after primary key violation error
I gave an up-vote to the SAVEPOINT answer--especially since it links to a question where my answer was accepted. ;)
However, given your statement in the comments section that you expect errors "more often than not," may I suggest another alternative?
This solution actually harkens back to your other question. The difference here is how to load the data quickly into the right place and in the right format so that you can move it with a single SELECT, and the approach is generic for any table you want to populate (so the same code could be used for multiple different tables). Here's a rough layout of how I would do it in pure PostgreSQL, assuming I had a CSV file in the same format as the table to be inserted into:
CREATE TEMP TABLE input_file (LIKE target_table);
COPY input_file FROM '/path/to/file.csv' WITH CSV;

INSERT INTO target_table
SELECT * FROM input_file
WHERE (<unique key field list>) NOT IN (
    SELECT <unique key field list>
    FROM target_table
);
Okay, this is an idealized example and I'm also glossing over several things (like reporting back the duplicates, pushing the data into the table from Python in-memory data, COPY FROM STDIN rather than from a file, etc.), but hopefully the basic idea is there, and it will avoid much of the overhead if you expect more records to be rejected than accepted.

Why could "insert (...) values (...)" not insert a new row?

I have a simple SQL insert statement of the form:
insert into MyTable (...) values (...)
It is used repeatedly to insert rows and usually works as expected. It inserts exactly one row into MyTable, which is also the value returned by the Delphi statement AffectedRows := myInsertADOQuery.ExecSQL.
After some time there was a temporary network connectivity problem. As a result, other threads of the same application got EOleExceptions (connection failure, -2147467259 = unspecified error). Later, the network connection was re-established, and these threads reconnected and were fine.
The thread responsible for executing the insert statement described above, however, did not notice the connectivity problems (no exceptions); probably it simply was not executing while the network was down. But after the network connectivity problems, myInsertADOQuery.ExecSQL always returned 0 and no rows were inserted into MyTable anymore. After a restart of the application, the insert statement worked again as expected.
For SQL Server, is there any defined case where an insert statement like the one above would not insert a row and would return 0 as the number of affected rows? The primary key is an autogenerated GUID. There are no unique or check constraints (which should result in an exception anyway, rather than in not inserting a row).
Are there any known ADO bugs (Provider=SQLOLEDB.1)?
Any other explanations for this behaviour?
Thanks,
Nang.
If you do not get any exceptions, then:
When a table has triggers without SET NOCOUNT ON, the operation (INSERT / UPDATE / DELETE) may actually finish successfully, but the number of affected records may be returned as 0.
Depending on the transaction activity in the current session, other sessions may not see changes made by the current session. But the current session will see its own changes, and the number of affected records will (probably) not be 0.
So the exact answer may depend on your table DDL (plus triggers, if any) and on how you are checking the inserted rows.
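As an illustration of the trigger case, here is a hypothetical audit trigger; whether the client picks up the wrong count depends on the driver:

CREATE TRIGGER trg_MyTable_Ins ON MyTable AFTER INSERT AS
BEGIN
    -- Without SET NOCOUNT ON here, the audit INSERT below emits its own
    -- rows-affected message, which some clients (ADO among them) can read
    -- instead of the count from the caller's INSERT.
    SET NOCOUNT ON;
    INSERT INTO AuditLog (EventTime)
    SELECT GETDATE() FROM inserted;
END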
It looks like your insert thread silently lost the connection and is not checking it in order to auto-reconnect when needed, but keeps queuing the inserts without actually sending them.
I would isolate this code in a small standalone app to debug it, and see how it behaves when you voluntarily disconnect the network and then reconnect it.
I would not be surprised if you found either a "swallowed" exception or some code omitting to check for success/failure.
Hope it helps...
If the values you're trying to insert violate
a CHECK constraint,
a FOREIGN KEY relationship,
a NOT NULL constraint,
a UNIQUE constraint,
or any other constraint, then the row(s) will not be inserted (see the sketch below).
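For example (a hypothetical table; note that each failing INSERT raises an error rather than silently returning 0, which matches what the asker expected):

CREATE TABLE dbo.Demo (
    Id  int NOT NULL PRIMARY KEY,
    Qty int CHECK (Qty >= 0)
);
INSERT INTO dbo.Demo (Id, Qty) VALUES (1, -5);    -- violates the CHECK constraint
INSERT INTO dbo.Demo (Id, Qty) VALUES (NULL, 1);  -- violates NOT NULL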
Do you use transactions? Maybe your application has autocommit disabled? Some drivers do not commit data if there was an error in the transaction.

Who deleted my SQL table rows?

I have a SQL table from which data is being deleted. Nobody knows how the data is being deleted. I added a trigger and I now know the time, but no jobs are running that would delete the data. I also added a trigger that fires whenever rows are deleted from this table; it inserts the deleted rows and the SYSTEM_USER into a log table, but that doesn't help much. Is there anything better I can do to find out who and what is deleting the data? Would it be possible to get the server ID or something? Thanks for any advice.
Sorry: I am using SQL Server 2000.
Update 1: It's important to find out how the data gets deleted; preferably I would like to know the DTS package or SQL statement that is being executed.
Just a guess, but do you have cascading deletes on one of the parent tables (those referenced by foreign keys)? If so, when you delete the parent row, the entries in the child table are also removed.
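A quick sketch of what that looks like (hypothetical tables; note that no DELETE is ever issued against Child directly):

CREATE TABLE Parent (Id int PRIMARY KEY);
CREATE TABLE Child (
    Id       int PRIMARY KEY,
    ParentId int REFERENCES Parent (Id) ON DELETE CASCADE
);

INSERT INTO Parent VALUES (1);
INSERT INTO Child VALUES (10, 1);

DELETE FROM Parent WHERE Id = 1;  -- silently removes the Child row as well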
If the recovery model is set to "Full", you can check the logs.
Beyond that, remove any DELETE grants on the table. If it still happens, whoever is doing it has root/dbo access, so change the password...
Try logging all transactions for the time being, even if it hurts performance. MS offers SQL Server Profiler, including for Express editions if needed. With it, you should be able to log transactions. As an alternative to profilers, you can use the trace_build stored procedure to dump activity into reference files, then just Ctrl-F for any instance of the word "delete" or other similar keywords. For more info, see this SO page:
Logging ALL Queries on a SQL Server 2008 Express Database?
Also, and this may sound stupid, but investigate the possibility that what you are seeing is not deletes. Instead, investigate whether records are simply being "updated", "replaced if already exists", "upserted", or whatever you like to call it. In MySQL, this is the INSERT ... ON DUPLICATE KEY UPDATE statement. I'm not sure of the MSSQL variant.
What recovery model is your database in? If it is Full, Red Gate's Log Rescue is free and works against SQL 2000, which might help you retrieve the deleted data. The overview video does appear to show a user column.
Or you could roll your own query against fn_dblog.
Change all your passwords. Give as few people delete access as possible.