How to look at past DML on your table?

A friend asked me if there is a way to see past DML statements, and I wasn't really sure how to go about answering that question. What he wants to see is the last set of INSERT statements, so that could be more than one record. At first I just said to check the latest identity value, but then he asked what happens if several inserts were performed at the same time. Can you help me out? Is there a DMV I should use that I just don't know about? Thanks.

If you did not prepare for this question, then there is no built-in way to get at that information. However, you could use a third-party log reader tool to recover (all of) the last statements that were executed against the database. This requires the database to be in the FULL recovery model, and you could potentially go back as far as your log backups reach with this method.
If you want to be prepared for that question in the future, you have several options.
The most obvious one is Change Data Capture. You could also write a trigger yourself that records data changes (a minimal sketch follows below).
You could also run a trace capturing the SQL:BatchStarting event.
Finally, you could use a third-party network sniffer/logger to capture all statements sent to the server (this, however, requires that connection encryption is not used).
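For the roll-your-own trigger option, a minimal sketch might look like the following; the Orders table and OrderId column are invented for illustration, so adjust them to your own schema. Change Data Capture, by contrast, needs no custom code on editions that support it and is enabled per database and per table with sys.sp_cdc_enable_db and sys.sp_cdc_enable_table.

    -- Hypothetical audit table: one row per inserted Orders row.
    CREATE TABLE dbo.Orders_InsertAudit
    (
        AuditId    int IDENTITY(1,1) PRIMARY KEY,
        OrderId    int       NOT NULL,
        InsertedAt datetime2 NOT NULL DEFAULT SYSUTCDATETIME(),
        InsertedBy sysname   NOT NULL DEFAULT SUSER_SNAME()
    );
    GO

    CREATE TRIGGER trg_Orders_InsertAudit
    ON dbo.Orders
    AFTER INSERT
    AS
    BEGIN
        SET NOCOUNT ON;
        -- Record the key of every row in the inserted pseudo-table.
        INSERT INTO dbo.Orders_InsertAudit (OrderId)
        SELECT i.OrderId FROM inserted AS i;
    END;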

Related

How to Avoid SQL Server hangs due to uncommitted transactions caused by poor SW design

The problem: a .NET application is trying to save many records to SQL Server. BeginTrans is used, and right before the commit a warning message is shown asking the end user to confirm whether to save the data or not. The user simply left the computer and walked away!
Now all other users are unable to access the locked records; sometimes almost the entire system is affected. Almost all transactions update the same records. The confirmation message must be shown after the data is updated but before the commit, so that the user can still roll back. What could be the best solution?
If no solution is found, the last thing I might do is roll back, show the confirmation message, and if the user accepts, save the data again without any confirmation message (which I don't think is the right way).
My question is: what is the best I can do? Any ideas?
This sounds like a WinForms app? It also sounds like you want to confirm the intent of the user's action. Are you in a position to only start the transaction once they confirm they intend to save the data?
Ideally, you should
Prompt the user via [OK | Cancel]
Perform the database transaction
If the transaction fails (deadlock or any other error), inform the user that the save operation failed
In other words, the update of records should be a synchronous call.
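On the database side, the point is that once the user has clicked OK, the whole unit of work runs as a single short batch, so locks are never held while waiting on the UI. A minimal T-SQL sketch, with invented table, column, and parameter names:

    -- Runs only after the user has already confirmed; nothing here waits on the UI.
    BEGIN TRY
        BEGIN TRANSACTION;

        UPDATE dbo.Inventory
        SET    QuantityOnHand = QuantityOnHand - @Quantity
        WHERE  ItemId = @ItemId;

        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION;
        THROW;  -- surface the failure (e.g. deadlock error 1205) so the app can inform the user
    END CATCH;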
EDIT: after understanding the specifics mentioned in the comment below, I would recommend some form of server-side task queue that all of these requests flow through. Your client would submit a request to the server, and the server application would then be the software responsible for updating records in the database. The clients would make their requests to this application, and the requests would be processed in the order they were received. I don't have much experience with inventory-tracking software, but I understand it needs to be absolutely correct, so this is just a rough idea; I'm sure someone with more experience in inventory tracking will have a better pattern. The proposed pattern creates a large bottleneck at the server responsible for updating the records; for example, it would be terrible for someone like Amazon.
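If you do go the queue route, one rough way to sketch the ordered, single-consumer part in SQL Server itself is a request table that a worker drains in order (purely illustrative names; a real design needs status columns, retries, and error handling):

    CREATE TABLE dbo.UpdateRequestQueue
    (
        RequestId  bigint IDENTITY(1,1) PRIMARY KEY,
        Payload    nvarchar(max) NOT NULL,
        EnqueuedAt datetime2     NOT NULL DEFAULT SYSUTCDATETIME()
    );
    GO

    -- Worker: atomically take the oldest request. READPAST skips rows another
    -- worker has already locked, so pollers never block one another.
    WITH NextRequest AS
    (
        SELECT TOP (1) *
        FROM dbo.UpdateRequestQueue WITH (ROWLOCK, READPAST, UPDLOCK)
        ORDER BY RequestId
    )
    DELETE FROM NextRequest
    OUTPUT deleted.RequestId, deleted.Payload;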

Should the first step of an SSIS solution be to check connections?

In SSIS 2012, how would I check whether a connection and credentials are working?
In a scenario where you must pull and aggregate data from different sources and load it into some target databases, should the whole project be preceded by a special package that checks whether the connections are up and running?
If so, how would I check whether the connections to which the package needs access are up and ready to be connected to and that the credentials are right?
I understand that if I omit the step of checking the connections, the whole thing will just error out, but I feel like the best practice would be to explicitly check whether the connections and credentials are working.
Thank you very much for your kind attention.
Your check would be, as Siyual suggested, a SELECT 1. But the bigger question here is: do all the connections need to be active for you to get the target data, or would you still have some data to load even if some connections failed? That obviously depends on the nature of your requirement. This is where staging tables, the concept of a delta, and using the Completion precedence constraint instead of the Success constraint help. For example, and for simplicity's sake, let's say I am pulling sales data from three different point-of-sale locations for my company (Brazil, US and India). I store the dates I last pulled from in a table and update them when an extraction is successful. In this case, if one extraction fails, I would still want to move on to the next location and try to pull its data.
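A hedged sketch of that watermark idea (table and column names are invented): keep one row per source, read it before each extract, and only advance it when that source's extract succeeds. The connection check itself can simply be an Execute SQL Task running SELECT 1 against each connection manager.

    -- One row per source location; LastExtractedAt is the watermark.
    CREATE TABLE dbo.ExtractWatermark
    (
        SourceName      varchar(50) PRIMARY KEY,  -- e.g. 'Brazil', 'US', 'India'
        LastExtractedAt datetime2   NOT NULL
    );

    -- At the start of each source's extract, read its watermark.
    SELECT LastExtractedAt
    FROM   dbo.ExtractWatermark
    WHERE  SourceName = 'Brazil';

    -- Only after that source's load succeeds, advance the watermark.
    UPDATE dbo.ExtractWatermark
    SET    LastExtractedAt = @ExtractRunStartedAt
    WHERE  SourceName = 'Brazil';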

Oracle 11g: How to track origin of a dataset?

I am currently analyzing a database and happened to find two datasets whose origin is unknown to me. The problem is that they shouldn't even be in there...
I checked all insertion scripts and I'm sure that these datasets are not inserted by them. They are also not in the original dump file.
The only explanation I can come up with at the moment is that they are inserted via some procedure call in the scripts.
Is there any way for me to track down the origin of a single dataset?
best regards,
daZza
Yes, you can.
Assuming your database is running in ARCHIVELOG mode, you can use LogMiner to find which transactions did the inserts; see "Using LogMiner to Analyze Redo Log Files" in the Oracle documentation. It will take some serious time, but it might be worth it.
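A rough outline of a LogMiner session, assuming the archived logs covering the suspect time window still exist; the log file path, schema, and table names are placeholders:

    BEGIN
      -- Register the redo/archive log file(s) that cover the period in question.
      DBMS_LOGMNR.ADD_LOGFILE(
        LOGFILENAME => '/u01/arch/arch_1_1234.arc',
        OPTIONS     => DBMS_LOGMNR.NEW);
      -- Start mining, translating object IDs via the online catalog.
      DBMS_LOGMNR.START_LOGMNR(
        OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
    END;
    /

    -- Look for the inserts into the table in question.
    SELECT timestamp, username, session_info, sql_redo
    FROM   v$logmnr_contents
    WHERE  operation  = 'INSERT'
    AND    seg_owner  = 'MYSCHEMA'
    AND    table_name = 'MYTABLE';

    BEGIN
      DBMS_LOGMNR.END_LOGMNR;
    END;
    /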

Question about frequency of updating Access

I have a table in an Access database.
This Access database is used on a regular basis, basically from 9 to 5.
Someone else has a copy of this exact table. Sometimes records are added, sometimes deleted, and sometimes data within the records is updated.
I need to update the Access database table from the offsite table every hour or so. What is the best algorithm for updating the data? There are about 5,000 records.
Would it severely lock up the table for a few seconds every hour?
I would also like to publicly apologize for my rude comment to David Fenton.
My impression is that this question ties together pieces you've been exploring with your previous questions:
a file "listener" to detect the presence of a new file and do something with it when found
list files with some extension in a folder
DoCmd.TransferText to pull file data into your database
Insert, Update, Delete records in a table based on an imported set of records
Maybe it's time to give us a more detailed picture of what you're dealing with.
Tony asked if both sites are on the same WAN (Wide Area Network). You replied they are on Windows. Elsewhere you said you're using a network. Please tell us about the network.
I'm still unsure whether you need a one-way or two-way data exchange. You've talked about importing changes from the remote table into the local master table. Do you need to do the same type of operation at the remote site: import changes made to the table at the master site?
Tell us what needs to happen regarding the issue James raised. Can local and remote users ever edit the same record? If they can, how will you resolve the conflict? Similarly, what should happen if a remote user updates a record and a local user deletes their copy of that record?
Based on what you've told us so far, this sounds like a real challenge for Access, made more challenging by the rate of record changes (5,000 per hour). I like the outline Kevin suggested. However your challenge will be more complicated since you also need to account for record deletions at both sites.
It seems like you may have to create something which duplicates Access' Replication feature. Maybe you should look at the Jet Replication Wiki to see if you can modify your design to take advantage of Replication. I can't help you there, and unfortunately you appear to have frustrated David Fenton who is a leading authority on Jet Replication.
If a few seconds of performance is critical, you should rather move to a better database engine (such as SQLite, MySQL, or MS SQL Server). If you want a single file, then SQLite is the best fit for you. All of these use per-record locks, so you can read and write simultaneously.
If you stay with Access, you will probably have to implement a timer so that only a few records are updated at a time.
Before you do anything else you need to establish the "rules" as far as collisions go.
If a row in the local copy is updated and the same row in the remote copy is updated, which one is the "correct" version? Ditto for deletions; inserts are even more of a pain, as you can have the "same" set of values but perhaps a different key.
After you have worked out how to handle each of these cases you can then go on to thinking about the implementation.
As other posters have suggested, the way to avoid these issues completely is to switch to SQL Server or any other "proper" database that can be updated over the network by all users and where concurrency issues are handled by the DBMS when the updates are applied.
Other users have already suggested switching to a server-based database, i.e. SQL Server etc. I would echo this and say it is the best way to go. However, if you are stuck with Access and have no choice, then I would suggest you add a field (with an index) along the lines of "Last Updated". You could then export all records that have been modified within a particular time frame, export that file as a CSV, ship it over to the remote site, and import it into the "master" Access database. With a bit of scripting you could automate this process.
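A minimal sketch of that "Last Updated" approach in Access SQL, with invented table and field names and a placeholder date literal. The first two statements add the indexed audit column; the SELECT is what the hourly export would run, with the date literal replaced by the last successful run time your sync script recorded. The timestamp itself has to be maintained by your forms/VBA or by the update queries, since Jet has no ON UPDATE trigger.

    ALTER TABLE Inventory ADD COLUMN LastUpdated DATETIME;
    CREATE INDEX idxLastUpdated ON Inventory (LastUpdated);

    SELECT *
    FROM   Inventory
    WHERE  LastUpdated > #2024-01-01 12:00:00#;

Note that this only captures inserts and updates; deletions still need their own handling, as the other answers point out.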

Stored Procedure failing on a specific user

I have a stored procedure that is constantly failing with the error message "Timeout expired" for one specific user.
All other users are able to invoke the SP just fine, and even I am able to invoke it normally using Query Analyzer; it finishes in just 10 seconds. For the user in question, however, the logs show that the ASP page always hangs for about 5 minutes and then aborts with a timeout.
I invoke it from the ASP page like so: "EXEC SP_TV_GET_CLOSED_BANKS_BY_USERS '006111'"
Does anybody know how to diagnose the problem? I have already looked for deadlocks in the DB but didn't find any.
Thanks,
Some thoughts...
Reading the comments suggests that parameter sniffing is causing the issue.
For the other users, the cached plan is good enough for the parameter that they send
For this user, the cached plan is probably wrong
This could happen if this user has far more rows than other users, or has rows in another table (so a different table/index seek/scan would be better)
To test for parameter sniffing:
use WITH RECOMPILE (temporarily) on the call or in the procedure definition. This could be slow for a complex query
Rebuild the indexes (or just update statistics) after the timeout and try again. This invalidates all cached plans
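A couple of quick ways to run those tests, using the procedure name from the question; the table name in the last statement is a placeholder, since the procedure's actual tables are unknown:

    -- Re-run the slow call with a fresh plan; if it is suddenly fast,
    -- parameter sniffing is the likely culprit.
    EXEC SP_TV_GET_CLOSED_BANKS_BY_USERS '006111' WITH RECOMPILE;

    -- Or invalidate the cached plan for just this procedure and try again.
    EXEC sp_recompile 'SP_TV_GET_CLOSED_BANKS_BY_USERS';

    -- Or refresh statistics on the main table involved.
    UPDATE STATISTICS dbo.ClosedBanks;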
To fix:
Mask the parameter
DECLARE @MaskedParam varchar(10)
SELECT @MaskedParam = @SignatureParam
SELECT ... WHERE column = @MaskedParam
Just google "Parameter sniffing" and "Parameter masking"
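Put together, the masking fix might look like this inside the procedure; the parameter size, table, and column names are invented, and only the local-variable copy is the point:

    CREATE PROCEDURE SP_TV_GET_CLOSED_BANKS_BY_USERS
        @SignatureParam varchar(10)
    AS
    BEGIN
        SET NOCOUNT ON;

        -- Copy the parameter into a local variable so the optimizer cannot
        -- "sniff" the caller's value when it compiles the plan.
        DECLARE @MaskedParam varchar(10);
        SELECT @MaskedParam = @SignatureParam;

        SELECT *
        FROM   dbo.ClosedBanks           -- placeholder table
        WHERE  UserCode = @MaskedParam;  -- placeholder column
    END;

On SQL Server 2008 and later, OPTION (OPTIMIZE FOR UNKNOWN) on the statement achieves much the same effect without the extra variable.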
I think to answer your question, we may need a bit more information.
For example, are you using Active directory to authenticate your users? Have you used the SQL profiler to investigate? It sounds like it could be an auth issue where SQL Server is having problems authenticating this particular user.
Sounds to me like a deadlock issue: if information is being written at the same time it is being read, you will deadlock, because the transaction has not yet been committed.
Also make sure this user has execute rights and read rights in SQL Server (a quick way to check that is sketched below).
Jeff wrote a great post about his experience with that on Stack Overflow:
http://www.codinghorror.com/blog/archives/001166.html
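To rule the permissions point in or out quickly, you can check and grant rights explicitly; the user name here is a placeholder:

    -- See what the user can actually do on the procedure (SQL Server 2005+).
    EXECUTE AS USER = 'problem_user';
    SELECT * FROM sys.fn_my_permissions('SP_TV_GET_CLOSED_BANKS_BY_USERS', 'OBJECT');
    REVERT;

    -- Grant execute on the procedure if it is missing.
    GRANT EXECUTE ON SP_TV_GET_CLOSED_BANKS_BY_USERS TO problem_user;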
Couple of things to check:
Does this happen only on that specific user's machine? Can he try it from another machine? It might be a client configuration problem.
Can you capture the actual string that this specific user runs and run it from an ASP page yourself? It might be that this user executes the SP in a way that generates either a loop or a massive load of data.
Finally, if you're using an intra-organization application, it might be that your particular user's permissions are different than the others. You can compare them at the Active Directory level.
Now, I can recommend a commercial product that will definitely solve your issue: it records end-to-end transactions and analyzes particular failures. But I do not want to advertise in this forum; if you'd like, drop me a note and I'll explain more.
Well, I could suggest that you use SQL Server Profiler, open a new trace, invoke your stored procedure from your ASP page, and see what is happening. While this may not solve your problem, it can surely provide a starting point for you to carry out some investigation of your own.
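If Profiler alone does not show it, you can also look at the hung session directly while the ASP call is stuck; these older-style tools work even on Query Analyzer era versions, and the SPID value below is a placeholder taken from the sp_who2 output:

    -- While the ASP call is hanging, list sessions and find the suspect SPID.
    EXEC sp_who2;

    -- Then inspect that session (replace 57 with the real SPID):
    DBCC INPUTBUFFER (57);   -- the last statement the session sent

    SELECT blocked, waittype, waittime, lastwaittype
    FROM   master..sysprocesses
    WHERE  spid = 57;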