SQL Server how to get last inserted data?

I ran a large query (~30 MB) which inserts data into ~20 tables. Accidentally, I selected the wrong database. There are only two tables with the same name, but with different columns. Now I want to make sure that no data was inserted into this database; I just don't know how.

If your table has a timestamp column, you can test for that.
Also, SQL Server keeps a log of all transactions.
See: https://web.archive.org/web/20080215075500/http://sqlserver2000.databases.aspfaq.com/how-do-i-recover-data-from-sql-server-s-log-files.html
This will show you how to examine the log to see if any inserts happened.
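For the timestamp check, a minimal sketch, assuming a hypothetical datetime column CreatedAt on a hypothetical table Orders; adjust the names and the time window to your schema and to when the query was run:

-- Any rows created since the accidental run indicate the inserts hit this database.
SELECT *
FROM Orders
WHERE CreatedAt >= DATEADD(HOUR, -1, GETDATE());  -- window covering the accidental run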

The best option is to use a trigger.
A trigger can capture the database name, the table name, and the full history of manipulated records.

Related

Create a trigger for making a single audit table in SQL Server

How do I create a trigger in Microsoft SQL Server to keep track of all deleted data from any table in the database in a single audit table? I do not want to write a trigger for each and every table in the database. There will be only one single audit table which keeps track of all the deleted data from any table.
For example:
If data is deleted from a Person table, get all the deleted data from that table and store it in XML format in an audit table.
Please check the solution I tried to describe at SQL Server Log Tool for Capturing Data Changes.
The solution is built on dynamically creating triggers on selected tables to capture data changes (after insert, update, delete) and store them in a general table.
A job then executes periodically and parses the data captured and stored in this general table. Once the data is parsed, it is easier for humans to see which table field was changed, along with its old and new values.
I hope this proposed solution helps you build your own.
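For illustration, a minimal sketch of one such generated trigger, assuming a hypothetical Person table and a single shared AuditLog table; the deleted rows are serialized to XML:

-- Hypothetical shared audit table: stores deleted rows from any table as XML.
CREATE TABLE AuditLog (
    AuditID   INT IDENTITY(1,1) PRIMARY KEY,
    TableName SYSNAME  NOT NULL,
    DeletedAt DATETIME NOT NULL DEFAULT GETDATE(),
    RowData   XML      NOT NULL
);
GO

-- A generator script would emit one trigger like this per audited table;
-- each trigger writes into the same AuditLog table.
CREATE TRIGGER trg_Person_AuditDelete ON Person
AFTER DELETE
AS
BEGIN
    SET NOCOUNT ON;
    IF NOT EXISTS (SELECT 1 FROM deleted) RETURN;  -- DELETE affected no rows

    INSERT INTO AuditLog (TableName, RowData)
    SELECT 'Person',
           (SELECT * FROM deleted FOR XML PATH('row'), ROOT('Person'), TYPE);
END;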

What is the best way to update two tables at the same time in SQL?

I have two tables: one is "Data" and the other is "AnalyzedData".
The first table, "Data", is used to store basic data, and the other table stores details about the analyzed data.
The "Data" table has three columns: "DataID", "DataName", and "AnalyzedDataID" (a foreign key to "AnalyzedData").
The "AnalyzedData" table has two columns: "AnalyzedDataID" and "AnalyzedDataName".
Initially we have data in the DataID and DataName columns. Later, after analyzing the data, we insert rows into the AnalyzedData table. So we need to update AnalyzedDataID in the Data table after inserting into the AnalyzedData table.
What's the best way to do this?
Assuming that you are using SQL Server 2008 or later, the OUTPUT clause can be pretty helpful in your scenario. You can insert data into your "AnalyzedData" table, which will generate an ID that can be captured with the help of OUTPUT. That ID can then be used to update your "Data" table.
Refer to Implementing the OUTPUT Clause in SQL Server 2008 for more details on how to use the OUTPUT clause.
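A minimal sketch of that pattern, using the table and column names from the question (the literal values are hypothetical):

DECLARE @NewIds TABLE (AnalyzedDataID INT);

-- Insert into AnalyzedData and capture the generated identity value.
INSERT INTO AnalyzedData (AnalyzedDataName)
OUTPUT inserted.AnalyzedDataID INTO @NewIds
VALUES ('sample analysis');

-- Link the captured ID back to the originating row in Data.
UPDATE Data
SET AnalyzedDataID = (SELECT TOP (1) AnalyzedDataID FROM @NewIds)
WHERE DataID = 42;  -- hypothetical row being analyzed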
One way is to start a transaction in your stored procedure. If any of the inserts/updates fails, you roll back the transaction; otherwise, you commit it. So the recipe is (a sketch follows the links below):
1. Take the parameters you need in the stored proc
2. Start a transaction
3. Insert/Update each table independently
4. If there is no error, COMMIT the transaction; otherwise, ROLLBACK
Some useful links:
Intro to Transactions
Best way to work with Transactions
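A minimal sketch of the recipe above as a stored procedure, reusing the table and column names from the question (the procedure name is hypothetical):

CREATE PROCEDURE dbo.AddAnalyzedData
    @DataID INT,
    @AnalyzedDataName NVARCHAR(100)
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRY
        BEGIN TRANSACTION;

        INSERT INTO AnalyzedData (AnalyzedDataName)
        VALUES (@AnalyzedDataName);

        -- Link the newly generated identity back to the source row.
        UPDATE Data
        SET AnalyzedDataID = SCOPE_IDENTITY()
        WHERE DataID = @DataID;

        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION;
        -- Re-raise so the caller sees the original error.
        DECLARE @Msg NVARCHAR(2048);
        SET @Msg = ERROR_MESSAGE();
        RAISERROR(@Msg, 16, 1);
    END CATCH
END;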

SQL stored procedure failing in large database

I have a SQL file in which I copy all contents from one table in a database to another table in another database.
Traditional INSERT statements are used to perform the operation. However, this table has 8.5 million records and the operation fails. The queries succeed with a smaller database.
Also, when I run a SELECT * query for that particular table, SQL Server Express shows an out-of-memory exception.
In particular, there is one table that has that many records, so I want to copy this table alone from the old DB to the new DB.
What are alternate ways to achieve this?
Is there any quick workaround by which we can avoid this exception and make the queries succeed?
Let me put it this way: why would this operation fail when there are a lot of records?
I don't know if this counts as a "traditional INSERT", but have you tried SELECT INTO?
http://www.w3schools.com/sql/sql_select_into.asp
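If a single statement still fails at this volume, one common workaround is to copy in batches so each transaction stays small. A rough sketch for SQL Server 2005 or later, with hypothetical names (OldDb.dbo.BigTable, NewDb.dbo.BigTable, integer key ID):

DECLARE @BatchSize INT;
SET @BatchSize = 50000;

WHILE 1 = 1
BEGIN
    -- Copy only rows not yet present in the target, one batch at a time.
    INSERT INTO NewDb.dbo.BigTable (ID, Payload)
    SELECT TOP (@BatchSize) s.ID, s.Payload
    FROM OldDb.dbo.BigTable AS s
    WHERE NOT EXISTS (SELECT 1 FROM NewDb.dbo.BigTable AS t WHERE t.ID = s.ID);

    IF @@ROWCOUNT = 0 BREAK;  -- nothing left to copy
END;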

How to figure out which record has been deleted in an efficient way?

I am working on an in-house ETL solution, from db1 (Oracle) to db2 (Sybase). We need to transfer data incrementally (change data capture?) into db2.
I have only read access to tables, so I can't create any table or trigger in Oracle db1.
The challenge I am facing is, how to detect record deletion in Oracle?
The solution I can think of is to use an additional standalone/embedded db (e.g. Derby, H2, etc.). This db contains two tables, namely old_data and new_data.
old_data contains the primary key field from the table of interest in Oracle.
Every time the ETL process runs, the new_data table is populated with the primary key field from the Oracle table. After that, I run the following SQL command to get the deleted rows:
SELECT old_data.id FROM old_data WHERE old_data.id NOT IN (SELECT new_data.id FROM new_data)
I think this will be a very expensive operation when the volume of data becomes very large. Do you have any better idea of doing this?
Thanks.
Which edition of Oracle? If you have Enterprise Edition, look into Oracle Streams.
You can grab the deletes out of the redo log rather than the database itself.
One approach you could take is using the Oracle flashback capability (if you're using version 9i or later):
http://forums.oracle.com/forums/thread.jspa?messageID=2608773
This will allow you to select from a prior database state.
If there may not always be deleted records, you could be more efficient by:
1. Storing a row count with each query iteration.
2. Comparing that row count to the previous row count.
If they are different, you know you have a delete and you have to compare the current set with the historical data set from flashback. If not, don't bother, and you've saved a lot of cycles.
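For reference, a rough sketch of the flashback comparison, with hypothetical names my_table and id:

-- Rows that existed an hour ago but are gone now were deleted in between.
SELECT old.id
FROM my_table AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' HOUR) old
WHERE NOT EXISTS (
    SELECT 1 FROM my_table cur WHERE cur.id = old.id
);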
A quick note on your solution if flashback isn't an option: I don't think your SELECT query is a big deal - it's all the inserts to populate those side tables that will really take a lot of time. Why not just run that query against the Sybase production server before doing your update?

How to figure out how many tables are affected in database after inserting a record?

A third-party app is storing data in a huge database (SQL Server 2000/2005). This database has more than 80 tables. How would I come to know how many tables are affected when the application stores a new record in the database? Is there something available with which I can retrieve the list of affected tables?
You might be able to tell by running a trace in SQL Profiler on the database - the SQL:StmtCompleted event is probably the one to monitor - i.e. if the application does a series of inserts into multiple tables, you should see them go through in Profiler.
You can use SQL Profiler to trace SQL queries, so you will see the sequence of calls caused by one button click in your application.
You can also use metadata or SQL tools to get the list of triggers, which could cause a lot of actions on a simple insert.
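For example, on SQL Server 2005 the trigger list can be read from the catalog views (on SQL Server 2000, query sysobjects with type = 'TR' instead):

-- List DML triggers and the tables they belong to.
SELECT t.name AS trigger_name,
       OBJECT_NAME(t.parent_id) AS table_name
FROM sys.triggers AS t
WHERE t.parent_class = 1;  -- table-level triggers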
If you have the SQL script used to store the new record (usually an INSERT statement, or another DML statement such as UPDATE or MERGE), then you can find out how many tables were affected by parsing that SQL script.
Take this SQL for example:
Insert into emp(fname, lname)
Values('john', 'reyes')
You can get a result like this:
sstinsert
emp(tetInsert)
Tables:
emp
Fields:
emp.fname
emp.lname
You can add triggers on tables that get fired on update - you could use this to update a log table that would report what was being updated.
See more here: http://www.devarticles.com/c/a/SQL-Server/Using-Triggers-In-MS-SQL-Server/
Profiler is the way to go, as others have said, especially with an unfamiliar third-party database.
I would also spend some time creating diagrams so you can see the foreign key relationships and understand how the database is put together. I usually know my database structure so well that I can tell from the fields being inserted which tables they affect, and I know what triggers are on my tables and what they affect. There is no substitute for taking the time to understand the database you support.