Roll back an entire database like I can with transactions - SQL

I am doing some testing on a dev database. I would like an easy way to roll back to a known state; however, due to the size of the database, restoring from a backup takes 5 minutes to perform.
The work I want to "roll back" is distributed over many connections. I cannot modify the SQL for some of the connections because they come from an app whose source I do not have access to (so I can't just wrap my connections in a giant BEGIN TRANSACTION).
Is there something lighter weight than restoring from a backup that I don't need to explicitly enable like BEGIN TRANSACTION, and that also rolls back work done by connections that have opened, performed their work, and closed after the point to roll back to was created?

You can create a database snapshot at the beginning and revert to it at the end. You do, however, have to have all connections closed, as reverting is very similar to BACKUP/RESTORE, though it is certainly more lightweight. One way to do it is to kill all the connections before reverting. If your application can reconnect to the database after a connection failure, this should cover what you want to achieve.
-- To create the snapshot
CREATE DATABASE SrcDbSnapshot
ON ( NAME = LogicalFileNameFromSrcDB,
     FILENAME = 'E:\SrcDB.ss' )
AS SNAPSHOT OF SrcDB;
GO
-- To roll back: kill all connections and revert to the snapshot
ALTER DATABASE [SrcDB] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
RESTORE DATABASE [SrcDB] FROM DATABASE_SNAPSHOT = 'SrcDbSnapshot';
ALTER DATABASE [SrcDB] SET MULTI_USER;  -- allow connections again so the app can reconnect
GO
-- To remove the snapshot
DROP DATABASE SrcDbSnapshot;
GO

Related

How to undo changes in PostgreSQL

Is there any way to undo an update in PostgreSQL?
I have used this query to update a column
UPDATE dashboard.inventory
SET address = 'adf'
WHERE address @@ to_tsquery('makati')
But I made a huge, stupid mistake: it was the wrong column.
If you are inside a transaction block you can use ROLLBACK.
If you have already committed, or ran it in autocommit mode, then no.
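For future reference, a minimal sketch of running such an update inside an explicit transaction block, so it can be undone if it hits the wrong rows:
BEGIN;
UPDATE dashboard.inventory
SET address = 'adf'
WHERE address @@ to_tsquery('makati');
-- Inspect the affected rows, then either undo or keep the change:
ROLLBACK;  -- undo
-- COMMIT; -- or make it permanent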
The data is perhaps still in your database, just not visible. But autovacuum may soon clear it out, if it hasn't already. To best preserve your options, immediately stop your database in immediate mode and take a complete file-level backup. You could then hire a specialist firm to recover the data from that backup, if you decide to do that.
If you use WAL archiving, you could set up a copy of the database using point-in-time recovery, restored to just before the error, then use that copy to extract the lost column to a file, and then use that file to repopulate the column in your real database.
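A rough sketch of that last step, assuming the table has a primary key column named id (an assumption, not given in the question) and using a hypothetical file path:
-- On the point-in-time-recovery copy, export the old values:
COPY (SELECT id, address FROM dashboard.inventory)
TO '/tmp/inventory_address.csv' WITH (FORMAT csv, HEADER);
-- On the live database, load the file into a staging table and repopulate:
CREATE TEMP TABLE inventory_address_backup (id int, address text);
COPY inventory_address_backup FROM '/tmp/inventory_address.csv' WITH (FORMAT csv, HEADER);
UPDATE dashboard.inventory i
SET address = b.address
FROM inventory_address_backup b
WHERE i.id = b.id;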

SQL Transaction Logs getting bigger

I have two databases. I have created a process that first deletes all the data from Database2 with TRUNCATE and then copies all the data from Database1 to Database2 with INSERT INTO.
This process runs every two days. The size of Database1 is around 1 GB.
This was all working well, but suddenly I started running out of space. My C: drive got full, and the reason I found was the transaction log of Database2. Every time I ran the above process (which runs from an MVC website application), the transaction log kept growing and growing.
I can afford to lose the data in Database2, so I don't need its transaction log.
Is there any solution for this?
One option is to shrink your log file.
USE YourDatabaseName;
GO
-- Truncate the log by changing the database recovery model to SIMPLE.
ALTER DATABASE YourDatabaseName
SET RECOVERY SIMPLE;
GO
-- Shrink the truncated log file to 1 MB.
DBCC SHRINKFILE (YourDatabaseName_Log, 1);
GO
-- Reset the database recovery model.
ALTER DATABASE YourDatabaseName
SET RECOVERY FULL;
GO
Further reading: How do you clear the SQL Server transaction log?
Set the recovery model of Database2 to SIMPLE:
https://technet.microsoft.com/en-us/library/ms175987(v=sql.105).aspx
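Assuming the second database really is named Database2 (substitute your actual name), this is a single statement:
ALTER DATABASE Database2 SET RECOVERY SIMPLE;
Under the SIMPLE recovery model the log is truncated automatically at checkpoints, so it no longer grows indefinitely between log backups, but point-in-time restores of Database2 are then no longer possible.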

How to rollback data in a SQL Server database?

I unfortunately deleted data from a database by using the following query in SQL Server:
exec usp_delete_cascade "someTable", "id='somexyz'"
Can anyone please tell me how to get back my data?
Is this possible?
There are two kinds of transactions: implicit and explicit.
An implicit transaction is used every time you run a DML statement (in your case, a delete). This transaction is not handled by the user, so it is not true that your query did not run inside a transaction.
An explicit transaction is defined by the user (with BEGIN TRANSACTION). When you do not start a transaction yourself, there are only implicit transactions, which are auto-committed when the statement succeeds.
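For illustration, a minimal sketch of an explicit transaction around the same call (procedure and parameters taken from the question), which would have left a way to undo it:
BEGIN TRANSACTION;
EXEC usp_delete_cascade 'someTable', 'id=''somexyz''';
-- Check what was affected, then either undo or keep it:
ROLLBACK TRANSACTION;   -- undo the delete
-- COMMIT TRANSACTION;  -- or make it permanent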
There are a few ways to recover data, but never with 100% success and never without work. You have to use an external program such as SysTools SQL Recovery, ApexSQL Recover or Veeam. The level of recovery depends on how your storage is used and on your server configuration.
The only 100% reliable approach is prevention (backups, change tracking, etc.).
You can try to recover with this ApexSQL tool, but you should also think about backups and other measures to avoid this kind of problem in the future.
http://www.apexsql.com/sql_tools_recover.aspx
Obviously it is a third-party tool, and you would have to pay to use it.
It depends on your server configuration, but by default SQL Server runs queries in autocommit mode rather than leaving a transaction open. So if you did not start a transaction, or the transaction was started but already committed, a rollback is impossible.
Other ways to restore the data: if your database recovery model is set to FULL and you have full (and possibly differential) backups, you're lucky. If not, the data is gone forever.
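If you do have such backups, a point-in-time restore to just before the delete looks roughly like this (a sketch with hypothetical database name, file paths and timestamp):
-- Restore the last full backup without recovering yet
RESTORE DATABASE YourDb FROM DISK = 'D:\Backups\YourDb_full.bak' WITH NORECOVERY;
-- Roll the log forward, stopping just before the accidental delete
RESTORE LOG YourDb FROM DISK = 'D:\Backups\YourDb_log.trn'
    WITH STOPAT = '2016-05-01 10:00:00', RECOVERY;
Restoring to a copy under a different name and copying the rows back is usually safer than overwriting the live database.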

The transaction log for the database is full

I have a long running process that holds open a transaction for the full duration.
I have no control over the way this is executed.
Because a transaction is held open for the full duration, when the transaction log fills, SQL Server cannot increase the size of the log file.
So the process fails with the error "The transaction log for database 'xxx' is full".
I have attempted to prevent this by increasing the size of the transaction log file in the database properties, but I get the same error.
Not sure what I should try next. The process runs for several hours so it's not easy to play trial and error.
Any ideas?
If anyone is interested, the process is an organisation import in Microsoft Dynamics CRM 4.0.
There is plenty of disk space, we have the log in simple logging mode and have backed up the log prior to kicking off the process.
-=-=-=-=- UPDATE -=-=-=-=-
Thanks all for the comments so far. The following is what led me to believe that the log would not grow due to the open transaction:
I am getting the following error...
Import Organization (Name=xxx, Id=560d04e7-98ed-e211-9759-0050569d6d39) failed with Exception:
System.Data.SqlClient.SqlException: The transaction log for database 'xxx' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases
So following that advice I went to "log_reuse_wait_desc column in sys.databases" and it held the value "ACTIVE_TRANSACTION".
According to Microsoft:
http://msdn.microsoft.com/en-us/library/ms345414(v=sql.105).aspx
That means the following:
A transaction is active (all recovery models).
• A long-running transaction might exist at the start of the log backup. In this case, freeing the space might require another log backup. For more information, see "Long-Running Active Transactions," later in this topic.
• A transaction is deferred (SQL Server 2005 Enterprise Edition and later versions only). A deferred transaction is effectively an active transaction whose rollback is blocked because of some unavailable resource. For information about the causes of deferred transactions and how to move them out of the deferred state, see Deferred Transactions.
Have I misunderstood something?
-=-=-=- UPDATE 2 -=-=-=-
Just kicked off the process with initial log file size set to 30GB. This will take a couple of hours to complete.
-=-=-=- Final UPDATE -=-=-=-
The issue was actually caused by the log file consuming all available disk space. In the last attempt I freed up 120GB and it still used all of it and ultimately failed.
I didn't realise this was happening previously because when the process was running overnight, it was rolling back on failure. This time I was able to check the log file size before the rollback.
Thanks all for your input.
To fix this problem, change the recovery model to Simple and then shrink the log file:
1. Database Properties > Options > Recovery Model > Simple
2. Database Tasks > Shrink > Files > Log
Done.
Then check your database log file size at
Database Properties > Files > Database Files > Path
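The same information is available from T-SQL if you prefer (a sketch; size is reported in 8 KB pages, converted to MB here):
USE YourDatabaseName;
SELECT name, physical_name, size * 8 / 1024 AS size_mb
FROM sys.database_files
WHERE type_desc = 'LOG';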
To check the full SQL Server error log, open Log File Viewer at
SSMS > Database > Management > SQL Server Logs > Current
I had this error once, and it ended up being the server's hard drive that ran out of disk space.
Do you have Enable Autogrowth and Unrestricted File Growth both enabled for the log file? You can edit these via SSMS in "Database Properties > Files"
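If they are not, the same settings can be applied from T-SQL (a sketch with hypothetical database and logical log file names):
ALTER DATABASE YourDb
MODIFY FILE (NAME = YourDb_log, FILEGROWTH = 256MB, MAXSIZE = UNLIMITED);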
Is this a one-time script, or a regularly occurring job?
In the past, for special projects that temporarily required lots of space for the log file, I created a second log file and made it huge. Once the project was complete, we removed the extra log file.
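A sketch of that approach, with hypothetical names, sizes and paths:
-- Add a large temporary log file on a drive with free space
ALTER DATABASE YourDb
ADD LOG FILE (NAME = YourDb_log2, FILENAME = 'E:\SQLLogs\YourDb_log2.ldf', SIZE = 50GB);
-- Once the project is finished and the extra file no longer holds active log records:
ALTER DATABASE YourDb REMOVE FILE YourDb_log2;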
This is an old school approach, but if you're performing an iterative update or insert operation in SQL, something that runs for a long time, it's a good idea to periodically (programmatically) call "checkpoint". Calling "checkpoint" causes SQL to write to disk all of those memory-only changes (dirty pages, they're called) and items stored in the transaction log. This has the effect of cleaning out your transaction log periodically, thus preventing problems like the one described.
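A rough sketch of that pattern, assuming a hypothetical work table and batch size:
-- Process rows in batches so the log can be cleared between batches
WHILE EXISTS (SELECT 1 FROM dbo.WorkTable WHERE Processed = 0)
BEGIN
    UPDATE TOP (10000) dbo.WorkTable
    SET SomeColumn = 'new value', Processed = 1
    WHERE Processed = 0;

    CHECKPOINT;  -- flush dirty pages and (under SIMPLE recovery) allow log space to be reused
END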
Try this:
USE YourDB;
GO
-- Truncate the log by changing the database recovery model to SIMPLE.
ALTER DATABASE YourDB
SET RECOVERY SIMPLE;
GO
-- Shrink the truncated log file to 50 MB.
DBCC SHRINKFILE (YourDB_log, 50);
GO
-- Reset the database recovery model.
ALTER DATABASE YourDB
SET RECOVERY FULL;
GO
I hope it helps.
The following will truncate the log.
USE [yourdbname]
GO
-- TRUNCATE TRANSACTION LOG --
-- Note: BACKUP LOG ... WITH TRUNCATE_ONLY only works on SQL Server 2005 and earlier;
-- it was removed in SQL Server 2008. On later versions, switch the database to the
-- SIMPLE recovery model instead (see the answer above).
DBCC SHRINKFILE(yourdbname_log, 1)
BACKUP LOG yourdbname WITH TRUNCATE_ONLY
DBCC SHRINKFILE(yourdbname_log, 1)
GO
If your database recovery model is FULL and you don't have a log backup maintenance plan, you will get this error because the transaction log becomes full due to LOG_BACKUP.
This prevents any further action on the database (e.g. shrink), and the SQL Server Database Engine raises a 9002 error.
To overcome this, I advise you to read the article "The transaction log for database 'SharePoint_Config' is full due to LOG_BACKUP", which shows detailed steps to solve the issue.
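In short, when log_reuse_wait_desc shows LOG_BACKUP, taking a log backup (and then scheduling them regularly) is what frees the space, for example (hypothetical logical file name and backup path):
BACKUP LOG SharePoint_Config TO DISK = 'D:\Backups\SharePoint_Config_log.trn';
-- Optionally shrink the now-reusable log file afterwards:
DBCC SHRINKFILE (SharePoint_Config_log, 1024);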
I met the error "The transaction log for database '...' is full due to 'ACTIVE_TRANSACTION'" while deleting old rows from tables in my database to free disk space. I realized that this error occurred when the number of rows to be deleted was larger than 1,000,000 in my case. So instead of using one DELETE statement, I divided the delete task into DELETE TOP (1000000) statements.
For example:
Instead of using this statement:
DELETE FROM Vt30 WHERE Rt < DATEADD(YEAR, -1, GETDATE())
use the following statement repeatedly:
DELETE TOP(1000000) FROM Vt30 WHERE Rt < DATEADD(YEAR, -1, GETDATE())
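To run it repeatedly until nothing matches, a simple loop on @@ROWCOUNT works (a sketch):
DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (1000000) FROM Vt30 WHERE Rt < DATEADD(YEAR, -1, GETDATE());
    SET @rows = @@ROWCOUNT;  -- stop once a pass deletes nothing
END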
Adding to the answers above: if possible, you can also free up space on the server itself. If the drive is already full because of database growth, you can delete some unnecessary files from the server where your database lives. At least this temporarily fixes the issue and lets you query the database again.
My problem was solved by running limited deletes multiple times, like this:
Before:
DELETE FROM TableName WHERE Condition
After:
DELETE TOP(1000) FROM TableName WHERE Condition
In my case the answer was not about deleting rows from a table; it was tempdb space being consumed by an active transaction. This happens mostly when a MERGE (upsert) is being run, where we try to insert, update and delete in one statement. The options are to make sure the database is set to the SIMPLE recovery model and to increase the file size limit (or add another file). Each of these has its own advantages and disadvantages.
The other option you have is to split the MERGE (upsert) into separate operations: one that does the insert, and another that does the update and delete, as sketched below.
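A rough sketch of that split, with hypothetical source and target table names, so each statement stays a smaller transaction:
-- Update rows that already exist in the target
UPDATE t
SET t.Value = s.Value
FROM dbo.Target t
JOIN dbo.Source s ON s.Id = t.Id;

-- Insert rows that are missing from the target
INSERT INTO dbo.Target (Id, Value)
SELECT s.Id, s.Value
FROM dbo.Source s
WHERE NOT EXISTS (SELECT 1 FROM dbo.Target t WHERE t.Id = s.Id);

-- Delete target rows that no longer exist in the source
DELETE t
FROM dbo.Target t
WHERE NOT EXISTS (SELECT 1 FROM dbo.Source s WHERE s.Id = t.Id);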
Here's the code that worked for me. I faced this problem when replication was holding up the log, and used the following to fix it.
USE master;
-- Check why log space cannot be reused and whether CDC is enabled
SELECT
    name, log_reuse_wait, log_reuse_wait_desc, is_cdc_enabled
FROM
    sys.databases
WHERE
    name = 'XX_System';
-- Is the database published for replication?
SELECT DATABASEPROPERTYEX('XX_System', 'IsPublished');
USE XX_System;
-- Mark all pending replicated transactions as distributed, then remove replication
EXEC sp_repldone null, null, 0, 0, 1;
EXEC sp_removedbreplication XX_System;
-- Verify: any open transactions left, and how much log space is in use?
DBCC OPENTRAN;
DBCC SQLPERF(LOGSPACE);
EXEC sp_replcounters;
DBCC SQLPERF(LOGSPACE);
Solved: as per the error, the free space left on the drive was not sufficient.
To resolve it, either extend the drive space or move the MDF/LDF/log files to a drive with enough space.
Note: check the existing file paths as follows:
Database Properties -> select the Files option
Try this:
If possible, restart the MSSQLSERVER and SQLSERVERAGENT services.

Design a Lock for SQL Server to help relax the conflict between INSERT and SELECT

The SQL Server is SQL Azure; for most purposes it behaves like SQL Server 2008.
I have a table called TASK that constantly has new data coming in (new tasks) and data being removed (completed tasks).
For new data coming in, I use INSERT INTO ... SELECT ..., which most of the time takes very long, say dozens of minutes.
For old data going out, I first use SELECT (WITH NOLOCK) to get a task, UPDATE to let other threads know this task has started processing, then DELETE once it is finished.
Deadlocks sometimes happen on the SELECT, but most of the time on the UPDATE and DELETE.
This is not a time-critical task, so I can start processing the new data once all the INSERTs have finished. Is there any kind of lock that asks the SELECT not to pick up rows before the INSERT has finished? Or any other suggestion to avoid the conflict? I can redesign the table if needed.
Since SQL Server 2005, resolving a lock conflict like this is easier.
For the conflict:
1. You can use Service Broker.
2. You can change the isolation level.
Run dbcc useroptions; the last row shows that the default isolation level is read committed, and this is a session-level setting.
To reduce the conflict, you can switch the database to READ_COMMITTED_SNAPSHOT. SQL Server does not really use row-level versioning by default the way Oracle does, but with this option you can get a similar effect:
ALTER DATABASE DBName
SET READ_COMMITTED_SNAPSHOT ON;
Turning this option on requires that no other users are connected to the database (single-user access), and then you can test it.
With two sessions, A and B:
A: UPDATE table1 WITH (XLOCK) SET name = 'new' WHERE id = 1
B: can still update other rows and select all the data from the table.
My English is not very good, but I do know locking. In SQL Server there are three locking approaches:
1. Optimistic locking, using a timestamp (rowversion) column for concurrency control.
2. Pessimistic locking, forcing locks while the data is in use, with hints such as UPDLOCK and XLOCK.
3. Application locks, using the sp_getapplock procedure.
If you need a locking scheme for your system architecture, please email me: mjjjj2001#163.com
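For option 3, a minimal sketch of sp_getapplock guarding the batch insert so that readers taking the same lock name wait until it finishes (the lock name is hypothetical):
BEGIN TRANSACTION;
DECLARE @result int;
EXEC @result = sp_getapplock
     @Resource = 'TASK_batch_load',
     @LockMode = 'Exclusive',
     @LockOwner = 'Transaction',
     @LockTimeout = 60000;
IF @result >= 0
BEGIN
    -- Run the long INSERT INTO TASK ... SELECT ... here; sessions that take the same
    -- application lock before reading TASK will wait until this transaction commits.
    COMMIT TRANSACTION;  -- releases the application lock
END
ELSE
    ROLLBACK TRANSACTION;  -- could not obtain the lock within the timeout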
Consider using service broker if this is a processing queue.
There are a number of considerations that affect performance and locking. I surmise that the data is being updated and deleted in a separate session. Which transaction isolation level is in use for the insert session and the delete session?
Have the insert session and all its transactions committed and closed when the delete session runs? Are there multiple delete sessions running concurrently? It is very important to have an index on the columns you use to identify a task in the SELECT/UPDATE/DELETE statements, especially if you move to a higher isolation level such as REPEATABLE READ or SERIALIZABLE.
All of these issues could be solved by moving to Service Broker if it is appropriate.