SQL Server 2008 log will not truncate

I consider myself a very experienced SQL person. But I'm failing to do these two things:
Reduce the size of the allocated log.
Truncate the log.
DBCC sqlperf(logspace)
returns:
Database Name Log Size (MB) Log Space Used (%) Status
ByBox 1964.25 30.0657 0
The following does not work with SQL 2008
DUMP TRANSACTION ByBox WITH TRUNCATE_ONLY
Running the following does nothing either
DBCC SHRINKFILE ('ByBox_1_Log' , 1)
DBCC shrinkdatabase(N'bybox')
I've tried backups. I've also tried setting the properties of the database - 'Recover Model' to both 'FULL' and 'SIMPLE' and a combination of all of the above. I also tried setting the compatibility to SQL Server 2005 (I use this setting as I want to match our production server) and SQL Server 2008.
No matter what I try, the log remains at 1964.25 MB, with 30% used, which is still growing.
I'd like the log to go back down near 0% and reduce the log file size to, say, 100 MB which is plenty. My database must hate me; it just ignores everything I ask it to do regarding the log.
One further note. The production database has quite a few replicated tables, which I turn off when I perform a restore on my development box by using the following:
-- Clear out pending replication stuff
exec sp_removedbreplication
go
EXEC sp_repldone @xactid = NULL, @xact_segno = NULL,
@numtrans = 0, @time = 0, @reset = 1
go
Trying:
SELECT log_reuse_wait, log_reuse_wait_desc
FROM sys.databases
WHERE NAME='bybox'
Returns
log_reuse_wait log_reuse_wait_desc
0 NOTHING
How can I fix this problem?
Looking at this, and with the recovery model set to FULL, I have tried the following:
USE master
GO
EXEC sp_addumpdevice 'disk', 'ByBoxData', N'C:\<path here>\bybox.bak'
-- Create a logical backup device, ByBoxLog.
EXEC sp_addumpdevice 'disk', 'ByBoxLog', N'C:\<path here>\bybox_log.bak'
-- Back up the full bybox database.
BACKUP DATABASE bybox TO ByBoxData
-- Back up the bybox log.
BACKUP LOG bybox TO ByBoxLog
which returned:
Processed 151800 pages for database 'bybox', file 'ByBox_Data' on file 3.
Processed 12256 pages for database 'bybox', file 'ByBox_Secondary' on file 3.
Processed 1 pages for database 'bybox', file 'ByBox_1_Log' on file 3.
BACKUP DATABASE successfully processed 164057 pages in 35.456 seconds (36.148 MB/sec).
Processed 2 pages for database 'bybox', file 'ByBox_1_Log' on file 4.
BACKUP LOG successfully processed 2 pages in 0.056 seconds (0.252 MB/sec).
Perfect! But it's not.
And DBCC SHRINKFILE ('ByBox_1_Log' , 1) now returns with
DbId FileId CurrentSize MinimumSize UsedPages EstimatedPages
7 2 251425 251425 251424 251424
and DBCC SQLPERF(LOGSPACE) still reports 30% usage.
I think I may have to resign myself to the fact there could well be a bug in SQL Server 2008, or that my log file has been corrupted in some manner. However, my database is in good working order, which leads me to think there is a bug (shudders at the thought).

In my situation, I had a 650 MB database with a 37 GB log file in SQL Server 2008. No matter what I tried, I could not get it to shrink down. I tried everything listed as answers here, but still nothing worked.
Finally, I found a very short comment somewhere else that did work. It is to run this:
BACKUP LOG DatabaseName TO DISK = N'D:\Backup\DatabaseName_log.bak'
GO
DBCC SHRINKFILE('DatabaseName_Log', 1)
GO
This caused the log file to shrink from 37 GB down to 1 MB. Whew!

I found DBCC SHRINKFILE (Transact-SQL) (MSDN).
The following example shrinks the log file in the AdventureWorks database to 1 MB. To allow the DBCC SHRINKFILE command to shrink the file, the file is first truncated by setting the database recovery model to SIMPLE.
USE AdventureWorks;
GO
-- Truncate the log by changing the database recovery model to SIMPLE.
ALTER DATABASE AdventureWorks
SET RECOVERY SIMPLE;
GO
-- Shrink the truncated log file to 1 MB.
DBCC SHRINKFILE (AdventureWorks_Log, 1);
GO
-- Reset the database recovery model.
ALTER DATABASE AdventureWorks
SET RECOVERY FULL;
GO

Beware of the implications of changing recovery models!!
And now for one more sobering thought for all you production DBAs thinking about using the script:
BEFORE YOU CHANGE THE RECOVERY MODEL FROM FULL TO SIMPLE... there's no worries if you're in a development/QA environment. But if you're in a production environment where you're responsible to ensure full recovery of data in the event of an issue, you may want to take a closer look at what BOL says regarding doing this (see BOL under: "Managing Databases" -> "Transaction Log Management" -> "Recovery Models and Transaction Log Management"):
A database can be switched to another recovery model at any time. However, switching from the simple recovery model, is unusual. Be aware that if you switch to the full recovery model during a bulk operation, the logging of the bulk operation changes from minimal logging to full logging, and vice versa.
After switching to the simple recovery model
If you switch from the full or bulk-logged recovery model to the simple recovery model, you break the backup log chain. Therefore, we strongly recommend that you back up the log immediately before switching, which allows you to recover the database up to that point. After switching, you need to take periodic data backups to protect your data and to truncate the inactive portion of the transaction log.
Really? "We strongly recommend that you back up the log immediately before switching, which allows you to recover the database up to that point." I cannot understand why this little tidbit of information is hidden in a section named "After switching to the simple recovery model", which leads most "normal people" to go ahead and switch, and only come back and read this after changing it.
Rant
To Microsoft: So, please correct me if I'm wrong: if I fail to do the t-log backup BEFORE changing from FULL to SIMPLE and lo and behold my database gets corrupted somehow (ever heard of Murphy's law?) right before I'm able to take a backup... then I'm screwed, right? If switching the recovery model of my ****production**** database from FULL to SIMPLE is something that can break the backup log chain such that if I fail to take a transaction log backup before doing it (like it suggests above) I'm potentially going to lose data, then WHY THE HECK AREN'T YOU HIGHLIGHTING THAT IN A BLINKING MARQUEE MAKING IT A BIGGER DEAL than you seem to be??! You should literally be grabbing me by the shirt and slapping me to get my attention (so to speak) and warning me of the importance of this UPFRONT!!
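For what it's worth, here is the safe order of operations that passage is asking for, as a minimal sketch (the database name and backup path are hypothetical):
USE master;
GO
-- Take the log backup FIRST, while the chain is still intact.
BACKUP LOG MyDB TO DISK = N'D:\Backup\MyDB_tail.trn';
GO
-- Only now break the chain; point-in-time recovery is preserved up to the backup above.
ALTER DATABASE MyDB SET RECOVERY SIMPLE;
GO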

Another easy way to reduce the size of the log file is to:
1. Back up the logs.
2. Take a full backup of the database.
3. Shrink the log file.
4. Back up the logs again.
5. Shrink the log file again.
This way you don't have to modify any database parameters, and your log file ends up at 1 MB.
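A sketch of those five steps in T-SQL (the database name, logical log file name, and backup paths are hypothetical):
BACKUP LOG MyDB TO DISK = N'D:\Backup\MyDB_log1.trn';      -- 1. back up the logs
BACKUP DATABASE MyDB TO DISK = N'D:\Backup\MyDB_full.bak'; -- 2. full backup of the database
DBCC SHRINKFILE ('MyDB_Log', 1);                           -- 3. shrink the log file
BACKUP LOG MyDB TO DISK = N'D:\Backup\MyDB_log2.trn';      -- 4. back up the logs again
DBCC SHRINKFILE ('MyDB_Log', 1);                           -- 5. shrink the log file again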

I've always hated the way SQL Server handles the physical shrinking of log files. Please note that I've always done this via Enterprise Manager/SQL Server Management Studio, but it seems that when you shrink/truncate the log file, the physical size of the log file will not reduce until after doing a full backup on the database's data file, and then backing up the log file again. I could never nail down the exact pattern, but you could try and see what the exact sequence is. However, it has always involved doing a full backup of the data file.

Found the solution!
I added a load of data to the database, so the log was forced to expand. I then removed the unneeded data to get my database back to how it was.
Backup and voila, a perfect 0% log.
So the solution is to make the log expand.

Try running
DBCC OPENTRAN
to check if there are any open transactions.

This will sound kind of stupid, but I find that I have to perform two full database and log file backups to be able to shrink the database.
When using SQL Server Management Studio, after doing a full database backup followed by a full transaction log backup, the shrink files page shows plenty of free space available, but it will not truncate the log files.
However, when I run a second full backup of both the database and the log file, I find that the full backup of the transaction log is much much smaller as it appears to only be backing up the new changes since the last backup.
Once this second backup is complete, I run the shrink tool again within the management studio. It still shows that there is plenty of available free space, but this time when I click OK, the log file reduces in size.
So, try the following.
Take a full backup of both the database and the log file. If you check within the shrink tool, you should see that there is now plenty of available free space in the log file. However clicking OK won't remove the free space.
Take a second full backup of both the database and the log file. You should find the full backup of the database is similar in size to the first full backup. The size of the full transaction log backup should be much smaller.
Run the shrink tool on the log file and with any luck, the log file should reduce in size. The last time I did this, it reduced from 180 GB to 12 MB and the shrink tool states that there is still 10 MB of available free space within the file.
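Scripted, that double backup-then-shrink sequence looks roughly like this (database name, logical log file name, and paths are hypothetical):
-- Round 1: full database backup plus log backup.
BACKUP DATABASE MyDB TO DISK = N'D:\Backup\MyDB_full1.bak';
BACKUP LOG MyDB TO DISK = N'D:\Backup\MyDB_log1.trn';
-- Round 2: the log backup this time should be much smaller.
BACKUP DATABASE MyDB TO DISK = N'D:\Backup\MyDB_full2.bak';
BACKUP LOG MyDB TO DISK = N'D:\Backup\MyDB_log2.trn';
-- Now the shrink should actually release the space.
DBCC SHRINKFILE ('MyDB_Log', 1);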

I finally found a solution for the log file shrink problem. None of the previous options worked for me or shrank the log file to the required size. The solution I found is:
1. Back up your database.
2. Set the recovery model to SIMPLE.
3. Detach the database with SQL Server Management Studio.
4. Delete the log file.
5. Attach the database without the log file. In the lower half of the attach screen you get a line which says that the log file is missing.
6. Click Remove for this line and OK for the attach.
7. Set the recovery model back to FULL.
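The same steps can be scripted; a sketch with hypothetical names and paths. The usual warning applies: deleting a log file is only safe when the database was cleanly detached, so take that backup first.
USE master;
GO
ALTER DATABASE MyDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
EXEC sp_detach_db 'MyDB';
GO
-- Delete (or rename) MyDB_log.ldf in the file system, then:
CREATE DATABASE MyDB
    ON (FILENAME = N'D:\Data\MyDB.mdf')
    FOR ATTACH_REBUILD_LOG; -- builds a fresh, minimal log file
GO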

This can be a pain, and there are many things it could be. The first thing you should make sure of is that there is not a "stuck" transaction. If you have a transaction that never closes, you cannot ever shrink the log. Run "DBCC OPENTRAN" to find the longest running transaction.
Also, make sure you reorganize (I think that's the proper term) and move everything to the beginning of the file before shrinking.
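Note that the reorganize-then-release pattern only applies to data files; a log file can only be shrunk by dropping inactive VLFs from its end. For a data file, the two-step shrink looks like this (the logical file name is hypothetical):
DBCC SHRINKFILE ('MyDB_Data', NOTRUNCATE);   -- move used pages to the front of the file
DBCC SHRINKFILE ('MyDB_Data', TRUNCATEONLY); -- release the now-empty tail to the OS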

I know what you mean - SQL server can be a bit maddening that way. Here is some sample code to try out. In essence, it truncates the log file and then tries to shrink the file. Let me know how it works. One other thing...you wouldn't have uncommitted transactions, would you?
Use YourDatabase
GO
DBCC sqlperf(logspace) -- Get a "before" snapshot
GO
BACKUP LOG YourDatabase WITH TRUNCATE_ONLY; -- Truncate the log file, don't keep a backup
GO
DBCC SHRINKFILE(YourDataBaseFileName_log, 2); -- Now re-shrink (use the LOG file name as found in Properties / Files. Note that I didn't quote mine).
GO
DBCC sqlperf(logspace) -- Get an "after" snapshot
GO
Update: Simon notes that he is getting an error on the BACKUP command. I didn't realize when I answered earlier that TRUNCATE_ONLY was discontinued in SQL Server 2008. After a bit of research, the recommended steps for shrinking the log file are to (a) change the recovery model to SIMPLE and then (b) shrink the file using DBCC SHRINKFILE as above. Unfortunately, you mention that you already tried setting the recovery model to SIMPLE, so I assume that you also ran DBCC SHRINKFILE afterward. Is this correct? Please let me know.

Reading the answers, I can hardly believe they were written by DBAs. Basic golden rules:
1. In any database, before you perform any maintenance, including shrinking the log, you should perform a full backup.
2. Carry out any database maintenance, including shrinking, when nothing else is happening on that particular database for the whole duration of the maintenance. If necessary, suspend non-essential applications. Always keep in mind that a healthy database is the soul of any application interacting with it.
After all this, the following commands for shrinking the database's transaction log have always worked fine for me on SQL Server 2005 and later versions:
USE DatabaseName
GO
-- Truncate the Transaction log
ALTER DATABASE DatabaseName SET RECOVERY SIMPLE
CHECKPOINT
ALTER DATABASE DatabaseName SET RECOVERY FULL
GO
-- Shrink the transaction log as recommended by Microsoft.
DBCC SHRINKFILE ('<log file logical name>', <target size in MB>)
GO
-- Pass the freed pages back to OS control.
DBCC SHRINKDATABASE (DatabaseName, TRUNCATEONLY)
GO
-- Correct any page and row count inaccuracies after the shrink.
DBCC UPDATEUSAGE (0);
GO
-- IF Required but not essential
-- Force to update all tables statistics
exec sp_updatestats
GO

I have finally come to the conclusion that there is a bug in SQL Server 2008.
I've tried everything, and every combination I can think of. I've backed up the database, dropped it, re-created it, restored it. Exact same problem.
I also ran:
DBCC CHECKDB
DBCC UPDATEUSAGE (bybox)
And all checks out ok.
Roll on the next service pack is all I can say.

SQL Server 2012: I had an issue where no log file would shrink, and all of them were already in SIMPLE recovery.
This worked for me... I restarted the SQL Server instance (because I could), and every one of those bad boys shrank.
Whatever was holding them up from shrinking was released by the restart. This is only good for emergencies (or when it's your server), not as a regular long-term solution.

Please run:
SELECT name, log_reuse_wait, log_reuse_wait_desc FROM sys.databases
and see what the log_reuse_wait value is for your problem DB.
If it is replication that is blocking log reuse, you will see it here; you need this value to be 0, 2, or 4.

Related

SQL Server - Transaction log for database is full

I am trying to insert some data into a table in my database in SQL Server. It has a huge amount of data; I am talking about millions of records.
I kept getting error 9002
The transaction log for database 'GCVS2' is full. To find
out why space in the log cannot be reused, see the log_reuse_wait_desc
column in sys.databases.
When I tried inserting data yesterday, it was fine with no problem, although it did take some time.
I tried again today but kept getting this error. I checked the log file for my database, and its autogrowth is set to 10%, unlimited. Is there any way to fix this?
You can truncate the transaction log. Use the query below:
BACKUP LOG databasename WITH TRUNCATE_ONLY
DBCC SHRINKFILE ( databasename_Log, 1)
Check here for more details
You will need to check the recovery model of your database. Put it in the full recovery model. After that, make sure there is a transaction log backup in place for your database. You will need to dig through it and make a maintenance plan, depending upon how critical your data is. That will be the long-term solution.
For the time being, you can shrink your log files using the following DBCC command:
BACKUP LOG DBName WITH TRUNCATE_ONLY
DBCC SHRINKFILE ( DBNameLog, 1)
Or you can do it through Object Explorer. Refer to this link for details. But you will have to set your database to the simple recovery model to use the shrink command.

How to shrink a log file without backing up first?

I have a SQL Server 2008 database with a 1 GB .mdf file and a 70 GB .ldf (log) file. I really don't know what caused my log file to grow so big in a week and then stop growing, but my main issue is to fix this problem.
I usually reduce the log file by shrinking it, but I can only shrink it IF I back it up first. If I try to shrink without backing up first (using SSMS), nothing happens, even though SSMS shows plenty of available free space. I can try shrinking many times, but it will only work if I back up first.
The problem is that I can't back it up this time because I don't have the free space (the total size of my HD is 120 GB).
Note 1: my database is set to use the full recovery model because I need to be able to do point-in-time recoveries.
Note 2: I know that shrinking increases index fragmentation. After shrinking, I can REBUILD the indexes to fix this.
You can temporarily set the recovery model to SIMPLE and truncate the log.
You will lose point-in-time recovery for the period between the last successful log backup and the end of the next differential backup you take after the log cleanup. Point-in-time recovery is possible again from that backup onwards.
You also need to find the long-running transaction that is active and find its root cause.
You can see here http://blog.sqlxdetails.com/transaction-log-survival-guide-shrink-100gb-log/
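To find that transaction, DBCC OPENTRAN plus the transaction DMVs are the usual starting point; a sketch (the database name is hypothetical):
USE MyDB;
GO
DBCC OPENTRAN; -- reports the oldest active transaction, with its SPID and start time
-- More detail from the DMVs:
SELECT st.session_id, at.transaction_begin_time, at.name
FROM sys.dm_tran_active_transactions AS at
JOIN sys.dm_tran_session_transactions AS st
    ON at.transaction_id = st.transaction_id
ORDER BY at.transaction_begin_time;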
With the help of the commands below you can clear the transaction log file. The commands are well commented.
-- see the log size
DBCC SQLPERF(LOGSPACE);
--taking backup for log file before shrink it
BACKUP LOG MyTestDB
TO DISK = 'E:\PartProcForOld_log_backup\MyTestDB.TRN'
GO
-- this command will tell you the log file name
SELECT name
FROM sys.master_files
WHERE database_id = db_id()
AND type = 1
-- the commands below switch the recovery model and do the actual shrink
GO
-- Truncate the log by changing the database recovery model to SIMPLE.
ALTER DATABASE MyTestDB
SET RECOVERY SIMPLE;
GO
-- Shrink the truncated log file to 1 MB.
DBCC SHRINKFILE (MyTestDB_log, 1);
GO
-- Reset the database recovery model.
ALTER DATABASE MyTestDB
SET RECOVERY FULL;
GO

The transaction log for the database is full

I have a long running process that holds open a transaction for the full duration.
I have no control over the way this is executed.
Because a transaction is held open for the full duration, when the transaction log fills, SQL Server cannot increase the size of the log file.
So the process fails with the error "The transaction log for database 'xxx' is full".
I have attempted to prevent this by increasing the size of the transaction log file in the database properties, but I get the same error.
Not sure what I should try next. The process runs for several hours so it's not easy to play trial and error.
Any ideas?
If anyone is interested, the process is an organisation import in Microsoft Dynamics CRM 4.0.
There is plenty of disk space, we have the log in simple logging mode and have backed up the log prior to kicking off the process.
-=-=-=-=- UPDATE -=-=-=-=-
Thanks all for the comments so far. The following is what led me to believe that the log would not grow due to the open transaction:
I am getting the following error...
Import Organization (Name=xxx, Id=560d04e7-98ed-e211-9759-0050569d6d39) failed with Exception:
System.Data.SqlClient.SqlException: The transaction log for database 'xxx' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases
So following that advice I went to "log_reuse_wait_desc column in sys.databases" and it held the value "ACTIVE_TRANSACTION".
According to Microsoft:
http://msdn.microsoft.com/en-us/library/ms345414(v=sql.105).aspx
That means the following:
A transaction is active (all recovery models).
• A long-running transaction might exist at the start of the log backup. In this case, freeing the space might require another log backup. For more information, see "Long-Running Active Transactions," later in this topic.
• A transaction is deferred (SQL Server 2005 Enterprise Edition and later versions only). A deferred transaction is effectively an active transaction whose rollback is blocked because of some unavailable resource. For information about the causes of deferred transactions and how to move them out of the deferred state, see Deferred Transactions.
Have I misunderstood something?
-=-=-=- UPDATE 2 -=-=-=-
Just kicked off the process with initial log file size set to 30GB. This will take a couple of hours to complete.
-=-=-=- Final UPDATE -=-=-=-
The issue was actually caused by the log file consuming all available disk space. In the last attempt I freed up 120GB and it still used all of it and ultimately failed.
I didn't realise this was happening previously because when the process was running overnight, it was rolling back on failure. This time I was able to check the log file size before the rollback.
Thanks all for your input.
To fix this problem, change the recovery model to Simple, then shrink the log file:
1. Database Properties > Options > Recovery Model > Simple
2. Database Tasks > Shrink > Files > Log
Done.
Then check your DB log file size at Database Properties > Files > Database Files > Path.
To check the full SQL Server log, open the Log File Viewer at SSMS > Database > Management > SQL Server Logs > Current.
I had this error once, and it ended up being the server's hard drive that ran out of disk space.
Do you have Enable Autogrowth and Unrestricted File Growth both enabled for the log file? You can edit these via SSMS in "Database Properties > Files"
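If you'd rather check and change those settings in T-SQL than in SSMS, something like this should work (the database and logical file names are hypothetical):
-- A max_size of -1 means unrestricted growth.
SELECT name, growth, is_percent_growth, max_size
FROM sys.master_files
WHERE database_id = DB_ID('MyDB');
-- Enable unrestricted growth in fixed 1 GB steps on the log file:
ALTER DATABASE MyDB
MODIFY FILE (NAME = N'MyDB_log', FILEGROWTH = 1024MB, MAXSIZE = UNLIMITED);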
Is this a one time script, or regularly occurring job?
In the past, for special projects that temporarily require lots of space for the log file, I created a second log file and made it huge. Once the project is complete we then removed the extra log file.
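In T-SQL, that temporary second log file might look like this (the names, path, and size are hypothetical):
ALTER DATABASE MyDB
ADD LOG FILE (NAME = N'MyDB_log_temp',
              FILENAME = N'E:\Logs\MyDB_log_temp.ldf',
              SIZE = 50GB);
GO
-- When the project is complete (the file must be empty before it can be removed):
ALTER DATABASE MyDB REMOVE FILE [MyDB_log_temp];
GO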
This is an old school approach, but if you're performing an iterative update or insert operation in SQL, something that runs for a long time, it's a good idea to periodically (programmatically) call "checkpoint". Calling "checkpoint" causes SQL to write to disk all of those memory-only changes (dirty pages, they're called) and items stored in the transaction log. This has the effect of cleaning out your transaction log periodically, thus preventing problems like the one described.
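A sketch of that pattern with a hypothetical table and flag column; in simple recovery, each CHECKPOINT lets the log space used by the committed batches be reused:
WHILE 1 = 1
BEGIN
    UPDATE TOP (10000) dbo.BigTable -- hypothetical table; tune the batch size
    SET Processed = 1
    WHERE Processed = 0;
    IF @@ROWCOUNT = 0 BREAK;        -- nothing left to do
    CHECKPOINT; -- flush dirty pages and mark committed log space reusable
END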
Try this:
USE YourDB;
GO
-- Truncate the log by changing the database recovery model to SIMPLE.
ALTER DATABASE YourDB
SET RECOVERY SIMPLE;
GO
-- Shrink the truncated log file to 50 MB.
DBCC SHRINKFILE (YourDB_log, 50);
GO
-- Reset the database recovery model.
ALTER DATABASE YourDB
SET RECOVERY FULL;
GO
I hope it helps.
The following will truncate the log.
USE [yourdbname]
GO
-- TRUNCATE TRANSACTION LOG --
DBCC SHRINKFILE(yourdbname_log, 1)
BACKUP LOG yourdbname WITH TRUNCATE_ONLY
DBCC SHRINKFILE(yourdbname_log, 1)
GO
If your database's recovery model is FULL and you don't have a log backup maintenance plan, you will get this error because the transaction log becomes full due to LOG_BACKUP.
This will prevent any action on the database (e.g. shrink), and the SQL Server Database Engine will raise a 9002 error.
To overcome this behavior, I advise you to check the post "The transaction log for database 'SharePoint_Config' is full due to LOG_BACKUP", which shows detailed steps to solve the issue.
I met the error "The transaction log for database '...' is full due to 'ACTIVE_TRANSACTION'" while deleting old rows from tables of my database to free disk space. I realized that this error would occur if the number of rows to be deleted was bigger than 1000000 in my case. So instead of using one DELETE statement, I divided the delete task into batches using DELETE TOP (1000000)... statements.
For example:
instead of using this statement:
DELETE FROM Vt30 WHERE Rt < DATEADD(YEAR, -1, GETDATE())
use the following statement repeatedly:
DELETE TOP(1000000) FROM Vt30 WHERE Rt < DATEADD(YEAR, -1, GETDATE())
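One way to automate the repetition, so you don't have to rerun the statement by hand:
WHILE 1 = 1
BEGIN
    DELETE TOP (1000000) FROM Vt30
    WHERE Rt < DATEADD(YEAR, -1, GETDATE());
    IF @@ROWCOUNT = 0 BREAK; -- nothing left to delete
END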
Adding to the answers above, I also want to mention that, if possible, you can also free up space on the server to fix this issue. If the server is already full due to the database overflow, you can delete some unnecessary files from the server where your DB is hosted. At least this temporarily fixes the issue and lets you query the DB.
My problem was solved by executing limited deletes multiple times, like this:
Before:
DELETE FROM TableName WHERE Condition
After:
DELETE TOP(1000) FROM TableName WHERE Condition
The answer to the question is not about deleting the rows from a table; it is the tempdb space that is being taken up by an active transaction. This happens mostly when a merge (upsert) is being run, where we try to insert, update, and delete in one go. The only option is to make sure the DB is set to the simple recovery model and to increase the file to the maximum space (add another filegroup). Although this has its own advantages and disadvantages, these are the only options.
The other option you have is to split the merge (upsert) into two operations: one that does the insert, and the other that does the update and delete.
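A sketch of that split with hypothetical Source/Target tables; because each statement commits on its own, the log gets a chance to clear between the steps:
-- Step 1: update existing rows.
UPDATE t SET t.Val = s.Val
FROM dbo.Target AS t
JOIN dbo.Source AS s ON s.Id = t.Id;
-- Step 2: insert new rows.
INSERT INTO dbo.Target (Id, Val)
SELECT s.Id, s.Val FROM dbo.Source AS s
WHERE NOT EXISTS (SELECT 1 FROM dbo.Target AS t WHERE t.Id = s.Id);
-- Step 3: delete rows no longer present in the source.
DELETE t FROM dbo.Target AS t
WHERE NOT EXISTS (SELECT 1 FROM dbo.Source AS s WHERE s.Id = t.Id);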
Here's my hero code. I've faced this problem and used this code to fix it.
USE master;
SELECT
name, log_reuse_wait, log_reuse_wait_desc, is_cdc_enabled
FROM
sys.databases
WHERE
name = 'XX_System';
SELECT DATABASEPROPERTYEX('XX_System', 'IsPublished');
USE XX_System;
EXEC sp_repldone null, null, 0,0,1;
EXEC sp_removedbreplication XX_System;
DBCC OPENTRAN;
DBCC SQLPERF(LOGSPACE);
EXEC sp_replcounters;
DBCC SQLPERF(LOGSPACE);
Solved: as per the error, the free space left on the drive is not sufficient.
To resolve it, either extend the drive space or move the MDF/LDF/log files to a drive with enough space.
Note: check the existing path with the steps below.
Database Properties -> Select the Files page
Try this:
If possible restart the services MSSQLSERVER and SQLSERVERAGENT.

SQL Server 2005 transaction log is always too big

I've read other posts and have done hours of research but am still none the wiser.
I have a database that is 65 gig in the data file and currently 230 gig in the log file. I'm trying to redesign the database so it is more efficient, but when making schema changes, the log file tries to grow too large and Windows whinges that it's out of disk space.
I've tried to shrink the file, and the lowest it goes is ~220 gig. Using DBCC OPENTRAN I can see there are no active transactions. Using SELECT * FROM sys.dm_tran_database_transactions I can see that there is nothing interesting going on.
My interpretation of what I have read is that the log file should only be large if there is an active transaction, and once all transactions are committed, the file should be able to be shrunk to something theoretically very small. Correct?
I've tried BACKUP LOG WITH TRUNCATE_ONLY followed by DBCC SHRINKFILE (dbname, 2).
What can I do to shrink this file to something more manageable?
Have you read Kimberly Tripp's article? (Pretty much the canonical reference on the subject): 8 Steps to better Transaction Log throughput.
You might be experiencing VLF fragmentation: Transaction Log VLFs - too many or too few?. Run this command to find out:
DBCC LOGINFO;
Have you followed this standard procedure to shrink the Log:
1) Back up your transaction log (even if you are in simple mode) to clear all activity.
BACKUP LOG [MyDB]
TO DISK = N'E:\db.bak'
GO
2) Shrink the transaction log.
USE [MyDB]
GO
DBCC SHRINKFILE ('MyDB_Log', TRUNCATEONLY)
GO
3) Modify the size of the transaction log and configure your autogrowth:
USE [MyDB]
GO
ALTER DATABASE [MyDB]
MODIFY FILE ( NAME = N'MyDB_Log', SIZE = 1024000KB, FILEGROWTH = 1024000KB)
GO

There must be a way to delete data in SQL Server w/o overloading the log

I need to delete a bunch of data, and don't have the disk space for the log to continue growing. Upon looking into the matter further, it looks like there isn't any way around this, but I thought I'd ask for sure; it's difficult for me to believe that something so simple is impossible.
I tried looping, deleting in chunks, and calling shrinkfile on the log after each iteration. SQL Server just seems to ignore the shrinkfile command. Did the same with backup log (then deleting the backup file afterwards). Same thing - log just keeps on growing. The recovery model on the database I'm trying this on is simple - I thought that would make it easier, but it doesn't.
Do the delete in chunks, but rather than trying to shrink the log between times, do log backups between the chunks (that is if you're in full recovery)
The problem is that the log is full and hence has to grow. If it's full, trying to shrink it is useless, there's no free space in the log to release to the OS. What you need to do instead is make the space inside the file available for reuse.
Since the DB is in simple recovery, run the delete in chunks with a CHECKPOINT command in between each chunk. You can't do log backups in Simple recovery
Here's sample code that does deletes without filling the log (in simple recovery). DO NOT wrap this in a custom transaction. That completely defeats the point of deleting in batches as the log can't be cleared until the entire transaction commits.
(SQL 2005 and above. For SQL 2000, remove TOP and use SET ROWCOUNT)
DECLARE @Done BIT
SET @Done = 0
WHILE @Done = 0
BEGIN
DELETE TOP (20000) -- reduce if log still growing
FROM SomeTable WHERE SomeColumn = SomeValue
IF @@ROWCOUNT = 0
SET @Done = 1
CHECKPOINT -- marks log space reusable in simple recovery
END
To understand log management, take a look at this article - http://www.sqlservercentral.com/articles/64582/
One trick I have used, depending on the size of the data I'm keeping vs. the amount I'm deleting, is to:
1. Select all the "data to keep" into another table (just for temporary storage).
2. Truncate the original table.
3. Insert all the data from the temp storage table back into the original.
It works well if the amount you are keeping is smaller than what you are deleting; see the sketch below.
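A sketch of the trick with hypothetical table and column names (identity columns and foreign keys need extra handling, which is omitted here):
-- 1. Park the rows to keep in temporary storage.
SELECT * INTO dbo.BigTable_keep
FROM dbo.BigTable
WHERE KeepThisRow = 1; -- hypothetical filter for the data to keep
-- 2. Empty the original table (TRUNCATE is minimally logged, unlike DELETE).
TRUNCATE TABLE dbo.BigTable;
-- 3. Put the kept rows back and drop the temp storage.
INSERT INTO dbo.BigTable SELECT * FROM dbo.BigTable_keep;
DROP TABLE dbo.BigTable_keep;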
A similar option if all the database files are on the same disk (data and logs) and the data to be deleted is about half of the data, would be to export the "data to keep" to a file on a separate drive using the bcp command line utility, then truncate and insert the data file with bcp again.
I've seen the DBAs take the database offline, backup the logs, turn off the logging and do it that way but that seems like a lot of hassle. :-)
If your recovery setting is set to Full, you might try doing a transaction log backup to clear the log just before the delete.