My process reads a CSV file and updates the DB with the data from the CSV. I want to do a bulk update, and if I use batch commit in the batch process with the commit size set to 50, it works fine for successful records. But if the DB update statement fails for even one record, the whole commit block (50 records) fails to update in the DB. I read in the Mule documentation that some connectors have the ability to handle record-level errors without failing the whole batch (i.e. upsert), and the Database connector is one of them. I'm not sure whether this scenario falls under that or not. Did anyone face this kind of issue? Is there a workaround for this without doing record-by-record updates? I would appreciate any thoughts on this issue.
The documentation for database bulk operations says it is up to the JDBC driver:
It may happen that while some statements in the bulk operation can be
successfully executed, some may result in an error. When this occurs,
it will be up to the driver to either:
Stop execution immediately and ignore all remaining operations, or
Continue to execute the remaining statements.
I have a stored procedure that performs calculations and stores a large amount of data in new tables, in another database.
If anything goes wrong, I just drop the tables.
I've already set the recovery mode for this database as simple, as the data is available elsewhere. Is there anything else I can do in the stored procedure to limit writing to the transaction log or remove transactions entirely to speed up the process?
It is impossible to completely eliminate the transaction log from the equation in SQL Server.
You may try the bulk-logged recovery model in conjunction with bulk inserts, but if your calculations are complex and cannot be expressed within a single SELECT statement, it could be worth trying SSIS.
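If the calculations can be expressed as a set-based statement, a rough sketch of the bulk-logged approach (the database, schema, and table names here are placeholders, not from the question) would be:

ALTER DATABASE TargetDb SET RECOVERY BULK_LOGGED;  -- allow minimal logging for bulk operations
GO
USE TargetDb;
GO
-- SELECT ... INTO a new table qualifies for minimal logging under the bulk-logged model
SELECT s.*
INTO dbo.CalculationResults
FROM SourceDb.dbo.SourceData AS s;
GO
ALTER DATABASE TargetDb SET RECOVERY SIMPLE;  -- or back to FULL, depending on your setup
GO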
I suggest using an SSIS package to move the data from one database to the other. In SSIS you can control how the data is transformed and use bulk insert; bulk-insert mode keeps the writes to the transaction log to a minimum.
I ran into similar situations even while using SSIS, where my staging database(s) kept logs more than 10 times the size of the actual data (on simple logging and using bulk insert). After lots of searching I have found that it is not feasible to prevent this from happening when doing large data operations like loading a data warehouse. Instead, it is easier to just clean up after you are done by shrinking the log with DBCC SHRINKFILE.
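For example (the database and logical log file names below are placeholders; look them up in sys.database_files):

USE StagingDb;
GO
-- Find the logical name of the log file (the row with type_desc = 'LOG')
SELECT name, type_desc, size FROM sys.database_files;
-- Shrink the log back down once the load has finished (target size in MB)
DBCC SHRINKFILE (StagingDb_log, 512);
GO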
I have a SQL statement that updates records in a table if the query returns any records. The query only returns records if they need to be updated. When I run the select on the query I get no records so when the update runs there should be no records updated.
The problem I'm having is that the query in the stored procedure won't finish because the transaction log fills up before the query can complete. I'm not concerned about the transaction log filling up right now.
My question is: if no records are being updated, then why is anything being written to the transaction log?
We need more information before this problem can be solved ...
Remus has a great idea to look at the entries in the log file.
Executing DBCC SQLPERF(LOGSPACE) will show you how full the log file is.
Clear the log file using a transaction log backup. This is assuming the recovery model is FULL and a FULL backup has been done.
Re-run the update stored procedure. Look at the transaction log file entries.
A copy of the stored procedure and table definitions would be great. Looking for other processes (sp_who2) during the execution that might fill the log is another good place to look (see the sketch below).
Any triggers that might cause updates, deletes or inserts can add to the log file size, suggested by Martin.
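A rough sketch of those checks in T-SQL (the database name and backup path below are placeholders):

-- How full is each transaction log?
DBCC SQLPERF(LOGSPACE);
-- Clear the log with a log backup (FULL recovery model, full backup already taken)
BACKUP LOG YourDb TO DISK = N'D:\Backups\YourDb_log.trn';
-- See what else is running during the execution
EXEC sp_who2;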
Good luck.
Looks like the issue was in the join. It was trying to join so many records that tempdb filled up to the point where there was no more space on the drive.
I have a long running process that holds open a transaction for the full duration.
I have no control over the way this is executed.
Because a transaction is held open for the full duration, when the transaction log fills, SQL Server cannot increase the size of the log file.
So the process fails with the error "The transaction log for database 'xxx' is full".
I have attempted to prevent this by increasing the size of the transaction log file in the database properties, but I get the same error.
Not sure what I should try next. The process runs for several hours so it's not easy to play trial and error.
Any ideas?
If anyone is interested, the process is an organisation import in Microsoft Dynamics CRM 4.0.
There is plenty of disk space, we have the log in simple logging mode and have backed up the log prior to kicking off the process.
-=-=-=-=- UPDATE -=-=-=-=-
Thanks all for the comments so far. The following is what led me to believe that the log would not grow due to the open transaction:
I am getting the following error...
Import Organization (Name=xxx, Id=560d04e7-98ed-e211-9759-0050569d6d39) failed with Exception:
System.Data.SqlClient.SqlException: The transaction log for database 'xxx' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases
So following that advice I went to "log_reuse_wait_desc column in sys.databases" and it held the value "ACTIVE_TRANSACTION".
According to Microsoft:
http://msdn.microsoft.com/en-us/library/ms345414(v=sql.105).aspx
That means the following:
A transaction is active (all recovery models).
• A long-running transaction might exist at the start of the log backup. In this case, freeing the space might require another log backup. For more information, see "Long-Running Active Transactions," later in this topic.
• A transaction is deferred (SQL Server 2005 Enterprise Edition and later versions only). A deferred transaction is effectively an active transaction whose rollback is blocked because of some unavailable resource. For information about the causes of deferred transactions and how to move them out of the deferred state, see Deferred Transactions.
Have I misunderstood something?
-=-=-=- UPDATE 2 -=-=-=-
Just kicked off the process with initial log file size set to 30GB. This will take a couple of hours to complete.
-=-=-=- Final UPDATE -=-=-=-
The issue was actually caused by the log file consuming all available disk space. In the last attempt I freed up 120GB and it still used all of it and ultimately failed.
I didn't realise this was happening previously because when the process was running overnight, it was rolling back on failure. This time I was able to check the log file size before the rollback.
Thanks all for your input.
To fix this problem, change the recovery model to Simple, then shrink the log file:
1. Database Properties > Options > Recovery Model > Simple
2. Database Tasks > Shrink > Files > Log
Done.
Then check your db log file size at
Database Properties > Files > Database Files > Path
To check full sql server log: open Log File Viewer at
SSMS > Database > Management > SQL Server Logs > Current
I had this error once and it ended up being the server's hard drive that had run out of disk space.
Do you have Enable Autogrowth and Unrestricted File Growth both enabled for the log file? You can edit these via SSMS in "Database Properties > Files"
Is this a one-time script, or a regularly occurring job?
In the past, for special projects that temporarily require lots of space for the log file, I created a second log file and made it huge. Once the project was complete, we removed the extra log file.
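As a sketch, with made-up names, paths, and sizes:

-- Add a temporary second log file for the duration of the project
ALTER DATABASE YourDb
ADD LOG FILE (NAME = YourDb_log2, FILENAME = N'E:\Logs\YourDb_log2.ldf', SIZE = 50GB);
GO
-- Once the project is complete and the file no longer contains active log
-- (a log backup or checkpoint may be needed first), remove it
ALTER DATABASE YourDb REMOVE FILE YourDb_log2;
GO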
This is an old school approach, but if you're performing an iterative update or insert operation in SQL, something that runs for a long time, it's a good idea to periodically (programmatically) call "checkpoint". Calling "checkpoint" causes SQL to write to disk all of those memory-only changes (dirty pages, they're called) and items stored in the transaction log. This has the effect of cleaning out your transaction log periodically, thus preventing problems like the one described.
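A minimal sketch of that pattern, using a made-up table and flag column:

-- Batched update that checkpoints after each batch (most useful in SIMPLE recovery)
WHILE 1 = 1
BEGIN
    UPDATE TOP (10000) dbo.SomeTable
    SET Processed = 1
    WHERE Processed = 0;

    IF @@ROWCOUNT = 0 BREAK;

    CHECKPOINT;  -- flush dirty pages and let log space be reused
END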
Try this:
USE YourDB;
GO
-- Truncate the log by changing the database recovery model to SIMPLE.
ALTER DATABASE YourDB
SET RECOVERY SIMPLE;
GO
-- Shrink the truncated log file to 50 MB.
DBCC SHRINKFILE (YourDB_log, 50);
GO
-- Reset the database recovery model.
ALTER DATABASE YourDB
SET RECOVERY FULL;
GO
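If you're not sure of the logical log file name to pass to DBCC SHRINKFILE ("YourDB_log" above), you can look it up first:

USE YourDB;
GO
-- The row with type_desc = 'LOG' holds the logical name of the log file
SELECT name, type_desc, physical_name FROM sys.database_files;
GO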
I hope it helps.
The following will truncate the log.
USE [yourdbname]
GO
-- TRUNCATE TRANSACTION LOG --
DBCC SHRINKFILE(yourdbname_log, 1)
-- note: WITH TRUNCATE_ONLY works only on SQL Server 2005 and earlier;
-- on later versions, switch to SIMPLE recovery or take a log backup instead
BACKUP LOG yourdbname WITH TRUNCATE_ONLY
DBCC SHRINKFILE(yourdbname_log, 1)
GO
-- CHECK DATABASE HEALTH --
ALTER FUNCTION [dbo].[checker]() RETURNS int AS BEGIN RETURN 0 END
GO
If your database recovery model is FULL and you don't have a log backup maintenance plan, you will get this error because the transaction log becomes full due to LOG_BACKUP.
This will prevent any action on this database (e.g. shrink), and the SQL Server Database Engine will raise a 9002 error.
To overcome this, I advise you to check The transaction log for database ‘SharePoint_Config’ is full due to LOG_BACKUP, which shows detailed steps to solve the issue.
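In short, the usual fix for the LOG_BACKUP wait is to take a transaction log backup (ideally as a scheduled maintenance plan); a sketch, using the database name from the linked article and a made-up path and logical file name:

-- Back up the transaction log so the space inside it can be reused
BACKUP LOG SharePoint_Config TO DISK = N'D:\Backups\SharePoint_Config_log.trn';
GO
-- Optionally shrink the physical log file afterwards if it has grown very large
DBCC SHRINKFILE (SharePoint_Config_log, 1024);
GO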
I encountered the error "The transaction log for database '...' is full due to 'ACTIVE_TRANSACTION'" while deleting old rows from tables of my database to free disk space. I realized that this error would occur if the number of rows to be deleted was bigger than 1000000 (in my case). So instead of using one DELETE statement, I divided the delete task into DELETE TOP (1000000) statements.
For example:
instead of using this statement:
DELETE FROM Vt30 WHERE Rt < DATEADD(YEAR, -1, GETDATE())
use the following statement repeatedly:
DELETE TOP(1000000) FROM Vt30 WHERE Rt < DATEADD(YEAR, -1, GETDATE())
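If you would rather not re-run it by hand, the repetition can be wrapped in a loop (a sketch using the same table and condition):

WHILE 1 = 1
BEGIN
    DELETE TOP (1000000) FROM Vt30 WHERE Rt < DATEADD(YEAR, -1, GETDATE());
    -- each batch commits on its own, so in SIMPLE recovery the log space can be reused between batches
    IF @@ROWCOUNT = 0 BREAK;
END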
Adding to the answers above, I also want to mention that, if possible, you can also free up space on the server to fix this issue. If the server is already full due to the database overflow, you can delete some unnecessary files from the server where your DB is hosted. At least this temporarily fixes the issue and lets you query the DB.
My problem was solved by running multiple limited deletes, like this:
Before
DELETE FROM TableName WHERE Condition
After
DELETE TOP(1000) FROM TableName WHERE Condition
The answer to the question is not about deleting the rows from a table; it is about the tempdb space that is being taken up by an active transaction. This happens mostly when a MERGE (upsert) is being run, where we try to insert, update, and delete in a single statement. The only option is to make sure the DB is set to the simple recovery model and also to increase the file to the maximum space (add another file). Although this has its own advantages and disadvantages, these are the only options.
The other option you have is to split the MERGE (upsert) into two operations: one that does the insert and the other that does the update and delete (sketched below).
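A sketch of that split, with made-up table names and a single key column:

-- 1) Update rows that already exist in the target
UPDATE t
SET t.Amount = s.Amount
FROM dbo.Target AS t
JOIN dbo.Source AS s ON s.Id = t.Id;

-- 2) Insert rows that are missing from the target
INSERT INTO dbo.Target (Id, Amount)
SELECT s.Id, s.Amount
FROM dbo.Source AS s
WHERE NOT EXISTS (SELECT 1 FROM dbo.Target AS t WHERE t.Id = s.Id);

-- 3) Delete target rows that no longer exist in the source
DELETE t
FROM dbo.Target AS t
WHERE NOT EXISTS (SELECT 1 FROM dbo.Source AS s WHERE s.Id = t.Id);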
Here's my hero code. I've faced this problem and used this code to fix it.
USE master;

-- Why can't the log space be reused, and is CDC enabled?
SELECT
    name, log_reuse_wait, log_reuse_wait_desc, is_cdc_enabled
FROM
    sys.databases
WHERE
    name = 'XX_System';

-- Is the database published for replication?
SELECT DATABASEPROPERTYEX('XX_System', 'IsPublished');

USE XX_System;

-- Mark all pending replicated transactions as distributed
EXEC sp_repldone null, null, 0, 0, 1;
-- Remove the replication objects from the database
EXEC sp_removedbreplication XX_System;

-- Any open transactions holding up the log?
DBCC OPENTRAN;
DBCC SQLPERF(LOGSPACE);

-- Replication counters (pending transactions, latency)
EXEC sp_replcounters;
DBCC SQLPERF(LOGSPACE);
Solved: as per the error, the free space left on the drive was not sufficient.
To resolve it, either extend the drive space or move the MDF/LDF/log files to a drive with enough space.
Note: you can check the existing file paths via
Database Properties -> Select the Files option
Try this:
If possible, restart the MSSQLSERVER and SQLSERVERAGENT services.
I need to delete a bunch of data, and don't have the disk space for the log to continue growing. Upon looking into the matter further, it looks like there isn't any way around this, but I thought I'd ask for sure; it's difficult for me to believe that something so simple is impossible.
I tried looping, deleting in chunks, and calling shrinkfile on the log after each iteration. SQL Server just seems to ignore the shrinkfile command. Did the same with backup log (then deleting the backup file afterwards). Same thing - log just keeps on growing. The recovery model on the database I'm trying this on is simple - I thought that would make it easier, but it doesn't.
Do the delete in chunks, but rather than trying to shrink the log between chunks, do log backups between them (that is, if you're in full recovery).
The problem is that the log is full and hence has to grow. If it's full, trying to shrink it is useless, there's no free space in the log to release to the OS. What you need to do instead is make the space inside the file available for reuse.
Since the DB is in simple recovery, run the delete in chunks with a CHECKPOINT command in between each chunk. You can't do log backups in Simple recovery
Here's sample code that does deletes without filling the log (in simple recovery). DO NOT wrap this in a custom transaction. That completely defeats the point of deleting in batches as the log can't be cleared until the entire transaction commits.
(SQL 2005 and above. For SQL 2000, remove TOP and use SET ROWCOUNT)
DECLARE @Done BIT
SET @Done = 0
WHILE @Done = 0
BEGIN
    DELETE TOP (20000) -- reduce if log still growing
    FROM SomeTable WHERE SomeColumn = SomeValue
    IF @@ROWCOUNT = 0
        SET @Done = 1
    CHECKPOINT -- marks log space reusable in simple recovery
END
To understand log management, take a look at this article - http://www.sqlservercentral.com/articles/64582/
One trick I have used depending on the size of the data I'm keeping vs. the amount I'm deleting is to:
select all the "data to keep" into another table (just for temporary storage)
truncate the original table
insert all the data from the temp storage table back into the original
It works well if the amount you are keeping is smaller than what you are deleting.
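Roughly, with placeholder table and column names:

-- 1) Copy the rows you want to keep into a holding table
SELECT *
INTO dbo.SomeTable_Keep
FROM dbo.SomeTable
WHERE CreatedDate >= DATEADD(YEAR, -1, GETDATE());

-- 2) TRUNCATE is minimally logged, unlike a large DELETE
TRUNCATE TABLE dbo.SomeTable;

-- 3) Put the kept rows back and drop the holding table
-- (if the table has an identity column, list the columns and use SET IDENTITY_INSERT)
INSERT INTO dbo.SomeTable SELECT * FROM dbo.SomeTable_Keep;
DROP TABLE dbo.SomeTable_Keep;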
A similar option, if all the database files are on the same disk (data and logs) and the data to be deleted is about half of the data, would be to export the "data to keep" to a file on a separate drive using the bcp command-line utility, then truncate the table and load the data file back in with bcp.
I've seen DBAs take the database offline, back up the logs, turn off logging and do it that way, but that seems like a lot of hassle. :-)
If your recovery setting is set to Full, you might try doing a transaction log backup to clear the log just before the delete.