SQL Server - log is full due to ACTIVE_TRANSACTION [duplicate]

This question already has answers here:
The transaction log for the database is full
(15 answers)
Closed 8 years ago.
I have a very large database (50+ GB). To free space on my hard drive, I tried deleting old records from one of the tables. I ran the command:
delete from Table1 where TheDate<'2004-01-01';
However, SQL Server 2012 said:
Msg 9002, Level 17, State 4, Line 1
The transaction log for database 'MyDb' is full due to 'ACTIVE_TRANSACTION'.
and it did not delete a thing. What does that message mean? How can I delete the records?

Here is what I ended up doing to work around the error.
First, I set the database recovery model to SIMPLE. More information here.
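For reference, a minimal sketch of that change, using the MyDb name from the error message (note that switching to SIMPLE breaks the log backup chain, so take a full backup afterwards if you need point-in-time recovery):
ALTER DATABASE MyDb SET RECOVERY SIMPLE;
GO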
Then, by deleting some old files, I was able to free 5 GB of disk space, which gave the log file room to grow.
I reran the DELETE statement successfully, without any warning.
I thought that running the DELETE statement would immediately make the database smaller, thus freeing space on my hard drive. But that was not true. The space freed by a DELETE statement is not returned to the operating system immediately, unless you run the following command:
DBCC SHRINKDATABASE (MyDb, 0);
GO
More information about that command here.

Restarting SQL Server will clear up the log space used by your database.
If that is not an option, you can try the following:
* Issue a CHECKPOINT command to free up space in the log file.
* Check the available log space with DBCC SQLPERF('logspace'). If only a small percentage of your log file is actually being used, you can try a DBCC SHRINKFILE command (see the sketch after this list). Shrinking can, however, cause problems of its own.
* If you have another drive with space available, you can try to add a log file there to get enough space to resolve the issue.
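A rough sketch of those steps in T-SQL; the logical log file name (MyDb_log) and the path on the second drive are assumptions to replace with your own values:
CHECKPOINT;                        -- try to make inactive log space reusable
DBCC SQLPERF('logspace');          -- shows the percentage of each log actually in use
DBCC SHRINKFILE (MyDb_log, 1024);  -- shrink the log to roughly 1 GB (target size in MB)
-- Or, if another drive has room, add a second log file there:
ALTER DATABASE MyDb
ADD LOG FILE (NAME = MyDb_log2, FILENAME = 'E:\SQLLogs\MyDb_log2.ldf', SIZE = 4GB);
GO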
Hope this will help you in finding your solution.

Related

SQL server Warning: Fatal error 829 occurred at Oct 10 2019 12:48 PM. Note the error and time, and contact your system administrator

Two of my tables will not execute any INSERT, SELECT, DELETE, or DROP TABLE command; every statement fails with the same error.
The error I'm receiving
Warning: Fatal error 829 occurred at Oct 10 2019 12:48PM. Note the
error and time, and contact your system administrator.
DROP TABLE [dbo].[tbl_SalesMaster_tmp]
GO
A quick search on Google finds a similar thread here. However, I extracted the possible solution for easy reference.
Error 829 means there's an I/O subsystem problem: something called a 'hard I/O error'. SQL Server asks the OS to read a page and the OS says no; the I/O subsystem couldn't read the page in question.
The CHECKDB output means that it couldn't create the internal database snapshot it uses to get a transactionally consistent, point-in-time view of the database. There are a number of different causes of this:
There may not be any free space on the volume(s) storing the data files for the database
The SQL Server service account might not have create-file permissions in the directory containing the data files for the database
If neither of these is the case, you can create your own database snapshot and run DBCC CHECKDB on that. Once you have, run the following:
DBCC CHECKDB (yourdbname) WITH NO_INFOMSGS, ALL_ERRORMSGS
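If you need to create the snapshot yourself, a sketch looks like this; the logical data file name (yourdbname_data) and the snapshot path are assumptions you can look up in sys.database_files:
CREATE DATABASE yourdbname_snap
ON (NAME = yourdbname_data, FILENAME = 'E:\Snapshots\yourdbname_snap.ss')
AS SNAPSHOT OF yourdbname;
GO
DBCC CHECKDB (yourdbname_snap) WITH NO_INFOMSGS, ALL_ERRORMSGS;
GO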
Whatever the results are, you're looking at either restoring from a backup, extracting data to a new database, or running repair. Each involves varying amounts of downtime and data loss. You're also going to have to do some root-cause analysis to figure out what caused the corruption in the first place.
By the way - do you have page checksums enabled? Have you looked in the SQL error log or Windows application event log for any signs of corruption or things going wrong with the I/O subsystem?

Why will my SQL Transaction log file not auto-grow?

The Issue
I've been running a particularly large query, generating millions of records to be inserted into a table. Each time I run the query I get an error reporting that the transaction log file is full.
I've managed to get a test query to run with a reduced set of results by using SELECT INTO instead of INSERT INTO a pre-built table. This reduced result set generated a 20 GB table of 838,978,560 rows.
When trying to INSERT into the pre-built table, I've also tried it with and without a clustered index. Both failed.
Server Settings
The server is running SQL Server 2005 (Full not Express).
The database being used is set to SIMPLE recovery and there is space available (around 100 GB) on the drive the file is sitting on.
The transaction log file is set to grow in 250 MB increments up to a maximum of 2,097,152 MB.
The log file appears to grow as expected until it reaches 4,729 MB.
When the issue first appeared the file grew to a lower value; however, I've since reduced the size of other log files on the same server, and this appears to allow this transaction log file to grow further by the same amount as the reduction in the other files.
I've now run out of ideas for how to solve this. If anyone has any suggestions or insight into what to do, it would be much appreciated.
First, you want to avoid auto-growth whenever possible; auto-growth events are HUGE performance killers. If you have 100 GB available, why not change the log file size to something like 20 GB (just temporarily, while you troubleshoot this)? My policy has always been to use 90%+ of the disk space allocated for a specific MDF/NDF/LDF file. There's no reason not to.
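As a sketch, pre-sizing the log looks like this; the database name and logical log file name are placeholders for your own:
ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_log, SIZE = 20GB);
GO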
If you are using SIMPLE recovery, SQL Server is supposed to manage the task of returning unused space, but sometimes it does not do a great job. Before running your query, check the available free log space. You can do this by:
right-clicking the DB > Tasks > Shrink > Files
changing the type to "Log"
This will show you how much unused space you have. You can set "Reorganize pages before releasing unused space > Shrink File" to 0. Moving forward, you can also release unused space using CHECKPOINT; this may be something to include as a first step before your query runs.
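The rough T-SQL equivalent of that GUI action, assuming the log's logical name is MyDatabase_log, is:
DBCC SHRINKFILE (MyDatabase_log, 0);  -- shrink the log as far as possible
CHECKPOINT;                           -- under SIMPLE recovery, marks inactive log space reusable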

SQL server 2008R2 The transaction log for database 'MGR' is full due to 'ACTIVE_TRANSACTION'

I ran a query in which I wanted to update more than 130 million records. After a few hours I got the error:
The transaction log for database 'MGR' is full due to 'ACTIVE_TRANSACTION'.
Now I've got 70 MB free on my C: drive.
I suppose the problem was too little disk space and that's why the query failed, but how can I now regain the disk space that was lost before the query failed?
I'm using SQL Server 2008 R2.
Thanks for any hints.
The problem has to do with how SQL Server logs all the changes made during an active transaction. While a transaction is active, the log cannot be truncated, so a huge active transaction makes the log keep growing until it exceeds its capacity. The amount of logging depends on many factors, including the recovery model (the FULL recovery model is the one that generates the most logging activity). You can also break the transaction down into small chunks to allow log truncation in between batches; see the sketch below. Also look into the table hint TABLOCK. The lost disk space has most likely gone to the log file. Check that out.
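A sketch of that chunking idea; the table, column, and batch size here are hypothetical placeholders:
-- Update the rows in batches so the log can be truncated between chunks
WHILE 1 = 1
BEGIN
    UPDATE TOP (50000) dbo.MyBigTable
    SET Processed = 1
    WHERE Processed = 0;      -- only touch rows not yet updated
    IF @@ROWCOUNT = 0 BREAK;  -- nothing left to update
    CHECKPOINT;               -- under SIMPLE recovery, lets the log space be reused
END
GO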

Running a SELECT command on a Postgres relational table containing terabytes of data

I have a relational table in postgres of 3 TB. Now I want to dump its content to a csv file. For doing so I am following the tutorial: http://www.mkyong.com/database/how-to-export-table-data-to-file-csv-postgresql/
My problem: after specifying the file to export to and the SELECT statement, Postgres shows "Killed". Is it because the relational table is 3 TB? If yes, then how should I export my data from Postgres to another file (txt or csv, etc.)? If not, then how should I figure out the possible cause of the SELECT command getting killed?
"Killed" suggests you're running on a system where the out-of-memory killer (OOM killer) is enabled by the memory overcommit settings. This isn't recommended by the manual.
If you disable overcommit, you'll get a neater 'out of memory' error reported to the client instead of a SIGKILL and a server restart.
As for the COPY ... are you running COPY (SELECT ...) ? Or just COPY tablename TO .... ? Try a direct copy without a query and see if that helps.
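For example, a plain server-side copy (the table name and path are placeholders):
COPY tablename TO '/tmp/tablename.csv' WITH (FORMAT csv, HEADER);
-- or run it client-side from psql, which streams the file to the client instead:
\copy tablename TO 'tablename.csv' WITH (FORMAT csv, HEADER)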
When diagnosing faults you should be looking at the PostgreSQL error logs (which would tell you more about this problem) and system logs like the kernel logs or dmesg output.
When asking questions about PostgreSQL on Stack Overflow always include the exact server version from select version(), the exact command text/code run, the exact unedited text of any error messages, etc.

empty sql server 2008 db backup file is very big

I'm deploying my DB. I more or less emptied the DB (data) and then created a backup.
The .bak file is over 100 MB.
Why is this?
How do I get it down?
I'm using SQL Server 2008.
When you back up, note that SQL Server backup files can contain multiple backups; a backup does not overwrite the file by default. If you choose the same backup file and do not choose the overwrite option, it simply appends another backup to the same file, so your file just keeps getting larger.
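If you want each backup to overwrite the file instead of appending, a sketch (the path is a placeholder) is:
BACKUP DATABASE MyDb TO DISK = 'C:\Backups\MyDb.bak' WITH INIT;
GO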
Run this and all will be revealed:
select dpages * 8 as [size in KB]
from sysindexes
where indid <= 1
order by 1 desc
You can also do two backups in a row, so that the second backup contains minimal log data: the first backup will contain the logged activity needed to be able to recover, while the second one will no longer contain it.
There is also an issue with leaked Service Broker handles if you use SSSB in your database with improper code, but if that is the case, the query above will reveal it.
To get the size down, you can use WITH COMPRESSION, e.g.
backup database mydb to disk = 'c:\tempdb.bak' with compression
It will normally bring it down to about 20% of the size. As Martin commented above, also run
exec sp_spaceused
to view the distribution of data/logs. From what you are saying, 1.5 MB for the first table... down to 8 kB at the 45th row, that accounts for maybe tens of MB, so the rest could be in the log file.