SQL Log and ACTIVE TRANSACTIONS

I have a web server with SQL Server 2008 hosting a database in SQL 2005 compatibility mode, and I have a local SQL Server 2005 database as a testing environment.
This forces me to use scripts to back up and restore data for testing, since 2008 server backups do not restore to a 2005 server.
When I run this SQL query to reduce the size of a table on my production web SQL Server (2008):
DELETE FROM TickersDay
WHERE (DATEDIFF(day, TickersDay.[date], GETDATE()) >= 8)
GO
I get this message:
Msg 9002, Level 17, State 4, Line 3
The transaction log for database 'VTNET' is full. To find out why space in the log
cannot be reused, see the log_reuse_wait_desc column in sys.databases
It also comes up at times when I publish scripts.
When I run this SQL command I get the following result:
SELECT [name], recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
RESULT:
[name] recovery_model_desc log_reuse_wait_desc
VTNET SIMPLE ACTIVE_TRANSACTION
Here are my questions and issues:
I get it: I have a transaction statement somewhere that needs a rollback command
(IF @@TRANCOUNT > 0 ROLLBACK), but I have 100 stored procedures, so before I go and change all of those....
IN THE MEANTIME, how can I eradicate this issue? I have tried SHRINKING and I have tried backing up the db...
As you can see, it is in SIMPLE mode. I have no idea how to back up a LOG-only file (I have not found out how to do that)...

You might be able to get around this issue simply by getting SQL not to process the entire table, using an index so that only the dates to be deleted are touched. Rephrase the query to be index-friendly:
DELETE FROM TickersDay
WHERE TickersDay.[date] <= DATEADD(day, -8, GETDATE())
GO
If you run this frequently enough (at least daily), then via an index on TickersDay([date]) it only has to process 1/9th of the table or less, instead of scanning the entire table, which is what happens when DATEDIFF is applied to the column (that makes the predicate non-sargable).
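If that index doesn't exist yet, it could be created along these lines (the index name is just illustrative):
CREATE INDEX IX_TickersDay_Date ON TickersDay ([date]);
GO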
If that still causes this:
The transaction log for database
'VTNET' is full
You really need to increase the log size, because I suspect it is not set to autogrow and is not big enough for this operation. Either that, or start looking at batching the deletes (again assuming you have an index on [date], so each batch efficiently targets just the next 100 rows), e.g.
DELETE TOP (100) FROM TickersDay
WHERE TickersDay.[date] <= DATEADD(day, -8, GETDATE())
GO
You can either loop it (WHILE @@ROWCOUNT > 0) or just schedule it more frequently as a trickling background delete.
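A minimal sketch of the looped form (same table and filter as above; the first delete primes @@ROWCOUNT):
DELETE TOP (100) FROM TickersDay
WHERE TickersDay.[date] <= DATEADD(day, -8, GETDATE())

WHILE @@ROWCOUNT > 0
BEGIN
    DELETE TOP (100) FROM TickersDay
    WHERE TickersDay.[date] <= DATEADD(day, -8, GETDATE())
END
GO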

Related

SQL Server Update Permissions

I'm currently working with SQL Server 2008 R2, and I have only READ access to a few tables that house production data.
I'm finding that in many cases, it would be extremely nice if I could run something like the following and get back the total record count that was affected:
USE DB
GO
BEGIN TRANSACTION
UPDATE Person
SET pType = 'retailer'
WHERE pTrackId = 20
AND pWebId LIKE 'rtlr%';
ROLLBACK TRANSACTION
However, seeing as I don't have the UPDATE permission, I cannot successfully run this script without getting:
Msg 229, Level 14, State 5, Line 5
The UPDATE permission was denied on the object 'Person', database 'DB', schema 'dbo'.
My questions:
Is there any way that my account in SQL Server can be configured so that if I want to run an UPDATE script, it would automatically be wrapped in a transaction with a rollback (so no data is actually affected)?
I know I could make a copy of that data and run my script against a local SQL Server instance, but I'm wondering if there is a permissions-based way of accomplishing this.
I don't think there is a way to bypass SQL Server permissions, and I don't think it's a good idea to develop against a production database anyway. It would be much better to have a development version of the database you work with.
If the number of affected rows is all you need, then you can run a SELECT instead of the UPDATE.
For example:
select count(*)
from Person
where pTrackId = 20
AND pWebId LIKE 'rtlr%';
If you are only after the number of rows that would be affected by this update, that is the same number of rows that currently match the WHERE clause.
So you can just run a SELECT statement as such:
SELECT COUNT(*) -- count all matching rows (COUNT(column) would skip rows where the column is NULL)
FROM Person
WHERE pTrackId = 20
AND pWebId LIKE 'rtlr%';
And you'd get the resulting potential rows affected.
1. First, log in to SQL Server as an admin.
2. Go to Logins > your login and check the roles.
3. If you have write access, you can accomplish the task above.
4. If not, make sure write access is granted to your login.
If it's strictly necessary to try the update, you could write a stored procedure accepting dynamic SQL as a string (your UPDATE query) and wrapping the dynamic SQL in a transaction context which is then rolled back. Your account could then be granted EXECUTE permission on that stored procedure.
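A minimal sketch of that idea, with hypothetical procedure and parameter names (and note the caveats below):
-- Runs caller-supplied SQL inside a transaction that is always rolled back,
-- then reports how many rows the statement would have touched.
CREATE PROCEDURE dbo.DryRunUpdate
    @sql NVARCHAR(MAX)
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @rows INT;
    -- Append a rowcount capture so it runs in the same batch as the UPDATE.
    DECLARE @wrapped NVARCHAR(MAX) = @sql + N'; SET @rows = @@ROWCOUNT;';
    BEGIN TRANSACTION;
    EXEC sp_executesql @wrapped, N'@rows INT OUTPUT', @rows = @rows OUTPUT;
    ROLLBACK TRANSACTION;  -- always undo the changes
    SELECT @rows AS RowsAffected;
END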
Personally, I think that's a terrible idea, and incredibly unsafe - some queries break out of such transaction contexts (e.g. ALTER TABLE). You may be able to block those somehow, but it would still be a security/auditing problem.
I recommend writing a query to count the relevant rows:
SELECT COUNT(*)
FROM --tables
WHERE --your where clause
-- any other clauses here e.g. GROUP BY, HAVING ...

The transaction log for the database is full

I have a long running process that holds open a transaction for the full duration.
I have no control over the way this is executed.
Because a transaction is held open for the full duration, when the transaction log fills, SQL Server cannot increase the size of the log file.
So the process fails with the error "The transaction log for database 'xxx' is full".
I have attempted to prevent this by increasing the size of the transaction log file in the database properties, but I get the same error.
Not sure what I should try next. The process runs for several hours so it's not easy to play trial and error.
Any ideas?
If anyone is interested, the process is an organisation import in Microsoft Dynamics CRM 4.0.
There is plenty of disk space, we have the log in simple logging mode and have backed up the log prior to kicking off the process.
-=-=-=-=- UPDATE -=-=-=-=-
Thanks all for the comments so far. The following is what led me to believe that the log would not grow due to the open transaction:
I am getting the following error...
Import Organization (Name=xxx, Id=560d04e7-98ed-e211-9759-0050569d6d39) failed with Exception:
System.Data.SqlClient.SqlException: The transaction log for database 'xxx' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases
So following that advice I went to "log_reuse_wait_desc column in sys.databases" and it held the value "ACTIVE_TRANSACTION".
According to Microsoft:
http://msdn.microsoft.com/en-us/library/ms345414(v=sql.105).aspx
That means the following:
A transaction is active (all recovery models).
• A long-running transaction might exist at the start of the log backup. In this case, freeing the space might require another log backup. For more information, see "Long-Running Active Transactions," later in this topic.
• A transaction is deferred (SQL Server 2005 Enterprise Edition and later versions only). A deferred transaction is effectively an active transaction whose rollback is blocked because of some unavailable resource. For information about the causes of deferred transactions and how to move them out of the deferred state, see Deferred Transactions.
Have I misunderstood something?
-=-=-=- UPDATE 2 -=-=-=-
Just kicked off the process with initial log file size set to 30GB. This will take a couple of hours to complete.
-=-=-=- Final UPDATE -=-=-=-
The issue was actually caused by the log file consuming all available disk space. In the last attempt I freed up 120GB and it still used all of it and ultimately failed.
I didn't realise this was happening previously because when the process was running overnight, it was rolling back on failure. This time I was able to check the log file size before the rollback.
Thanks all for your input.
To fix this problem, change the recovery model to Simple, then shrink the log file:
1. Database Properties > Options > Recovery Model > Simple
2. Database Tasks > Shrink > Files > Log
Done.
Then check your db log file size at:
Database Properties > Files > Database Files > Path
To check the full SQL Server log, open the Log File Viewer at:
SSMS > Database > Management > SQL Server Logs > Current
I had this error once and it ended up being the server's hard drive that had run out of disk space.
Do you have Enable Autogrowth and Unrestricted File Growth both enabled for the log file? You can edit these via SSMS in "Database Properties > Files".
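The same settings can also be applied in T-SQL; a sketch, assuming the log file's logical name is YourDB_log:
ALTER DATABASE YourDB
MODIFY FILE (NAME = YourDB_log, FILEGROWTH = 512MB, MAXSIZE = UNLIMITED);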
Is this a one-time script, or a regularly occurring job?
In the past, for special projects that temporarily required lots of space for the log file, I created a second log file and made it huge. Once the project was complete, we removed the extra log file.
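A sketch of that approach, with illustrative names and paths:
-- Add a large temporary second log file.
ALTER DATABASE YourDB
ADD LOG FILE (NAME = YourDB_TempLog, FILENAME = 'E:\SQLLogs\YourDB_TempLog.ldf', SIZE = 50GB);
-- ... run the long process, then remove the extra file
-- (this only succeeds once the file holds no active log):
ALTER DATABASE YourDB REMOVE FILE YourDB_TempLog;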
This is an old-school approach, but if you're performing an iterative update or insert operation in SQL, something that runs for a long time, it's a good idea to periodically (programmatically) call CHECKPOINT. Calling CHECKPOINT causes SQL Server to write all of those memory-only changes (dirty pages, they're called) to disk; under the SIMPLE recovery model it also lets the inactive portion of the transaction log be reused. This has the effect of cleaning out your transaction log periodically, thus preventing problems like the one described.
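A minimal sketch of the pattern, assuming SIMPLE recovery and hypothetical table and column names:
WHILE 1 = 1
BEGIN
    DELETE TOP (10000) FROM dbo.BigTable
    WHERE CreatedAt < DATEADD(YEAR, -1, GETDATE());

    IF @@ROWCOUNT = 0 BREAK;  -- nothing left to delete

    CHECKPOINT;  -- flush dirty pages so the inactive log can be reused
END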
Try this:
USE YourDB;
GO
-- Truncate the log by changing the database recovery model to SIMPLE.
ALTER DATABASE YourDB
SET RECOVERY SIMPLE;
GO
-- Shrink the truncated log file to 50 MB.
DBCC SHRINKFILE (YourDB_log, 50);
GO
-- Reset the database recovery model.
ALTER DATABASE YourDB
SET RECOVERY FULL;
GO
Note that switching to SIMPLE breaks the log backup chain, so take a full database backup after setting the recovery model back to FULL. I hope it helps.
The following will truncate the log. (Note: BACKUP LOG ... WITH TRUNCATE_ONLY works on SQL Server 2005 and earlier; the option was removed in SQL Server 2008.)
USE [yourdbname]
GO
-- TRUNCATE TRANSACTION LOG --
DBCC SHRINKFILE(yourdbname_log, 1)
BACKUP LOG yourdbname WITH TRUNCATE_ONLY
DBCC SHRINKFILE(yourdbname_log, 1)
GO
-- CHECK DATABASE HEALTH --
-- (a trivial write: if this ALTER succeeds, the log is no longer full)
ALTER FUNCTION [dbo].[checker]() RETURNS int AS BEGIN RETURN 0 END
GO
If your database's recovery model is FULL and you don't have a log backup maintenance plan, you will get this error because the transaction log becomes full due to LOG_BACKUP.
This will prevent any action on the database (e.g. shrink), and the SQL Server Database Engine will raise a 9002 error.
To overcome this, I advise you to check the article "The transaction log for database 'SharePoint_Config' is full due to LOG_BACKUP", which shows detailed steps to solve the issue.
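In that situation, a log backup along these lines frees the space for reuse (database name and path are illustrative):
BACKUP LOG YourDB TO DISK = 'E:\Backups\YourDB_log.trn';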
I met the error "The transaction log for database '...' is full due to 'ACTIVE_TRANSACTION'" while deleting old rows from tables of my database to free disk space. I realized that this error would occur if the number of rows to be deleted was bigger than 1,000,000 in my case. So instead of using one DELETE statement, I divided the delete task into DELETE TOP (1000000) statements.
For example:
instead of using this statement:
DELETE FROM Vt30 WHERE Rt < DATEADD(YEAR, -1, GETDATE())
use the following statement repeatedly:
DELETE TOP(1000000) FROM Vt30 WHERE Rt < DATEADD(YEAR, -1, GETDATE())
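To run those repeated deletes unattended, a loop along these lines is one option (the delay is illustrative; it gives log truncation a chance to catch up between batches):
WHILE 1 = 1
BEGIN
    DELETE TOP (1000000) FROM Vt30 WHERE Rt < DATEADD(YEAR, -1, GETDATE())
    IF @@ROWCOUNT = 0 BREAK
    WAITFOR DELAY '00:00:05'
END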
Adding to the answers above: if possible, you can also free up space on the server to fix this issue. If the drive is already full due to the database's growth, you can delete some unnecessary files from the server where your DB sits. At least this temporarily fixes the issue and lets you query the DB.
My problem was solved by executing limited deletes multiple times.
Before:
DELETE FROM TableName WHERE Condition
After:
DELETE TOP(1000) FROM TableName WHERE Condition
In my case the answer was not about deleting rows from a table; it was the tempdb space being taken up by an active transaction. This happens mostly when a MERGE (upsert) is being run, where we try to insert, update and delete in a single statement. The only options are to make sure the DB is set to the SIMPLE recovery model and to increase the file to the maximum space (add another filegroup). Although this has its own advantages and disadvantages, these are the only options.
The other option you have is to split the MERGE (upsert) into two operations: one that does the insert, and another that does the update and delete.
Here's my hero code. I faced this problem and used this code to fix it (this applies when replication is what is holding the log active).
USE master;
-- Check whether replication (or CDC) is what is keeping the log active.
SELECT name, log_reuse_wait, log_reuse_wait_desc, is_cdc_enabled
FROM sys.databases
WHERE name = 'XX_System';
SELECT DATABASEPROPERTYEX('XX_System', 'IsPublished');
USE XX_System;
-- Mark all pending replicated transactions as distributed, then
-- remove replication settings from the database.
EXEC sp_repldone null, null, 0, 0, 1;
EXEC sp_removedbreplication XX_System;
-- Verify: open transactions, replication counters, and log space used.
DBCC OPENTRAN;
DBCC SQLPERF(LOGSPACE);
EXEC sp_replcounters;
DBCC SQLPERF(LOGSPACE);
Solved: as per the error, the free space left on the drive was not sufficient.
To resolve it, either extend the drive space or move the MDF/LDF/log files to a drive with enough space.
Note: check the existing path with the following steps:
Database Properties > Select the Files option
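One way to relocate the log file, sketched with illustrative names and paths:
ALTER DATABASE YourDB SET OFFLINE;
-- physically move the .ldf file to the new drive, then:
ALTER DATABASE YourDB MODIFY FILE (NAME = YourDB_log, FILENAME = 'E:\NewPath\YourDB_log.ldf');
ALTER DATABASE YourDB SET ONLINE;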
Try this:
If possible, restart the MSSQLSERVER and SQLSERVERAGENT services.

Modify table creation date

Is it possible to modify the table creation date of a table? The date we see on right-clicking a table > Properties > Created date, or in sys.tables.create_date.
Even though the tables were created months ago, I want it to look like they were created today.
No more than you can change your birthday, and why would you want to?
You could just
select * into #tmp from [tablename]
drop table [tablename]
select * into [tablename] from #tmp
That would rebuild the table and preserve the structure (to a point; SELECT INTO does not carry over indexes, constraints or triggers). Alternatively, you could script a new table, copy the data, then drop and rename, as above.
In SQL Server 2000, you could do this by hacking into the system tables with the sp_configure option 'allow updates' set to 1.
This is not possible in SQL Server 2005 and up. From Books Online:
This option is still present in the sp_configure stored procedure,
although its functionality is unavailable in Microsoft SQL Server 2005
(the setting has no effect). In SQL Server 2005, direct updates to the
system tables are not supported.
In 2005 I believe you could "game the system" by using a dedicated administrator connection, but I think that was fixed shortly after RTM (or it needs a trace flag or some other undocumented setting to work). Even using DAC and with the sp_configure option set to 1, trying this on both SQL Server 2005 SP4 and SQL Server 2008 R2 SP1 yields:
Msg 259, Level 16, State 1
Ad hoc updates to system catalogs are not allowed.
Who are you trying to fool, and why?
EDIT
Now that we have more information on the why, you could create a virtual machine that is not attached to any domain, set the clock back to whatever date you want, create a database, create your objects, back up the database, copy it to the host, and restore it. The create_date for those objects should still reflect the earlier date, though the database itself might not (I haven't tested this).
You can do this by shifting the clock back on your own system, but I don't know if I'd want to mess with my clock this way after SQL Server has been installed and has been creating objects in what will become "the future" for a short period of time. VM definitely seems safer to me.

SQL Server 2000 - Is there a fast way of deleting backup history?

We have a SQL Server 2000 instance where the MSDB has grown to a huge size due to the backup history never having been deleted in several years. I would like to purge the backup history completely (I don't see why it's needed) and free up the disk space used by all this data.
I realise you can use the sp_delete_backuphistory command, but this is far too slow (nothing happens in 2+ hours) and while it's executing the transaction log file grows to fill the entire disk (several GB). SQL Server 2000 does not appear to support doing this database by database.
I need to find a way of deleting all the data which doesn't fill the disk up first. So either deleting in stages so the log doesn't grow too big, or perhaps using TRUNCATE TABLE somehow, but I'm not sure if there's a safe way to do this, and as I'm not a SQL expert, I wouldn't really know how to do it without destroying my MSDB database!
Any help would be appreciated!
I use something like the following:
declare @oldest_date datetime, @newest_date datetime
select @oldest_date = min(backup_start_date) from backupset
select @newest_date = dateadd(day, -45, getdate())
while (@oldest_date <= @newest_date)
begin
    exec sp_delete_backuphistory @oldest_date
    set @oldest_date = dateadd(day, 7, @oldest_date)
end
This will delete a week's worth of history at a time until you're caught up. The nice thing is that you can stick this in a job and run it periodically (weekly, for instance) and it'll do the right thing.
Try to reduce the number of rows you delete in one go. The first parameter to sp_delete_backuphistory is the oldest day to keep.
EXEC sp_delete_backuphistory '2000-01-01'
EXEC sp_delete_backuphistory '2001-01-01'
EXEC sp_delete_backuphistory '2002-01-01'
...
It can also help to lower the recovery model to Simple if it's currently at Full.
First take a backup.
Then create a database for each year, and restore one year's worth of data into each from the backup file.
Then clear all the log files once the whole process is done.

Generating log timestamps using stored procedures in SQL Server 2005

I have a stored procedure which has some SELECT statements and INSERT statements.
Is there any way I can log the timestamps of execution before and after the SQL statements inside the stored procedure?
If this is not something you want to leave in permanently (i.e. it's just for debugging/performance analysis purposes) then your best bet is to use SQL Profiler and monitor the SP:StmtCompleted event which will record the stats for each statement within a sproc. You can dump this data to a db table.
Edit: Running SQL Profiler:
1) In SSMS, under Tools, select SQL Server Profiler
2) Connect to your db server you want to monitor
3) In the trace properties dialog that appears, go to the Events Selection tab and tick the "Show all events" checkbox
4) The grid will then show all types of events you can monitor. Find the Stored Procedures section, and in there click the SP:StmtCompleted checkbox to define that you want to monitor that type of event.
5) The general tab allows you to save the trace to a file, or to a db table if you want to. Or, you don't have to save it to either, just display it to screen. You can always save it to a table/file later if you really need to.
6) When you're ready just click "Run"
For a lot more info on SQL Profiler, see MSDN
If this is something you want to keep (i.e. an audit table), then you'll need to INSERT records into your own audit table yourself e.g.
DECLARE @StartTime DATETIME
SET @StartTime = GETDATE()
SELECT Something FROM SomeTable WHERE....
INSERT MyAuditTable (Statement, StartTime, EndTime)
VALUES ('SELECT Something FROM SomeTable WHERE...', @StartTime, GETDATE())
Sure. Add TSQL to write to an audit table at the start and end of execution, adding TRY/CATCH error handling to make sure an early exit doesn't occur.
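A minimal sketch of that suggestion, reusing the hypothetical MyAuditTable from above (TRY/CATCH is available from SQL Server 2005 on):
CREATE PROCEDURE dbo.MyProc
AS
BEGIN
    DECLARE @StartTime DATETIME
    SET @StartTime = GETDATE()  -- SQL Server 2005 has no inline DECLARE initializers
    BEGIN TRY
        -- ... the procedure's SELECT / INSERT statements go here ...
        INSERT MyAuditTable (Statement, StartTime, EndTime)
        VALUES ('dbo.MyProc completed', @StartTime, GETDATE())
    END TRY
    BEGIN CATCH
        -- log the early exit (optionally re-raise the error afterwards)
        INSERT MyAuditTable (Statement, StartTime, EndTime)
        VALUES ('dbo.MyProc failed: ' + ERROR_MESSAGE(), @StartTime, GETDATE())
    END CATCH
END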