Error in database renaming - SQL

I need to rename one of my databases, and I tried a query like this:
ALTER DATABASE Test MODIFY NAME = NewTest
But this throws an error:
Msg 5030, Level 16, State 2, Line 1
The database could not be exclusively locked to perform the operation.
Can anyone give me a suggestion?

Try something like:
USE master
GO
ALTER DATABASE Test
SET SINGLE_USER
WITH ROLLBACK IMMEDIATE
GO
ALTER DATABASE Test MODIFY NAME = NewTest
GO
ALTER DATABASE NewTest
SET MULTI_USER
GO
Be aware that this renames only the database; it does not rename the logical file names or the physical files on disk.
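If you also want the logical file names to match the new database name, something like this sketch may help (Test and Test_log are assumed logical names; check sys.database_files for yours):
-- Look up the current logical names first:
SELECT name, physical_name FROM NewTest.sys.database_files;
-- Rename the logical data and log files (logical names here are assumptions):
ALTER DATABASE NewTest MODIFY FILE (NAME = Test, NEWNAME = NewTest);
ALTER DATABASE NewTest MODIFY FILE (NAME = Test_log, NEWNAME = NewTest_log);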

There are a couple of things you need to investigate. The reason you're getting that error could be due to one or more of the following things:
The account you're using does not have permission to run the command
The DB is locked by another process/user
The database contains a file group that is read-only
You can try to force single-user mode to check number 2:
ALTER DATABASE Test SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
That will kill any concurrent connections to the DB allowing you to rule out number two.
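To see who is holding connections before forcing single-user mode, a quick sketch (sys.dm_exec_sessions exposes database_id on SQL Server 2012 and later; on older versions use sys.sysprocesses):
SELECT session_id, login_name, host_name, status
FROM sys.dm_exec_sessions
WHERE database_id = DB_ID('Test');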

You have two options:
Look for and kill all connections, as OMG Priories suggests
Open the database in single-user mode as described here:
http://technet.microsoft.com/en-us/library/ms345598(v=sql.105).aspx

Related

How to drop a database when it's currently in use?

NB. I don't want to mark the checkbox in the wizard for deletion. This question is strictly about scripting the behavior.
When I run the following script to get a fresh start, I get the error that the database Duck can't be deleted because it's currently in use.
use Master
drop database Duck
drop login WorkerLogin
drop login AdminLogin
go
Be that as it may (even though I'm the only user currently in the system and I'm running no other queries, but that's another story), I need to close all the existing connections. One way is to wait it out or restart the manager. However, I'd like to script that behavior in, so I can tell the stubborn server to drop the duck down. (Yes, "typo" intended.)
What do I need to add to the dropping statement?
Try the code below.
USE master;
ALTER DATABASE [Duck] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DROP DATABASE [Duck];
For a deeper discussion, see this answer.
You first have to kill all active connections before you can drop the database.
ALTER DATABASE YourDatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE
--do your stuff here
ALTER DATABASE YourDatabase SET MULTI_USER
http://wiki.lessthandot.com/index.php/Kill_All_Active_Connections_To_A_Database
How do you kill all current connections to a SQL Server 2005 database?
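A common pattern from those links is to generate and run a KILL statement for every session in the target database; a minimal sketch (assumes SQL Server 2012+ for database_id on sys.dm_exec_sessions):
DECLARE @kill varchar(8000) = '';
SELECT @kill = @kill + 'KILL ' + CONVERT(varchar(5), session_id) + '; '
FROM sys.dm_exec_sessions
WHERE database_id = DB_ID('Duck');
EXEC (@kill);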
If your SSMS tab is not currently on the DB to be dropped (meaning you are in the master DB), then these will help:
https://dba.stackexchange.com/questions/2387/sql-server-cannot-drop-database-dbname-because-it-is-currently-in-use-but-n
https://dba.stackexchange.com/questions/34264/how-to-force-drop-database-in-sql-server-2008

SQL deadlocking... in single-user mode now

A couple of databases produced an error this morning whilst running in single-user mode. Due to the following error, I am unable to do anything :(
Msg 1205, Level 13, State 68, Line 1
Transaction (Process ID 62) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
I receive that error when trying the following (using the Master Database as a Sys Admin):
ALTER DATABASE dbname
SET MULTI_USER;
GO
For the sake of it, I have tried restarting SQL Server, I have tried killing any processes, and I have even tried resetting single-user mode myself:
ALTER DATABASE dbname
SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
GO
The job which was running was designed to copy the database and put it in single user mode immediately to try and make it faster.
Is there any way I can remove the locks?
Had the same problem. This worked for me:
set deadlock_priority high; -- could also try 10 instead of "high" (which maps to 5)
alter database dbname set multi_user; -- can also add "with rollback immediate"
Ideas/explanations from:
http://myadventuresincoding.wordpress.com/2014/03/06...
http://www.sqlservercentral.com/blogs/pearlknows/2014/04/07/...
OK, I will answer my own question.
I had to use the following:
sp_who
which displayed details of the currently connected users and sessions. I then remembered about Activity Monitor, which shows the same sort of stuff... Anyway, that led me away from my desk to some bugger who had maintained connections to the database against my wishes...
Anyway, once I had shut the PC down (by unplugging it... it deserved it), I could then run the SQL to switch it back into MULTI_USER mode (using a system admin user):
USE Master
GO
ALTER DATABASE dbname
SET MULTI_USER;
GO
FYI for those who care, this can be used to immediately set the DB to SINGLE_USER:
ALTER DATABASE dbname
SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
GO
Further details: if you know the process ID, you can use KILL <pid>:
kill 62
Bear in mind that SSMS creates a process for your user as well; in my case, my session was being rejected because of another one.
EDIT: As per Bobby's recommendation, we can use:
sp_Who2
This can show us which process is blocked by which other process.
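The same blocking information can be read straight from the DMVs; a sketch:
SELECT session_id, blocking_session_id, wait_type, wait_time, command
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;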
Thanks a lot.
SET DEADLOCK_PRIORITY HIGH ---- could also try 10 instead of HIGH (which maps to 5)
GO
ALTER DATABASE <dbname> SET SINGLE_USER WITH ROLLBACK IMMEDIATE
GO
ALTER DATABASE <dbname> SET MULTI_USER
GO
When related system processes are in a deadlock scenario.
SPID 15 & SPID 29 - both are background SysProcesses.
This helps in such cases:
--------- START OF CMDs--------
SET DEADLOCK_PRIORITY HIGH ---- could also try 10 instead of HIGH (which maps to 5)
GO
ALTER DATABASE <dbname> SET SINGLE_USER WITH ROLLBACK IMMEDIATE
GO
ALTER DATABASE <dbname> SET MULTI_USER
GO
-------------------------- END OF CMDs ------------------
A blocking case, with a user DB RESTORE process putting the DB in SINGLE_USER.

The transaction log for the database is full

I have a long running process that holds open a transaction for the full duration.
I have no control over the way this is executed.
Because a transaction is held open for the full duration, when the transaction log fills, SQL Server cannot increase the size of the log file.
So the process fails with the error "The transaction log for database 'xxx' is full".
I have attempted to prevent this by increasing the size of the transaction log file in the database properties, but I get the same error.
Not sure what I should try next. The process runs for several hours so it's not easy to play trial and error.
Any ideas?
If anyone is interested, the process is an organisation import in Microsoft Dynamics CRM 4.0.
There is plenty of disk space, we have the log in the simple recovery model, and we backed up the log prior to kicking off the process.
-=-=-=-=- UPDATE -=-=-=-=-
Thanks all for the comments so far. The following is what led me to believe that the log would not grow due to the open transaction:
I am getting the following error...
Import Organization (Name=xxx, Id=560d04e7-98ed-e211-9759-0050569d6d39) failed with Exception:
System.Data.SqlClient.SqlException: The transaction log for database 'xxx' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases
So following that advice I went to "log_reuse_wait_desc column in sys.databases" and it held the value "ACTIVE_TRANSACTION".
According to Microsoft:
http://msdn.microsoft.com/en-us/library/ms345414(v=sql.105).aspx
That means the following:
A transaction is active (all recovery models).
• A long-running transaction might exist at the start of the log backup. In this case, freeing the space might require another log backup. For more information, see "Long-Running Active Transactions," later in this topic.
• A transaction is deferred (SQL Server 2005 Enterprise Edition and later versions only). A deferred transaction is effectively an active transaction whose rollback is blocked because of some unavailable resource. For information about the causes of deferred transactions and how to move them out of the deferred state, see Deferred Transactions.
Have I misunderstood something?
-=-=-=- UPDATE 2 -=-=-=-
Just kicked off the process with initial log file size set to 30GB. This will take a couple of hours to complete.
-=-=-=- Final UPDATE -=-=-=-
The issue was actually caused by the log file consuming all available disk space. In the last attempt I freed up 120GB and it still used all of it and ultimately failed.
I didn't realise this was happening previously because when the process was running overnight, it was rolling back on failure. This time I was able to check the log file size before the rollback.
Thanks all for your input.
To fix this problem, change the Recovery Model to Simple, then shrink the log file:
1.
Database Properties > Options > Recovery Model > Simple
2.
Database Tasks > Shrink > Files > Log
Done.
Then check your DB log file size at
Database Properties > Files > Database Files > Path
To check the full SQL Server log, open the Log File Viewer at
SSMS > Database > Management > SQL Server Logs > Current
I had this error once, and it ended up being the server's hard drive that ran out of disk space.
Do you have Enable Autogrowth and Unrestricted File Growth both enabled for the log file? You can edit these via SSMS in "Database Properties > Files"
Is this a one-time script, or a regularly occurring job?
In the past, for special projects that temporarily required lots of space for the log file, I created a second log file and made it huge. Once the project was complete, we removed the extra log file.
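A sketch of that approach (file name, path, and size are assumptions; adjust to your environment):
-- Add a temporary second log file on a drive with free space
ALTER DATABASE YourDB
ADD LOG FILE (NAME = YourDB_log2, FILENAME = 'E:\SQLLogs\YourDB_log2.ldf', SIZE = 50GB);
-- Once the project is complete, empty and remove it
DBCC SHRINKFILE (YourDB_log2, EMPTYFILE);
ALTER DATABASE YourDB REMOVE FILE YourDB_log2;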
This is an old school approach, but if you're performing an iterative update or insert operation in SQL, something that runs for a long time, it's a good idea to periodically (programmatically) call "checkpoint". Calling "checkpoint" causes SQL to write to disk all of those memory-only changes (dirty pages, they're called) and items stored in the transaction log. This has the effect of cleaning out your transaction log periodically, thus preventing problems like the one described.
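For example, a batched delete that issues a CHECKPOINT after each batch might look like this sketch (table and column names are hypothetical; most effective under the SIMPLE recovery model, where a checkpoint lets log space be reused):
WHILE 1 = 1
BEGIN
    -- Delete in small batches so each batch's log records can be cleared
    DELETE TOP (10000) FROM dbo.BigTable WHERE CreatedAt < '20120101';
    IF @@ROWCOUNT = 0 BREAK;
    CHECKPOINT; -- flush dirty pages and allow log truncation (SIMPLE recovery)
END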
Try this:
USE YourDB;
GO
-- Truncate the log by changing the database recovery model to SIMPLE.
ALTER DATABASE YourDB
SET RECOVERY SIMPLE;
GO
-- Shrink the truncated log file to 50 MB.
DBCC SHRINKFILE (YourDB_log, 50);
GO
-- Reset the database recovery model.
ALTER DATABASE YourDB
SET RECOVERY FULL;
GO
I hope it helps.
The following will truncate the log. (Note: BACKUP LOG ... WITH TRUNCATE_ONLY was removed in SQL Server 2008; on newer versions, switch the database to the SIMPLE recovery model instead, as in the previous answer.)
USE [yourdbname]
GO
-- TRUNCATE TRANSACTION LOG --
DBCC SHRINKFILE(yourdbname_log, 1)
BACKUP LOG yourdbname WITH TRUNCATE_ONLY
DBCC SHRINKFILE(yourdbname_log, 1)
GO
-- RUN A TRIVIAL DDL STATEMENT TO VERIFY THE DATABASE ACCEPTS CHANGES --
ALTER FUNCTION [dbo].[checker]() RETURNS int AS BEGIN RETURN 0 END
GO
If your database's recovery model is FULL and you don't have a log backup maintenance plan, you will get this error because the transaction log becomes full due to LOG_BACKUP.
This will prevent any action on this database (e.g. shrink), and the SQL Server Database Engine will raise a 9002 error.
To overcome this behavior, I advise you to check The transaction log for database 'SharePoint_Config' is full due to LOG_BACKUP, which shows detailed steps to solve the issue.
I met the error "The transaction log for database '...' is full due to 'ACTIVE_TRANSACTION'" while deleting old rows from tables of my database to free disk space. I realized that this error would occur if the number of rows to be deleted was bigger than 1000000 (in my case). So instead of using one DELETE statement, I divided the delete task by using DELETE TOP (1000000) statements.
For example:
instead of using this statement:
DELETE FROM Vt30 WHERE Rt < DATEADD(YEAR, -1, GETDATE())
use the following statement repeatedly:
DELETE TOP(1000000) FROM Vt30 WHERE Rt < DATEADD(YEAR, -1, GETDATE())
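"Repeatedly" can be automated with a simple loop; a sketch using the same statement:
WHILE 1 = 1
BEGIN
    DELETE TOP (1000000) FROM Vt30 WHERE Rt < DATEADD(YEAR, -1, GETDATE());
    IF @@ROWCOUNT = 0 BREAK; -- stop once nothing is left to delete
END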
Adding to the answers above, I also want to mention that, if possible, you can also free up space on the server to fix this issue. If the drive is already full due to the database overflow, you can delete some unnecessary files from the server where your DB resides. At least this temporarily fixes the issue and lets you query the DB.
My problem was solved with multiple executions of limited deletes, like this:
Before
DELETE FROM TableName WHERE Condition
After
DELETE TOP(1000) FROM TableName WHERE Condition
The answer to the question is not about deleting rows from a table; it is the tempdb space being taken up by an active transaction. This happens mostly when a MERGE (upsert) is being run, where we try to insert, update, and delete in one transaction. The only option is to make sure the DB is set to the simple recovery model and to increase the file to the maximum space (or add another filegroup). Although this has its own advantages and disadvantages, these are the only options.
The other option you have is to split the MERGE (upsert) into two operations: one that does the insert, and another that does the update and delete.
Here's my hero code. I faced this problem and used this code to fix it.
USE master;
-- Check why log space cannot be reused and whether the database is published for replication
SELECT
name, log_reuse_wait, log_reuse_wait_desc, is_cdc_enabled
FROM
sys.databases
WHERE
name = 'XX_System';
SELECT DATABASEPROPERTYEX('XX_System', 'IsPublished');
USE XX_System;
-- Mark all pending replicated transactions as distributed, then remove replication
EXEC sp_repldone null, null, 0,0,1;
EXEC sp_removedbreplication XX_System;
DBCC OPENTRAN;          -- show the oldest active transaction, if any
DBCC SQLPERF(LOGSPACE); -- log space usage before ...
EXEC sp_replcounters;
DBCC SQLPERF(LOGSPACE); -- ... and after
Solved: as per the error, the free space left on the drive is not sufficient.
To resolve it, either extend the drive space or move the MDF/LDF/log files to a drive with enough space.
Note: check the existing path using the steps below:
Database Properties -> Select the File option
Try this:
If possible, restart the MSSQLSERVER and SQLSERVERAGENT services.

Script for creating database if it doesn't exist yet in SQL Azure

This question may be related to Checking if database exists or not in SQL Azure.
In SQL Azure, I tried to use a script like this to check the existence of a database, and create the database if it doesn't exist yet (in both SQLCmd and SSMS):
IF db_id('databasename') IS NULL CREATE DATABASE databasename
GO
However, SQL Azure keeps telling me
Msg 40530, Level 16, State 1, Line 2
The CREATE DATABASE statement must be the only statement in the batch.
The same script did work on a local SQL Express instance, though.
Does this mean it is not supported on SQL Azure?
Or is there any work around?
Thanks in advance.
Edit:
Let me clarify what I want to achieve:
I want a script which will create a certain database only if it doesn't exist before.
Is it possible to have such kind of script for SQL Azure?
We have a similar problem.
It looks like we can do something with SET NOEXEC ON, as in the following Stack Exchange answer.
IF (<condition>)
SET NOEXEC ON
ELSE
SET NOEXEC OFF
GO
CREATE DATABASE databasename
GO
SET NOEXEC OFF
GO
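Applied to this question, the condition is whether the database already exists, so the CREATE only executes when db_id returns NULL; a sketch:
IF db_id('databasename') IS NOT NULL
SET NOEXEC ON -- database exists: skip the batches below
ELSE
SET NOEXEC OFF
GO
CREATE DATABASE databasename
GO
SET NOEXEC OFF -- re-enable execution for anything that follows
GO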
It's saying you can't do the check and the create in the same batch of SQL.
I.e., you need to run SELECT db_id('databasename'),
test whether it returns NULL, and if so then execute the CREATE DATABASE in its own batch.

ALTER DATABASE failed because a lock could not be placed on database

I need to restart a database because some processes are not working. My plan is to take it offline and back online again.
I am trying to do this in SQL Server Management Studio 2008:
use master;
go
alter database qcvalues
set single_user
with rollback immediate;
alter database qcvalues
set multi_user;
go
I am getting these errors:
Msg 5061, Level 16, State 1, Line 1
ALTER DATABASE failed because a lock could not be placed on database 'qcvalues'. Try again later.
Msg 5069, Level 16, State 1, Line 1
ALTER DATABASE statement failed.
Msg 5061, Level 16, State 1, Line 4
ALTER DATABASE failed because a lock could not be placed on database 'qcvalues'. Try again later.
Msg 5069, Level 16, State 1, Line 4
ALTER DATABASE statement failed.
What am I doing wrong?
After you get the error, run
EXEC sp_who2
Look for the database in the list. It's possible that a connection was not terminated. If you find any connections to the database, run
KILL <SPID>
where <SPID> is the SPID for the sessions that are connected to the database.
Try your script after all connections to the database are removed.
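If you prefer to pull just the SPIDs for this database rather than scanning the whole sp_who2 output, a sketch:
SELECT spid, loginame, hostname
FROM sys.sysprocesses
WHERE dbid = DB_ID('qcvalues');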
Unfortunately, I don't have a reason why you're seeing the problem, but here is a link that shows that the problem has occurred elsewhere.
http://www.geakeit.co.uk/2010/12/11/sql-take-offline-fails-alter-database-failed-because-a-lock-could-not-error-5061/
I managed to reproduce this error by doing the following.
Connection 1 (leave running for a couple of minutes)
CREATE DATABASE TESTING123
GO
USE TESTING123;
SELECT NEWID() AS X INTO FOO
FROM sys.objects s1,sys.objects s2,sys.objects s3,sys.objects s4 ,sys.objects s5 ,sys.objects s6
Connections 2 and 3
set lock_timeout 5;
ALTER DATABASE TESTING123 SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
Try this if it is "in transition" ...
http://learnmysql.blogspot.com/2012/05/database-is-in-transition-try-statement.html
USE master
GO
ALTER DATABASE <db_name>
SET OFFLINE WITH ROLLBACK IMMEDIATE
...
...
ALTER DATABASE <db_name> SET ONLINE
Just to add my two cents: I've put myself into the same situation while searching for the minimum required privileges of a DB login to run the following statement successfully:
ALTER DATABASE ... SET SINGLE_USER WITH ROLLBACK IMMEDIATE
It seems that the ALTER statement completes successfully when executed with a sysadmin login, but it requires the connection-cleanup part when executed under a login which has "only" limited permissions like:
ALTER ANY DATABASE
P.S. I've spent hours trying to figure out why "ALTER DATABASE ..." does not work when executed under a login that has the dbcreator role + ALTER ANY DATABASE privileges. Here's my MSDN thread!
I will add this here in case someone is as lucky as me.
When reviewing the sp_who2 list of processes, note the processes that run not only against the affected database but also against master. In my case, the issue that was blocking the database was related to a stored procedure that started xp_cmdshell.
Check whether you have any processes in the KILLED/ROLLBACK state for the master database:
SELECT *
FROM sys.sysprocesses
WHERE cmd = 'KILLED/ROLLBACK'
If you have the same issue, the KILL command alone will probably not help.
You can restart the SQL Server, or a better way is to find the cmd.exe under the Windows processes on the SQL Server OS and kill it.
In SQL Management Studio, go to Security -> Logins and double click your Login. Choose Server Roles from the left column, and verify that sysadmin is checked.
In my case, I was logged in on an account without that privilege.
HTH!
Killing the process ID worked nicely for me.
When running the "EXEC sp_who2" command in a new query window and filtering the results for the "busy" database, killing the processes with the "KILL <SPID>" command managed to do the trick. After that, everything worked again.
I know this is an old post, but I recently ran into a very similar problem. Unfortunately, I wasn't able to use any of the ALTER DATABASE commands because an exclusive lock couldn't be placed, and I was never able to find an open connection to the DB. I eventually had to forcefully delete the health state of the database to force it into a restoring state instead of in recovery.
In rare cases (e.g., after a heavy transaction is committed), a running CHECKPOINT system process holding a FILE lock on the database file prevents the transition to MULTI_USER mode.
In my scenario, there was no process blocking the database under sp_who2. However, because the database is much larger than our other databases, we discovered that pending processes were still running, which is why the database in the availability group still displayed as red/offline after we tried to 'resume data' by right-clicking the paused database.
To check if you still have processes running, just execute this command:
select percent_complete from sys.dm_exec_requests
where percent_complete > 0
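A slightly richer version of that check also shows what each request is doing and roughly how long it has left (estimated_completion_time is reported in milliseconds); a sketch:
SELECT session_id, command, percent_complete,
       estimated_completion_time / 60000.0 AS est_minutes_remaining
FROM sys.dm_exec_requests
WHERE percent_complete > 0;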