This situation can easily be reproduced in your test environment. Open SSMS and connect to your server. Open a New Query tab connected to the MYTEST database (I assume that MYTEST is online).
Don't do anything with this tab. Open a new tab connected to the same database, and type the following code into the new tab:
USE master
GO
ALTER DATABASE MYTEST
SET OFFLINE
Your code will be blocked, with the session from your first tab as the head blocker (see Activity Monitor).
Why is the execution blocked even though there is no task associated with the process in the first tab?
You'd need to tell SQL Server to kick every connection out:
ALTER DATABASE MYTEST
SET OFFLINE
WITH ROLLBACK IMMEDIATE
This is by design: a connection to a database holds a shared database lock, whether it is executing anything or not.
WITH <termination>::=
Specifies when to roll back incomplete transactions when the database is transitioned from one state to another. If the termination clause is omitted, the ALTER DATABASE statement waits indefinitely if there is any lock on the database. Only one termination clause can be specified, and it follows the SET clauses.
Just run sp_lock (or query whatever the new DMVs are :-) and you'll see them.
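On newer versions, a query against sys.dm_tran_locks shows the same thing (a sketch; MYTEST is the database from the example above):
-- every connection to a database holds a shared (S) DATABASE lock, even when idle
SELECT request_session_id, resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE resource_type = 'DATABASE'
  AND resource_database_id = DB_ID('MYTEST');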
I am trying to call sp_rename inside a transaction (BEGIN TRANSACTION), but it fails with this error message:
Can't run sp_rename from within a transaction., Error 17260, Procedure sp_rename, Line 78
The sp_rename code checks for any open transactions:
/*
** Running sp_rename inside a transaction would endanger the
** recoverability of the transaction/database. Disallow it.
** Do the @@trancount check before initializing any local variables,
** because "select" statement itself will start a transaction
** if chained mode is on.
*/
if @@trancount > 0
begin
/*
** 17260, "Can't run %1! from within a transaction."
*/
raiserror 17260, "sp_rename"
return (1)
end
else
begin
set chained off
end
I don't understand why these actions would be a danger....
Additionally, I need a way to call this stored procedure within a transaction and then roll back the action.
Any suggestions?
ASE isn't really designed for rolling back schema changes (as you're seeing).
If you want a means of testing your 'framework functionality' consider:
create a new test db
run your scripts against this test db
when done just drop the test db; an alternative would be to run a series of drop commands to 'undo' all the schema changes
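A minimal sketch of that flow (ASE syntax; the database name and size are placeholders):
use master
go
-- create a scratch database for the test run (size in MB)
create database testdb on default = 100
go
-- run your framework/schema scripts against testdb here
-- ...
-- throw the whole thing away when done
drop database testdb
go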
New databases are initially created as a copy of the model database, so you could go so far as to install some base components in model. But keep in mind that the model database (and its contents) is used when creating all new databases (e.g., all temp dbs when starting ASE), so don't add anything to model that you wouldn't want showing up in every new database (outside your 'framework functionality' testing).
What you're proposing doesn't sound much different than what I've seen developers regularly do when testing a new 'release':
load a copy of the prod db
apply release package against said db
rinse/repeat until release package completes successfully
key being to start with a newly loaded (or created) database
A variation on the above:
create a new test db
add base components as needed
dump/save a copy of the test db
run your tests
when you want to run your test again then load that dump/saved copy back into the test db, then run tests again
basically same thing as loading copy of prod db but in this case you load a copy of your base test db
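A sketch of the dump/load variation (ASE syntax; the path is a placeholder):
-- take a baseline dump of the prepared test db
dump database testdb to '/backups/testdb_base.dmp'
go
-- ... run tests, which modify the data ...
-- restore the baseline and bring the db back online for the next run
load database testdb from '/backups/testdb_base.dmp'
go
online database testdb
go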
In Oracle SQL Developer, I need to switch the active database connection manually. Is there a command that will connect to a different database programmatically, assuming that the login credentials are already saved? I'm trying to avoid clicking on the drop-down menu at the top right of the window which selects the active connection.
Perhaps I should rather have a single SQL file per database? I could understand that argument. But this is to prepare for migrating some tables from one database to another, so it's nice to have all of the context in one file.
On database1, run a query on table1 which is located in schema1.
-- manually switch to database1 (looking for a command to replace this step)
ALTER SESSION SET CURRENT_SCHEMA = schema1;
SELECT * FROM table1;
On database2, run a query on table2 which is located in schema2.
-- manually switch to database2
ALTER SESSION SET CURRENT_SCHEMA = schema2;
SELECT * FROM table2;
Looks like this is well documented here
Use this command:
CONN[ECT] [{<logon>| / |proxy} [AS {SYSOPER | SYSDBA | SYSASM}] [edition=value]]
You could use a database event trigger to run the ALTER SESSION after a given event, for instance after logon (a sketch; this requires the ADMINISTER DATABASE TRIGGER privilege):
CREATE OR REPLACE TRIGGER sample
AFTER LOGON ON DATABASE
BEGIN
  EXECUTE IMMEDIATE 'ALTER SESSION SET CURRENT_SCHEMA = schema2';
END;
/
SELECT * FROM table2;
I don't know of a way in which to change your selected connection in SQL Developer, but there is a programmatic method for temporarily changing the connection under which the script commands are run, as @T.S. pointed out. I want to give a few examples, which might be helpful to people (as they would have been for me).
So let's say your script has part A and part B and you want to execute them one after the other but from different connections. Then you can use this:
CONNECT username1/password1@connect_identifier1;
-- Put commands A here to be executed under this connection.
DISCONNECT; -- username1
CONNECT username2/password2@connect_identifier2;
-- Put commands B here to be executed under this connection.
DISCONNECT; -- username2
The connect_identifier part identifies the database where you want to connect. For instance, if you want to connect to a pluggable database on the local machine, you may use something like this:
CONNECT username/password@localhost/pluggable_database_name;
or if you want to connect to a remote database:
CONNECT username/password@IP:port/database_name;
You can omit the password, but then you will have to input it in a prompt each time you run that section. If you want to consult the CONNECT command in more detail, this reference document may be useful.
In order to execute the commands, select the code that you are interested in (including the relevant CONNECT commands) and use Run Script (F5), or use Run Script (F5) without selecting anything, which will execute the entire script file. SQL Developer will execute your commands, put the output into the Script Output tab and then stop the connection. Note that the output of SELECT commands might be unpleasant to read inside Script Output. This can be mitigated by running the following command first (just once):
SET sqlformat ansiconsole;
There is also Run Statement (Ctrl+Enter), but do note that it does not seem to work well with this workflow. It will execute and display each SELECT statement in a separate Query Result tab, which is easier to read, BUT the SELECT query will always be executed in the context of the active connection in SQL Developer (the one in the top right), not the connection of the preceding CONNECT statement in the code. On the other hand, INSERT commands, for instance, DO seem to be executed in the context of the CONNECT statement's connection. This (rather inconsistent) behaviour is probably not what you want, so I recommend using Run Script (F5) as described above.
The SQL Server is SQL Azure; for most purposes it behaves like SQL Server 2008.
I have a table, called TASK, that constantly has new data coming in (new tasks) and data removed (completed tasks).
For new data in, I use INSERT INTO .. SELECT ..., which most of the time takes very long, let's say dozens of minutes.
For old data out, I first use SELECT (WITH NOLOCK) to get a task, then UPDATE to let other threads know this task has started processing, then DELETE once finished.
Deadlocks sometimes happen on the SELECT, and most of the time on the UPDATE and DELETE.
This is not a time-critical task, so I can start processing the new data once all the INSERTs have finished. Is there any kind of LOCK that asks the SELECT not to pick rows before the INSERT has finished? Or any other suggestion to avoid the conflict? I can redesign the table if needed.
Since SQL Server 2005, resolving locking issues is easier.
For the conflict:
1. You can use Service Broker.
2. Use the isolation level.
Run dbcc useroptions; the last row shows that the default isolation level is read committed, at the session level.
You can change the level to READ_COMMITTED_SNAPSHOT to avoid the conflict. SQL Server does not have true row versioning like Oracle, but this method gets similar behavior:
ALTER DATABASE DBName
SET READ_COMMITTED_SNAPSHOT ON;
To turn this feature on, the database must be in single-user mode (no other open connections).
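A minimal sketch of the whole sequence (DBName as in the statement above):
ALTER DATABASE DBName SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
ALTER DATABASE DBName SET READ_COMMITTED_SNAPSHOT ON;
ALTER DATABASE DBName SET MULTI_USER;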
Then you can test it with two sessions, A and B:
A: update table1 with (xlock) set name = 'new' where id = 1
B: can still update other rows and select all the data from the table.
My English is not very good, but locking I know.
In SQL Server, by function, there are three kinds of locks:
1. Optimistic locking, controlled with a timestamp (rowversion) column.
2. Pessimistic locking, forcing locks when the data is used, via hints such as UPDLOCK, XLOCK and so on.
3. Application ('virtual') locks, via the sp_getapplock procedure.
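A minimal sketch of the third option using sp_getapplock (the lock name 'TASK_QUEUE' is just an example):
BEGIN TRAN;
DECLARE @rc int;
-- take an exclusive application lock; waits up to 5 seconds
EXEC @rc = sp_getapplock @Resource = 'TASK_QUEUE',
                         @LockMode = 'Exclusive',
                         @LockTimeout = 5000;
IF @rc >= 0
BEGIN
    -- ... work with the TASK table here ...
    EXEC sp_releaseapplock @Resource = 'TASK_QUEUE';
END
COMMIT TRAN;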
If you need a locking scheme for your system architecture, please email me: mjjjj2001@163.com
Consider using service broker if this is a processing queue.
There are a number of considerations that affect performance and locking. I surmise that the data is being updated and deleted in a separate session. Which transaction isolation level is in use for the insert session and for the delete session?
Have the insert session and all of its transactions committed and closed by the time the delete session runs? Are there multiple delete sessions running concurrently? It is very important to have an index on the columns you use to identify a task in the SELECT/UPDATE/DELETE statements, especially if you move to a higher isolation level such as REPEATABLE READ or SERIALIZABLE.
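For example, assuming tasks are located by a Status column (the column names here are assumptions, not from the question):
-- supports the task-lookup predicate used by SELECT/UPDATE/DELETE
CREATE INDEX IX_TASK_Status ON TASK (Status) INCLUDE (TaskId);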
All of these issues could be solved by moving to Service Broker if it is appropriate.
Short Version:
Does anyone know of a way --inside a SQL 2000 trigger-- of detecting which process modified the data, and exiting the trigger if a particular process is detected?
Long Version
I have a customized synchronization routine that moves data back and forth between dissimilar database schemas.
When this process grabs a modified record from database A, it needs to transform it into a record that goes into database B. The databases are radically different, but share some of the same data, such as user accounts and user activity (though even these tables are structurally different).
When data is modified in one of the pertinent tables, a trigger fires which writes the PK of that record to a "sync" table. This "sync" table is monitored by a process (a stored proc) which will grab the PK's in sequence, and copy over the related data from database A to database B, making transformations as necessary.
Both databases have triggers that fire and copy the PK to the sync table, but these triggers must ignore the sync process itself so as not to enter an "endless" loop (or a shorter one, depending on nesting limits).
In SQL 2005 and up, I use the following code in the Sync process to identify itself:
SET CONTEXT_INFO 0xHexValueOfProcName
Each trigger has the following code at the beginning, to see if the process that modified the data is the sync process itself:
IF (CONTEXT_INFO() = 0xHexValueOfProcName)
BEGIN
-- print '## Process Sync Queue detected. This trigger is exiting! ##'
return
END
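(For reference, a sketch of producing the 0xHexValueOfProcName marker from a readable name instead of hand-coding the hex; 'ProcessSyncQueue' is illustrative:)
DECLARE @marker varbinary(128);
SET @marker = CAST('ProcessSyncQueue' AS varbinary(128));
SET CONTEXT_INFO @marker;  -- same effect as SET CONTEXT_INFO 0x50726F6365...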
This system works great; it keeps chugging along and keeps the data in sync. The problem now, however, is that a SQL 2000 server wants to join the party.
Does anyone know of a way --inside a SQL 2000 trigger-- of detecting which process modified the data, and exiting the trigger if a particular process is detected?
Thanks guys!
(As per Andriy's request, I am answering my own question.)
I put this at the top of my trigger, works like a charm.
-- How to check context info in SQL 2000
IF ((select CONTEXT_INFO from master..sysprocesses where spid = @@SPID) = 0xHexValueOfProcName)
BEGIN
print 'Sync Process Detected -- Exiting!'
return
END
I need to restart a database because some processes are not working. My plan is to take it offline and back online again.
I am trying to do this in SQL Server Management Studio 2008:
use master;
go
alter database qcvalues
set single_user
with rollback immediate;
alter database qcvalues
set multi_user;
go
I am getting these errors:
Msg 5061, Level 16, State 1, Line 1
ALTER DATABASE failed because a lock could not be placed on database 'qcvalues'. Try again later.
Msg 5069, Level 16, State 1, Line 1
ALTER DATABASE statement failed.
Msg 5061, Level 16, State 1, Line 4
ALTER DATABASE failed because a lock could not be placed on database 'qcvalues'. Try again later.
Msg 5069, Level 16, State 1, Line 4
ALTER DATABASE statement failed.
What am I doing wrong?
After you get the error, run
EXEC sp_who2
Look for the database in the list. It's possible that a connection was not terminated. If you find any connections to the database, run
KILL <SPID>
where <SPID> is the SPID for the sessions that are connected to the database.
Try your script after all connections to the database are removed.
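If there are many connections, a sketch like this generates the KILL commands for you (qcvalues is the database from the question):
SELECT 'KILL ' + CAST(spid AS varchar(10))
FROM master..sysprocesses
WHERE dbid = DB_ID('qcvalues')
  AND spid <> @@SPID;  -- don't kill your own session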
Unfortunately, I don't have a reason why you're seeing the problem, but here is a link that shows that the problem has occurred elsewhere.
http://www.geakeit.co.uk/2010/12/11/sql-take-offline-fails-alter-database-failed-because-a-lock-could-not-error-5061/
I managed to reproduce this error by doing the following.
Connection 1 (leave running for a couple of minutes)
CREATE DATABASE TESTING123
GO
USE TESTING123;
SELECT NEWID() AS X INTO FOO
FROM sys.objects s1,sys.objects s2,sys.objects s3,sys.objects s4 ,sys.objects s5 ,sys.objects s6
Connections 2 and 3
set lock_timeout 5;
ALTER DATABASE TESTING123 SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
Try this if it is "in transition" ...
http://learnmysql.blogspot.com/2012/05/database-is-in-transition-try-statement.html
USE master
GO
ALTER DATABASE <db_name>
SET OFFLINE WITH ROLLBACK IMMEDIATE
...
...
ALTER DATABASE <db_name> SET ONLINE
Just to add my two cents: I put myself into the same situation while searching for the minimum privileges a db login requires to successfully run the statement:
ALTER DATABASE ... SET SINGLE_USER WITH ROLLBACK IMMEDIATE
It seems that the ALTER statement completes successfully when executed with a sysadmin login, but it requires the connection-cleanup part when executed under a login which has "only" limited permissions like:
ALTER ANY DATABASE
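(For reference, a sketch of granting that server-level permission; limited_login is a placeholder name:)
USE master;
GRANT ALTER ANY DATABASE TO [limited_login];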
P.S. I've spent hours trying to figure out why the "ALTER DATABASE.." does not work when executed under a login that has dbcreator role + ALTER ANY DATABASE privileges. Here's my MSDN thread!
I will add this here in case someone is as lucky as me.
When reviewing the sp_who2 list of processes, note the processes that run not only for the affected database but also for master. In my case, the issue that was blocking the database was related to a stored procedure that started xp_cmdshell.
Check if you have any processes in KILL/RollBack state for master database
SELECT *
FROM sys.sysprocesses
WHERE cmd = 'KILLED/ROLLBACK'
If you have the same issue, the KILL command alone will probably not help.
You can restart SQL Server, but a better way is to find the cmd.exe under the Windows processes on the SQL Server OS and kill it.
In SQL Management Studio, go to Security -> Logins and double click your Login. Choose Server Roles from the left column, and verify that sysadmin is checked.
In my case, I was logged in on an account without that privilege.
HTH!
Killing the process ID worked nicely for me.
When running the "EXEC sp_who2" command in a new query window and filtering the results for the "busy" database, killing the processes with the "KILL" command did the trick. After that everything worked again.
I know this is an old post, but I recently ran into a very similar problem. Unfortunately, I wasn't able to use any of the ALTER DATABASE commands because an exclusive lock couldn't be placed, yet I was never able to find an open connection to the db. I eventually had to forcefully delete the health state of the database to force it into a RESTORING state instead of IN RECOVERY.
In rare cases (e.g., after a heavy transaction is committed), a running CHECKPOINT system process holding a FILE lock on the database file prevents the transition to MULTI_USER mode.
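You can check for that with a query like this (a sketch; qcvalues is the database from the question):
-- look for FILE-level locks held by system sessions (e.g. CHECKPOINT)
SELECT request_session_id, resource_type, request_mode
FROM sys.dm_tran_locks
WHERE resource_type = 'FILE'
  AND resource_database_id = DB_ID('qcvalues');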
In my scenario, there was no process blocking the database according to sp_who2. However, because the database is much larger than our other databases, we discovered that pending processes were still running, which is why the database in the availability group still displayed as red/offline after we tried to 'resume data' by right-clicking the paused database.
To check if you still have processes running just execute this command:
select session_id, command, percent_complete
from sys.dm_exec_requests
where percent_complete > 0