Unable to open table in SQL Server

When I try to open a table in SQL Server Management Studio I receive the message:
Timeout Expired
Then when I try to rename the table I get:
Rename Failed: Lock Request Time Out Expired
Note that the table contains just one row, with a primary key field, a datetime field, and a varbinary(max) field. The other tables are working just fine, and this only occurs once in a while.
So what should I do to resolve this?

What do you mean by "open"? The design window or a query?
Check locks on your server first:
exec sp_lock
To set database singleuser you can do this:
USE master;
GO
ALTER DATABASE db SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
GO
USE db;
GO
--DO YOUR WORK HERE
GO
ALTER DATABASE db SET MULTI_USER;
Some helpful stored procedures and dynamic management views (DMVs):
-- resolve a database name and an object name from their IDs
select DB_NAME(1)
select OBJECT_NAME(1131151075, 1)
-- list all current locks
exec sp_lock
-- locks for session 51
exec sp_lock 51
-- templates: plug in the dbid and ObjId values reported by sp_lock
select DB_NAME(dbid)
select OBJECT_NAME(ObjId, dbid)
select * from sys.dm_tran_locks
select * from sys.dm_exec_connections
select * from sys.dm_exec_sessions
select * from sys.dm_tran_active_transactions
select * from sys.dm_tran_database_transactions
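For example, to see which objects are locked in the current database, a minimal sketch joining the lock DMV with OBJECT_NAME (the OBJECT filter is illustrative; adjust to your case):
-- object-level locks held or requested in the current database
select request_session_id,
       OBJECT_NAME(resource_associated_entity_id) as locked_object,
       request_mode,
       request_status
from sys.dm_tran_locks
where resource_type = 'OBJECT'
  and resource_database_id = DB_ID()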
You can also watch Activity Monitor and kill the session that creates the lock.
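For example, if sp_lock or Activity Monitor shows that session 51 is the one holding the blocking lock:
-- terminate the blocking session (51 is just the example session ID from above)
kill 51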
There is more information about that:
http://msdn.microsoft.com/en-us/library/ms173730.aspx

Related

Drop Database Error For Azure Synapse Database

When I try to drop a database in my built-in Synapse pool, I'm getting the following error:
Cannot drop database "database name" because it is currently in use.
I've tried in both SSMS and Synapse Studio and both returned errors.
I made sure there are no external data sources or file formats in the database.
The SSMS command I used was:
DROP DATABASE [database name]
Setting SINGLE_USER mode doesn't work either. If you try this:
ALTER DATABASE [Database Name] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
You'll get this:
SINGLE_USER is not supported for ALTER DATABASE.
What blocks a database from being dropped in Synapse?
Thanks.
What worked for me is:
Run this in master
select DB_NAME(database_id), 'kill '+cast(session_id as varchar(10)), *
from sys.dm_exec_sessions
where DB_NAME(database_id) NOT IN ('master')
order by 1
Kill all sessions (active as well as sleeping) for the database you want to delete, using the KILL commands returned by the query above. Run them in master:
kill 82
Drop the database from Synapse Studio, not from SSMS
Were you connected to MASTER?
Assuming there were no users connected, that is the only reason I can think of that it would be blocked.
Make sure to kill all the sessions manually.
One of the reasons why you could get this error is that there is an active connection via SSMS/ADS/Synapse Studio/Power BI or some other tool which is using that database at the moment.
When you close all the sessions, you should be able to delete the database successfully.
Here is the procedure for SQL serverless pool (aka SQL on-demand).
Step 1: Find the session which you want to kill using the query below:
SELECT
    'Running' as [Status],
    Transaction_id as [Request ID],
    'SQL serverless' as [SQL Resource],
    s.login_name as [Submitter],
    s.Session_Id as [Session ID],
    req.start_time as [Submit time],
    req.command as [Request Type],
    SUBSTRING(
        sqltext.text,
        (req.statement_start_offset/2) + 1,
        ((CASE req.statement_end_offset
              WHEN -1 THEN DATALENGTH(sqltext.text)
              ELSE req.statement_end_offset
          END - req.statement_start_offset)/2) + 1
    ) as [Query Text],
    req.total_elapsed_time as [Duration]
FROM
    sys.dm_exec_requests req
    CROSS APPLY sys.dm_exec_sql_text(sql_handle) sqltext
    JOIN sys.dm_exec_sessions s ON req.session_id = s.session_id
Step 2: Use the value in the Session ID column to kill the process you want. For example, if the Session ID is 81, execute the following command:
kill 81
Here is the procedure for SQL dedicated pool (aka SQL DW).
Step 1: Find the session which you want to kill using the query below:
SELECT * FROM sys.dm_pdw_exec_sessions
WHERE [status] = 'Active' AND NOT sql_spid = @@SPID
GO
Step 2: Use the value in the [session_id] column to kill the process you want. For example, if the session_id is 'SID210', execute the following command:
kill 'SID210'
GO

How do I update triggers across multiple databases?

I have a query that selects, from sys.databases, the databases with the triggers that I wish to update. From there I can create a cursor. However, when inside the cursor I try to update my triggers using a dynamic database name @DatabaseExecuteName that is set to MyDatabaseName.dbo, I receive the error "'CREATE/ALTER TRIGGER' does not allow specifying the database name as a prefix to the object name." Because I am in a cursor, I am not able to execute USE MyDatabaseName ... GO; the GO statement is not allowed inside the cursor. I have tried SQLCMD mode with :setvar DatabaseName "MyDatabaseName" and USE [$(DatabaseName)]; to try to set the database. I feel I am very close, but my strength is not SQL queries. I could use some assistance on what I am missing.
You can nest EXEC calls so that you can issue a USE and then execute a further statement, and you don't need GO to separate the batches. This is a complete script to demonstrate the technique:
create database DB1
go
create database DB2
go
use DB2
go
create table T1 (ID int not null)
go
create table T2 (ID int not null)
go
use DB1
go
exec('use DB2; exec(''create trigger T_T on T1 after insert as
insert into T2(ID) select i.ID from inserted i'')');
select DB_NAME()
insert into DB2..T1(ID) values (1),(2);
select * from DB2..T2
Which then shows that this connection is still in the DB1 database, but the trigger was successfully created on the T1 table within the DB2 database.
What you have to watch for is getting your quote-escaping correct.
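Each nesting level doubles the single quotes. A quick illustration (the SELECT literal is arbitrary):
-- one level deep: each embedded quote is doubled
exec('select ''hello''');
-- two levels deep: the doubled quotes are doubled again
exec('exec(''select ''''hello'''''')');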

SQL Server: Does 'DROP TABLE' inside a transaction cause an implicit commit?

My question is kind of easy, but I'm still doubting after creating this transaction. If I execute the following code:
BEGIN TRANSACTION
DROP TABLE Table_Name
Can I perform a ROLLBACK TRANSACTION that recovers the dropped table? I'm asking because I don't know what happens in the Object Explorer, and I didn't find any question on this topic, so I think it could be a useful one.
DROP TABLE can be rolled back and it does not auto-commit.
This is incredibly easy to test.
create table TransactionTest
(
ID int identity primary key clustered,
SomeValue varchar(20)
)
insert TransactionTest
select 'Here is a row'
begin transaction
drop table TransactionTest
rollback transaction
select * from TransactionTest
I just want to add that I tried this in Oracle 11g, MySQL 5.7, and SQL Server 2016. It only rolled back (worked) with SQL Server. I would expect that most other RDBMSs won't support it, since it executes schema changes.
Oracle PL/SQL example (DDL in Oracle performs an implicit commit, so the ROLLBACK TO fails and the table stays dropped):
savepoint mysave;
DROP TABLE test_table;
ROLLBACK TO mysave;
select * from test_table;

How to force a running T-SQL query (half done) to commit?

I have a database on SQL Server 2008 R2.
On that database, a DELETE query on 400 million records has been running for 4 days, but I need to reboot the machine. How can I force it to commit whatever has been deleted so far? I want the data deleted by the running query so far to stay deleted.
The problem is that the query is still running and will not complete before the server reboot.
Note: I have not set any isolation level or BEGIN/END TRANSACTION for the query. The query is running in SSMS.
If the machine reboots or I cancel the query, the database will go into recovery mode and keep recovering for the next 2 days; then I'd need to re-run the delete, which would cost me another 4 days.
I really appreciate any suggestion, help, or guidance on this.
I am a novice user of SQL Server.
Thanks in advance.
Regards
There is no way to stop SQL Server from trying to bring the database into a transactionally consistent state. Every single statement is implicitly a transaction itself (if not part of an outer transaction) and executes either completely or not at all. So if you cancel the query, disconnect, or reboot the server, SQL Server will use the transaction log to write the original values back to the updated data pages.
Next time you delete so many rows, don't do it at once. Divide the job into smaller chunks (I always use 5,000 as a magic number, meaning I delete 5,000 rows at a time in a loop) to minimize transaction log use and locking.
set rowcount 5000
delete TableName
while @@rowcount = 5000
    delete TableName
set rowcount 0
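On SQL Server 2005 and later you can get the same chunking with DELETE TOP instead of SET ROWCOUNT (which is deprecated for DML); a minimal sketch, assuming a placeholder table dbo.BigTable:
-- delete in 5000-row chunks until nothing is left
while 1 = 1
begin
    delete top (5000) from dbo.BigTable;
    if @@rowcount < 5000 break;
end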
If you are deleting that many rows, you may have a better time with TRUNCATE. TRUNCATE TABLE removes all rows from the table very efficiently. However, I'm assuming that you would like to keep some of the records. The stored procedure below backs up the data you would like to keep into a temp table, truncates the original, then re-inserts the saved records. This can clean a huge table very quickly.
Note that TRUNCATE doesn't play well with foreign key constraints, so you may need to drop those and recreate them after the clean.
CREATE PROCEDURE [dbo].[deleteTableFast] (
    @TableName VARCHAR(100),
    @WhereClause VARCHAR(1000))
AS
BEGIN
    -- input:
    --   table name: the table to clean
    --   where clause: the WHERE clause of the records to KEEP
    declare @tempTableName varchar(100);
    set @tempTableName = @TableName + '_temp_to_truncate';
    -- error checking
    if exists (SELECT [Table_Name] FROM Information_Schema.COLUMNS WHERE [TABLE_NAME] = @tempTableName) begin
        print 'ERROR: temp table already exists ... exiting'
        return
    end
    if not exists (SELECT [Table_Name] FROM Information_Schema.COLUMNS WHERE [TABLE_NAME] = @TableName) begin
        print 'ERROR: table does not exist ... exiting'
        return
    end
    -- save wanted records via a temp table to be able to truncate
    exec ('select * into ' + @tempTableName + ' from ' + @TableName + ' WHERE ' + @WhereClause);
    exec ('truncate table ' + @TableName);
    exec ('insert into ' + @TableName + ' select * from ' + @tempTableName);
    exec ('drop table ' + @tempTableName);
end
GO
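Hypothetical usage, keeping only rows from 2020 onward (the table and column names are made up):
exec dbo.deleteTableFast 'MyBigTable', 'CreatedDate >= ''2020-01-01''';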
You need to understand the D (Durability) in ACID before you can understand why the database goes into recovery mode.
Generally speaking, you should avoid long-running SQL if possible. Long-running SQL means more lock time on resources, a larger transaction log, and a huge rollback time when it fails.
Consider dividing your task by some ID or time range. For example, if you want to insert a large volume of data from TableSrc into TableTarget, you can write a query like:
DECLARE @BatchCount INT = 1000;
DECLARE @Id INT = 0;
DECLARE @Max INT = ...;  -- the maximum key value
WHILE @Id < @Max
BEGIN
    INSERT INTO TableTarget
    SELECT * FROM TableSrc
    WHERE PrimaryKey >= @Id AND PrimaryKey < @Id + @BatchCount;
    SET @Id = @Id + @BatchCount;
END
It's ugly, more code, and more error-prone, but it's the only way I know to deal with huge data volumes.

SQL Server 2005 triggered audit tables moved to SQL Server 2008, now trigger does not respond when trying to insert row into audit table

We began with a SQL Server 2005 database and tables (with UPDATE, INSERT, and DELETE triggers); in this case we were using the UPDATE trigger(s) to insert rows into audit table(s) when an application (VB6) data table is modified. We moved the audit tables to SQL Server 2008. The only change to the trigger statement(s) (on the SQL Server 2005 side) was modifying the original server name ([FHA-4]) to the new SQL Server 2008 server name ([FHA-DMZ-CL1SQL]).
When the trigger is activated, the hourglass stays on until a SQL timeout message appears and the application aborts. When checking the audit tables, nothing new has been added, so the insert did not work.
Here is the actual trigger statement for the table:
USE [BCC_DHMH]
GO
/****** Object: Trigger [dbo].[TriggerAddressUpdate] Script Date: 04/07/2010 09:47:34 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
--Logic to save to the table that supports Tripwire
ALTER TRIGGER [dbo].[TriggerAddressUpdate]
ON [dbo].[tblAddress]
AFTER UPDATE
AS
SET XACT_ABORT ON
BEGIN DISTRIBUTED TRANSACTION
SET NOCOUNT ON;
--IF (SYSTEM_USER <> 'FHA\kYarberough' AND SYSTEM_USER <> 'FHA\ljlee' AND SYSTEM_USER <> 'FHA\PHarvey' AND SYSTEM_USER <> 'FHA\BShenosky' AND SYSTEM_USER <> 'FHA\BBrodie' AND SYSTEM_USER <> 'FHA\DRandolph')
Declare @UpdateID as varchar(50)
Set @UpdateID = newid()
BEGIN
INSERT [FHA-4].[ECMS_Audit].[dbo].[tblAddress_Audit]
([fldAddressOwnerID], [fldUpdateID], [fldAddressTypeCode], [fldAddressMailcode], [fldAddressSequence],
[fldAddressID], [fldName], [fldLine1], [fldLine2], [fldCity], [fldState], [fldCounty],
[fldZipcode], [fldWorkFax], [fldWorkPhone], [fldWorkExtension], [fldWorkEMail], [fldHomePhone],
[fldHomeEMail], [fldContactName], [fldContactPhone], [fldContactFax], [fldContactExtension], [fldEffectiveDate],
[fldExpirationDate], [fldUpdateTimestamp], [fldUpdateUserID], [fldRelationship], [fldNotes], [fldNCPDPNum],
[fldMedicaidNum], [fldStoreNum],
[ModifiedBySqlUser], [ModifiedByNTUser], [ModifiedDate], [Action] )
SELECT [fldAddressOwnerID], @UpdateID, [fldAddressTypeCode], [fldAddressMailcode], [fldAddressSequence],
[fldAddressID], [fldName], [fldLine1], [fldLine2], [fldCity], [fldState], [fldCounty],
[fldZipcode], [fldWorkFax], [fldWorkPhone], [fldWorkExtension], [fldWorkEMail], [fldHomePhone],
[fldHomeEMail], [fldContactName], [fldContactPhone], [fldContactFax], [fldContactExtension], [fldEffectiveDate],
[fldExpirationDate], [fldUpdateTimestamp], [fldUpdateUserID], [fldRelationship], [fldNotes], [fldNCPDPNum],
[fldMedicaidNum], [fldStoreNum],
CURRENT_USER, SYSTEM_USER, GETDATE(), 'InitialValues' FROM deleted
INSERT [FHA-4].[ECMS_Audit].[dbo].[tblAddress_Audit]
([fldAddressOwnerID], [fldUpdateID], [fldAddressTypeCode], [fldAddressMailcode], [fldAddressSequence],
[fldAddressID], [fldName], [fldLine1], [fldLine2], [fldCity], [fldState], [fldCounty],
[fldZipcode], [fldWorkFax], [fldWorkPhone], [fldWorkExtension], [fldWorkEMail], [fldHomePhone],
[fldHomeEMail], [fldContactName], [fldContactPhone], [fldContactFax], [fldContactExtension], [fldEffectiveDate],
[fldExpirationDate], [fldUpdateTimestamp], [fldUpdateUserID], [fldRelationship], [fldNotes], [fldNCPDPNum],
[fldMedicaidNum], [fldStoreNum],
[ModifiedBySqlUser], [ModifiedByNTUser], [ModifiedDate], [Action] )
SELECT [fldAddressOwnerID], @UpdateID, [fldAddressTypeCode], [fldAddressMailcode], [fldAddressSequence],
[fldAddressID], [fldName], [fldLine1], [fldLine2], [fldCity], [fldState], [fldCounty],
[fldZipcode], [fldWorkFax], [fldWorkPhone], [fldWorkExtension], [fldWorkEMail], [fldHomePhone],
[fldHomeEMail], [fldContactName], [fldContactPhone], [fldContactFax], [fldContactExtension], [fldEffectiveDate],
[fldExpirationDate], [fldUpdateTimestamp], [fldUpdateUserID], [fldRelationship], [fldNotes], [fldNCPDPNum],
[fldMedicaidNum], [fldStoreNum],
CURRENT_USER, SYSTEM_USER, GETDATE(), 'NewValues' FROM inserted
END
COMMIT TRANSACTION
SET XACT_ABORT OFF
Well, that trigger appears to still reference the old server name ([FHA-4]) to me. But if it really does have the new name... hmmm...
Since it is a distributed transaction, are you sure you have the linked server set up correctly?
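You can check connectivity to the linked server with sp_testlinkedserver (the server name below is the one from the question):
-- raises an error if the connection to the linked server fails
exec sp_testlinkedserver [FHA-DMZ-CL1SQL];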
Also, I'd prefer not to use a distributed transaction in a trigger; it could prevent users from changing records if the other server is down. It might be better to send the records to an audit table on the same server, or to a staging table with a job that moves the records to the other server.
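A minimal sketch of the same-server approach, assuming a hypothetical local staging table dbo.tblAddress_Audit_Staging (only a subset of the audit columns is shown; a scheduled job would later move these rows to the remote server):
ALTER TRIGGER [dbo].[TriggerAddressUpdate]
ON [dbo].[tblAddress]
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @UpdateID varchar(50)
    SET @UpdateID = newid()
    -- local inserts: no linked server, no distributed transaction needed
    INSERT dbo.tblAddress_Audit_Staging
        ([fldAddressID], [fldUpdateID], [ModifiedBySqlUser], [ModifiedByNTUser], [ModifiedDate], [Action])
    SELECT [fldAddressID], @UpdateID, CURRENT_USER, SYSTEM_USER, GETDATE(), 'InitialValues' FROM deleted
    INSERT dbo.tblAddress_Audit_Staging
        ([fldAddressID], [fldUpdateID], [ModifiedBySqlUser], [ModifiedByNTUser], [ModifiedDate], [Action])
    SELECT [fldAddressID], @UpdateID, CURRENT_USER, SYSTEM_USER, GETDATE(), 'NewValues' FROM inserted
END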