I have been using SQL MERGE for CRUD operations on a table that holds more than 3 million records, and because MERGE locks the table, it causes timeout errors for other SELECTs and UPDATEs in the production environment.
The site is accessed by multiple users at a time, and different sets of users can update the same table simultaneously.
Given this scenario, is SQL MERGE efficient, or should I use separate INSERT/UPDATE/DELETE statements?
Is there any way to improve MERGE so that it locks only the affected rows instead of the entire table?
This is caused by an unfortunate default setting in SQL Server.
Under the default READ COMMITTED isolation level, SQL Server blocks readers on a table while a writer holds locks on it. You can change this behavior by switching the database to row versioning, like this:
ALTER DATABASE yourdatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
ALTER DATABASE yourdatabase SET READ_COMMITTED_SNAPSHOT ON WITH NO_WAIT;
ALTER DATABASE yourdatabase SET MULTI_USER;
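To confirm the option actually took effect, you can query the sys.databases catalog view (a quick check; substitute your own database name):

SELECT name, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = 'yourdatabase';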
There is an excellent article about this here http://www.brentozar.com/archive/2013/01/implementing-snapshot-or-read-committed-snapshot-isolation-in-sql-server-a-guide/#comment-2220427
I have an SSIS package which takes data that has changed in the last half hour and transfers it from a DB2 database into SQL Server. The data is loaded into an empty import table (import.tablename), then inserted into a staging table (newlive.tablename). The staging table is then schema-switched with the live (dbo) table within a transaction. FYI, the dbo tables are the backend of a visualization tool (Looker).
My problem is that the schema switching is now creating deadlocks. Every time I run the package, it affects different tables. I have used this process before with larger tables (also backing Looker) and never had this problem.
I read in another post that a user had a similar problem because of indexes, but in my case all the data has been written to the destination tables.
Any ideas or suggestions of where to look would be much appreciated.
The schema-switching code is in an Execute SQL Task in the SSIS package:
BEGIN TRAN
ALTER SCHEMA LAST_LIVE TRANSFER DBO.TABLENAME
ALTER SCHEMA DBO TRANSFER NEW_LIVE.TABLENAME
GRANT SELECT ON DBO.TABLENAME TO LOOKER_LOOKUP
COMMIT TRAN
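One thing worth trying (a sketch, not from the original package): make the switching session the preferred deadlock survivor and have it fail fast rather than wait, by setting DEADLOCK_PRIORITY and LOCK_TIMEOUT in the same Execute SQL Task before the transaction:

SET DEADLOCK_PRIORITY HIGH;  -- other sessions are chosen as deadlock victims instead
SET LOCK_TIMEOUT 5000;       -- give up after 5 seconds instead of blocking indefinitely
BEGIN TRAN
ALTER SCHEMA LAST_LIVE TRANSFER DBO.TABLENAME
ALTER SCHEMA DBO TRANSFER NEW_LIVE.TABLENAME
GRANT SELECT ON DBO.TABLENAME TO LOOKER_LOOKUP
COMMIT TRAN

If the timeout fires, the statement fails with error 1222 rather than deadlocking; note that the open transaction is not rolled back automatically, so the task should handle the failure and retry.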
I have two tables: table 1 is disk-based storage and table 2 is in-memory storage. I want to create a DML trigger on table 2 and, in that trigger, insert a record into table 1. Is that possible?
You can't. Triggers on memory-optimized tables must be natively compiled, and a natively compiled module running under MVCC snapshot isolation cannot access a disk-based table, so the trigger cannot write to table 1 directly.
As a workaround, you can have the trigger save the rows to a memory-optimized staging table, then insert into or update table 1 from that staging table using regular interpreted T-SQL, which is allowed to join memory-optimized and disk-based tables. It comes with some extra administration to manage and control, but it works; see the sketch below.
Note that DDL triggers still don't work with memory-optimized tables, even in SQL Server 2017.
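A minimal sketch of that workaround, with placeholder names (Table1, Table2, Table2_Stage) and assuming the database already has a MEMORY_OPTIMIZED_DATA filegroup:

-- Disk-based destination ("table 1").
CREATE TABLE dbo.Table1 (Id INT NOT NULL PRIMARY KEY, Payload NVARCHAR(100) NOT NULL);
GO
-- Memory-optimized source ("table 2") and a memory-optimized stage.
CREATE TABLE dbo.Table2 (
    Id INT NOT NULL PRIMARY KEY NONCLUSTERED,
    Payload NVARCHAR(100) NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO
CREATE TABLE dbo.Table2_Stage (
    Id INT NOT NULL PRIMARY KEY NONCLUSTERED,
    Payload NVARCHAR(100) NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO
-- Triggers on memory-optimized tables must be natively compiled, and a
-- natively compiled module cannot touch a disk-based table, so the
-- trigger writes to the memory-optimized stage instead of Table1.
CREATE TRIGGER dbo.trg_Table2_AfterInsert
ON dbo.Table2
WITH NATIVE_COMPILATION, SCHEMABINDING
AFTER INSERT
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    INSERT INTO dbo.Table2_Stage (Id, Payload)
    SELECT Id, Payload FROM inserted;
END;
GO
-- Interpreted T-SQL may mix both storage types, so a scheduled job can
-- drain the stage into the disk-based table.
CREATE PROCEDURE dbo.DrainTable2Stage
AS
BEGIN
    BEGIN TRAN;
    INSERT INTO dbo.Table1 (Id, Payload)
    SELECT s.Id, s.Payload
    FROM dbo.Table2_Stage AS s WITH (SNAPSHOT);
    DELETE FROM dbo.Table2_Stage WITH (SNAPSHOT);
    COMMIT;
END;

The drain procedure runs as a normal interpreted batch (for example from a SQL Agent job), which is what makes the cross-container join legal; the WITH (SNAPSHOT) hints are required when accessing memory-optimized tables inside an explicit transaction.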
In Sybase, I can specify the locking scheme for a table: datarows, datapages, or allpages locking.
Below is an example of how to create a table in Sybase and set its locking scheme.
create table dbo.EX_EMPLOYEE(
    TEXT varchar(1000) null
)
alter table EX_EMPLOYEE lock allpages
go
In SQL Server there are similar lock types (SO answer), but can I specify the lock type for a table?
My question: can I specify the type of locks for a table, or does SQL Server handle this differently? Does it depend on the query that I run?
In this link it says:
As Andreas pointed out, there is no default locking level; locks are taken according to the operation you are trying to perform in the database. Some examples: if you delete or update a particular row, an exclusive lock will be taken on that row; if it is a select operation, a shared lock will be taken; if you alter a table, a schema modification (Sch-M) lock will be taken; and so forth. As Jeremy pointed out, if you are looking for the default isolation level, it is read committed.
Are they right? Can I say that table locking in Sybase is different from locking in SQL Server?
The locking mechanisms are not the same, but you do have some control over locking in SQL Server: you can specify WITH (ROWLOCK), WITH (PAGLOCK), or WITH (TABLOCKX) on a query, for example, to take an exclusive table lock.
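For example (a sketch against the EX_EMPLOYEE table from the question):

BEGIN TRAN
-- Exclusive table lock, held to the end of the transaction; roughly the
-- closest SQL Server gets to Sybase-style whole-table locking per query.
SELECT COUNT(*) FROM dbo.EX_EMPLOYEE WITH (TABLOCKX, HOLDLOCK);
COMMIT
-- Or steer the engine toward row-level locks on a specific statement:
UPDATE dbo.EX_EMPLOYEE WITH (ROWLOCK)
SET [TEXT] = 'updated'
WHERE [TEXT] IS NULL;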
As with all such hints, when you take control you also take responsibility for the blocking you can cause, so use them carefully.
Docs with full descriptions: https://learn.microsoft.com/en-us/sql/t-sql/queries/hints-transact-sql-table
I am trying to create a stored procedure to recreate a table from scratch, with a possible change of schema (including possible additions/removals of columns), by using a DROP TABLE followed by a SELECT INTO, like this:
BEGIN TRAN
DROP TABLE [MyTable]
SELECT (...) INTO [MyTable] FROM (...)
COMMIT
My concern is that errors could be generated if someone tries to access the table after it has been dropped but before the SELECT INTO has completed. Is there a way to lock [MyTable] in a way that will persist through the DROP?
Instead of DROP/SELECT INTO, I could TRUNCATE/INSERT INTO, but this would not allow the schema to be changed. SELECT INTO is convenient in my situation because it allows the new schema to be automatically determined. Is there a way to make this work safely?
Also, I would like to be sure that the source tables in "FROM (...)" are not locked during this process.
If you try to make a significant change to the table (like adding a column in the middle of existing columns, not at the end) using SSMS and see what script it generates, you'll see that SSMS uses sp_rename.
The general structure of the script SSMS generates:
create a new table with temporary name
populate the new table with data
drop the old table
rename the new table to the correct name.
All this in a transaction.
This should keep the time when tables are locked to a minimum.
BEGIN TRANSACTION
SELECT (...) INTO dbo.Temp_MyTable FROM (...)
DROP TABLE dbo.MyTable
EXECUTE sp_rename N'dbo.Temp_MyTable', N'MyTable', 'OBJECT'
COMMIT
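(Note: the second argument to sp_rename must be the bare object name without the schema; passing N'dbo.MyTable' would produce a table literally named [dbo.MyTable]. sp_rename keeps the object in its current schema.)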
DROP TABLE MyTable acquires a schema modification (Sch-M) lock on the table and holds it until the end of the transaction, so all other queries using MyTable will wait. This is true even if those queries use the READ UNCOMMITTED isolation level (or the infamous WITH (NOLOCK) hint).
See also MSDN Lock Modes:
Schema Locks
The Database Engine uses schema modification (Sch-M) locks during a table data definition language (DDL) operation, such as adding a column or dropping a table. During the time that it is held, the Sch-M lock prevents concurrent access to the table. This means the Sch-M lock blocks all outside operations until the lock is released.
I am busy creating tables in SQL Server from a Sybase database. In the Sybase database the tables were created with the 'lock allpages' option. How can I replicate this when creating the tables in SQL Server 2005?
In SQL Server you cannot specify a lock option for the table in CREATE TABLE. You can at most disable row-level or page-level locking on its indexes with ALLOW_ROW_LOCKS = OFF or ALLOW_PAGE_LOCKS = OFF. The equivalent of locking the entire table in SQL Server is to use the lock hint WITH (TABLOCK) when running queries and updates on the table, but that is not recommended.
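A sketch of how those index options can be applied (the table and index names are placeholders); disallowing row locks forces the engine to page or table granularity, which loosely approximates Sybase's allpages scheme:

CREATE TABLE dbo.EX_EMPLOYEE
(
    ID INT NOT NULL,
    [TEXT] VARCHAR(1000) NULL,
    -- No row locks on this index: page or table locks only.
    CONSTRAINT PK_EX_EMPLOYEE PRIMARY KEY CLUSTERED (ID)
        WITH (ALLOW_ROW_LOCKS = OFF)
);

-- The option can also be changed on an existing index:
ALTER INDEX PK_EX_EMPLOYEE ON dbo.EX_EMPLOYEE
SET (ALLOW_ROW_LOCKS = OFF);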
My recommendation would be to just ignore this option when transferring the tables from Sybase to SQL Server.
What do you want to achieve with this "lock allpages" option? Is the database you're working on up and running in production? If not, in SQL Server you can restrict access to the entire database to a single user:
ALTER DATABASE YourDatabaseName SET SINGLE_USER
and that way you're sure no one else is going to get in your way and fiddle around until you're totally done :-)
Set it back to "normal" usage with:
ALTER DATABASE YourDatabaseName SET MULTI_USER
Marc