Can I specify the locking scheme for a table, or does it depend on the transaction? - sql

In Sybase, I can specify the locking scheme for a table: datarows, datapages, or allpages locking.
Below is an example of how to create a table in Sybase and then set its locking scheme.
create table dbo.EX_EMPLOYEE(
TEXT varchar(1000) null
)
alter table EX_EMPLOYEE lock allpages
go
In SQL Server there are similar lock types (SO answer), but can I specify the lock for a table?
My question: can I specify the type of locks for a table, or does it work differently in SQL Server? Does it depend on the query that I run?
in this link it says:
As Andreas pointed out, there is no default locking level; locks are taken according to the operation you are trying to perform in the database. Just some examples: if it is a delete/update of a particular row, an exclusive lock will be taken on that row; if it is a select operation, a shared lock will be taken; if it is an alter table, a Schema Modification (Sch-M) lock will be taken; and so on and so forth. As Jeremy pointed out, if you are looking for the isolation level, it is read committed.
Are they right? Can I say that table locking in Sybase is different from SQL Server?

The locking mechanisms are not the same, but you do have some control over locking in SQL Server: you can specify WITH (ROWLOCK) or WITH (PAGLOCK) on a query to request a particular granularity, or WITH (TABLOCKX) to take an exclusive table lock.
As with all such locks, when you take control you also take responsibility for the blocking you can cause, so use them carefully.
Docs with full descriptions : https://learn.microsoft.com/en-us/sql/t-sql/queries/hints-transact-sql-table
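For example, a minimal sketch using the EX_EMPLOYEE table from the question (the hints are real T-SQL table hints; the statements themselves are just illustrative):
-- exclusive lock on the whole table until the transaction ends
begin transaction
select * from dbo.EX_EMPLOYEE with (TABLOCKX)
-- ... other sessions are blocked here ...
commit transaction

-- request row-level or page-level locks instead
update dbo.EX_EMPLOYEE with (ROWLOCK) set [TEXT] = 'x' where [TEXT] is null
select * from dbo.EX_EMPLOYEE with (PAGLOCK)
Note that these hints apply per query rather than being a property of the table itself, which is the main difference from Sybase's per-table locking scheme.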

Related

How to lock a table in SQL Server

How do I lock a table in SQL Server? I found queries that run with locks and also read about transactions, but I am confused about how to use them.
I have two processes which first read a table and then update data in it. I want only one of them to update, and the other to see that update in its read. My processes work as follows:
Lock table
read data
update data if it is not updated by the other process.
release Lock.
thanks
You can use the TABLOCKX hint to lock the entire table, but locking the entire table is usually a bad idea; you might want to reconsider whether you really need it.
If you want to ensure you're updating the latest data, you can use a rowversion column and double-check it before the update, instead of locking the entire table for reading.
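A rough sketch of that rowversion idea, with purely hypothetical table and column names:
-- hypothetical table with a rowversion column for optimistic checks
create table dbo.Job (
    Id int identity primary key,
    Payload varchar(1000) null,
    RV rowversion
)

declare @Id int = 1, @SeenRV binary(8)

-- remember the version you read
select @SeenRV = RV from dbo.Job where Id = @Id

-- the update only succeeds if no other process changed the row in between
update dbo.Job
set Payload = 'new value'
where Id = @Id and RV = @SeenRV

if @@ROWCOUNT = 0
    print 'Row was changed by another process; re-read and retry.'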
In your SELECT statement you can provide a "select for update" table hint: WITH (UPDLOCK). Depending on what percentage of records you are updating and their physical distribution, this might perform better than a table lock.
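A hedged sketch of that pattern, reusing EX_EMPLOYEE from the original question as a stand-in for your table:
begin transaction

-- UPDLOCK takes update locks on the rows read, so a second process running the
-- same code waits here instead of reading data it is about to overwrite;
-- HOLDLOCK keeps those locks until the transaction ends
select [TEXT]
from dbo.EX_EMPLOYEE with (UPDLOCK, HOLDLOCK)
where [TEXT] is null

update dbo.EX_EMPLOYEE
set [TEXT] = 'processed'
where [TEXT] is null

commit transaction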
But as Fedor Hajdu pointed out, what you probably want is an optimistic locking scheme. Check out the documentation for the READ COMMITTED SNAPSHOT isolation level. You might also find this article useful as an introduction.
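READ COMMITTED SNAPSHOT is switched on per database rather than per query; something along these lines, where YourDb is just a placeholder (WITH ROLLBACK IMMEDIATE bumps other sessions so the setting can be applied):
alter database YourDb set read_committed_snapshot on with rollback immediate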

Trace Flag 1211 Not Working - SQL Server 2008 R2

During an SSIS load, when an employee table is being updated, locking comes into effect.
However, I have disabled lock escalation on the table using the following statements:
ALTER TABLE dbo.Employee SET (LOCK_ESCALATION = DISABLE)
DBCC TRACEON (1211,-1)
However, the table (object) does get locked and the lock is held for almost an hour. The total no. of updates (insert, update, delete statements) is approximately 200,000.
The ultimate objective here is not really to avoid locking but to successfully allow reads on the table.
The no. of updates (inserts/updates/deletes) is significantly high, in the range of 50,000 every day, compared to only about 50-100 search/select queries on the table, which are what actually gets affected due to the locks.
from BOL:
SET LOCK_ESCALATION = DISABLE
Prevents lock escalation in most cases. Table-level locks are not
completely disallowed. For example, when you are scanning a table that
has no clustered index under the serializable isolation level,
the Database Engine must take a table lock to protect data integrity.
Serializable is the default IsolationLevel on SSIS packages (click any blank area on your control flow and check the package's properties).
Any chance your table doesn't have a clustered index?
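Two quick checks you could run, assuming the table really is dbo.Employee (sys.indexes and sys.tables are standard catalog views on SQL Server 2008 R2):
-- does the table have a clustered index?
select i.name, i.type_desc
from sys.indexes as i
where i.object_id = object_id('dbo.Employee')
  and i.type_desc = 'CLUSTERED'

-- confirm the lock escalation setting actually took
select name, lock_escalation_desc
from sys.tables
where object_id = object_id('dbo.Employee')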

Is it possible to roll back CREATE TABLE and ALTER TABLE statements in major SQL databases?

I am working on a program that issues DDL. I would like to know whether CREATE TABLE and similar DDL can be rolled back in
Postgres
MySQL
SQLite
et al
Describe how each database handles transactions with DDL.
http://wiki.postgresql.org/wiki/Transactional_DDL_in_PostgreSQL:_A_Competitive_Analysis provides an overview of this issue from PostgreSQL's perspective.
Is DDL transactional according to this document?
PostgreSQL - yes
MySQL - no; DDL causes an implicit commit
Oracle Database 11g Release 2 and above - by default, no, but an alternative called edition-based redefinition exists
Older versions of Oracle - no; DDL causes an implicit commit
SQL Server - yes
Sybase Adaptive Server - yes
DB2 - yes
Informix - yes
Firebird (Interbase) - yes
SQLite appears to have transactional DDL as well. I was able to ROLLBACK a CREATE TABLE statement in SQLite. Its CREATE TABLE documentation does not mention any special transactional 'gotchas'.
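For example, a quick check along the lines of what I tried (SQLite syntax; the table name is arbitrary):
begin;
create table t_demo (id integer primary key, name text);
rollback;
-- t_demo is gone again after the rollback:
select name from sqlite_master where name = 't_demo';   -- returns no rows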
PostgreSQL has transactional DDL for most database objects (certainly tables, indexes, etc., but not databases or users). However, practically any DDL will take an ACCESS EXCLUSIVE lock on the target object, making it completely inaccessible until the DDL transaction finishes. Also, not every situation is handled cleanly: for example, if you try to select from table foo while another transaction is dropping it and creating a replacement table foo, the blocked transaction will eventually receive an error rather than finding the new foo table. (Edit: this was fixed in or before PostgreSQL 9.3.)
CREATE INDEX ... CONCURRENTLY is the exception: it uses three transactions to add an index to a table while allowing concurrent updates, so it cannot itself be performed inside a transaction.
Also the database maintenance command VACUUM cannot be used in a transaction.
It seems this can't be done with MySQL; very dumb, but true... (as per the accepted answer)
"The CREATE TABLE statement in InnoDB is processed as a single
transaction. This means that a ROLLBACK from the user does not undo
CREATE TABLE statements the user made during that transaction."
https://dev.mysql.com/doc/refman/5.7/en/implicit-commit.html
I tried a few different ways and it simply won't roll back.
A workaround is to simply set a failure flag and run "drop table tblname" if one of the queries fails.
Looks like the other answers are pretty outdated.
As of 2019:
Postgres has supported transactional DDL for many releases.
SQLite has supported transactional DDL for many releases.
MySQL has supported Atomic DDL since 8.0 (which was released in 2018).
While it is not strictly speaking a "rollback", in Oracle the FLASHBACK command can be used to undo these types of changes, if the database has been configured to support it.

Default Locking Granularity for Local Temp Tables - Microsoft SQL Server 2000

What is the locking granularity used for local temp tables in MSSQL? Given that local temp tables are local to sessions, that sessions are the same as connections in MSSQL 2000, and that there is (I believe) no way to execute statements or other code in parallel on the same connection through T-SQL or other means, intuitively the DB should just hold an exclusive table lock for the lifetime of the table. This would avoid lock memory usage and lock escalation. I can't get a clear answer on this anywhere. Is this the case?
A #temp table is just a table that happens to sit in tempdb. The same locking granularity applies because it is a table.
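If you want to verify it yourself, a small sketch (sp_lock is the SQL Server 2000-era way to inspect the locks a session holds):
create table #scratch (id int)
insert into #scratch values (1)

-- show the locks held by the current session; the #temp table appears as an
-- ordinary object in tempdb
declare @spid int
set @spid = @@SPID
exec sp_lock @spid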

read-access to a MyISAM table during a long INSERT?

On MySQL, and using only MyISAM tables, I need to access the contents of a table during the course of a long-running INSERT.
Is there a way to prevent the INSERT from locking the table in a way that keeps a concurrent SELECT from running?
This is what I am driving at: I want to inspect how many records have been inserted so far. Unfortunately WITH (NOLOCK) does not work on MySQL, and I could only find commands that control transaction locks (e.g., setting the transaction isolation level to READ UNCOMMITTED), which, from my understanding, should not apply to MyISAM tables at all since they don't support transactions in the first place.
MyISAM locking will block selects. Is there a reason for using MyISAM over InnoDB? If you don't want to change your engine, I suspect one of these might be a solution for you:
1: Create a materialized view of the table using a cron job (or other scheduled task) that your application can query without blocking.
2: Use a trigger to count up the number of inserts that have occurred, and look up the number of inserts using this meta-data table.