Read access to a MyISAM table during a long INSERT?

On MySQL, using only MyISAM tables, I need to access the contents of a table during the course of a long-running INSERT.
Is there a way to prevent the INSERT from locking the table in a way that keeps a concurrent SELECT from running?
This is what I am driving at: I want to inspect how many records have been inserted so far. Unfortunately WITH (NOLOCK) does not work on MySQL, and I could only find commands that control transaction locks (e.g., setting the transaction isolation level to READ UNCOMMITTED) -- which, from my understanding, should not apply to MyISAM tables at all, since they don't support transactions in the first place.

MyISAM locking will block selects. Is there a reason for using MyISAM over InnoDB? If you don't want to change your engine, I suspect one of these might be a solution for you:
1: Create a materialized view of the table using a cron job (or other scheduled task) that your application can query without blocking.
2: Use a trigger to count up the number of inserts that have occurred, and look up the number of inserts using this meta-data table (see the sketch after this list).
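A minimal sketch of option 2, assuming a hypothetical meta-data table named insert_progress and a target table named big_table (both names are illustrative, not from the question):

-- Meta-data table holding a running count; table and column names are hypothetical
CREATE TABLE insert_progress (
    table_name VARCHAR(64) NOT NULL PRIMARY KEY,
    row_count  INT NOT NULL DEFAULT 0
) ENGINE = MyISAM;

INSERT INTO insert_progress (table_name, row_count) VALUES ('big_table', 0);

-- Bump the counter on every inserted row
DELIMITER $$
CREATE TRIGGER trg_big_table_count
AFTER INSERT ON big_table
FOR EACH ROW
BEGIN
    UPDATE insert_progress
    SET row_count = row_count + 1
    WHERE table_name = 'big_table';
END$$
DELIMITER ;

One caveat worth testing: MySQL locks all tables a statement touches, including tables referenced from triggers, for the statement's duration, so check whether insert_progress actually stays readable during the long INSERT.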

Related

Can I specify the locking scheme for a table, or does it depend on the transaction?

In Sybase, I can specify the locking scheme for a table: datarows, datapages, or allpages locking.
Below is a Sybase example of creating a table and specifying its locking scheme:
create table dbo.EX_EMPLOYEE(
TEXT varchar(1000) null
)
alter table EX_EMPLOYEE lock allpages
go
In SQL Server there are similar table locks (SO answer), but can I specify the locking scheme for a table?
My question: can I specify the type of locks for a table, or is it different in SQL Server? Does it depend on the query that I run?
In this link it says:
As Andreas pointed out, there is no default locking level; locks are taken per the operation you are trying to perform in the database. Just some examples: if it is a delete/update of a particular row, an exclusive lock will be taken on that row; if it is a select operation, a shared lock will be taken; if a table is altered, a schema modification lock will be taken; and so on and so forth. As Jeremy pointed out, if you are looking for the isolation level, it is read committed.
Are they right? Can I say that table locking in Sybase is different from SQL Server?
The locking mechanisms are not the same, but you do have some control over locking in SQL Server: on a query you can specify WITH (ROWLOCK), WITH (PAGLOCK), or, to take an exclusive table lock, WITH (TABLOCKX), for example.
As with all such locks, when you take control you also take responsibility for the blocking you can cause -- so use them carefully.
Docs with full descriptions: https://learn.microsoft.com/en-us/sql/t-sql/queries/hints-transact-sql-table
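A brief sketch of those hints in use, borrowing the EX_EMPLOYEE table from the question (the literal values are made up for illustration):

-- Take an exclusive table lock for the duration of the transaction
BEGIN TRANSACTION;
SELECT COUNT(*) FROM dbo.EX_EMPLOYEE WITH (TABLOCKX);
COMMIT;

-- Ask for row-granularity locks on an update
UPDATE dbo.EX_EMPLOYEE WITH (ROWLOCK)
SET [TEXT] = 'updated'
WHERE [TEXT] = 'old';

-- Ask for page-granularity locks on a read
SELECT [TEXT] FROM dbo.EX_EMPLOYEE WITH (PAGLOCK);

These are hints, not guarantees; the engine can still escalate locks, so measure the blocking they introduce.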

How to lock a table in SQL Server

How do I lock a table in SQL Server? I found queries that run with locks and also read about transactions, but
I am confused about how to use them.
I have two processes which first read a table and then update data in it. I want only one to update, and the other to see that update in its read. My processes work as follows:
Lock table
Read data
Update data if it is not updated by the other process
Release lock
Thanks
You can use the TABLOCKX hint to lock the entire table, but locking the entire table is usually a bad idea; you might want to reconsider whether you really need it.
If you want to ensure you're updating the latest data, you can use a rowversion column and double-check it before the update, instead of locking the entire table for reading.
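A minimal sketch of the rowversion approach, with hypothetical table and column names:

-- Hypothetical table with a rowversion column for optimistic checks
CREATE TABLE dbo.Work (
    Id   INT PRIMARY KEY,
    Data VARCHAR(100) NOT NULL,
    RV   ROWVERSION
);

-- Reader: remember the row version alongside the data
DECLARE @rv BINARY(8);
SELECT @rv = RV FROM dbo.Work WHERE Id = 1;

-- Writer: only succeed if nobody changed the row since the read
UPDATE dbo.Work
SET Data = 'new value'
WHERE Id = 1 AND RV = @rv;

IF @@ROWCOUNT = 0
    PRINT 'Row was changed by the other process; re-read and retry.';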
In your SELECT statement you can provide a "select for update" table hint: WITH (UPDLOCK). Depending on what percentage of records you are updating, and their physical distribution, this might perform better than a table lock.
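For example, reusing the hypothetical dbo.Work table from the sketch above:

BEGIN TRANSACTION;

-- The update lock stops a second process from also reading-for-update,
-- so the two processes cannot race on the same row
SELECT Data FROM dbo.Work WITH (UPDLOCK) WHERE Id = 1;

-- ... decide whether the data still needs changing ...
UPDATE dbo.Work SET Data = 'new value' WHERE Id = 1;

COMMIT;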
But as Fedor Hajdu pointed out, what you probably want is an optimistic locking scheme. Check out the documentation for the READ COMMITTED SNAPSHOT isolation level. You might also find this article useful as an introduction.

Trigger on Audit Table failing due to update conflict

I have a number of tables that get updated through my app and that return a lot of data or are difficult to query for changes. To get around this problem, I have created a "LastUpdated" table with a single row, and a trigger on these complex tables which just sets GetDate() against the appropriate column in the LastUpdated table:
CREATE TRIGGER [dbo].[trg_ListItem_LastUpdated] ON [dbo].[tblListItem]
FOR INSERT, UPDATE, DELETE
AS
UPDATE LastUpdated SET ListItems = GetDate()
GO
This way, the clients only have to query this table for the last-updated value and can then decide whether or not they need to refresh their data from the complex tables. The complex tables are using snapshot isolation to prevent dirty reads.
In busy systems, around once a day we are getting errors writing or updating data in the complex tables due to update conflicts in "LastUpdated". Because this occurs in the statement executed by the trigger, the affected complex table fails to save data. The following error is logged:
Snapshot isolation transaction aborted due to update conflict. You cannot use snapshot isolation to access table 'dbo.tblLastUpdated' directly or indirectly in database 'devDB' to update, delete, or insert the row that has been modified or deleted by another transaction. Retry the transaction or change the isolation level for the update/delete statement.
What should I be doing here in the trigger to prevent this failure? Can I use some kind of query hints on the trigger to avoid this - or can I just ignore errors in the trigger? Updating the data in LastUpdated is not critical, but saving the data correctly into the complex tables is.
This is probably something very simple that I have overlooked or am not aware of. As always, thanks for any info.
I would say that you should look into using Change Tracking (http://msdn.microsoft.com/en-gb/library/cc280462%28v=sql.100%29.aspx), which is lightweight, built-in SQL Server functionality that you can use to monitor the fact that a table has changed, as opposed to logging each individual change (which you can also do, with Change Data Capture). It needs snapshot isolation, which you are already using.
Because your trigger runs inside the parent transaction, when the snapshot has become out of date the whole transaction has to start again. If this is a complex workload, maintaining the last-updated data this way will be costly.
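A minimal sketch of enabling Change Tracking for the scenario in the question (the retention values here are arbitrary examples):

-- Enable Change Tracking at the database level
ALTER DATABASE devDB SET CHANGE_TRACKING = ON
    (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);

-- Track changes on the complex table (the table needs a primary key)
ALTER TABLE dbo.tblListItem ENABLE CHANGE_TRACKING;

-- Clients compare this version number to the one they last saw
SELECT CHANGE_TRACKING_CURRENT_VERSION();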
Short answer - don't do that! Making the updating transactions dependent on one single shared row makes them prone to deadlocks, update conflicts, and a whole gamut of nasty things.
You can either use the system views to determine the last update, e.g.:
SELECT
t.name
,user_seeks
,user_scans
,user_lookups
,user_updates
,last_user_seek
,last_user_scan
,last_user_lookup
,last_user_update
FROM sys.dm_db_index_usage_stats i JOIN sys.tables t
ON (t.object_id = i.object_id)
WHERE database_id = db_id()
Or, if you really insist on the solution with LastUpdated, you can implement its update from the trigger in an autonomous transaction. Even though SQL Server doesn't support autonomous transactions directly, it can be done using linked servers: How to create an autonomous transaction in SQL Server 2008
The schema needs to change. If you have to keep your update table, make a row for every table. That would greatly reduce your locking, because each table would update its very own row instead of competing for the sole row in the table.
LastUpdated
table_name (varchar(whatever)) pk
modified_date (datetime)
New Trigger for tblListItem
CREATE TRIGGER [dbo].[trg_ListItem_LastUpdated] ON [dbo].[tblListItem]
FOR INSERT, UPDATE, DELETE
AS
UPDATE LastUpdated SET modified_date = GetDate() WHERE table_name = 'tblListItem'
GO
Another option that I use a lot is having a modified_date column in every table. Then consumers know exactly which records to update/insert to sync with your data, rather than dropping and reloading everything in the table each time one record changes or is inserted.
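A sketch of that per-row approach (the constraint name and the @last_sync value are illustrative; updates would also need to set the column, e.g. from the app or a trigger):

-- Add a per-row timestamp; the default covers inserts only
ALTER TABLE dbo.tblListItem ADD modified_date DATETIME NOT NULL
    CONSTRAINT DF_tblListItem_modified_date DEFAULT GETDATE();

-- Clients pull only the rows changed since their last sync
DECLARE @last_sync DATETIME = '2015-01-01';
SELECT * FROM dbo.tblListItem WHERE modified_date > @last_sync;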
Alternatively, you can update the log table inside the same transaction that you use to update your complex tables in your application, and avoid the trigger altogether.
Update
You can also opt for inserting a new row instead of updating the same row in the LastUpdated table. You can then query the max timestamp for the latest update. However, with this approach your LastUpdated table will grow every day, which you will need to take care of if the volume of transactions is high.
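A sketch of that variant; note the LastUpdated schema above would need its primary key on table_name relaxed (e.g. an identity key instead) to allow multiple rows per table:

-- Trigger body: append a row instead of updating one
INSERT INTO LastUpdated (table_name, modified_date)
VALUES ('tblListItem', GETDATE());

-- Clients read the newest timestamp per table
SELECT MAX(modified_date) AS last_updated
FROM LastUpdated
WHERE table_name = 'tblListItem';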

Insert/Update Table Locks on SQL Server

I have a big table with around 70 columns in SQL Server 2008. A multithreaded .NET application calls a stored procedure on the database to insert into / update the table, around 3 times a second.
I have created weekly partitions on the table, since almost every query has a datetime constraint on it.
Sometimes it takes a long time to insert into / update the table. I suspect that sometimes an INSERT makes an UPDATE wait, and sometimes an UPDATE makes an INSERT wait. Is that possible?
How can I design the table to avoid such locking? Performance is the main issue here.
You're right that you're probably hitting lock contention that causes things to wait. A couple of things to check first:
Are your indexes correct?
If your DB is in 'Full' recovery mode, do you need it? Simple recovery really speeds up inserts/updates, but you lose point-in-time restores for backups (see the sketch below for checking and changing it).
Are you likely to have multiple threads writing the same record? If not, NOLOCK might be your friend here, but it would mean your data might be inconsistent for a second or two on occasion.
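A quick sketch for the recovery-model point (the database name is a placeholder):

-- Check the current recovery model
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'MyAppDb';

-- Switch to simple recovery (understand the backup implications first)
ALTER DATABASE MyAppDb SET RECOVERY SIMPLE;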

Determine which tables are locked

I have the following problem. I have an application where users can log in and do things like adding new items. I also have statistics in Reporting Services. The problem is that the statistics query is time-consuming, and while it is executing, users cannot add new items. In my SQL query for the statistics, all SELECT statements are decorated with a WITH (NOLOCK) hint. However, I can see in Activity Monitor that some tables are locked. Is it correct that I see them in the "locked by objects" tab? How can I figure out which tables are locked?
When I use the following statement:
SELECT * FROM MyTable WITH (NOLOCK)
I can also see that this query locks the MyTable table. Please help me.
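To see which tables are actually locked at a given moment, one option is to query the sys.dm_tran_locks DMV; a minimal sketch:

-- Object-level locks held or requested in the current database
SELECT
    OBJECT_NAME(resource_associated_entity_id) AS table_name,
    request_mode,
    request_status,
    request_session_id
FROM sys.dm_tran_locks
WHERE resource_type = 'OBJECT'
  AND resource_database_id = DB_ID();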
Don't use NOLOCK. Dirty reads are inconsistent reads.
Use SNAPSHOT ISOLATION instead. Then you get the best of both worlds: consistent reads and no locks. Remove all lock hints from your queries, then enable read committed snapshot:
ALTER DATABASE [<dbname>] SET READ_COMMITTED_SNAPSHOT ON
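To verify the change took effect, you can check sys.databases (keeping <dbname> as a placeholder):

SELECT name, is_read_committed_snapshot_on, snapshot_isolation_state_desc
FROM sys.databases
WHERE name = '<dbname>';

Note that the ALTER statement needs exclusive access to the database, so it may block until other connections close (or use WITH ROLLBACK IMMEDIATE).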