Microsoft SQL Server multiple locks - sql

I'm currently running Microsoft SQL Express Server.
When one user performs a query without committing it, it locks the entire table.
The problem is that malicious users might "ruin" the database by doing so on purpose.
How can I prevent this from happening?

You need to understand database isolation levels (http://en.wikipedia.org/wiki/Isolation_(database_systems)). Most likely you are running your queries as serializable, which will have that effect. Try posting some of your code.
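For what it's worth, on SQL Server 2005 and later you can check what isolation level the suspect sessions are actually running under, and explicitly request the default READ COMMITTED level for a session. A minimal sketch, assuming SQL Server 2005+ (the DMV query simply inspects current sessions):

-- Check the isolation level of current user sessions
-- (transaction_isolation_level: 2 = Read Committed, 3 = Repeatable Read, 4 = Serializable)
SELECT session_id, transaction_isolation_level
FROM sys.dm_exec_sessions
WHERE is_user_process = 1;

-- Explicitly request the default READ COMMITTED level for the current session
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;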

Related

Lock request time out period exceeded - Telerik OpenAccess ORM

I have a big SQL Server 2008 R2 database with many rows that are updated constantly. Updating is done by a back end service application that calls stored procedures. Within one of those stored procedures there is a SQL cursor that recalculates and updates data. This all runs fine.
But, our frontend web application needs to search through these rows and this search sometimes results in a
Lock request time out period exceeded. at
Telerik.OpenAccess.RT.Adonet2Generic.Impl.PreparedStatementImp.executeQuery()..
After doing some research I have found that the best way to make this query run without problems is to run it at the "read uncommitted" isolation level. I've found that this setting can be made in the Telerik OpenAccess settings, but that setting affects the complete database ORM project. That's not what I want! I want this level for this query only.
Is there a way to make this specific LINQ query run at the read uncommitted isolation level?
Or can we make this one query use a WITH (NOLOCK) hint?
Use
SET LOCK_TIMEOUT -1
in the beginning of your query.
See the reference manual
Running queries at the read uncommitted isolation level (or using the NOLOCK hint) can cause many strange problems; you have to clearly understand why you are doing this and how it can interfere with your data flow.
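That said, if you do go the per-query route, a minimal T-SQL sketch of the two options the question asks about looks roughly like this (the table and column names are made up):

-- Session-wide: affects every statement on this connection until changed back
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT OrderId, Status
FROM dbo.Orders
WHERE Status = 'Pending';

-- Or scope the dirty read to a single table reference with a hint
SELECT OrderId, Status
FROM dbo.Orders WITH (NOLOCK)
WHERE Status = 'Pending';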

Really slow schema information queries on SQL Server 2005

I have a database with a rather large number of tables, about 3500, and an application that needs to access a table list.
On a particular server this takes over 2.5 min to return.
EXEC sp_tables @table_type = "'TABLE'"
I know there are faster ways to do that but sadly I'm not in a position to modify the application and need to find a way to push it below 30 seconds so the application doesn't throw timeout errors.
So. What, if anything, can I do to improve the performance of this sp within sql server?
I have seen these stored procedures run slow if you do not have the GRANT VIEW DEFINITION permission set on your user account. From what I read, this will cause a security check to occur slowing down the query.
Maybe a SQL guru can comment on why, if this does help your problem.
Well, sp_tables is system code and can't be changed (you could work around that in SQL Server 2000, but not in SQL Server 2005+).
Your options are
Change the SQL
Change command timeout
Bigger server
You've already said "no" to the obvious solutions...
You need to approach this just like any other performance problem. Why is it slow? Namely, where does it block? Disk IO? CPU? Network? Lock contention? The scientific method is to use a methodology like Waits and Queues, or its newer SQL 2008 equivalent Troubleshooting Performance Problems in SQL Server 2008. The lazy way is to simply check the wait_type, wait_time and wait_resource columns in sys.dm_exec_requests for the session executing the sp_tables call. Once you find out what is blocking the execution, you can proceed accordingly.
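For example, the "lazy way" might look roughly like this (the session_id of 55 is just a placeholder for whichever SPID is executing sp_tables):

SELECT session_id, status, command,
       wait_type, wait_time, wait_resource,
       blocking_session_id
FROM sys.dm_exec_requests
WHERE session_id = 55;  -- replace with the session running sp_tables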
If I were to venture a guess, you'll discover contention as the cause: other sessions are locking table metadata exclusively and thus blocking the execution of sp_tables, which has to wait until all operations ahead of it finish.

Replication from MySQL to MS SQL

I'm facing a new challenge here.
I can't seem to find a precedent for replication from MySQL, running on a Linux box, to MS SQL Server.
Has anybody done this before?
Most importantly, all changes made to the MySQL database should be replicated to the MS SQL database in real time, or close to it. The MS SQL database is not likely to be updated in any other way, so a bidirectional facility is not required.
I thought one way is to read the changes out of the binary log.
Has anyone parsed one before?
Thanks for your help guys.
Triggers in MySQL could be used to catch changes and call a UDF, which could then execute ODBC queries to MSSQL. Likely terrible for performance, though.
If immediate replication isn't required:
Write triggers in MySQL that capture insert, update, and delete statements in a log table.
Poll the log table from MSSQL using ODBC, replay the captured statements, then delete those log entries.
Of course, T-SQL and MySQL's variant of SQL aren't exactly the same, but they should be close enough for trivial CUD operations.
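A very rough MySQL-side sketch of that log-table idea, assuming a hypothetical customers table with an integer id column (all names here are made up):

CREATE TABLE change_log (
    id         INT AUTO_INCREMENT PRIMARY KEY,
    table_name VARCHAR(64) NOT NULL,
    action     VARCHAR(10) NOT NULL,   -- 'INSERT', 'UPDATE' or 'DELETE'
    row_id     INT         NOT NULL,
    changed_at TIMESTAMP   DEFAULT CURRENT_TIMESTAMP
);

DELIMITER //
CREATE TRIGGER customers_after_insert
AFTER INSERT ON customers
FOR EACH ROW
BEGIN
    INSERT INTO change_log (table_name, action, row_id)
    VALUES ('customers', 'INSERT', NEW.id);
END//
DELIMITER ;

The MSSQL side would then poll change_log (e.g. over ODBC or a linked server), apply the changes, and delete the rows it has replayed.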
Check to see if DBSync will help you do what you want
I had a similar task, but I had to replicate from MSSQL 2008 to MySQL in real time.
I tried this application (http://enterprise.replicator.daffodilsw.com/) and it worked, but it didn't look reliable. You can check it out, though; I may be wrong.
In the end I decided to use an OLE DB interface and PostgreSQL instead of MySQL. It works properly.

sql server 2008 reads blocking writes

I have upgraded a set of databases from SQL Server 2000 to SQL Server 2008, and now large reads are blocking writes, whereas this wasn't a problem in SQL Server 2000 (same databases and same applications & reports). Why? What setting in 2008 is different? Did 2000 default to read uncommitted transactions?
(update)
Adding WITH (NOLOCK) to the report views in question fixes the problem in the short term - in the long run we'll have to make copies of the data for reporting, either with snapshots or by hand. [sigh] I'd still like to know what it is about SQL Server 2008 that makes this necessary.
(update 2) Since the views in question are only used for reports, 'read uncommitted' should be OK for now.
SQL Server 2000 did not use READ UNCOMMITTED by default, no.
It might have something to do with changes in optimizations in the execution plan. Some indexes are likely locked in a different order from what they were in SQL Server 2000. Or SQL Server 2008 is using an index that SQL Server 2000 was ignoring altogether for that particular query.
It's hard to say exactly what's going on without more information, but read up on lock types and check the execution plans for your two conflicting queries. Here's a nice short article that explains another example on why things can deadlock.
Read Committed is the default isolation level in SQL Server 2000, not Read Uncommitted.
http://msdn.microsoft.com/en-us/library/aa259216(SQL.80).aspx
I imagine that something in your app was setting the isolation level - perhaps via one of the connection object properties. Have a look here for the methods used to set Transaction Isolation levels via ADO, ODBC, and OLE DB.
You can do the same in SQL Server 2008, but...are you sure that your app should be running under read uncommitted? Is your app specifically designed to handle data movement and phantom reads?
I'm actually surprised that you didn't run into problems in SQL Server 2000. It seems like every week we were fixing stored procedures that were locking tables because someone forgot nolocks.
You could look into snapshot isolation; this will allow the app to read the older version of the rows while the writing threads are still busy updating them.
http://msdn.microsoft.com/en-us/library/ms189050.aspx
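A minimal sketch of turning it on (YourDb is a placeholder; note that switching READ_COMMITTED_SNAPSHOT requires no other active connections in the database):

-- Allow explicit SNAPSHOT transactions
ALTER DATABASE YourDb SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Optionally make the default READ COMMITTED level use row versioning,
-- so readers no longer block writers (and vice versa)
ALTER DATABASE YourDb SET READ_COMMITTED_SNAPSHOT ON;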

MS SQL Concurrency, excess Locks

I have a database on ms sql 2000 that is being hit by hundreds of users at a time. There are intense reports using reporting services 2005 hitting the same database.
When there are lots of reports running and people are using the database concurrently, we see blocking processes to the point that the system starts timing out any transaction made after some time in that situation.
Is there a global way to minimize blocking so that transactions can continue to flow?
Use optimistic locking, if updates are not happening often and the database is mainly used for reporting.
SQL Server has quite a pessimistic locking default.
A look into SQL Server Table Hints might get you started.
The reports can use WITH(NOLOCK).
Other possibilities are having the reports run off a read-only replica of the database or running off a datawarehouse version of the database which is optimized for the reporting needs.
Since you are already using NOLOCK hints and READ UNCOMMITTED isolation level for your reports, the investigation needs to turn to the transactional queries coming in. This may get deep. Perhaps applications are keeping transactions open too long. It may also be the case that you have a lot of table scans or range scans in some of the other query volume, and those may be holding shared locks for long-running transactions. Those shared locks will block your writers.
You need to start looking at sp_lock, and seeing what kinds of locks are outstanding, see what locks the blocked queries are trying to obtain, and then examine the queries that are blocking the requestors.
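On SQL Server 2000 that investigation might look something like this (the SPID of 55 is just a placeholder taken from the blocking output):

EXEC sp_who2;             -- the BlkBy column shows which SPID is blocking which
EXEC sp_lock;             -- outstanding locks, per SPID and resource
DBCC INPUTBUFFER (55);    -- last statement submitted by a suspect SPID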
This will help you if you are unfamiliar with SQL Server locking:
Understanding SQL Server 2000 Locking
Also, perhaps you could describe your disk subsystem. It may be undersized.
Thanks everyone for your support. What we did to mitigate the problem was to create a new database, with a log shipping procedure every hour to keep it in sync with the real one. The reports that do not need real-time data were pointed to that database, and the ones that need real-time data were restricted so that only a few people can access them. The drawbacks with this method are that the data will be up to one hour out of sync, and we needed to create a new server for that purpose only. Also, when the log shipping procedure runs, every connection is dropped for a very short period of time, which can be a problem for really long procedures or reports. After this I will review the queries from the reports so I can understand what can be optimized. Thanks, and I will recommend the site to the whole IT department.