We have a monitoring query for our admins that they run directly in Management Studio. It is a long-running query that analyses all records in a table. We have noticed that while this query is running, incoming updates are blocked until it finishes. After a little investigation we have seen that on SQL Server a SELECT takes a shared lock, which blocks exclusive locks, and that an UPDATE takes an update lock and then an exclusive lock.
https://www.sqlpassion.at/archive/2014/07/28/why-do-we-need-update-locks-in-sql-server/#comment-104367
https://learn.microsoft.com/en-us/previous-versions/sql/sql-server-2008-r2/ms186396(v=sql.105)
Is this a behavior that we can change? What is the best practice to avoid this situation when we have a public portal as the frontend, with many SELECTs (some of them a bit longer running) and various UPDATEs on the same table?
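One option we are wondering about is row versioning; a minimal sketch, assuming SQL Server 2005 or later and placeholder names (Portal, dbo.BigTable):

-- With READ_COMMITTED_SNAPSHOT on, readers under the default READ COMMITTED level
-- read the last committed row version instead of taking shared locks, so the
-- long-running monitoring SELECT no longer blocks incoming UPDATEs.
ALTER DATABASE Portal SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;

-- Alternatively, only the monitoring session opts in to versioned reads:
ALTER DATABASE Portal SET ALLOW_SNAPSHOT_ISOLATION ON;
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
SELECT COUNT(*) FROM dbo.BigTable;  -- stand-in for the long-running analysis query

Would that be the right direction, or is there a better practice?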
Related
I am trying to see what queries are currently running in an Oracle DB. However, when I try using the view v$session it gives me an error:
What's the cause of this and what would be the correct way to get the active processes that are running?
I'm looking to get the necessary information to be able to cancel a query for a given user. Let me give an example:
1) User executes query in the application. We add in a comment so we can 'track' that query:
/* Query-ID-1283849 */ select * from mytable
2) Now, if the user clicks the "Cancel" button while the query is running (let's say the query is taking a very long time to respond), we allow the user to cancel that query, given that the user will probably NOT be a sys user but a 'normal' user with read-only privileges.
How could this be done?
At the fundamental database level, you can't kill individual queries; you kill individual sessions. I'm inferring from your question that your specific use case is inside an application, not a tool like SQL Developer or SQL*Plus.
Session killing can be done by users that have special database privileges to kill sessions. If you are inside an application running multiple queries in one session, killing the session will effectively kill your application and require either a) an application restart or b) gracefully handling the dropped session programmatically.
If you are using an n-tier ORM framework that handles database interactions for you, you may be in a position where killing the session won't have any effect on your application other than the currently running statement.
Another way in your app to handle isolating sessions and queries is to run a multi-threaded application. Each query spawns a new thread, and the thread can be killed without necessarily killing the session.
Basically, the short answer is that you can kill a query only by killing its session. Your approach of looking at v$session is correct and necessary to find the session id for any given SQL statement; you just need to have your DBA grant you privileges on the v$session and v$sql synonyms.
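A rough sketch of that flow, assuming the comment tag from the question and hypothetical sid/serial# values (in practice the DBA would usually wrap the kill in a procedure so a read-only user never gets ALTER SYSTEM directly):

-- Find the session running the tagged statement (needs SELECT on the v$session / v$sql synonyms):
SELECT s.sid, s.serial#, s.username, s.sql_id
  FROM v$session s
  JOIN v$sql q ON q.sql_id = s.sql_id
 WHERE q.sql_text LIKE '%Query-ID-1283849%'
   AND q.sql_text NOT LIKE '%v$sql%';  -- exclude this lookup query itself

-- Then kill that session (privileged operation; 123 and 45678 are placeholders):
ALTER SYSTEM KILL SESSION '123,45678' IMMEDIATE;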
Update specific to SQL Developer, based on OP's comment asking for clarification:
SQL Developer has an option to allow running parallel queries, taking advantage of threads and multiple connections. That setting is found at Tools > Preferences > Database > Worksheet. Regardless of the setting, when you click the query cancel button, the app is still sending a session kill request. The GUI will usually gracefully start a new session and the end user is none the wiser. But sometimes things don't work out and the GUI freezes, or you end up with no connection and have to manually reconnect.
To add to the complexity, the behavior depends on the driver/client used by your application. OCI, thick clients, and thin clients have shown different behaviors in the past when it comes to kill requests. In fact, SQL Developer has an option to force it to use OCI or a thick driver so that you can avoid certain behaviors.
I'd highly recommend getting rights to view v$session and playing around with this. It's fun to learn more about how Oracle manages sessions.
I just discovered that the latest version, Oracle 18c, allows killing an individual query within a session. I'm on 12c so I have not tried this. https://docs.oracle.com/en/database/oracle/oracle-database/18/admin/managing-processes.html#GUID-7D8E5E00-515D-4338-8B86-C2044F6D2957
Relevant parts from the documentation.
5.10.5 Cancelling a SQL Statement in a Session
You can cancel a SQL statement in a session using the ALTER SYSTEM CANCEL SQL statement. Instead of terminating a session, you can cancel a high-load SQL statement in a session. When you cancel a DML statement, the statement is rolled back.
The following clauses are required in an ALTER SYSTEM CANCEL SQL statement:
SID – Session ID
SERIAL – Session serial number
The following clauses are optional in an ALTER SYSTEM CANCEL SQL statement:
INST_ID – Instance ID
SQL_ID – SQL ID of the SQL statement
You can view this information for a session by querying the GV$SESSION view.
The following is the syntax for cancelling a SQL statement:
ALTER SYSTEM CANCEL SQL 'SID, SERIAL, #INST_ID, SQL_ID';
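A hypothetical usage example (the sid, serial# and username values below are placeholders pulled from GV$SESSION):

-- Identify the session whose current statement should be cancelled:
SELECT sid, serial#, inst_id, sql_id FROM gv$session WHERE username = 'APP_USER';

-- Cancel the currently running statement in that session, leaving the session alive.
-- The optional INST_ID and SQL_ID clauses can be appended as shown in the syntax above.
ALTER SYSTEM CANCEL SQL '123, 45678';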
I'm currently running Microsoft SQL Server Express.
When one user performs a query without committing it, it locks the entire table.
The problem is that malicious users might "ruin" the database by doing so on purpose.
How can I prevent this from happening?
You need to understand database isolation levels: http://en.wikipedia.org/wiki/Isolation_(database_systems). Most likely you are running queries as serializable, which will have that effect. Try submitting some code.
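For example, you could first check what isolation level the sessions are actually using (a sketch; session_id 60 is a placeholder):

-- Current session's options, including the isolation level:
DBCC USEROPTIONS;

-- Or inspect another session (SQL Server 2005+):
-- transaction_isolation_level: 1 = READ UNCOMMITTED, 2 = READ COMMITTED,
-- 3 = REPEATABLE READ, 4 = SERIALIZABLE, 5 = SNAPSHOT
SELECT session_id, transaction_isolation_level
FROM sys.dm_exec_sessions
WHERE session_id = 60;

-- And drop back to the default if something raised it:
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;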
I'm developing an ETL application with batch processing. There is low (i.e. no) concurrency for updates. I'd like to avoid the overhead of granular locks and lock escalation by merely locking the entire table.
I'd like to avoid having to specify TABLOCK in every statement. Is there any way to set the locking granularity at the top of a stored procedure such that every statement automatically gets table locks on every table used? Shared or exclusive doesn't matter, though shared is preferred; the ETL will run overnight with no ad hoc query load and prior to a batch of reports triggered when the ETL is complete.
Thanks!
You will need to take a look at Transaction Isolation Levels
To be honest though, I can't see why you need to be doing anything. I would have thought SQL Server would do a good enough job of locking by itself.
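That said, if you do want coarse locks without repeating yourself, one hedged sketch (dbo.TargetTable and dbo.Staging are placeholders) is either the per-statement TABLOCK hint or disallowing row and page locks on the target's indexes for the duration of the batch, which leaves the engine only table-level locks to take:

-- Per-statement hint (what the question wants to avoid repeating):
INSERT INTO dbo.TargetTable WITH (TABLOCK) (Col1)
SELECT Col1 FROM dbo.Staging;

-- Alternative: force table-granularity locks on the target while the nightly batch runs,
-- then restore the defaults afterwards:
ALTER INDEX ALL ON dbo.TargetTable SET (ALLOW_ROW_LOCKS = OFF, ALLOW_PAGE_LOCKS = OFF);
-- ... ETL statements against dbo.TargetTable ...
ALTER INDEX ALL ON dbo.TargetTable SET (ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON);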
I have a database with a rather large number of tables, about 3500, and an application that needs to access a table list.
On a particular server this takes over 2.5 min to return.
EXEC sp_tables @table_type = "'TABLE'"
I know there are faster ways to do that but sadly I'm not in a position to modify the application and need to find a way to push it below 30 seconds so the application doesn't throw timeout errors.
So: what, if anything, can I do to improve the performance of this sp within SQL Server?
I have seen these stored procedures run slowly if you do not have the VIEW DEFINITION permission granted to your user account. From what I have read, this causes a security check to occur, slowing down the query.
Maybe a SQL guru can comment on why, if this does help your problem.
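If that does turn out to be the cause, the grant itself is a one-liner (the principal names are placeholders):

-- Database-scoped, for the user the application connects as:
GRANT VIEW DEFINITION TO [app_user];

-- Or server-scoped, covering all databases (run against the login, in master):
GRANT VIEW ANY DEFINITION TO [app_login];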
Well, sp_tables is system code and can't be changed (you could work around that in SQL Server 2000, but not in SQL Server 2005+).
Your options are
Change the SQL
Change command timeout
Bigger server
You've already said "no" to the obvious solutions...
You need to approach this just like any other performance problem. Why is it slow? Namely, where does it block? Disk IO? CPU? Network? Lock contention? The scientific method is to use a methodology like Waits and Queues, or its newer SQL 2008 equivalent Troubleshooting Performance Problems in SQL Server 2008. The lazy way is to simply check the wait_type, wait_time and wait_resource columns in sys.dm_exec_requests for the session executing the sp_tables call. Once you find out what is blocking the execution, you can proceed accordingly.
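The lazy check might look like this (session_id 57 stands in for whatever SPID is running the sp_tables call):

SELECT session_id, status, wait_type, wait_time, wait_resource, blocking_session_id
FROM sys.dm_exec_requests
WHERE session_id = 57;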
If I'd venture a guess, you'll discover contention as the cause: other sessions are locking table metadata exclusively and thus block the execution of sp_tables, which has to wait until all operations in front of it finish.
I have a database on MS SQL 2000 that is being hit by hundreds of users at a time. There are intense reports using Reporting Services 2005 hitting the same database.
When there are lots of reports running and people using the database concurrently, we see blocking processes to the point that the system starts timing out any transaction made after some time in that situation.
Is there a global way to minimize blocking so that transactions can continue to flow?
Use optimistic locking, if updates are not happening often and the database is mainly used for reporting.
SQL Server has quite a pessimistic locking default.
A look into SQL Server Table Hints might get you started.
The reports can use WITH(NOLOCK).
Other possibilities are having the reports run off a read-only replica of the database or running off a datawarehouse version of the database which is optimized for the reporting needs.
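For the NOLOCK option, a sketch of the hint on a report query (table and column names are placeholders); the session-wide equivalent is the READ UNCOMMITTED isolation level, with the same dirty-read caveats:

-- Per-table hint:
SELECT OrderId, Total
FROM dbo.Orders WITH (NOLOCK)
WHERE OrderDate >= '20080101';

-- Or for the whole report connection:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;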
Since you are already using NOLOCK hints and READ UNCOMMITTED isolation level for your reports, the investigation needs to turn to the transactional queries coming in. This may get deep. Perhaps applications are keeping transactions open too long. It may also be the case that you have a lot of table scans or range scans in some of the other query volume, and those may be holding shared locks for long-running transactions. Those shared locks will block your writers.
You need to start looking at sp_lock, and seeing what kinds of locks are outstanding, see what locks the blocked queries are trying to obtain, and then examine the queries that are blocking the requestors.
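On SQL Server 2000 that usually means some combination of the following (SPID 53 is a placeholder for a suspect session):

EXEC sp_who2;          -- the BlkBy column shows who is blocking whom
EXEC sp_lock;          -- all outstanding locks, by SPID
EXEC sp_lock 53;       -- locks held or requested by one suspect SPID
DBCC INPUTBUFFER(53);  -- the statement that SPID last submitted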
This will help you if you are unfamiliar with SQL Server locking:
Understanding SQL Server 2000 Locking
Also, perhaps you could describe your disk subsystem. It may be undersized.
Thanks everyone for your support. What we did to mitigate the problem was to create a new database with a log shipping procedure running every hour to keep it in sync with the real one. The reports that do not need real-time data were pointed to that database, and the ones that need real-time data were restricted so only a few people can access them. The drawbacks of this method are that the data can be up to one hour out of sync and that we needed to create a new server for that purpose only. Also, when the log shipping procedure runs, every connection is dropped for a very short period of time, which can be a problem for really long procedures or reports. After this I will review the queries from the reports so I can understand what can be optimized. Thanks, and I will recommend the site to the whole IT department.