Preventing runaway SQL in Oracle

In an existing application there are several dynamically generated SQL statements being executed. Some of them perform very slowly and block the UI.
Without changing the code that generates the SQL, I was wondering if there is a way to prematurely stop an Oracle SQL statement that exceeds
a) an execution time threshold
b) a maximum number of result rows
While b) sounds easy, it is not in the application infrastructure I am dealing with, because I do not get a recordset back that I can iterate over while the SQL executes. I guess a huge number of result rows would eventually trigger the time threshold anyway.
I read something about the Oracle Resource Manager, but I wasn't sure whether it can address a) and b), or whether it is the easiest way to solve this. I was hoping there are session/connection options that would help me.
Thanks in advance!

Create a profile to limit the CPU time on a single SQL call. Assign that profile to the application user.
--Create a profile that limits CPU per call to 1 second (CPU_PER_CALL is measured in hundredths of a second).
create profile temp_profile limit cpu_per_call 100;
--Create user, assign profile.
create user profile_test_user identified by "asDF1234!";
alter user profile_test_user profile temp_profile;
grant connect to profile_test_user;
That user will get errors like this:
PROFILE_TEST_USER#someDB> select count(*) from user_objects;
COUNT(*)
----------
0
PROFILE_TEST_USER#someDB> select count(*) from all_objects;
select count(*) from all_objects
*
ERROR at line 1:
ORA-02393: exceeded call limit on CPU usage
In general this approach should be a last resort. It's usually better to spend time tuning queries and databases.
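Also note that profile resource limits such as CPU_PER_CALL are only enforced when the RESOURCE_LIMIT initialization parameter is TRUE, so check that first:
--Profile resource limits are ignored unless RESOURCE_LIMIT is TRUE.
show parameter resource_limit
--Enable enforcement; takes effect immediately, no restart needed.
alter system set resource_limit = true;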

Look into Resource Consumer Groups. You can limit things like CPU time for sessions/queries belonging to specific groups of users.
This would be a database-side solution, not an application one, so it would apply to that user regardless of how they connect to the database.
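For example, a Resource Manager plan can cancel a long-running call outright rather than just throttling it. A rough sketch (the plan and group names here are made up) that cancels any call in the group running longer than 60 seconds:
begin
  dbms_resource_manager.create_pending_area();
  dbms_resource_manager.create_consumer_group('LIMITED_GROUP', 'application users to limit');
  dbms_resource_manager.create_plan('LIMIT_RUNAWAY_PLAN', 'cancel runaway SQL');
  --CANCEL_SQL is a built-in switch target that aborts the current call
  --once switch_time seconds of activity are exceeded.
  dbms_resource_manager.create_plan_directive(
    plan             => 'LIMIT_RUNAWAY_PLAN',
    group_or_subplan => 'LIMITED_GROUP',
    comment          => 'cancel calls running longer than 60 seconds',
    switch_group     => 'CANCEL_SQL',
    switch_time      => 60);
  --Every plan needs a directive for OTHER_GROUPS.
  dbms_resource_manager.create_plan_directive(
    plan             => 'LIMIT_RUNAWAY_PLAN',
    group_or_subplan => 'OTHER_GROUPS',
    comment          => 'no limits for everyone else');
  dbms_resource_manager.validate_pending_area();
  dbms_resource_manager.submit_pending_area();
end;
/
You would still need to map the application user into LIMITED_GROUP and activate the plan via the RESOURCE_MANAGER_PLAN parameter.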

Related

How to get a mutual exclusion on select queries in SQL Server

I know maybe I'm asking something stupid. In my application, users can create a sort of agenda, but only a specific number of agendas is allowed per day. So, users perform this pseudo-code:
select count(*) as created
from Agendas
where agendaDay = 'dd/mm/yyyy'
if created < allowedAgendas {
insert into Agendas ...
}
All this obviously MUST be executed in mutual exclusion: only one user at a time can read the number of created agendas and, possibly, insert a new one if allowed.
How can I do this?
I tried opening a transaction with the default Read Committed isolation level, but this doesn't help: during the transaction, other users can still read the number of created agendas with a SELECT query and so try to insert a new one even when it shouldn't be allowed.
I don't think changing the isolation level could help.
How can I do this?
For testing I'm using SQL Server 2008 while in our production server SQL Server 2012 is run.
It sounds like you have an architecture problem there, but you may be able to achieve this requirement with:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
If you're reading and inserting within the same transaction, I don't see where the problem would be; but if you're expecting interactive input between the count and the insert, you should probably ensure you do this within a single session, or implement some kind of queuing functionality.
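Alternatively, locking hints can serialize just the competing count-and-insert without raising the isolation level for the whole session. A minimal sketch against the question's table; @day and @allowedAgendas are placeholder values:
DECLARE @day varchar(10) = 'dd/mm/yyyy';  -- placeholder
DECLARE @allowedAgendas int = 10;         -- placeholder limit
DECLARE @created int;
BEGIN TRAN;
-- UPDLOCK + HOLDLOCK take and hold a range lock on the counted rows,
-- so a concurrent session blocks on its own COUNT until we commit.
SELECT @created = COUNT(*)
FROM Agendas WITH (UPDLOCK, HOLDLOCK)
WHERE agendaDay = @day;
IF @created < @allowedAgendas
    INSERT INTO Agendas (agendaDay) VALUES (@day);  -- plus the other columns
COMMIT;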

Sql Server 2008 Update query is taking so much time

I have a table named Companies with 372,370 records, and there is only one row with CustomerNo = 'YP20324'.
I am running the following query and it is taking a very long time; I waited 5 minutes and it was still running. I can't figure out where the problem is.
UPDATE Companies SET UserDefined3 = 'Unzustellbar 13.08.2012' WHERE CustomerNo = 'YP20324'
Don't you have UPDATE triggers on that table?
Do you have a cascading foreign key based on that column?
Are you sure about the performance of your server? Take a look at memory and CPU while you execute the query (for example, on a 386 with 640 MB I could understand it being slow :p).
As for locks, you can right-click the database and, in the reports, see the blocking transactions. Sometimes that helps with concurrent access.
Try adding an index on the field you are using in your WHERE clause:
CREATE INDEX ix_CompaniesCustomerNo ON Companies(CustomerNo);
Also check if there are other active queries which might block the update.
Try this SQL and see what is running:
SELECT TOP 20
R.session_id, R.status, R.start_time, R.command, Q.text
FROM
sys.dm_exec_requests R
CROSS APPLY sys.dm_exec_sql_text(R.sql_handle) Q
WHERE R.status in ('runnable')
ORDER BY R.start_time
More details:
List the queries running on SQL Server
or
http://sqlhint.com/sqlserver/scripts/tsql/list-long-running-queries
Once I found someone shrinking the database and blocking everyone else.
More likely than not, your UPDATE is not doing anything; it is just waiting, blocked by some other statement. Use Activity Monitor to investigate what is causing the blocking. Most likely you have another statement somewhere that started a transaction you forgot to close.
There could be other causes too, e.g. database/log growth. Only you can do the investigation. An index on CustomerNo is required, true, but a missing index is unlikely to explain 5 minutes on 370k records. Blocking is more likely.
There are more advanced tools out there like sp_whoisactive.
5 minutes is way too long for 370k rows, even without any indexes; someone else is locking your update. Use sp_who2 (or Activity Monitor) and check the BlockedBy column to find who is blocking your update.
I would suggest rebuilding your indexes; that may well help.
If you do not have an index on the CustomerNo field, you should add one.
In my case, there was a process blocking the update:
Run EXEC sp_who;
Find the process that is blocked by inspecting the blk column; let's say we find a process blocked by '73'.
Inspect the record with spid = '73' and, if it's not important, run kill 73;
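A more direct way to see the same thing is the sys.dm_exec_requests DMV, which reports the blocker for each waiting request:
-- Sessions that are currently blocked and who is blocking them.
SELECT session_id, blocking_session_id, wait_type, wait_time, command
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;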
370k records is nothing for SQL Server. You should check the indexes on this table; each index makes the update operation take longer.

What does "query" mean in SQL Server in "use query governor"

If I am executing a stored procedure containing a number of successive query statements,
does "use query governor" apply to each statement executed in the stored procedure, or to a single executed statement - in this case the entire stored procedure?
I think you are mixing up some concepts.
The first concept is: what is a transaction? That depends on whether you are using explicit or implicit transactions.
By default, SQL Server runs in autocommit mode: each individual statement is its own transaction and commits automatically (the IMPLICIT_TRANSACTIONS setting is OFF by default).
If you want all the statements in a stored procedure to either commit or roll back together, you have to use BEGIN TRAN, COMMIT and/or ROLLBACK statements with error checking in the stored procedure.
Now let's talk about the second concept. Resource Governor is used to limit the amount of resources given to a particular group of users.
Basically, a login is mapped by a classifier function to a workload group and resource pool. This allows you to put all your users in a low-priority group, giving them only a small slice of the CPU and memory, while your production jobs sit in a high-priority group with the lion's share of the resources.
This prevents a typical user from writing a report with a huge CROSS JOIN that causes a performance issue on the production database.
I hope this clears up the confusion. If not, please ask exactly what you are looking for.
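One more note: the "use query governor" checkbox controls the query governor cost limit, which is a separate feature from Resource Governor. It can be set server-wide or per session, and the limit is compared against the optimizer's estimated cost of each query:
--Server-wide (requires sysadmin):
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'query governor cost limit', 300;
RECONFIGURE;
--Per-session override:
SET QUERY_GOVERNOR_COST_LIMIT 300;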
It appears the answer to my question is that a stored procedure counts as a query in this context.
We spent some time examining an issue with a stored procedure comprising a number of EXEC'd DML statements, which timed out with "use query governor" selected, at the configured number of seconds. Deselecting "use query governor" resolved the problem.

How to investigate why a SQL script that normally takes 2 minutes suddenly takes 2 hours?

My colleague asked me a question today:
"I have a SQL script containing 4 SELECT queries. I have been using it
daily for more than a month, but yesterday the same script took 2 hours
and I had to abort execution."
His questions were:
Q1. What happened to this script on that day?
Q2. How can I check which of those 4 queries got executed and which one was the culprit for the abort?
My answer to Q2 was to use SQL Profiler and check the trace for SQL statement events.
For Q1:
I asked him a few questions:
What was the volume of data on that day? His answer: no change.
Was there any change in indexing, i.e. might someone have dropped an index? His answer: no change.
Was it trapped in a deadlock? Did you check the dynamic management views to track it? His answer: not in a deadlock.
What else should I have considered asking? Can there be any other reason for this?
Since I didn't see the query, I can't paste it here.
Things to look at (SQL Server):
Statistics out of date? Has somebody run a large bulk insert operation? Run update statistics.
Change in indexing? If so, if it's a stored procedure, check the execution plan and/or recompile it...then check the execution plan again and correct any problems.
SQL Server caches execution plans. If your query is parameterized or uses if-then-else logic, and the first time it runs the parameters are an edge case, the cached execution plan can work poorly for ordinary executions. You can read more about this...ah...feature at:
http://www.solidq.com/sqj/Pages/2011-April-Issue/Parameter-Sniffing-Problem-with-SQL-Server-Stored-Procedures.aspx
http://social.msdn.microsoft.com/Forums/en-US/transactsql/thread/88ff51a4-bfea-404c-a828-d50d25fa0f59
SQL poor stored procedure execution plan performance - parameter sniffing
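If stale statistics or a bad cached plan looks like the culprit, both are cheap to rule out; a sketch, where dbo.MyProcedure stands in for the real procedure name:
-- Refresh statistics database-wide (only touches tables whose stats are stale).
EXEC sp_updatestats;
-- Mark one suspect procedure so its next execution compiles a fresh plan.
EXEC sp_recompile 'dbo.MyProcedure';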
In this case my approach would be:
Here is the case: he had to abort the execution because the query was taking more than the expected time and didn't complete. My understanding is that there might have been a blocking session or uncommitted transaction against a table the script was querying (started by a different user that day). A SELECT statement will wait for such a transaction (an uncommitted UPDATE, INSERT or DELETE) to complete before it can read the affected rows, so the query may simply have been waiting. Check for blocking sessions.
Within a session, SQL Server switches between threads. Check whether the thread running your query is in 'running', 'runnable' or 'suspended' state; in this case it was probably suspended. Investigate which state the query is in and why.
The next thing is fragmentation. Best practice is to have an index rebuild/reorganize job configured in your environment, which removes unnecessary fragmentation so your query scans fewer pages when returning data; otherwise it will take longer and longer. Configure the job and execute it at least once a week to keep your indexes and pages fresh.
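To check whether fragmentation is actually the issue before scheduling such a job, sys.dm_db_index_physical_stats reports it per index; the table and index names below are borrowed from the earlier Companies example for illustration:
-- Fragmentation per index; roughly, over 30% suggests REBUILD, 5-30% REORGANIZE.
SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name AS index_name,
       s.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') s
JOIN sys.indexes i ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.avg_fragmentation_in_percent > 5;
-- Then, for a specific index:
ALTER INDEX ix_CompaniesCustomerNo ON Companies REORGANIZE;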
Use EXPLAIN to analyze the four queries. That will tell you how the optimizer is using indexes (or not using them).
Add statements to the script to SELECT NOW() between the queries, so you can measure how long each one took. You can also have MySQL do the arithmetic for you, by storing NOW() in a session variable and then using TIMEDIFF() to calculate the difference between the start and finish of the statement:
SELECT NOW() INTO @start;
SELECT SLEEP(5); -- or whatever query you need to measure
SELECT TIMEDIFF(NOW(), @start);
As @Scott suggests in his comment, use the slow query log to measure the time of long-running queries.
Once you have identified the long-running query, use the query profiler while executing it to see exactly where it spends its time.
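For reference, both of those can be turned on from a client session in MySQL, assuming sufficient privileges; a sketch:
-- Log anything slower than 2 seconds to the slow query log.
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 2;
-- Profile a single statement in the current session.
SET profiling = 1;
SELECT SLEEP(1);  -- the query to investigate
SHOW PROFILES;
SHOW PROFILE FOR QUERY 1;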

Does it make sense to use SPs for simple queries like select * from users

Is it going to be faster if, instead of running
select * from users where id = 1
or
delete from users where id = 1
or
select count(*) from users
I create an SP for each?
Performance-wise, no, it doesn't make any difference.
Security-wise, it does make a difference. Using a sproc means you only need to grant execute permissions on the sproc, whereas the non-sproc approach requires permissions to be granted directly on the underlying table(s).
Network-traffic-wise: a potential, slight difference, more applicable to larger statements where you either send the entire SQL statement across the wire or just the sproc call. Pretty negligible overall.
Maintenance-wise: the sproc approach allows you to (e.g.) tune a query without having to redeploy the whole application.
One thing I would consider is parameterising the query instead of using hardcoded values within the SQL statement, to support execution plan reuse.
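For example, a parameterised call through sp_executesql lets SQL Server reuse one cached plan for every id value instead of caching a new plan per literal (the column names here are placeholders):
EXEC sp_executesql
     N'SELECT column1, column2 FROM Users WHERE id = @id',
     N'@id int',
     @id = 1;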
If you're concerned about efficiency, don't do:
Select * From Users
Instead do:
Select column1, column2 From Users
Otherwise, SQL Server has to return every column in the Users table, which means more I/O and network traffic than necessary and prevents the query from being covered by a narrower index.
Personally, I wouldn't put something like this in a stored proc, but some people would, if they are doing all their data access via stored procedures.
If you are using SPs for all data access, then yes; otherwise I would not use one. It's never a good idea to have inconsistent conventions, especially in source code.
I guess you are asking this because you think SPs are faster than inline queries sent by the program, but starting from SQL Server 2005 this is no longer true, as execution plans are cached for ad-hoc queries as well.
I'm leaning more towards a no on this one but then again I can see scenarios where you may want to do this.
For example, you may wish to implement a layer of security by utilizing stored procedures.
There is no improved plan reuse by using a stored procedure when considering the sample queries you have provided.