In order to retrieve an ID, I first do a select and then an update, in two consecutive queries.
The problem is that I keep running into locked rows. I've read that putting both statements, the Select and the Update, into one stored procedure helps with the locks. Is this true?
The queries I run are:
select counter
from dba.counter_list
where table_name = :TableName
update dba.counter_list
set counter = :NewCounter
where table_name = :TableName
The problem is that multiple users may select the same row at the same time, and it is also possible that they update the same row.
Assumptions:
you're using Sybase ASE
your select returns a single value for counter
you may want the old counter value for some purpose other than performing the update
Consider the following update statement which should eliminate any race conditions that may occur with multiple users running your select/update logic concurrently:
declare @counter int -- change to the appropriate datatype
update dba.counter_list
set @counter = counter, -- grab current value
counter = :NewCounter -- set to new value
where table_name = :TableName
select @counter -- send previous counter value to client
the update obtains an exclusive lock on the desired row (or page/table depending on table design and locking scheme)
with an exclusive lock in place you're able to retrieve the current value and set the new value with a single statement
Whether you submit the above via a SQL batch or a stored proc call is up to you and your DBA to decide ...
if statement cache is disabled, a SQL batch will need to be compiled each time it's submitted to the dataserver
if statement cache is enabled, and you submit this SQL batch on a regular basis, then there's a chance the previous query plan is still in statement/procedure cache, thus eliminating the (costly) compilation step
if a copy of the previous stored proc (query) plan is not in procedure cache, then you'll incur the (costly) compilation step when loading the proc's query plan into procedure cache
a stored proc is typically easier to replace in the event of a syntax/logic/performance issue (as opposed to editing, and possibly compiling, a front-end application)
... add your (least) favorite argument for SQL batch vs stored proc (vs prepared statement?) vs ??? ...
Is the table counter_list accessed by multiple clients concurrently?
The best practice for OLTP is to call a stored procedure that performs the update logic in one transaction (a sketch follows below).
Check that the table dba.counter_list has an index on the column table_name.
Also check that the table uses row-level locking.
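A minimal sketch of such a procedure, assuming Sybase ASE T-SQL (the procedure name is hypothetical; the parameters mirror the host variables in the question):
create procedure dba.p_get_counter
    @TableName  varchar(255),
    @NewCounter int
as
begin
    declare @counter int
    begin tran
    -- grab the old value and set the new one in a single atomic statement
    update dba.counter_list
       set @counter = counter,
           counter  = @NewCounter
     where table_name = @TableName
    commit tran
    -- return the previous counter value to the client
    select @counter
end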
Related
For my job I need to prepare two tables (CTAS) and then do some joins between them. For this I created a script (run in SQL Developer) which creates these two tables sequentially, one after another. Since the two tables are not related, I'd like to create them in parallel. Is it possible in a SQL script to start two table creations (or two other scripts) in parallel and then proceed when both have finished?
Here's one option.
I wouldn't really CTAS - I'd rather create both tables in advance and then insert rows into them. Why? Because this approach uses stored procedures, which - in order to perform DDL (which is what CTAS is) - would require dynamic SQL. Not that it is impossible to do; on the contrary, but it is way simpler not to use it.
I'd create yet another table (let's call it table_done) which contains only one row with two columns: table_1 and table_2 whose values can be 0 (meaning: data for that table is not ready) or 1 (data ready).
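For example, table_done could be as simple as this (a sketch; the datatypes are an assumption):
create table table_done (table_1 number(1), table_2 number(1));
insert into table_done (table_1, table_2) values (0, 0);
commit;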
Furthermore, I'd create two stored procedures which look the same; the only difference is that each of them inserts rows into its own table:
create procedure p_insert_1 as
begin
-- remove old data
execute immediate 'truncate table table_1';
-- table_1 data not ready
update table_done set table_1 = 0;
-- prepare new data
insert into table_1 (...) select ...;
-- table_1 data ready
update table_done set table_1 = 1;
commit;
end;
The 3rd, "main" procedure, is the one you'd run manually. What would it do? Create two one-time database jobs that run immediately, each of them starting its own p_insert procedure so that they run in parallel. That procedure would then (in a loop) check whether both columns in table_done are set to 1 and - if so - continue execution.
create procedure p_main is
l_job_1 number;
l_job_2 number;
--
l_t1_done number;
l_t2_done number;
begin
dbms_job.submit(l_job_1, 'begin p_insert_1; end;');
dbms_job.submit(l_job_2, 'begin p_insert_2; end;');
commit; -- dbms_job-submitted jobs don't start until the session commits
loop
select table_1, table_2
into l_t1_done, l_t2_done
from table_done;
if l_t1_done = 1 and l_t2_done = 1 then
-- exit the loop
exit;
else
-- tables aren't ready yet; wait 60 seconds and try again
dbms_lock.sleep(60);
end if;
end loop;
-- process data prepared in table_1 and table_2
end;
That's just a simplified idea; I didn't test it myself, so I apologize for any errors. Also:
instead of dbms_job, you could use the more capable dbms_scheduler (see the sketch after this list)
if you're on 18c (or later), use dbms_session.sleep instead of dbms_lock.sleep
and so forth
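A sketch of the dbms_scheduler alternative (the job name is hypothetical; you'd submit one such job per insert procedure):
begin
  dbms_scheduler.create_job(
    job_name   => 'J_INSERT_1',
    job_type   => 'STORED_PROCEDURE',
    job_action => 'P_INSERT_1',
    enabled    => true,   -- starts immediately
    auto_drop  => true);  -- one-time job; it drops itself when done
end;
/
Unlike dbms_job, dbms_scheduler.create_job commits implicitly, so no separate commit is needed for the job to start.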
Use SQL parallelism instead of process concurrency. While the words parallelism and concurrency are colloquially interchangeable, in Oracle they have different meanings. Parallelism implies that the SQL engine handles all the coordination of breaking work into little pieces, running those pieces at the same time, and then re-assembling the results at the end. Concurrency implies that the user will create multiple sessions and handle the coordination manually.
For simply creating two tables, parallelism will probably be simpler and faster than concurrency. For parallelism, you may only need to create the table in parallel. (And you probably want to reset the parallelism back to none at the end.)
CREATE TABLE TABLE1 PARALLEL 2 AS SELECT ...;
ALTER TABLE TABLE1 NOPARALLEL;
The PARALLEL 2 option instructs Oracle to run two server processes at the same time while the SQL statement is running. You can easily increase that number, but don't go too high or you'll be stealing too many resources from other sessions.
DBMS_SCHEDULER and other concurrency mechanisms are powerful and useful, but I recommend avoiding them if possible. Running and monitoring scheduler jobs will likely be much more complicated than the preceding code. (Although you may still need to occasionally monitor the parallel SQL statement using a tool like OEM SQL Monitor Reports to ensure that the server is actually using the requested parallelism.)
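If you do want to verify that the requested parallelism is being used, one option (a sketch; it requires SELECT access to the V$ views) is to count the parallel execution servers attached to your session while the statement runs:
select count(*)
from v$px_session
where qcsid = sys_context('userenv', 'sid');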
I'm using SQL Server 2008 R2.
I have a view; let's call it view1. This view is complex and slow. It cannot be made into an indexed view because it uses left joins and various other trickery. As such, we created a stored procedure which basically:
obtains an exclusive lock
selects * into computed_view1_tmp from view1; (slow)
creates indexes on the above computed table (slow)
renames computed_view1 to computed_view1_todelete; and does the same for its indexes (assumed fast)
renames computed_view1_tmp to computed_view1; and does the same for its indexes (assumed fast)
drops the table computed_view1_todelete (slow)
releases the lock.
We run this procedure when we know we're changing the data in our web application. We then have other views, such as view2 using computed_view1 instead of view1.
Once in a while, we get:
Invalid object name 'dbo.computed_view1'. Could not use view or
function 'dbo.view2' because of binding errors.
I assume this is because we're trying to access dbo.computed_view1 at the same time as it's being renamed. I assume this is a very short period, but the frequency I am seeing this error in my logs makes me wonder if something else might be at play. I'm getting the error many times per day on a site with about a dozen users active throughout the day.
In development, this procedure takes about five seconds given the amount of data in the view. Renaming is instantaneous. In production, it must be taking longer but I don't understand why. I once saw the procedure fail to obtain the exclusive lock within 90 seconds.
Any thoughts on how to fix or a better solution?
Edit: Extra notes on my locking - maybe I'm not doing this right:
BEGIN TRANSACTION
DECLARE @result int
-- note: @LockTimeout is in milliseconds
EXEC @result = sp_getapplock @Resource = 'lock_computed_view1', @LockMode = 'Exclusive', @LockTimeout = 90
IF @result NOT IN ( 0, 1 ) -- Only successful return codes
BEGIN
PRINT @result
RAISERROR ( 'Lock failed to acquire...', 16, 1 )
END
ELSE
BEGIN
-- rest of the magic
END
EXEC @result = sp_releaseapplock @Resource = 'lock_computed_view1'
COMMIT TRANSACTION
If your locking and transaction scope are right, I would expect other transactions to wait and never see the view missing. This might be a SQL Server idiosyncrasy that I don't know about.
It is often possible to do without dynamic DDL. Here are two ways to do it:
TRUNCATE the computed table and insert into it. This takes an exclusive lock automatically. No need to rename. All of this is atomic and supports rollback.
Use a staging table with the same schema. Work on that. So far, no service interruption at all. Then SWITCH PARTITION the staging table with the production table (sketched below). This is quick and atomic, and it does not require Enterprise Edition.
With these approaches the problem is solved by just not renaming.
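A minimal sketch of the second approach, assuming a staging table with an identical schema and indexes (the staging table name is hypothetical):
BEGIN TRANSACTION;
-- the target of a SWITCH must be empty
TRUNCATE TABLE dbo.computed_view1;
-- metadata-only operation: effectively instant and atomic
ALTER TABLE dbo.computed_view1_staging SWITCH TO dbo.computed_view1;
COMMIT TRANSACTION;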
What is the DB2 equivalent of SQL Server's SET NOCOUNT ON?
"From the SQL Server documentation:
SET NOCOUNT ON... Stops the message that shows the count of the number of rows affected by a Transact-SQL statement or stored procedure from being returned as part of the result set...
For stored procedures that contain several statements that do not return much actual data, or for procedures that contain Transact-SQL loops, setting SET NOCOUNT to ON can provide a significant performance boost, because network traffic is greatly reduced."
My problem is that when I update a row in a table, a trigger runs that updates another row in a different table.
In Hibernate I get this error: "Batch update returned unexpected row count from update; actual row count: 2; expected: 1".
I think that because of the trigger, DB2 returns 2 instead of 1, which is correct. However, is there any way to make DB2 return 1 without removing the trigger, or can I disable the check in Hibernate?
How should I handle this issue?
Can anyone please tell me the equivalent of SQL Server's "SET NOCOUNT ON" in DB2?
There is no equivalent to SET NOCOUNT in DB2 because DB2 does not produce any informational messages after a DML statement has completed successfully. Instead, the DB2 driver stores that type of information in a local, connection-specific data structure called the SQL communications area (SQLCA). It is up to the application (or whatever database framework or API the application is using) to decide which SQLCA variables to examine after executing each statement.
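For example, a DB2 SQL PL routine can read that row count itself via GET DIAGNOSTICS (a sketch; the table, column, and variable names are hypothetical):
BEGIN
  DECLARE v_rows INTEGER;
  UPDATE my_table SET col1 = col1 + 1 WHERE id = 1;
  -- read the affected-row count DB2 recorded for the previous statement
  GET DIAGNOSTICS v_rows = ROW_COUNT;
END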
In your case, your application has delegated its database interaction to Hibernate, which compares the number of affected rows reported by DB2 in the SQLCA with the number of rows Hibernate expected its UPDATE statement to change. Since Hibernate isn't aware of the AFTER UPDATE trigger you created, it expects the update statement to affect only one row, but the SQLCA shows that two rows were updated (one by Hibernate's update statement, and one by the AFTER UPDATE trigger on that table), so Hibernate throws an exception to complain about the discrepancy.
This leaves you with two options:
Drop the trigger from that table and instead define an equivalent followup action in Hibernate. This is not an ideal solution if other applications that don't use Hibernate are also updating the table in question, but that's the sort of decision a team gets to make when they inflict Hibernate on a database.
Keep the AFTER UPDATE trigger where it is in DB2, and examine your options for defining Hibernate object mappings to determine if there's a way to at least temporarily disable Hibernate's row count verification logic. One approach that looks particularly encouraging is to specify the ResultCheckStyle.NONE option as part of a custom @SQLUpdate annotation.
For SQL Server and Sybase, there appears to be a third option: Hide the activity of an AFTER UPDATE trigger from Hibernate by activating SET NOCOUNT ON within the trigger. Unfortunately, there is no equivalent in DB2 (or Oracle, for that matter) that allows an application to selectively skip certain activities when tallying the number of affected rows.
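For reference, that third option looks roughly like this in SQL Server (a sketch; the trigger, table, and column names are hypothetical):
CREATE TRIGGER trg_main_after_update ON dbo.main_table
AFTER UPDATE
AS
BEGIN
    -- hide this trigger's row counts from the client driver (and thus from Hibernate)
    SET NOCOUNT ON;
    UPDATE a
    SET last_updated = GETDATE()
    FROM dbo.audit_table a
    JOIN inserted i ON i.id = a.id;
END;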
Suppose I fetch a result set (RS) based on certain conditions and start looping through it; then, in certain situations, I update, insert, or delete records, which may have been part of this RS, using separate prepared statements.
How does this affect the result set? My inclination is to think that since the statement which fetched this RS was executed earlier in the process, this RS will now be blind to the changes made by my prepared statements.
Pseudocode:
Prepare Statement ps1
execute ps1 -> get Result Set rs1
loop through rs1
{
Update or delete records using other prepared statements
}
Read Consistency
Oracle guarantees that the set of data seen by a statement is consistent with respect to a single point in time and does not change during statement execution (statement-level read consistency).
That is why, if you have a statement such as
insert into t
select * from t;
Oracle will simply duplicate all rows without going into an infinite loop or raising an error.
There are other implications because of this.
1) Oracle reads from the rollback segment to provide you with this read-consistent image of your data. So, if your rollback segments are not correctly sized, or you commit across fetches, you'll get the "Snapshot too old" error, since your rollback data is no longer available.
OK, so if that is the case, is it possible to refresh it while making updates? I mean, aside from making the cursor updatable and using the built-in functions of the result set.
2) Each query sees the data as of the point in time it began. If by refresh you mean re-running the query, then the data you see might be different again if you commit in your PL/SQL body or within a PL/SQL loop, or if other transactions are running in your system concurrently.
It doesn't. The result set of a query/cursor is kept by the database, even if you alter or remove the rows that are the base of this result set. So you are correct, it is blind to changes made after the statement is executed.
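A small PL/SQL sketch illustrating this (the table name t and column id are hypothetical): the cursor's result set is fixed when the loop opens it, so deleting rows mid-loop does not change what the loop sees.
begin
  for r in (select id from t order by id) loop
    -- this delete does not affect the rows the already-open cursor will still return
    delete from t where id = r.id;
  end loop;
end;
/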
I'm working on a web app connected to Oracle. We have a table in Oracle with a column "activated". Only one row can have this column set to 1 at any one time. To enforce this, we have been using the SERIALIZABLE isolation level in Java, but we are running into the "cannot serialize transaction" error and cannot work out why.
We were wondering if an isolation level of READ COMMITTED would do the job. So my question is this:
If we have a transaction which involves the following SQL:
SELECT *
FROM MODEL;
UPDATE MODEL
SET ACTIVATED = 0;
UPDATE MODEL
SET ACTIVATED = 1
WHERE RISK_MODEL_ID = ?;
COMMIT;
Given that it is possible for more than one of these transactions to be executing at the same time, would it be possible for more than one MODEL row to have the activated flag set to 1?
Any help would be appreciated.
Your solution should work: your first update will lock every row in the table. If another transaction has not finished, the update will wait. Your second update then guarantees that only one row will have the value 1, because you hold the locks (this doesn't prevent INSERT statements, however).
You should also make sure that the row with the RISK_MODEL_ID exists (or you will have zero rows with the value 1 at the end of your transaction).
To prevent concurrent INSERT statements, you would LOCK the table (in EXCLUSIVE MODE).
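For example, using the MODEL table from the question:
lock table model in exclusive mode;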
You could consider using a unique function-based index to let Oracle enforce the constraint that only one row has the activated flag set to 1.
CREATE UNIQUE INDEX MODEL_IX ON MODEL ( DECODE(ACTIVATED, 1, 1, NULL));
This would stop more than one row having the flag set to 1, but does not mean that there is always one row with the flag set to 1.
If what you want is to ensure that only one transaction can run at a time then you can use the FOR UPDATE syntax. As you have a single row which needs locking this is a very efficient approach.
declare
cursor c is
select activated
from model
where activated = 1
for update of activated;
r c%rowtype;
begin
open c;
-- this statement will fail if another transaction is running
fetch c into r;
....
update model
set activated = 0
where current of c;
update model
set activated = 1
where risk_model_id = ?;
close c;
commit;
end;
/
The commit frees the lock.
The default behaviour is to wait until the row is freed. Alternatively, we can specify NOWAIT, in which case any other session attempting to lock the current active row fails immediately, or we can add a WAIT clause with a timeout in seconds (both variants are shown below). NOWAIT is the option to choose to absolutely avoid the risk of hanging, and it also gives us the chance to inform the user that someone else is updating the table, which they might want to know.
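The variants, using the same cursor query as above (the 5-second wait is illustrative):
select activated from model where activated = 1
for update of activated nowait; -- raises ORA-00054 immediately if the row is locked
select activated from model where activated = 1
for update of activated wait 5; -- waits up to 5 seconds, then raises ORA-30006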
This approach is much more scalable than updating all the rows in the table. Use a function-based index as WW showed to enforce the rule that only one row can have ACTIVATED=1.