Oracle Forms - how to speed up POST_QUERY trigger? - sql

This is code I'm dealing with:
declare
--some cursors here
begin
if some_condition = 'N' then
raise form_trigger_failure;
end if;
--some fetches here
end;
It's from a POST-QUERY trigger, and basically my problem is that my block in Oracle Forms returns 20k rows, so the POST-QUERY trigger fires for each of those rows. Execution takes a couple of minutes and I want to speed it up to a couple of seconds. Data is validated in some_condition (it is a function, but it returns its value really fast). If the condition isn't met, form_trigger_failure is raised. Is there any way to speed up this validation without changing the logic? (The same number of rows should be returned; this validation is important.)
I've tried changing block properties, but it didn't help.
Also, when I deleted the whole IF statement, data was returned really fast, but it wasn't validated, and rows were returned that shouldn't be visible.

Data is validated in some_condition ...
That's OK; but why would you perform validation in a POST-QUERY trigger? It fetches data that already exists in the database, so it should already be valid. Otherwise, why did you store it in the first place?
POST-QUERY should be used to populate non-database items.
Validation should be handled in WHEN-VALIDATE-ITEM or WHEN-VALIDATE-RECORD triggers, not in POST-QUERY.
I suggest you split those two actions. If certain parts of code can/should be shared between those two types of triggers, put it into a procedure (within a form) and call it when appropriate.
By the way, POST-QUERY won't fire for all 20K rows (unless you're buffering that many rows, and - if you are - you shouldn't).
Moreover, you say the function returns its result really fast - probably true when it runs for a single row. Let it run on 200, 2,000, or 20,000 rows:
select your_function(some_parameters)
from that_table
where rownum < 2; --> change 2 to 200, 2000, 20000 and see what happens
On the other hand, what's the purpose of fetching 20,000 rows? Who's going to review all of that? Are you sure that this is the way you should be doing it? If so, consider switching to a stored procedure; let it perform those validations within the database, and let the form fetch "clean" data.
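If the rule in some_condition can be expressed in SQL, one alternative worth considering (a sketch only, using a hypothetical IS_ROW_VALID function and illustrative block/item names) is to push the filter into the block's query from a PRE-QUERY trigger, so invalid rows are never fetched at all:
-- PRE-QUERY trigger: apply the validation as a one-time WHERE clause,
-- so the filtering happens once in the database instead of 20,000 times client-side.
-- MY_BLOCK and IS_ROW_VALID are illustrative names, not from the question.
begin
set_block_property('MY_BLOCK', onetime_where, 'is_row_valid(id) = ''Y''');
end;
That keeps the same rows out of the result, but lets the database do the filtering in a single query.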

Related

Firebird increment a field value in place on update

I'm using Firebird 2.5.9. I have a table of information on a set of hardware impact devices that includes a running counter of the # of times the device has impacted. Each time a device is "fired", the hardware will impact 1 or more times; upon completion of the firing event, that device's row is updated with the timestamp and a result code, and I need to increment the running counter column with the number of impacts for that fire event.
I can do this as a separate query to get the field's current value, increment it and use that new value in the update statement, but that seems like a lot of extra overhead. This sort of scenario can't be that uncommon, so I assume that there's some straightforward way to do this within an update statement, but I don't know what it is. I also realize that I could do this as a stored procedure, but for now I want to just do it in the update statement if possible.
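For reference, the in-place form being asked about would look roughly like this (a sketch with illustrative table and column names, Firebird syntax; the counter column can simply reference itself in the SET clause):
-- Increment the running counter directly in the UPDATE; :impacts is the number
-- of impacts recorded for this firing event (names are illustrative).
UPDATE devices
SET fired_at = CURRENT_TIMESTAMP,
    result_code = :result_code,
    strike_count = strike_count + :impacts
WHERE device_id = :device_id;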
I've done this for now by expanding the existing before-insert trigger to a before-insert-or-update trigger:
CREATE TRIGGER TBIU_RPRS1 FOR RPRS ACTIVE BEFORE INSERT OR UPDATE
AS BEGIN
IF (INSERTING AND NEW.ID IS NULL) THEN NEW.ID = NEXT VALUE FOR SEQ_GLOBAL;
IF (UPDATING) THEN NEW.STRIKES = OLD.STRIKES + NEW.STRIKES;
END;
Running counters, sums, etc. used to be called "stored aggregates". They are usually maintained by triggers on the events tables. But before using them, make sure that a simple view with a plain count() is not fast enough for you.
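As a rough illustration of that last suggestion - assuming a hypothetical impacts table with one row per impact, which is not part of the question's schema - a plain aggregate view needs no maintenance at all:
-- One row per impact event; the counter is computed on demand instead of stored.
CREATE VIEW device_strikes AS
SELECT device_id, COUNT(*) AS strikes
FROM impacts
GROUP BY device_id;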

How to discard / ignore one of a stored procedure's many return values

I have a stored procedure that returns 2 values.
In another procedure, I call this (edit: NOT selectable) procedure but only need one of the two returned values.
Is there a way to discard the other value? I'm wondering what is a good practice, and hoping for a small performance gain.
Here is how I call the procedure without error:
CREATE or ALTER procedure my_proc1
as
declare variable v_out1 integer default null;
declare variable v_out2 varchar(10) default null;
begin
execute procedure my_proc2('my_param')
returning_values :v_out1, :v_out2;
end;
That is the only way I found to call this procedure without getting a -607 error 'unsuccessful metadata update request depth exceeded. (Recursive definition?)' whenever I use only one variable v_out1.
So my actual question is: can I avoid creating a v_out2 variable for nothing, as I will never use it (that value is only used in other procedures which also call my_proc2)?
Edit: the stored procedure my_proc2 is actually not selectable. But I made it selectable after all.
Because your stored procedure is selectable, you should call it with a SELECT statement, i.e.
select out1, out2 from my_proc2('my_param')
and in that case you can indeed omit some of the return value(s). However, I wouldn't expect a noticeable performance gain, as the logic inside the SP that calculates the omitted field is still executed.
If your procedure is not selectable, then creating a wrapper SP is the only way, but again, it wouldn't give any performance gain, as the code that does the hard work inside the original SP is still executed.
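For the non-selectable case, such a wrapper could look roughly like this (a sketch only; the dummy variable still exists, but it lives in exactly one place instead of in every caller; the wrapper name is illustrative):
create or alter procedure my_proc2_out1 (p_in varchar(10))
returns (out1 integer)
as
-- "ignored" only absorbs the second return value of my_proc2
declare variable ignored varchar(10);
begin
execute procedure my_proc2(:p_in)
returning_values :out1, :ignored;
end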
This is written as an answer (rather than a comment) for the sake of text formatting, while demonstrating the "race conditions" of multithreaded programming (which SQL is) when [ab]using out-of-transaction objects (SQL sequences, a.k.a. Firebird generators).
So, the "use case".
Initial condition: table is empty, generator=0.
You start two concurrent transactions, A and B. For ease of imagining, you may think of those transactions as being started from concurrent connections made by two people working with your program on two networked computers. Actually it does not matter much: if you open those transactions from one and the same connection, the scenario would not change a bit. It's just easier to imagine this way.
Tx.A issues an UPDATE-OR-INSERT which inserts a new row into the table. Doing so, it upticks the generator. The transaction is not committed yet. Database condition: the table has one invisible (non-committed) row with auto_id=1, and the generator = 1.
Tx.B issues an UPDATE-OR-INSERT too, which inserts yet another row into the table. Doing so, it also upticks the generator. The transaction maybe commits now, or maybe later; it's irrelevant. Database condition: the table has two rows (one or both invisible/non-committed) with auto_id=1 and auto_id=2, and the generator = 2.
Tx.A hits some error, throws an exception, DOWNTICKS the generator and rolls back. Database condition: the table has one row with auto_id=2, and the generator = 1.
If Tx.B was not committed before, it is committed now. (This "if" is just to demonstrate that it does not matter when the other transaction commits, earlier or later; it only matters that Tx.A downticks the generator after some other transaction has upticked it.)
So, the final database condition: the table has one committed (visible) row with auto_id=2, and the generator = 1.
Any subsequent attempt to add one more row would uptick the generator 1+1=2, fail to insert the new row with a PK violation, and then downtick the generator back to 1, recreating the faulty condition outlined above.
Your database is stuck, and without direct intervention by a DB administrator no further data can be added.
The very idea of rolling back the generator defeats all the intentions generators were created for and all the expectations about generator behavior that the database, the connection libraries and other programmers have.
You just placed a trap on the highway. It is only a matter of time until someone will be caught into it.
Even if you keep guarding this hack with other hacks for now - wasting a lot of time and attention to do that scrupulously and pervasively - one unlucky day in the future another programmer will come along (or you yourself will have forgotten these gory details), start using the generator in the standard, intended way - and run into the trap.
Generators were not made to be backtracked during normal work.
existence of primary key is checked in the procedure before doing anything
Yep, that is the first reaction when a multithreading programmer meets his first race condition: let's just add more prior checks.
A few prior checks can indeed decrease the probability of a clash, but they can never eliminate it completely. And the more use your program sees - the more transactions get opened by more and more concurrent, active users - the sooner this somewhat lowered probability turns out to still be too much.
Think about it: SQL is all about transactions, yet they had to invent and introduce the explicitly out-of-transaction device that a generator/sequence is. If there were a reliable solution without them, it would simply have been used instead of creating such an un-SQL-ish, transaction-boundary-breaking tool.
When you say your SP "checks for PK violation", it is exactly the same as if you dropped the generator altogether and instead just issued the "good old"
:new_id = ( select max(auto_id)+1 from MyTable );
By your description you actually do something like that, but in some indirect way. Something like
while exists( select * from MyTable where auto_id = gen_id(MyGen, +1))
do ;
:new_id = gen_id(MyGen, 0);
You may feel that, because you mentioned generators, you somehow overcame the cross-transaction invisibility problem. But you did not, because the very check "is this PK already taken?" is done against the in-transaction table.
That changes nothing: your two transactions Tx.A and Tx.B would not see each other's records, because neither has committed yet. Now it only takes some unlucky Tx.C that fails and downticks the generator for them to collide on the same ID.
Or not - you do not even need Tx.C and downticking at all!
Here we bump into the multithreading idea about "atomic operations".
Let's look at it again.
while exists( select * from MyTable where auto_id = gen_id(MyGen, +1))
do ;
:new_id = gen_id(MyGen, 0);
In a single-threaded application that code is okay: you just keep running the generator up until you hit a free slot, then you just query the value without changing it. "What could possibly go wrong?" But in a multithreaded environment it is a rake waiting to be stepped on. Example:
Initial condition: the table has 100 rows (auto_id goes from 1 to 100), the generator = 100.
Tx.A starts adding a row, upticks the generator in the while loop and exits the loop. It has not yet reached the second line where the local variable gets assigned. Not yet. The generator = 101, no rows added yet.
Tx.B starts adding a row, upticks the generator in the while loop and exits the loop. The generator = 102, no rows added yet.
Tx.A goes to the second line and reads gen_id(MyGen,0) into the variable for its new row. While it was 101 when it left the loop, it is 102 now!
Tx.B goes to the second line, reads gen_id(MyGen,0) and gets 102 too.
Tx.A and Tx.B both try to insert a new row with auto_id=102.
RACE CONDITION - both Tx.A and Tx.B try to commit their work. One of them succeeds, the other fails. Which one? It is not predictable. The lucky one commits, the unlucky one fails.
The failed transaction downticks the generator.
Final condition: the table has 101 rows, auto_id runs consistently from 1 to 100 and then skips to 102. The generator = 101, which is less than MAX(auto_id).
Now you might want to add more hacks - I mean, more prior checks before actually inserting rows and committing. It will make mistakes less probable, right? Wrong. The more checks you do, the slower the code gets. The slower the code gets, the greater the probability that while one thread runs through all those checks, another thread interferes and alters the situation that was checked a moment ago.
The fundamental issue with multithreading is that any check is a SEPARATE action, and between those actions the situation MAY change. Your procedure may check whatever it wants BEFORE actually inserting the row; it does not guarantee much. By the time you finally get to the row-inserting statement, all the checks you did are a matter of the PAST, and the situation may already have been altered. Whatever guarantees your checks gave in the PAST belong only to that past, not to the moment at hand.
And even if you no longer expect a sure thing, with every new check you add you cannot even be sure whether you just decreased or increased the probability of failure, because multithreading is a bitch - it flows chaotically, out of your control.
So, remember the KISS principle. Until proven otherwise, you most probably do not need SP2 at all; you only need one single UPDATE-OR-INSERT statement.
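In Firebird terms, a minimal sketch of that single-statement approach (illustrative table and column names; the surrogate id comes from the generator via a trigger or identity column and is never rolled back, so gaps are harmless):
-- A retry or a concurrent writer simply turns into an update of the same row.
update or insert into MyTable (natural_key, payload)
values (:natural_key, :payload)
matching (natural_key);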
PS. There was a pretty fun game in my school days called Pascal Robots. There are also C Robots, I've heard, and probably implementations for many other languages. Pascal Robots, though, came with a number of already coded robots demonstrating different strategies and approaches. Some of them were really thought out in very intricate detail. And there was one robot whose program was PRIMITIVE. It only had two loops: if you do not see an enemy, keep turning your radar around; if you do see an enemy, keep running at it and shooting at it. That was all. What could this idiot do against sophisticated robots with creative attack and defense strategies, flanking maneuvers, optimal distances maintained by back-and-forth movement, escape tricks and more? Those sophisticated robots employed very extensive checks and very well-thought-through hacks triggered by those checks. So... that primitive idiot was the second or maybe third best robot in the shipped set; there were only one or two smarties that could outwit it. Against ALL the other robots this lean-and-fast idiot finished them off before they could run through all their checks and hacks thrice. That is what multithreading does to programming. It was astonishing to watch those battles, which went so against our single-threaded intuition.

Write to the same row at the same time without locking?

What I need to do is to write to the same row from two different sources (procedures/methods/services).
The first call that comes in creates the row, and the next one just updates it.
This needs to happen without any locking taking place. And if possible I would like to be able to call either source just once (not repeatedly by dealing with locking errors)
Here is kind of what I have now in a third procedure that the others call; it just inserts a row (only inserting the xyz value) or returns true if a row already exists.
That way it's just fast and it's unlikely that both calls arrive at the same time.
IF EXISTS(SELECT * FROM [dbo].[Wait] WHERE xyz = @xyz)
BEGIN
-- The row exists because the other data source
-- has already inserted a row with the same xyz.
-- UPDATE THE ROW WITH THE DATA COMING IN
END
ELSE
BEGIN
-- No row with value xyz exists, so we INSERT it with
-- the extra data.
END
I know it doesn't guarantee no locking. But in my case it's actually unlikely that both arrive at the same time, and even if they did, it's user controlled, so they would get an error and just try again. BUT I want to solve this.
I have seen Row Versioning popping up, but I'm not sure if that helps or how I should use it.
Have a look at Michael J Swart's article Mythbusting: Concurrent Update/Insert Solutions. It will show you all the possible dos and don'ts, including the fact that MERGE actually doesn't do a great job of solving concurrency issues.
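One pattern commonly cited for this problem - sketched here with a hypothetical Data column, not code from the question - is to attempt the update first under UPDLOCK/SERIALIZABLE hints and insert only if nothing was updated. It does take short-lived locks, but it closes the race between the existence check and the write:
BEGIN TRAN;
-- Take a key-range lock so a concurrent caller waits instead of double-inserting.
UPDATE [dbo].[Wait] WITH (UPDLOCK, SERIALIZABLE)
SET Data = @Data
WHERE xyz = @xyz;
IF @@ROWCOUNT = 0
    INSERT INTO [dbo].[Wait] (xyz, Data) VALUES (@xyz, @Data);
COMMIT;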

Finding Execution time of query using SQL Developer

I am beginner with Oracle DB.
I want to know execution time for a query. This query returns around 20,000 records.
When I look at SQL Developer, it shows only 50 rows, tweakable to a maximum of 500; and using F5, up to 5000.
I would have done by making changes in the application, but application redeployment is not possible as it is running on production.
So, I am limited to using only SQL Developer, and I am not sure how to get the seconds spent executing the query.
Any ideas on this will help me.
If you scroll down past the 50 rows initially returned, it fetches more. When I want all of them, I just click on the first of the 50, then press Ctrl+End to scroll all the way to the bottom.
This will update the display of the time that was used (just above the results it will say something like "All Rows Fetched: 20000 in 3.606 seconds") giving you an accurate time for the complete query.
If your statement is part of an already deployed application and you have the rights to access the view V$SQLAREA, you could check the number of EXECUTIONS and the CPU_TIME. You can search for the statement using SQL_TEXT:
SELECT CPU_TIME, EXECUTIONS
FROM V$SQLAREA
WHERE UPPER (SQL_TEXT) LIKE 'SELECT ... FROM ... %';
This is the most precise way to determine the actual run time. The view V$SESSION_LONGOPS might also be interesting for you.
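If you want wall-clock time rather than CPU time, V$SQLAREA also exposes ELAPSED_TIME (in microseconds), so a per-execution average can be derived with the same kind of placeholder SQL_TEXT filter as above:
SELECT SQL_ID,
       ELAPSED_TIME / NULLIF(EXECUTIONS, 0) AS avg_elapsed_microseconds
FROM V$SQLAREA
WHERE UPPER(SQL_TEXT) LIKE 'SELECT ... FROM ... %';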
If you don't have access to those views, you could also use a cursor loop to run through all records, e.g.
CREATE OR REPLACE PROCEDURE speedtest AS
l_count number := 0;
l_start number;
cursor c_cursor is
SELECT ...;
BEGIN
-- fetch start time stamp here (DBMS_UTILITY.GET_TIME returns hundredths of a second)
l_start := DBMS_UTILITY.GET_TIME;
FOR rec in c_cursor
LOOP
l_count := l_count + 1;
END LOOP;
-- fetch end time stamp here and report the row count and elapsed seconds
DBMS_OUTPUT.put_line(l_count || ' rows in ' || ((DBMS_UTILITY.GET_TIME - l_start) / 100) || ' s');
END;
Depending on the architecture this might be more or less accurate, because the data might need to be transmitted to the system your SQL is running on.
You can change those limits; but you'll be using some time in the data transfer between the DB and the client, and possibly for the display; and that in turn would be affected by the number of rows pulled by each fetch. Those things affect your application as well though, so looking at the raw execution time might not tell you the whole story anyway.
To change the worksheet (F5) limit, go to Tools->Preferences->Database->Worksheet, and increase the 'Max rows to print in a script' value (and maybe 'Max lines in Script output'). To change the fetch size go to the Database->Advanced panel in the preferences; maybe to match your application's value.
This isn't perfect but if you don't want to see the actual data, just get the time it takes to run in the DB, you can wrap the query to get a single row:
select count(*) from (
<your original query>
);
It will normally execute the entire original query and then count the results, which won't add anything significant to the time. (It's feasible it might rewrite the query internally I suppose, but I think that's unlikely, and you could use hints to avoid it if needed).

SQL update working not insert

OK, I am going to do my best describing this. I have an SP which takes XML and updates and inserts into another table. This was working yesterday. All I changed today was loading the temp table with OPENXML vs. xml.nodes(). I even changed it back and I am still getting this interesting issue. I have an update and an insert in the same transaction. The update works and then the insert hangs - no error, no nothing - going on 9 minutes now. It normally takes 10 seconds. There are no blocking processes according to master.sys.sysprocesses. The funny thing is that the SELECT of the INSERT returns no rows, as they are already in the database. The update updates 72,438 rows in:
SQL Server Execution Times:
CPU time = 1359 ms, elapsed time = 7955 ms.
ROWS AFFECTED(72438)
I am out of ideas as to what could be causing my issue. Permissions? I don't think so. Space? I don't think so, because an error would be returned.
queries:
UPDATE [Sales].[dbo].[WeeklySummary]
SET [CountryId] = I.CountryId
,[CurrencyId] = I.CurrencyId
,[WeeklySummaryType] = @WeeklySummaryTypeId
,[WeeklyBalanceAmt] = M.WeeklyBalanceAmt + I.WeeklyBalanceAmt
,[CurrencyFactor] = I.CurrencyFactor
,[Comment] = I.Comment
,[UserStamp] = I.UserStamp
,[DateTimeStamp] = I.DateTimeStamp
FROM
[Sales].[dbo].[WeeklySummary] M
INNER JOIN
#WeeklySummaryInserts I
ON M.EntityId = I.EntityId
AND M.EntityType = I.EntityType
AND M.WeekEndingDate = I.WeekEndingDate
AND M.BalanceId = I.BalanceId
AND M.ItemType = I.ItemType
AND M.AccountType = I.AccountType
and
INSERT INTO [Sales].[dbo].[WeeklySummary]
([EntityId]
,[EntityType]
,[WeekEndingDate]
,[BalanceId]
,[CountryId]
,[CurrencyId]
,[WeeklySummaryType]
,[ItemType]
,[AccountType]
,[WeeklyBalanceAmt]
,[CurrencyFactor]
,[Comment]
,[UserStamp]
,[DateTimeStamp])
SELECT
I.[EntityId]
, I.[EntityType]
, I.[WeekEndingDate]
, I.[BalanceId]
, I.[CountryId]
, I.[CurrencyId]
, @WeeklySummaryTypeId
, I.[ItemType]
, I.[AccountType]
, I.[WeeklyBalanceAmt]
, I.[CurrencyFactor]
, I.[Comment]
, I.[UserStamp]
, I.[DateTimeStamp]
FROM
#WeeklySummaryInserts I
LEFT OUTER JOIN
[Sales].[dbo].[WeeklySummary] M
ON I.EntityId = M.EntityId
AND I.EntityType = M.EntityType
AND I.WeekEndingDate = M.WeekEndingDate
AND I.BalanceId = M.BalanceId
AND I.ItemType = M.ItemType
AND I.AccountType = M.AccountType
WHERE M.WeeklySummaryId IS NULL
UPDATE:
Trying the advice here worked for a while. I run the following before my stored procedure call:
UPDATE STATISTICS Sales.dbo.WeeklySummary;
UPDATE STATISTICS Sales.dbo.ARSubLedger;
UPDATE STATISTICS dbo.AccountBalance;
UPDATE STATISTICS dbo.InvoiceUnposted;
UPDATE STATISTICS dbo.InvoiceItemUnposted;
UPDATE STATISTICS dbo.InvoiceItemUnpostedHistory;
UPDATE STATISTICS dbo.InvoiceUnpostedHistory;
EXEC sp_recompile N'dbo.proc_ChargeRegister'
Still stalling at the Insert Statement, which again inserts 0 rows.
There are really only a few things that can be going on, and the trick here is to eliminate them in order, from simplest to most complex.
STEP 1: Hand craft a set of XML to run that will produce exactly one insert and no updates, so you can go "back to basics" as it were and establish that the code is still doing what you expect, and the result is exactly what you expect. This may seem silly or unnecessary but you really need this reality check to start.
STEP 2: Hand craft a set of XML that will produce a medium-sized set of inserts, still with no updates. Based on your experience with the routine, try to find something that will run in 3-4 seconds - perhaps 5,000 rows. Does it continue to behave as expected?
STEP 3: Assuming steps 1 and 2 pass easily, the next most likely problem is TRANSACTION SIZE. If your update hits 74,000 rows in a single statement, then SQL Server must allocate resources to be able to roll back all 74,000 rows in case of an abort. Generally you should assume that the resources (and time) required to maintain a transaction grow much faster than linearly as the row count goes up. So hand-craft one more set of inserts that contains 50,000 rows. You should find it takes dramatically more time. Let it finish. Does it take 10 minutes, an hour? If it takes a long time but finishes, you have an issue with TRANSACTION SIZE: the server is choking trying to keep track of everything required to roll back the insert in the event of failure.
STEP 4: Determine whether your entire stored procedure is operating within a single implicit transaction. If it is, the matter is even worse, because SQL Server is tracking together everything required to roll back both the 74,000 updates and the ??? inserts in a single transaction. See this page:
http://msdn.microsoft.com/en-us/library/ms687099(v=vs.85).aspx
STEP 5: If you've got a single implicit transaction, you can either A) turn that off, which may help some but will not entirely fix the problem, or B) break the sproc into two separate calls, one for updates and one for inserts, so that at least the two are in separate transactions.
STEP 6: Consider "chunking". This is a technique for avoiding exploding transaction costs. Considering just the INSERT to get us started: you wrap the insert in a loop that begins and commits a transaction on each iteration and exits when the number of affected rows is zero. The INSERT is modified so that you pull only the first 1000 rows from the source and insert them (that 1000 is somewhat arbitrary; you may find 5000 produces better performance - you have to experiment a bit). Once the INSERT affects zero rows, there are no more rows to handle and the loop exits.
QUICK EDIT: The "chunking" approach works because the total cost for a large set of rows looks something like a quadratic. If you execute one INSERT that affects a huge number of rows, the transaction bookkeeping makes the total time explode. If, on the other hand, you go row by row, the overhead of opening and committing each statement makes the total time explode. Somewhere in the middle, when you've "chunked" out about 1K rows per statement, the transaction requirements are at their minimum and the per-statement overhead is negligible, so the total time for all rows is at its minimum.
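A minimal sketch of that loop in T-SQL, using simplified, hypothetical table and column names rather than the actual ones from the question:
DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    BEGIN TRAN;
    -- Insert only the next batch of rows that are not yet in the target table.
    INSERT INTO dbo.TargetTable (BusinessKey, Amount)
    SELECT TOP (1000) s.BusinessKey, s.Amount
    FROM #SourceRows s
    LEFT JOIN dbo.TargetTable t ON t.BusinessKey = s.BusinessKey
    WHERE t.BusinessKey IS NULL;
    SET @rows = @@ROWCOUNT;   -- zero affected rows means the loop is done
    COMMIT;
END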
I had a problem where the stored proc was actually getting recompiled in the middle of running because it was deleting rows from a temp table. My situation doesn't look like yours, but mine was so odd that reading about it might give you some ideas.
Unexplained SQL Server Timeouts and Intermittent Blocking
I think you should post the full stored proc because the problem doesn't look to be where you think it is.