I am a beginner with Oracle DB.
I want to know the execution time for a query. The query returns around 20,000 records.
In SQL Developer, the results grid shows only 50 rows at a time, tweakable to a maximum of 500; running as a script with F5 shows up to 5,000.
I would have done this by making changes in the application, but redeploying the application is not possible as it is running in production.
So I am limited to using SQL Developer, and I am not sure how to get the number of seconds the query spends executing.
Any ideas on this would help me.
Thank you.
Regards,
JE
If you scroll down past the 50 rows initially returned, it fetches more. When I want all of them, I just click on the first of the 50, then press Ctrl+End to scroll all the way to the bottom.
This will update the displayed elapsed time (just above the results it will say something like "All Rows Fetched: 20000 in 3.606 seconds"), giving you an accurate time for the complete query.
If your statement is part of an already deployed application and you have rights to access the view V$SQLAREA, you could check the number of EXECUTIONS and the CPU_TIME. You can search for the statement using SQL_TEXT:
SELECT CPU_TIME, EXECUTIONS
  FROM V$SQLAREA
 WHERE UPPER(SQL_TEXT) LIKE 'SELECT ... FROM ... %';
This is the most precise way to determine the actual run time. The view V$SESSION_LONGOPS might also be interesting for you.
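For example, to turn those cumulative counters into a per-execution average (CPU_TIME and ELAPSED_TIME in V$SQLAREA are in microseconds), something along these lines should work:
SELECT SQL_TEXT,
       EXECUTIONS,
       ROUND(ELAPSED_TIME / NULLIF(EXECUTIONS, 0) / 1e6, 3) AS AVG_ELAPSED_SECS,
       ROUND(CPU_TIME     / NULLIF(EXECUTIONS, 0) / 1e6, 3) AS AVG_CPU_SECS
  FROM V$SQLAREA
 WHERE UPPER(SQL_TEXT) LIKE 'SELECT ... FROM ... %';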
If you don't have access to those views, you could also use a cursor loop to run through all the records, e.g.
CREATE OR REPLACE PROCEDURE speedtest AS
  v_count PLS_INTEGER := 0;
  v_start NUMBER;
  CURSOR c_cursor IS
    SELECT ...;
BEGIN
  -- fetch start time stamp here (centiseconds)
  v_start := DBMS_UTILITY.GET_TIME;
  FOR rec IN c_cursor
  LOOP
    v_count := v_count + 1;
  END LOOP;
  -- fetch end time stamp here and report the difference
  DBMS_OUTPUT.PUT_LINE('Fetched ' || v_count || ' rows in ' ||
                       ((DBMS_UTILITY.GET_TIME - v_start) / 100) || ' seconds');
END;
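A usage sketch: to run it and see the timing line in SQL Developer's Script Output (or SQL*Plus), enable server output first:
SET SERVEROUTPUT ON
EXEC speedtest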
Depending on the architecture this might be more or less accurate, because the data might still need to be transmitted to the system your SQL is running on.
You can change those limits; but you'll be spending some time in the data transfer between the DB and the client, and possibly in the display; and that in turn is affected by the number of rows pulled in each fetch. Those things affect your application as well, though, so looking at the raw execution time might not tell you the whole story anyway.
To change the worksheet (F5) limit, go to Tools->Preferences->Database->Worksheet and increase the 'Max rows to print in a script' value (and maybe 'Max lines in Script output'). To change the fetch size, go to the Database->Advanced panel in Preferences, perhaps to match your application's value.
This isn't perfect, but if you don't want to see the actual data and just want the time it takes to run in the DB, you can wrap the query to get a single row:
select count(*) from (
  <your original query>
);
It will normally execute the entire original query and then count the results, which won't add anything significant to the time. (It's feasible that Oracle might rewrite the query internally, I suppose, but I think that's unlikely, and you could use hints to avoid it if needed.)
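Alternatively, if you run the wrapped query with F5 as a script, you can have SQL Developer (or SQL*Plus) print the elapsed time for you; for example:
SET TIMING ON
select count(*) from (
  <your original query>
);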
This is the code I'm dealing with:
declare
--some cursors here
begin
if some_condition = 'N' then
raise form_trigger_failure;
end if;
--some fetches here
end;
It's from a POST-QUERY trigger. Basically my problem is that my block in Oracle Forms returns 20k rows, and the POST-QUERY trigger fires for each of those rows. Execution takes a couple of minutes, and I want to speed it up to a couple of seconds. The data is validated in some_condition (it is a function, but it returns its value really fast). If the condition isn't met, form_trigger_failure is raised. Is there any way to speed up this validation without changing the logic? (The same number of rows should be returned; this validation is important.)
I've tried changing the block properties, but it didn't help.
Also, when I deleted the whole IF statement, the data was returned really fast, but it wasn't validated, and rows were returned that shouldn't be visible.
Data is validated in some_condition ...
That's OK; but why would you perform validation in a POST-QUERY trigger? It fetches data that already exists in the database, so it must be valid. Otherwise, why did you store it in the first place?
POST-QUERY should be used to populate non-database items.
Validation should be handled in WHEN-VALIDATE-ITEM or WHEN-VALIDATE-RECORD triggers, not in POST-QUERY.
I suggest you split those two actions. If certain parts of the code can or should be shared between those two types of triggers, put them into a procedure (within the form) and call it when appropriate.
By the way, POST-QUERY won't fire for all 20K rows (unless you're buffering that many rows, and, if you are, you shouldn't be).
Moreover, you say the function returns the result really fast; probably it does, for a single row. Let it run on 200, 2000, 20000 rows, as in
select your_function(some_parameters)
  from that_table
 where rownum < 2;  -- change 2 to 200, 2000, 20000 and see what happens
On the other hand, what's the purpose of fetching 20,000 rows? Who's going to review all of that? Are you sure this is the way you should be doing it? If so, consider switching to a stored procedure: let it perform those validations within the database, and let the form fetch "clean" data, as sketched below.
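A minimal sketch of that last idea, assuming some_condition can be called from SQL (the table and parameter names here are hypothetical): base the block, a view, or the procedure's cursor on a query that validates in the WHERE clause, so invalid rows are never fetched and POST-QUERY has nothing to reject.
-- filter in the database instead of raising form_trigger_failure per fetched row
select t.*
  from your_table t
 where some_condition(t.id) <> 'N';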
When I type a query in SQL Developer, it returns data in less than a second. When I do the same in Oracle APEX, it takes much more time, over 5 seconds. I went into the DEBUG section to see what's wrong, and it returned this:
-IR binding: "APXWS_MAX_ROW_CNT" value="1000000"
I figured out that it returns more than 1,000,000 rows, and that's why it's slower. But I don't know how to fix it to get approximately the same time as in SQL Developer.
"Leave the Maximum Row Count property null, so classic reports won't fetch all the way to this number and interactive reports won't introduce the analytic function count(*) over ().
Don't use a Pagination Type with a Z, so classic reports won't fetch all rows and interactive reports again won't introduce count(*) over ()."
source: http://rwijk.blogspot.ca/2016/11/performance-aspects-of-apex-reports.html
(I saved it in the Wayback Machine too, in case the link goes away: http://web.archive.org/web/20170706183715/http://rwijk.blogspot.ca/2016/11/performance-aspects-of-apex-reports.html)
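To see why that advice works: when Maximum Row Count is set, an interactive report wraps your query with an analytic total, roughly like this sketch (not the exact generated SQL); the count(*) over () forces the full result set to be scanned before the first page can be rendered.
select q.*, count(*) over () as total_row_count
  from ( <your report query> ) q;
-- APEX then paginates around this, bounded by the APXWS_MAX_ROW_CNT binding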
Putting some limits on Maximum Row Count and Maximum Rows per Page can help you mitigate the load.
You will never get the same performance as in SQL Developer from a web page, APEX or not.
I am running into a very strange bit of behavior with a query in Oracle. The query itself is enormous and quite complex... but it is basically the same every time I run it. However, it seems to execute more slowly when returning a smaller result set. The best example I can give is that if I set this filter on it,
and mgz_query.IsQueryStatus(10001,lqo.query_id)>0
which returns 960 of 12,429 records, I see an execution time of about 1.9 seconds. However, if I change the filter to
and mgz_query.IsQueryStatus(10005,lqo.query_id)>0
which returns 65 of 12,429 records, I see an execution time of about 6.8 seconds. Digging a bit deeper, I found that the smaller result set was performing considerably more buffer gets than the larger result set. This seems completely counter-intuitive to me.
The query this runs against is roughly 8,000 characters long (unless someone wants it, I'm not going to clutter this post with the entire query) and includes 4 UNION ALL statements, but otherwise it filters primarily on indexes and is pretty efficient, apart from its massive size.
The filter in use is executed via the function below.
Function IsQueryStatus(Pni_QueryStateId in number,
                       Pni_Query_Id     in number) return pls_integer as
  vn_count pls_integer;
Begin
  select count(1)
    into vn_count
    from m_query mq
   where mq.id = Pni_Query_Id
     and mq.state_id = Pni_QueryStateId;
  return vn_count;
End;
Any ideas as to what may be causing the smaller result set to perform so much worse than the large result set?
I think you are facing a situation where determining that something is not in the set takes a lot longer than determining that it is in the set. This can happen quite often. For instance, if there is an index on m_query(id), then consider how the WHERE clause might be executed:
(1) The value Pni_Query_Id is looked up in the index. There is no match. The query is done, with a value of 0.
(2) There are a bunch of matches. Now, let's fetch the blocks where state_id is stored and compare it to Pni_QueryStateId. Ohh, that's a lot more work.
If that is the case, then having an index on m_query(id, state_id) should help the query.
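For example (the index name is made up):
create index m_query_id_state_ix
  on m_query (id, state_id);
With both columns in the index, the function's COUNT can be answered from the index alone, without visiting the table blocks.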
By the way, this assumes that the only change is the function call in the WHERE clause. If there are other changes that reduce the number of rows, you might just be calling this function fewer times.
I have an INSERT query in Oracle 10g that is getting stuck on a "SQL*Net message from dblink" event. It looks like:
INSERT INTO my_table (A, B, C, ...)
SELECT A, B, C, ... FROM link_table@other_system;
I do not see any locks on my_table besides the one from the INSERT I'm trying to do. The SELECT query on link_table@other_system completes without any trouble when run on its own. I only get this issue when I try to do the INSERT.
Does anyone know what could be going on here?
UPDATE
The SELECT returns 4857 rows in ~1.5 minutes when run alone. The INSERT had been running for over an hour with this wait message before I decided to kill it.
UPDATE
I found an error in my methods. I was using a date range to limit the results. The date range I used when testing the SELECT alone was before the last OraStats run on link_table, but the date range I used when testing the INSERT was after the last OraStats run on link_table. So that misled me into believing there was a problem with the INSERT. Not very scientific of me; my mistake.
SQL*Net message from dblink generally means that your local system is waiting for data to arrive over the network from the remote database. It's a very normal wait event for this sort of query.
How many rows does the SELECT statement return? How much data (in MB/ GB) does that represent?
When you say that it "completes without any trouble on its own", are you actually fetching all the data? If you're using something like TOAD or SQL Developer, the GUI will generally fetch the first N rows and return to you. That can be very quick, but it doesn't mean the database is done executing the query; it may take much more time to finish producing all the rows your query is going to return. It's pretty common for people to measure the time required to fetch the first N rows rather than the time to fetch the last row. Your INSERT statement, obviously, can't finish until all the rows have been fetched from the remote table.
Are you using a /*+ driving_site(link_table) */ hint to make Oracle perform the joins on the remote server?
If so, that hint will not work with DML, as explained by Jonathan Lewis on this page.
This may be a rare case where running the query just as a SELECT uses a very different plan than running it as part of an INSERT. (You will definitely want to learn how to generate explain plans in your environment; most tools have a button for this, and it can also be done from the worksheet as shown below.)
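For example, with DBMS_XPLAN in SQL*Plus or SQL Developer (using the statement from the question as-is):
explain plan for
INSERT INTO my_table (A, B, C, ...)
SELECT A, B, C, ... FROM link_table@other_system;

select * from table(dbms_xplan.display);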
As Andras Gabor recommended in the link, you may want to use PL/SQL BULK COLLECT to improve performance. This may be a rare case where PL/SQL works faster than plain SQL; see the sketch below.
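A minimal sketch of the BULK COLLECT approach, assuming the SELECT list covers my_table's columns exactly (names are taken from the question; the batch size of 1000 is arbitrary):
DECLARE
  CURSOR c IS
    SELECT A, B, C /* ... */ FROM link_table@other_system;
  TYPE t_rows IS TABLE OF c%ROWTYPE;
  v_rows t_rows;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO v_rows LIMIT 1000;  -- pull remote rows in batches
    EXIT WHEN v_rows.COUNT = 0;
    FORALL i IN 1 .. v_rows.COUNT                 -- one bulk bind per batch
      INSERT INTO my_table VALUES v_rows(i);
  END LOOP;
  CLOSE c;
  COMMIT;
END;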
I just want to see whether data is being inserted into the table or not.
So I have written something like this:
select count(*) from emp;
exec dbms_lock.sleep(1);
select count(*) from emp;
So that it will sleep for 1 min. Even after the sleep, if the 1st count and the 2nd count are different, then data is being inserted into the table.
Otherwise the insertions are not happening.
But I have a small doubt regarding this: will only this instance hang for 1 sec, or will the whole database hang for 1 sec?
And if this is wrong, how should I implement it?
Only your PL/SQL block will be put to sleep. If you want to sleep for a minute, pass 60 (seconds) to SLEEP.
You can use USER_LOCK.SLEEP if you don't want to grant EXECUTE on DBMS_LOCK, which can do more destructive things than just sleeping. The arguments are different, but you can achieve the same thing.
USER_LOCK.SLEEP
PROCEDURE SLEEP
  Argument Name      Type    In/Out  Default?
  TENS_OF_MILLISECS  NUMBER  IN

DBMS_LOCK.SLEEP
PROCEDURE SLEEP
  Argument Name      Type    In/Out  Default?
  SECONDS            NUMBER  IN
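So a one-second pause looks like this with each package (note the different units):
exec dbms_lock.sleep(1);    -- seconds
exec user_lock.sleep(100);  -- tens of milliseconds: 100 * 10 ms = 1 second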
To be more clear, you don't know that insertions are not happening; all you know is that the count of committed records didn't change. If you have access to V$TRANSACTION, you can look at USED_UBLK and USED_UREC to verify that a transaction in flight is generating changes.
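For example, joining to V$SESSION to see which session owns the in-flight transaction:
select s.sid, s.username, t.used_ublk, t.used_urec
  from v$transaction t
  join v$session s on s.taddr = t.addr;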
How are you inserting into the table? Is it batch? OLTP? I don't think what you proposed is practical, but I cannot suggest another way until more information is provided. Give us a little more information about the process.
First of all, sorry for my English. :-D
AFAIK select count(*) wastes resources. I suggest you create a trigger that increments a row counter somewhere else. Your scheduled job then checks the number of inserted rows and resets it to zero before exiting. This way you will know whether any rows were inserted between runs, and how many.
create table emp_stat (inserted int);
insert into emp_stat values (0);
commit;

create or replace trigger emp_trigger
before insert on emp for each row
begin
  if inserting then  -- always true in an insert-only trigger, but kept for clarity
    update emp_stat set inserted = inserted + 1;
  end if;
end;
/
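And a minimal sketch of the check-and-reset step the scheduled job would run (SELECT ... FOR UPDATE locks the counter row, so inserts committed between the read and the reset aren't lost):
declare
  v_inserted emp_stat.inserted%type;
begin
  select inserted into v_inserted from emp_stat for update;
  dbms_output.put_line('Rows inserted since last run: ' || v_inserted);
  update emp_stat set inserted = 0;
  commit;
end;
/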