I just want to see whether data is getting inserted into the table or not.
So I have written something like this:
select count(*) from emp;
dbms_lock.sleep(1);
select count(*) from emp;
So that it will sleep for 1 minute. Even after the sleep, if the 1st count and the 2nd count are different, then data is getting inserted into the table.
Otherwise the insertions are not happening.
But I have a small doubt regarding this: will only this session hang during the sleep, or will the whole database hang?
And if this approach is wrong, how should I implement it?
Only your PL/SQL block will be put to sleep. If you want to sleep for a minute, pass 60 (seconds) to SLEEP.
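For example, a minimal sketch of the check as an anonymous block (EMP and the one-minute pause are taken from the question; DBMS_OUTPUT is only there to show the result):

DECLARE
  l_before NUMBER;
  l_after  NUMBER;
BEGIN
  SELECT COUNT(*) INTO l_before FROM emp;
  DBMS_LOCK.SLEEP(60);                  -- 60 seconds, i.e. one minute
  SELECT COUNT(*) INTO l_after FROM emp;
  IF l_after <> l_before THEN
    DBMS_OUTPUT.PUT_LINE('New committed rows in the last minute: ' || (l_after - l_before));
  ELSE
    DBMS_OUTPUT.PUT_LINE('No new committed rows in the last minute.');
  END IF;
END;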
You can use USER_LOCK.SLEEP if you don't want to grant EXECUTE on DBMS_LOCK, which exposes other, potentially more destructive functionality. The arguments are different, but you can achieve the same thing.
USER_LOCK.SLEEP

PROCEDURE SLEEP
 Argument Name                  Type       In/Out  Default?
 ------------------------------ ---------- ------- --------
 TENS_OF_MILLISECS              NUMBER     IN

DBMS_LOCK.SLEEP

PROCEDURE SLEEP
 Argument Name                  Type       In/Out  Default?
 ------------------------------ ---------- ------- --------
 SECONDS                        NUMBER     IN
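So a one-minute pause would look like this in each case (a sketch; per the signatures above, USER_LOCK.SLEEP is given tens of milliseconds, so 6000 units = 60 seconds):

BEGIN
  DBMS_LOCK.SLEEP(60);      -- 60 seconds
END;

BEGIN
  USER_LOCK.SLEEP(6000);    -- 6000 tens of milliseconds = 60 seconds
END;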
To be clear, you don't actually know that the insertions are not happening; all you know is that the count of committed records didn't change. If you have access to V$TRANSACTION, you can look at USED_UBLK and USED_UREC to verify that a transaction in flight is generating changes.
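For example, something along these lines will list in-flight transactions and how much undo they are generating (a sketch; it requires SELECT access on the V$ views):

SELECT s.sid,
       s.username,
       t.used_ublk,          -- undo blocks used by the transaction so far
       t.used_urec           -- undo records used by the transaction so far
  FROM v$transaction t
  JOIN v$session     s ON s.taddr = t.addr;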
How are you inserting into the table? Is it batch? OLTP? I don't think what you proposed is practical, but I cannot suggest another way until more information is provided. Give us a little more information on the process.
First of all, sorry for my English :-D
AFAIK, select count(*) wastes resources. I suggest you create a trigger to increment a count of inserted rows somewhere else. Your scheduled job then checks the number of inserted rows and resets it to zero before exiting. That way you will know whether any rows were inserted between runs, and how many.
create table emp_stat(inserted int);
insert into emp_stat values(0);
commit;
create trigger emp_trigger
before insert on emp for each row
begin
  if ( inserting ) then
    update emp_stat set inserted = inserted + 1;
  end if;
end;
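The scheduled job described above could then read and reset the counter roughly like this (a sketch; emp_stat holds a single row, and the FOR UPDATE lock keeps increments made between the read and the reset from being lost):

declare
  l_inserted number;
begin
  select inserted into l_inserted
    from emp_stat
     for update;                       -- lock the counter row while we read and reset it

  if l_inserted > 0 then
    dbms_output.put_line(l_inserted || ' row(s) inserted since the last run.');
  else
    dbms_output.put_line('No rows inserted since the last run.');
  end if;

  update emp_stat set inserted = 0;    -- reset for the next run
  commit;
end;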
This is code I'm dealing with:
declare
--some cursors here
begin
if some_condition = 'N' then
raise form_trigger_failure;
end if;
--some fetches here
end;
It's from a POST-QUERY trigger, and basically my problem is that my block in Oracle Forms returns 20k rows and the POST-QUERY trigger fires for each of them. Execution takes a couple of minutes and I want to speed it up to a couple of seconds. Data is validated in some_condition (it is a function, but it returns a value really fast). If the condition isn't met, form_trigger_failure is raised. Is there any way to speed up this validation without changing the logic? (The same number of rows should be returned; this validation is important.)
I've tried changing the block properties, but that didn't help.
Also, when I deleted the whole IF statement, data was returned really fast, but it wasn't validated, and rows that shouldn't be visible were returned.
Data is validated in some_condition ...
That's OK; but why would you perform validation in a POST-QUERY trigger? It fetches data that already exists in the database, so it must be valid. Otherwise, why did you store it in the first place?
POST-QUERY should be used to populate non-database items.
Validation should be handled in WHEN-VALIDATE-ITEM or WHEN-VALIDATE-RECORD triggers, not in POST-QUERY.
I suggest you split those two actions. If certain parts of code can/should be shared between those two types of triggers, put it into a procedure (within a form) and call it when appropriate.
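For example, the shared piece could be a program unit in the form along these lines (a sketch; validate_current_record and :EMP.SOME_ITEM are placeholder names, and some_condition is the function from the question, shown here taking the item value as a parameter, which may differ from your signature):

PROCEDURE validate_current_record IS
BEGIN
  -- same check the POST-QUERY trigger was doing
  IF some_condition(:EMP.SOME_ITEM) = 'N' THEN
    MESSAGE('Record failed validation');
    RAISE FORM_TRIGGER_FAILURE;
  END IF;
END;

You would then call validate_current_record from WHEN-VALIDATE-RECORD (or WHEN-VALIDATE-ITEM), so the check runs when data is entered or changed rather than on every fetched row.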
By the way, POST-QUERY won't fire for all 20K rows (unless you're buffering that many rows, and, if you do, you shouldn't).
Moreover, you say the function returns the result really fast; it probably does when it runs for a single row. Let it run on 200, 2000, 20000 rows, as in:
select your_function(some_parameters)
from that_table
where rownum < 2; -- change 2 to 200, 2000, 20000 and see what happens
On the other hand, what's the purpose of fetching 20,000 rows? Who's going to review them? Are you sure this is the way you should be doing it? If so, consider switching to a stored procedure: let it perform those validations within the database, and let the form fetch "clean" data.
What I need to do is to write to the same row from two different sources (procedures/methods/services).
The first call that comes in creates the row, and the next one just updates it.
This needs to happen without any locking taking place. And if possible I would like to be able to call either source just once (not repeatedly, dealing with locking errors).
Here is roughly what I have now in a third procedure that the others call; it just inserts a row (only the xyz value) or returns true if there is already a row.
That way it's just fast, and it's unlikely that both calls arrive at the same time.
IF EXISTS (SELECT * FROM [dbo].[Wait] WHERE xyz = @xyz)
BEGIN
    -- The row exists because the other data source
    -- has already inserted a row with the same xyz:
    -- UPDATE the row with the data coming in
END
ELSE
BEGIN
    -- No row with value xyz exists, so we INSERT it
    -- along with the extra data.
END
I know it doesn't guarantee no locking. But in my case it's actually unlikely that both arrive at the same time, and even if they did, it's user controlled, so they will get an error and just try again. BUT I want to solve this.
I have seen Row Versioning popping up, but I'm not sure whether it helps or how I should use it.
Have a look at Michael J. Swart's article Mythbusting: Concurrent Update/Insert Solutions. It will show you all the possible dos and don'ts, including the fact that MERGE actually doesn't do a great job of solving concurrency issues.
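One pattern commonly recommended for this kind of upsert (and covered in write-ups like that one) is to try the UPDATE first under UPDLOCK and SERIALIZABLE hints and INSERT only if nothing was updated. A sketch reusing the [dbo].[Wait] table and @xyz from the question; @Payload and the Payload column are placeholders for "the extra data". It still takes a brief key-range lock, but it removes the race between the existence check and the following statement:

BEGIN TRAN;

UPDATE [dbo].[Wait] WITH (UPDLOCK, SERIALIZABLE)
   SET Payload = @Payload             -- "update the row with the data coming in"
 WHERE xyz = @xyz;

IF @@ROWCOUNT = 0
BEGIN
    -- no row yet, so this caller creates it
    INSERT INTO [dbo].[Wait] (xyz, Payload)
    VALUES (@xyz, @Payload);
END

COMMIT TRAN;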
I am a beginner with Oracle DB.
I want to know execution time for a query. This query returns around 20,000 records.
When I look in SQL Developer, it shows only 50 rows, tweakable to a maximum of 500. And using F5, up to 5000.
I would have done this by making changes in the application, but redeploying the application is not possible as it is running in production.
So I am limited to using only SQL Developer, and I am not sure how to get the number of seconds spent executing the query.
Any ideas on this will help me.
Thank you.
If you scroll down past the 50 rows initially returned, it fetches more. When I want all of them, I just click on the first of the 50, then press Ctrl+End to scroll all the way to the bottom.
This will update the display of the time that was used (just above the results it will say something like "All Rows Fetched: 20000 in 3.606 seconds") giving you an accurate time for the complete query.
If your statement is part of an already deployed application and you have rights to access the view V$SQLAREA, you could check the number of EXECUTIONS and the CPU_TIME. You can search for the statement using SQL_TEXT:
SELECT CPU_TIME, EXECUTIONS
FROM V$SQLAREA
WHERE UPPER (SQL_TEXT) LIKE 'SELECT ... FROM ... %';
This is the most precise way to determine the actual run time. The view V$SESSION_LONGOPS might also be interesting for you.
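For example, V$SQLAREA also exposes ELAPSED_TIME (in microseconds), so a per-execution average can be derived along these lines (a sketch; adapt the SQL_TEXT filter as above):

SELECT sql_text,
       executions,
       ROUND(elapsed_time / NULLIF(executions, 0) / 1e6, 3) AS avg_elapsed_secs,
       ROUND(cpu_time     / NULLIF(executions, 0) / 1e6, 3) AS avg_cpu_secs
  FROM v$sqlarea
 WHERE UPPER (SQL_TEXT) LIKE 'SELECT ... FROM ... %';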
If you don't have access to those views you could also use a cursor loop for running through all records, e.g.
CREATE OR REPLACE PROCEDURE speedtest AS
  l_count  NUMBER := 0;
  l_start  PLS_INTEGER;
  CURSOR c_cursor IS
    SELECT ...;
BEGIN
  -- fetch start time stamp here (GET_TIME counts hundredths of a second)
  l_start := DBMS_UTILITY.GET_TIME;
  FOR rec IN c_cursor
  LOOP
    l_count := l_count + 1;   -- touch every row so it really gets fetched
  END LOOP;
  -- fetch end time stamp here and report the elapsed time
  DBMS_OUTPUT.PUT_LINE('Rows: ' || l_count || ', elapsed: '
                       || ((DBMS_UTILITY.GET_TIME - l_start) / 100) || ' seconds');
END;
Depending on the architecture this might be more or less accurate, because data might need to be transmitted to the system where your SQL is running on.
You can change those limits; but you'll be using some time in the data transfer between the DB and the client, and possibly for the display; and that in turn would be affected by the number of rows pulled by each fetch. Those things affect your application as well though, so looking at the raw execution time might not tell you the whole story anyway.
To change the worksheet (F5) limit, go to Tools->Preferences->Database->Worksheet, and increase the 'Max rows to print in a script' value (and maybe 'Max lines in Script output'). To change the fetch size go to the Database->Advanced panel in the preferences; maybe to match your application's value.
This isn't perfect, but if you don't want to see the actual data and just want the time it takes to run in the DB, you can wrap the query so it returns a single row:
select count(*) from (
  <your original query>
);
It will normally execute the entire original query and then count the results, which won't add anything significant to the time. (It's feasible that it might rewrite the query internally, I suppose, but I think that's unlikely, and you could use hints to avoid it if needed.)
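For instance, a NO_MERGE hint on the inline view should discourage that kind of rewrite (a sketch, keeping the placeholder from above):

select /*+ no_merge(q) */ count(*)
from (
  <your original query>
) q;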
We're troubleshooting a sort of sync framework between two SQL Server databases on separate servers (both SQL Server 2008 Enterprise 64-bit SP2 - 10.0.4000.0), through linked server connections, and we've reached a point where we're sort of stuck.
The logic to identify which are the records "pending to be synced" is of course based on ROWVERSION values, including the use of MIN_ACTIVE_ROWVERSION() to avoid dirty reads.
All SELECT operations are encapsulated in SPs on each "source" side. This is a schematic sample of one SP:
PROCEDURE LoaderRetrieve(@LastStamp bigint, @Rows int)
BEGIN
    ...
    (vars handling)
    ...
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT

    SELECT TOP (@Rows) Field1, Field2, Field3
      FROM Table
     WHERE [RowVersion] > @LastStampAsRowVersionDataType
       AND [RowVersion] < @MinActiveVersion
     ORDER BY [RowVersion]
END
The approach works just fine; we usually sync records at the expected rate of 600k/hour (a job every 30 seconds, batch size = 5k), but at some point the sync process does not find a single record to transfer, even though there are several thousand records with a ROWVERSION value greater than the @LastStamp parameter.
When checking the reason, we've found that MIN_ACTIVE_ROWVERSION() has a value less than (or only slightly greater than, just 5 or 10 increments above) the @LastStamp being searched. This of course shouldn't be a problem, since the MIN_ACTIVE_ROWVERSION() approach was introduced precisely to avoid dirty reads and subsequent issues, BUT:
The problem we see on some occasions, when the above scenario occurs, is that the value of MIN_ACTIVE_ROWVERSION() does not change for a long (really long) period of time, like 30/40 minutes, sometimes more than an hour. And this value is by far less than the @@DBTS value.
We first thought this was related to a pending DB transaction not yet committed. As per MSDN definition about the MIN_ACTIVE_ROWVERSION() (link):
Returns the lowest active rowversion value in the current database. A rowversion value is active if it is used in a transaction that has not yet been committed.
But when checking sessions (sys.sysprocesses) with open_tran > 0 for the duration of this issue, we couldn't find any session with a waittime greater than a few seconds; only one or two occurrences of sessions with around 5 minutes of waittime.
So at this point we're struggling to understand the situation: MIN_ACTIVE_ROWVERSION() does not change for a huge period of time, and no uncommitted transactions with long waits are found within this time frame.
I'm not a DBA, and it could be that we're missing something in the picture; researching forums and blogs, we couldn't find any other clue. So far open_tran > 0 was the valid explanation, but under the circumstances I've described it's clear that there's something else, and we don't know what.
Any feedback is appreciated.
Well, I finally found the solution after digging a bit more.
The problem is that we were looking for sessions with a long waittime, but the real deal was to find sessions that had had an active batch open for a while.
If there's a session where open_tran = 1, then to see exactly since when that transaction has been open (and of course still active, not yet committed), check the last_batch field of sys.sysprocesses.
Using this query:
select
    batchDurationMin  = DATEDIFF(second, last_batch, getutcdate()) / 60.0,
    batchDurationSecs = DATEDIFF(second, last_batch, getutcdate()),
    hostname, open_tran, *
from sys.sysprocesses a
where spid > 50
  and a.open_tran > 0
order by last_batch asc
we could identify a session with an open tran being active 30+ minutes. And with hostname values and some more checks within the web services (and also using dbcc inputbuffer) we found the responsible process.
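For reference, once you have the offending spid from the query above, the input buffer check mentioned is just (53 here is a hypothetical spid; use the one the query returned):

DBCC INPUTBUFFER (53);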
So the answer to the original question is that there was indeed an active session with an uncommitted transaction, which is why MIN_ACTIVE_ROWVERSION() did not change. We were just looking for processes with the wrong criteria.
Now that we know which process behaves like this, the next step will be to improve it.
Hope this proves useful to someone else.
I have an INSERT query in Oracle 10g that is getting stuck on a "SQL*Net message from dblink" event. It looks like:
INSERT INTO my_table (A, B, C, ...)
SELECT A, B, C, ... FROM link_table@other_system;
I do not see any locks on my_table besides the one from the INSERT I'm trying to do. The SELECT query on link_table@other_system completes without any trouble when run on its own. I only get this issue when I try to do the INSERT.
Does anyone know what could be going on here?
UPDATE
The SELECT returns 4857 rows in ~1.5 minutes when run alone. The INSERT had been running for over an hour with this wait message before I decided to kill it.
UPDATE
I found an error in my methods. I was using a date range to limit the results. The date range I used when testing the SELECT alone was before the last OraStats run on the link_table, but the date range I used when testing the INSERT was after the last OraStats run on the link_table. So that misled me into believing there was a problem with the INSERT. Not very scientific of me; my mistake.
SQL*Net message from dblink generally means that your local system is waiting for data to arrive over the network from the remote database. It's a very normal wait event for this sort of query.
How many rows does the SELECT statement return? How much data (in MB/ GB) does that represent?
When you say that it "completes without any trouble on its own", are you actually fetching all the data? If you're using something like TOAD or SQL Developer, the GUI will generally fetch the first N rows and return to you. That can be very quick but it doesn't imply that the database is done executing the query-- it may take much more time to finish producing all the rows your query is going to return. It's pretty common for people to measure the time required to fetch the first N rows rather than the time to fetch the last row-- your INSERT statement, obviously, can't return until all the rows have been fetched from the remote table.
Are you using a /*+ driving_site(link_table) */ hint to make Oracle perform the joins on the remote server?
If so, that hint will not work with DML, as explained by Jonathan Lewis on this page.
This may be a rare case where running the query just as a SELECT uses a very different plan than running the query as part of an INSERT. (You will definitely want to learn how to generate explain plans in your environment. Most tools have a button to do this.)
As Andras Gabor recommended in the link, you may want to use PL/SQL BULK COLLECT to improve performance. This may be a rare case where PL/SQL will work faster than SQL.
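A rough sketch of what the BULK COLLECT approach could look like for the statement above (the columns A, B, C and the 1000-row batch size are placeholders taken from the schematic query; the cursor pulls rows across the link in batches and FORALL inserts them locally):

DECLARE
  CURSOR c_src IS
    SELECT a, b, c FROM link_table@other_system;
  TYPE t_a IS TABLE OF my_table.a%TYPE;
  TYPE t_b IS TABLE OF my_table.b%TYPE;
  TYPE t_c IS TABLE OF my_table.c%TYPE;
  l_a t_a;
  l_b t_b;
  l_c t_c;
BEGIN
  OPEN c_src;
  LOOP
    -- batch the fetches to cut down network round trips while keeping memory bounded
    FETCH c_src BULK COLLECT INTO l_a, l_b, l_c LIMIT 1000;
    FORALL i IN 1 .. l_a.COUNT
      INSERT INTO my_table (a, b, c) VALUES (l_a(i), l_b(i), l_c(i));
    EXIT WHEN c_src%NOTFOUND;
  END LOOP;
  CLOSE c_src;
  COMMIT;
END;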