Oracle queries executed by a session

I am trying to trace the SQL statements executed against a particular database user. I don't have AUDITING enabled and I am using Oracle 11g.
I have the following query:
SELECT
S.MODULE,
SQL_TEXT ,
S.EXECUTIONS
FROM
SYS.V_$SQL S,
SYS.ALL_USERS U
WHERE
S.PARSING_USER_ID=U.USER_ID
AND UPPER(U.USERNAME) IN ('USERNAME')
AND (UPPER(s.MODULE)='APP.EXE')
ORDER BY S.LAST_LOAD_TIME
But if multiple OS users running 'APP.EXE' are connected as the same database user, I cannot tell which OS user executed which query. So I tried to join with the V$SESSION view to get the user details.
SELECT
S.MODULE,SQL_TEXT ,SN.OSUSER, SN.MACHINE, S.EXECUTIONS
FROM
SYS.V_$SQL S,
SYS.ALL_USERS U,
V$SESSION SN
WHERE
S.PARSING_USER_ID=U.USER_ID
AND UPPER(U.USERNAME) IN ('USERNAME')
AND (UPPER(S.MODULE)='APP.EXE')
AND S.SQL_ID=SN.SQL_ID
ORDER BY S.LAST_LOAD_TIME
But this doesn't seem to work (in my case it didn't return any rows).
So, I have the following questions:
1) How do I get the queries executed by each session?
2) The EXECUTIONS column of V_$SQL seems to be the executions from all sessions. How do I know the number of times a particular query was executed by a particular session?
3) How long will a record about a query be stored in V_$SQL? When does Oracle remove it from the view?
Thanking you all in advance,
Pradeep

You're probably not going to get the data that you're looking for without doing more configuration (such as enabling auditing) or making some compromises. What is the business problem you're trying to solve? Depending on the problem, we may be able to help you identify the easiest approach to configuring the database to be able to record the information you're after.
Oracle does not attempt to store anywhere how many times a particular user (and particularly not how many times a particular operating system user) executed a particular query. The SQL_ID in V$SESSION only indicates the SQL_ID that the session is currently executing. If, as I'm guessing, this is a client-server application, it is quite likely that this is NULL 99% of the time because the vast majority of the time the session is not executing any SQL; it's waiting on the user to do something. The PREV_SQL_ID in V$SESSION is the prior SQL statement that was executed-- that at least won't generally be NULL. But it's only going to have one value; it's not going to have a history of the SQL statements executed by that session.
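If it helps, this is the sort of thing I mean: a quick sketch that shows, for each connected session, only the most recent statement it ran (reusing the 'USERNAME' and 'APP.EXE' placeholders from your query):
SELECT sn.osuser, sn.machine, s.sql_text
FROM   v$session sn
       JOIN v$sql s
         ON  s.sql_id       = sn.prev_sql_id
         AND s.child_number = sn.prev_child_number
WHERE  sn.username = 'USERNAME'
AND    UPPER(sn.module) = 'APP.EXE';
That tells you what each session did last, not what it has done over time.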
The V$SQL view is a representation of what is in the SQL shared pool. When a SQL statement ages out of the shared pool, it will no longer be in the V$SQL view. How quickly that happens depends on a multitude of factors-- how frequently someone is executing the statement, how frequently new statements are parsed (which generally depends heavily on whether your applications are using bind variables correctly), how big your shared pool is, etc. Generally, that's going to be somewhere between a few minutes and until the database shuts down.
If you are licensed to use the AWR tables and you are interested in approximations rather than perfectly correct answers, you might be able to get the information you're after by looking at some of the AWR tables. For example, V$ACTIVE_SESSION_HISTORY will capture the SQL statement that each session was actively executing each second. Since this is a client-server application, however, that means that the vast majority of the time the session is going to be inactive, so nothing will be captured. The SQL statements that do happen to get captured for a session, though, will give you some idea about the relative frequency of different SQL statements. Of course, longer-running SQL statements are more likely to be captured as well, since they are more likely to be active at any given instant. If queries A and B both execute in exactly the same amount of time and a session was captured executing A 5 times and B 10 times in the last hour, you can conclude that B is executed roughly twice as often as A. And if you know the average execution time of a query, the average probability that the query was captured is going to be the number of seconds that the query executes (a query that executes in 0.5 seconds has a 50% chance of getting captured, one that executes in 0.25 seconds has a 25% chance of getting captured), so you can estimate how often a particular session executed a particular query. That is far from an exact number, particularly over shorter time frames and for queries whose actual execution times are more variable.
The data in the V$ACTIVE_SESSION_HISTORY view is generally available for a few hours. It then gets sampled down into the DBA_HIST_ACTIVE_SESS_HISTORY table, which cuts the amount of data available by an order of magnitude, making any estimates much less accurate. But that data is kept for whatever your AWR retention interval is (by default, that's one week, though many sites increase it to 30 or 60 days).
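A rough sketch of the kind of ASH query I have in mind (this assumes you are licensed for the Diagnostics Pack, and reuses the 'USERNAME' and 'APP.EXE' placeholders from the question), counting samples per session and SQL_ID over the last hour:
SELECT ash.session_id, ash.session_serial#, ash.sql_id, COUNT(*) AS samples
FROM   v$active_session_history ash
       JOIN all_users u ON u.user_id = ash.user_id
WHERE  u.username = 'USERNAME'
AND    UPPER(ash.module) = 'APP.EXE'
AND    ash.sample_time > SYSTIMESTAMP - INTERVAL '1' HOUR
GROUP  BY ash.session_id, ash.session_serial#, ash.sql_id
ORDER  BY samples DESC;
Keep in mind the caveats above: those are once-a-second samples, not execution counts.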

According to the Oracle documentation:
SQL_ADDRESS - Used with SQL_HASH_VALUE to identify the SQL statement that is currently being executed
SQL_HASH_VALUE - Used with SQL_ADDRESS to identify the SQL statement that is currently being executed
See the link below for reference:
SQL_ADDRESS and HASH VALUE
Modify your SQL as below:
SELECT S.MODULE, SQL_TEXT, SN.OSUSER, SN.MACHINE, S.EXECUTIONS
FROM SYS.V_$SQL S, SYS.ALL_USERS U, V$SESSION SN
WHERE S.PARSING_USER_ID = U.USER_ID
AND UPPER(U.USERNAME) IN ('USERNAME')
AND (UPPER(S.MODULE) = 'APP.EXE')
AND SN.SQL_HASH_VALUE = S.HASH_VALUE
AND SN.SQL_ADDRESS = S.ADDRESS
ORDER BY S.LAST_LOAD_TIME

Try this
SELECT l.Sql_Id
FROM v$session s
JOIN v$session_longops l
ON l.sid = s.Sid
AND l.Serial# = s.Serial#
AND l.Start_Time >= s.Logon_Time
WHERE s.Audsid = Sys_Context('USERENV', 'SESSIONID')
I assume you are interested only in long-running SQLs?

Related

What happens to an SQL SELECT query if the data changes whilst the query is running

We have a large database which is regularly being written to. We have a SELECT query on this DB which takes a couple of minutes to run. If we run this query and during those couple of minutes newer data is inserted/updated, will those changes be picked up by the query (assuming they match the query's criteria)? Or will it instead return the results as they were when the query started running?
You are using Oracle: when executing a query, you will read the data as of a specific point in time to guarantee consistency. By default, that will be the time that the execution started.
Have a read about it in the Database Concepts guide: https://docs.oracle.com/en/database/oracle/oracle-database/19/cncpt/introduction-to-oracle-database.html#GUID-4D3C43F5-4EC6-4A21-9E91-8E4F33FE7790 . How Oracle achieves this is one of the features that sets it apart from most other RDBMSs.
Each query, no matter what the underlying database is, operates within a TRANSACTION, whether explicit or implied. And each transaction has a specific "isolation level."
The best strategy is to BEGIN TRANSACTION explicitly, but the reality is that you need to make yourself aware of what your chosen DB interface is doing behind the scenes. This will give you your answer: "when you ran your query," your software did something.
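For what it's worth, a small Oracle sketch of the difference (big_table is a made-up placeholder): by default each statement sees data as of its own start time, but you can ask for one snapshot for the whole transaction:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;  -- or SET TRANSACTION READ ONLY
SELECT COUNT(*) FROM big_table;                -- reads as of the transaction's snapshot
SELECT SUM(amount) FROM big_table;             -- sees the same snapshot as the first query
COMMIT;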

Relationship between PX_SERVERS_EXECUTIONS and USERS_OPENING

I see a single SQL statement with 5 distinct SIDs. Now, taking the SQL_ID in V$SQL, I find that USERS_OPENING=5 but PX_SERVERS_EXECUTIONS=1. Also, for one SQL statement I see UO=1 but PXSE=2. Can anyone please help me understand the relationship of SQL_ID, SID, PX_SERVERS_EXECUTIONS and USERS_OPENING with each other? Thanks in advance :)
Every column in the data dictionary is described in the Database Reference. Each view has its own page, and below are the definitions from the V$SQL page:
SQL_ID - SQL identifier of the parent cursor in the library cache
USERS_OPENING - Number of users that have any of the child cursors open
PX_SERVERS_EXECUTIONS - Total number of executions performed by parallel execution servers (0 when the statement has never been executed in parallel)
To rephrase the definitions and add some details:
SQL_ID is a hash of the text of the SQL statement.
USERS_OPENING is the current number of sessions that have run the statement (but are not necessarily running it right now).
PX_SERVERS_EXECUTIONS is the total number of parallel servers ever created to process the statement. For example, every time a statement like SELECT /*+ PARALLEL(8) */ ... is executed, the count will probably increment by either 8 or 16. (And it's odd that you have a SQL statement with a value of 1. If only one parallel server was used, then parallelism isn't really being used at all. That implies something weird happened - maybe the statement was cancelled before it could allocate more than one thread, or maybe only one thread was available and the statement was downgraded.)
And in the page for V$SESSION:
SID - Session identifier
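If it helps to see them side by side, a minimal query over the columns being discussed (the SQL_ID literal is just a placeholder):
SELECT sql_id, child_number, users_opening, executions, px_servers_executions
FROM   v$sql
WHERE  sql_id = '0abc123def456';  -- placeholder SQL_ID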

Simple Oracle query started running slow when returning more than 251 results

As the title implies, I have a very simple Oracle query that takes 5 seconds once I go beyond returning 251 results. I am using SQL Developer, connecting with the built-in connection utility (there is no facility for an ODBC connection in this application).
The following query is fast enough (pa_stu holds roughly 40k rows):
Select * From pa_stu Where rownum < 252;
Oracle returns the data to me in 0.521 seconds, according to SQL Developer.
The following query, and ones that pull larger sets of data, are the culprit:
Select * From pa_stu Where rownum < 253;
Oracle returns the data for that last one in 5.327 seconds, according to SQL Developer.
All queries being used for testing have the same explain plan. That is, the filter predicate of ROWNUM<251 (change the 251 to whatever number is being used) and a TABLE ACCESS of FULL.
The results above are consistent, and bumping the number up to about 1000 doubles the result time to roughly 10 seconds (again consistently). It is as if some buffering is going on somewhere, and that buffer is too small. Additionally, this is happening on only one of our Oracle servers. The other, more heavily used one (which holds different data as well) has no problem returning hundreds of thousands of records using similar statements.
The databases are controlled by a DBA, and I have run all of this by her. She does not have a solution. This actually started happening a month or so back and was not the case many months ago, if that is meaningful. It was just not as noticeable as it is now.
Thank you for any help.

Oracle sql benchmark

I have to benchmark a query - currently I need to know how adding a column (FIELD_DATE1) to the select result set will affect SQL execution time. There are administration restrictions on the db, so I cannot use debugging or tracing tools. So I wrote a query:
SELECT COUNT(*), MIN(XXXT), MAX(XXXT)
FROM ( select distinct ID AS XXXID, sys_extract_utc(systimestamp) AS XXXT
, FIELD_DATE1 AS XXXUT
from XXXTABLE
where FIELD_DATE1 > '20-AUG-06 02.23.40.010000000 PM' );
Will the output of the query show the real execution time of the query?
There is a lot to learn when it comes to benchmarking in Oracle. I recommend you begin with the items below, even though it worries me that, given the restrictions on your db, some of these features could require extra permissions you may not have:
Explain Plan: For every SQL statement, Oracle has to create an execution plan; the execution plan defines how the information will be read/written, i.e. the indexes to use, the join methods, the sorts, etc.
The explain plan will give you information about how good your query is and how it is using the indexes. Learning the concept of a query cost is key here, so take a look at it.
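For example, a minimal way to look at a plan (reusing XXXTABLE and FIELD_DATE1 from your query; the filter is just a placeholder):
EXPLAIN PLAN FOR
  SELECT ID, FIELD_DATE1
  FROM   XXXTABLE
  WHERE  FIELD_DATE1 > SYSDATE - 30;  -- placeholder filter
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);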
TKPROF: it's an Oracle tool that allows you to read Oracle trace files. When you enable timed statistics in Oracle, you can trace your SQL statements; the results of these traces are written to files, which you can read with TKPROF.
Among the information TKPROF will let you see is:
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
See: Using SQL Trace and TKPROF
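A rough sketch of tracing your own session (assuming you have the ALTER SESSION privilege; the trace file name below is just an example, and the tkprof step runs on the database server against the file it finds in the trace directory):
ALTER SESSION SET TIMED_STATISTICS = TRUE;
ALTER SESSION SET SQL_TRACE = TRUE;
-- run the statement you want to measure here
ALTER SESSION SET SQL_TRACE = FALSE;
-- then, on the server, format the generated trace file, e.g.:
--   tkprof orcl_ora_12345.trc report.txt sys=no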
It's possible in this query that SYSTIMESTAMP would be evaluated once and the same value associated with every row, or that it would be evaluated once for each row, or something in between. It's also possible that all the rows would be fetched from the table and then SYSTIMESTAMP evaluated for each one, so you wouldn't be getting an accurate account of the time taken by the whole query. Generally, you can't rely on the order of evaluation within SQL, or assume that a function will be evaluated once for each row where it appears.
Generally, the way I would measure execution time is to have the client tool report it. If you're executing the query in SQL*Plus, you can SET TIMING ON to have it report the execution time for every statement. Other interactive tools probably have similar features.
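For example, in SQL*Plus (again using XXXTABLE and FIELD_DATE1 as placeholders):
SET TIMING ON
SELECT COUNT(*) FROM XXXTABLE WHERE FIELD_DATE1 > SYSDATE - 30;
-- SQL*Plus then prints an elapsed time such as: Elapsed: 00:00:01.23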

Postgres: How to fire multiple queries at the same time?

I have one procedure which updates record values, and I want to run it against all records in a table (over 30k records); procedure execution time is from 2 up to 10 seconds, because it depends on network load.
Now I'm doing UPDATE table SET field = procedure_name(paramns); but with that number of records it takes up to 40 minutes to process the whole table.
Now I'm using 4 different connections which fork to the background and fire the query with a WHERE clause set to iterate over the modulo of row IDs to speed this up (WHERE id_field % 4 = ), and this works well and cuts the table update down to ~10 minutes.
But I want to avoid using cron, shell jobs and multiple connections for this. I know that it can be done with libpq, but is there a way to fire up a query (4 different non-blocking queries) and not wait until it finishes executing, within a single connection?
Or can anyone point me to some clues on how to write that function using Postgres internals, or simply in C, and bind it as a stored procedure?
Cheers, Darius
I've got a sure answer for this question - IF you will share with us what your ab workout is!!! I'm getting fat by the minute and I need answers myself...
OK I'll answer anyway.
If you are updating one table, on one database server, in 40 minutes 'single threaded' and in 10 minutes with 4 threads, the bottleneck is not the database server; otherwise, it would get bogged down in I/O. If you are executing a bunch of UPDATES, one call per record, the network round-trip time is killing you.
I'm pretty sure this is the case, and not that it's either an I/O bottleneck on the DB or the possibility that procedure_name(paramns); is taking a long time. (If the procedure itself were taking 2-10 seconds, it would take something like 2500 minutes to do 30K records.) The reason I am sure is that starting 4 concurrent processes cuts the time to 1/4, so in particular it is not an I/O issue on the DB server.
This might be the one excuse for putting business logic in an SP on the server. Optimization unfortunately means breaking the rules. The consequence is difficult maintenance. But, duh!!
However, the best solution would be to get this set up to use 'bulk update' queries. That might mean you have to take several strange and unintuitive steps such as this:
This will require a lot of modification if multiple users can run it concurrently.
Refactor the system so procedure_name(paramns) can get all the data it needs to process all records via a SELECT statement. You may need to use creative joins. If it's an SP, of course, you are now moving the logic to the client.
Have the program create an XML or other importable flat-file format with the PK of the record to update and the new field value or values. Write all the updates to this file instead of executing them on the DB.
Have a temp table on the database that matches the layout of this flat file.
Run an import on the database: clear the temp table and import the file.
Do an update of a join of the temp table and the table to be updated, e.g., UPDATE mytbl SET myval = mytemp.mytempnewval FROM mytemp WHERE mytbl.myPK = mytemp.mytempPK (use the join-update syntax your database expects, of course; see the sketch at the end of this answer).
You can try some of these things 'by hand' first before you bother coding, to see if it's worth the speed increase.
If possible, you can still put this all in an SP!
I'm not making any guarantees, especially as I look down at my ever-fattening belly, but, this has the potential to melt your update job down to under a minute.
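As a rough sketch of the temp-table steps above (mytbl, mytemp, the column names and the file path are all placeholders):
-- staging table matching the flat file layout
CREATE TEMP TABLE mytemp (mytempPK integer PRIMARY KEY, mytempnewval text);
-- load the flat file produced by the program (or use your client's bulk loader)
COPY mytemp FROM '/tmp/updates.csv' WITH CSV;
-- one bulk update joining the staging table to the real table
UPDATE mytbl
SET    myval = mytemp.mytempnewval
FROM   mytemp
WHERE  mytbl.myPK = mytemp.mytempPK;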
It is possible to update multiple rows at once. Below is an example in Postgres:
UPDATE
table_name
SET
column_name = temp.column_name
FROM
(VALUES
(<id1>, <value1>),
(<id2>, <value2>),
(<id3>, <value3>)
) AS temp("id", "column_name")
WHERE
table_name.id = temp.id
PHP has some functions for asynchronous queries:
pg_send_execute()
pg_send_prepare()
pg_send_query()
pg_send_query_params()
No idea about other programming languages; you'll have to dig into the manuals.
I think you can't. A single connection can handle a single query at a time. It's described in the libpq documentation, in the chapter "Asynchronous Command Processing":
"After successfully calling PQsendQuery, call PQgetResult one or more times to obtain the results. PQsendQuery cannot be called again (on the same connection) until PQgetResult has returned a null pointer, indicating that the command is done."