relationship between PX_SERVERS_EXECUTIONS and USERS_OPENING - sql

I see a single SQL statement with 5 distinct SIDs. Looking up that SQL_ID in V$SQL, I find that USERS_OPENING=5 but PX_SERVERS_EXECUTIONS=1. Also, for one SQL I see USERS_OPENING=1 but PX_SERVERS_EXECUTIONS=2. Can anyone please help me understand the relationship of SQL_ID, SID, PX_SERVERS_EXECUTIONS and USERS_OPENING to each other? Thanks in advance :)

Every column in the data dictionary is described in the Database Reference. Each view has its own page, and below are the definitions from the V$SQL page:
SQL_ID - SQL identifier of the parent cursor in the library cache
USERS_OPENING - Number of users that have any of the child cursors open
PX_SERVERS_EXECUTIONS - Total number of executions performed by parallel execution servers (0 when the statement has never been executed in parallel)
To rephrase the definitions and add some details:
SQL_ID is a hash of the text of the SQL statement.
USERS_OPENING is the current number of sessions that have run the statement (but are not necessarily running it right now).
PX_SERVERS_EXECUTIONS is the total number of parallel servers ever created to process the statement. For example, every time a statement like SELECT /*+ PARALLEL(8) */ ... is executed, the count will increment by probably either 8 or 16. (And it's odd that you have a SQL statement with a value of 1. If only one parallel server was used then parallelism isn't really used at all. That implies something weird happened - maybe the statement was cancelled before it could allocate more than one thread, or maybe only one thread was available and the statement was downgraded.)
And in the page for V$SESSION:
SID - Session identifier
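For instance, a quick way to watch how these two columns move (a sketch; the table name and parallel degree are illustrative, not from the question):
-- Request 8 PX servers for this execution.
SELECT /*+ PARALLEL(8) */ COUNT(*) FROM some_big_table;

-- USERS_OPENING counts sessions currently holding the cursor open;
-- PX_SERVERS_EXECUTIONS accumulates the PX servers used across all executions.
SELECT sql_id, users_opening, executions, px_servers_executions
FROM   v$sql
WHERE  sql_text LIKE 'SELECT /*+ PARALLEL(8) */ COUNT(*)%';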

How Can an Always-False PostgreSQL Query Run for Hours?

I've been trying to debug a slow query in another part of our system, and saw that this query is active:
SELECT * FROM xdmdf.hematoma AS "zzz4" WHERE (0 = 1)
It has apparently been active for > 8 hours. With that WHERE clause, logically, this query should return zero rows. Why would a SQL engine even bother to evaluate it? Would a query like this be useful for anything, and if so, what could it be?
(xdmdf.hematoma is a view, and I would expect SELECT * on it to take ~30 minutes when there is no lock contention.)
This statement:
explain select 1 from xdmdf.hematoma limit 1
(no analyze) has been running for about 10 minutes now.
There are two possibilities:
It takes forever to plan the statement, because you changed some planner settings and the view definition is so complicated (partitioned table?).
This is the unlikely explanation.
A concurrent transaction is holding an ACCESS EXCLUSIVE lock on a table involved in the view definition.
Terminate any such concurrent transactions.
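A sketch of how you might find and clear such blockers (assumes PostgreSQL 9.6 or later for pg_blocking_pids):
-- Sessions waiting on locks, and the PIDs blocking them.
SELECT pid,
       pg_blocking_pids(pid) AS blocked_by,
       state,
       query
FROM   pg_stat_activity
WHERE  cardinality(pg_blocking_pids(pid)) > 0;

-- Then end the offending backend (substitute a pid from blocked_by):
-- SELECT pg_terminate_backend(<blocking_pid>);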

Why cannot a user run two or more select queries concurrently on same table?

While practicing DBMS and SQL using Oracle Database, when I tried to fire 2 select queries on a table, the database always waits for the first query to finish executing and apparently keeps the other one in a pipeline.
Consider a table MY_TABLE having 1 million records with a column 'id' that holds the serial number of records.
Now my queries are:-
Query #1 - select * from MY_TABLE where id<500001; --I am fetching first 500,000 records here
Query #2 - select * from MY_TABLE where id>500000; --I am fetching next 500,000 records here
Since these are select queries, they must be acquiring a read lock on the table, which is a shared lock. So why does this happen? Please note that, to the best of my knowledge, the row sets for the two queries are mutually exclusive because of the filters I applied via the where clauses, and this further aggravates my confusion.
Also, the way I picture it, there must be some process that evaluates my query and then reads the result from memory (i.e. the resource), and any resource held in shared lock mode should be accessible to all processes that hold that lock.
Secondly, is there any way to override this behavior or execute multiple select queries concurrently?
Note: I want to chunk a particular task (i.e. the data of a table) into pieces and improve the speed of my script.
The database doesn't keep queries in a pipeline; it's simply that your client is only sending one query at a time. The database will quite happily run multiple queries against the same data at the same time, e.g. from separate sessions.
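As a sketch of the chunking idea (assuming your script can open more than one connection), each connection runs its own range and Oracle executes them concurrently; the PARALLEL hint is an alternative if parallel query is available to you:
-- Connection/session 1:
SELECT * FROM my_table WHERE id < 500001;

-- Connection/session 2 (a separate connection, started at the same time):
SELECT * FROM my_table WHERE id > 500000;

-- Or let Oracle split the work within a single statement (degree 4 is arbitrary):
SELECT /*+ PARALLEL(t, 4) */ * FROM my_table t;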

Some confusion on the description of read consistency in Oracle

Below is a short description of read consistency from the Oracle Concepts Guide.
What is a SQL statement in this context: just one SQL statement, or also a PL/SQL block or stored procedure? Can anyone provide a counter-example that illustrates an inconsistent read?
read consistency
A consistent view of data seen by a user. For example, in statement-level read
consistency the set of data seen by a SQL statement remains constant throughout
statement execution.
A "statement" in this context is one DML statement: a single SELECT, INSERT, UPDATE, DELETE, MERGE.
It is not a PL/SQL block. Similarly, multiple executions of the same DML statement (say, within a PL/SQL loop) are separate "statements". If you need consistency over multiple statements or within a PL/SQL block, you can achieve that using SET TRANSACTION ISOLATION LEVEL SERIALIZABLE or SET TRANSACTION READ ONLY. Both introduce limitations.
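For instance, a minimal sketch of transaction-level consistency (table and column names are illustrative):
-- Every query in this transaction sees the data as of the SET TRANSACTION,
-- not as of each individual statement's start.
SET TRANSACTION READ ONLY;
SELECT COUNT(*) FROM big_table;
SELECT SUM(amount) FROM big_table;  -- same snapshot as the COUNT above
COMMIT;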
An example of the opposite, an inconsistent read, would be as follows.
Starting conditions: table BIG_TABLE has 10 million rows.
User A at 10:00:
SELECT COUNT(*) FROM BIG_TABLE;
User B at 10:01:
DELETE FROM BIG_TABLE WHERE ID >= 9000000; -- delete the last million rows
User B at 10:02:
COMMIT;
User A at 10:03: query completes:
COUNT(*)
--------------
9309129
That is wrong. User A should have gotten either 10 million rows or 9 million rows. At no point were there 9,309,129 committed rows in the table. What has happened is that user A read 309,129 of the rows that user B was deleting before Oracle actually processed the deletion (or before the COMMIT). Then, after user B's delete/commit, user A's query stopped seeing the deleted rows and stopped counting them.
This sort of problem is impossible in Oracle, thanks to its implementation of Multiversion Read Consistency.
In Oracle, in the above situation, as it encountered blocks that had rows deleted (and committed) by user B, user A's query would have used the UNDO data to reconstruct what those blocks looked like at 10:00 -- the time when user A's query started.
That's basically it -- Oracle statements operate on a version of the database as it existed as of a single point in time. This point in time is almost always the time when the statement started. There are some exceptional cases involving updates where that point in time is moved "mid statement". But it is always consistent as of one point in time or another.

Oracle sql benchmark

I have to benchmark a query: I need to know how adding a column (FIELD_DATE1) to the select result set will affect SQL execution time. There are administrative restrictions on the database, so I cannot use debug. So I wrote this query:
SELECT COUNT(*), MIN(XXXT), MAX(XXXT)
FROM ( select distinct ID AS XXXID, sys_extract_utc(systimestamp) AS XXXT
, FIELD_DATE1 AS XXXUT
from XXXTABLE
where FIELD_DATE1 > '20-AUG-06 02.23.40.010000000 PM' );
Will the output of this query show the real execution time of the query?
There is a lot to learn when it comes to benchmarking in Oracle. I recommend you begin with the items below, though it worries me that, given the restrictions on your database, you may not have the extra permissions some of these features require:
Explain Plan: For every SQL statement, Oracle has to create an execution plan; the execution plan defines how the information will be read/written, i.e. the indexes to use, the join method, the sorts, etc.
The explain plan will give you information about how good your query is and how it is using the indexes. Learning the concept of query cost is key here, so take a look at it.
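For example, a minimal sketch of getting a plan (the predicate is simplified from the one in the question):
EXPLAIN PLAN FOR
  SELECT DISTINCT ID FROM XXXTABLE WHERE FIELD_DATE1 > DATE '2006-08-20';

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);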
TKPROF: it's an Oracle tool that allows you to read Oracle trace files. When you enable timed statistics in Oracle you can trace your SQL statements; the results of these traces are written to files, which you can read with TKPROF.
Among the information TKPROF will let you see is:
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
See: Using SQL Trace and TKPROF
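A sketch of producing a trace to feed TKPROF (requires the ALTER SESSION privilege; file names are illustrative):
-- Enable timing and tracing for the current session.
ALTER SESSION SET timed_statistics = TRUE;
ALTER SESSION SET sql_trace = TRUE;
-- ... run the statement(s) you want to benchmark ...
ALTER SESSION SET sql_trace = FALSE;
-- Then, on the database server, format the raw trace file:
-- tkprof orcl_ora_12345.trc report.txt sys=no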
It's possible in this query that SYSTIMESTAMP would be evaluated once and the same value associated with every row, or that it would be evaluated once for each row, or something in between. It is also possible that all the rows would be fetched from the table first and SYSTIMESTAMP evaluated for each one afterwards, so you wouldn't be getting an accurate account of the time taken by the whole query. Generally, you can't rely on the order of evaluation within SQL, or assume that a function will be evaluated once for each row where it appears.
Generally, the way I would measure execution time is to have the client tool report it. If you're executing the query in SQL*Plus, you can SET TIMING ON to have it report the elapsed time for every statement. Other interactive tools have similar features.
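For instance, in SQL*Plus (a sketch; the predicate is simplified and the elapsed line only shows the output format, not a real measurement):
SET TIMING ON
SELECT COUNT(*) FROM XXXTABLE WHERE FIELD_DATE1 > DATE '2006-08-20';
-- Elapsed: 00:00:00.00  (printed by SQL*Plus after each statement)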

Oracle queries executed by a session

I am trying to trace the SQL statements executed against a particular database user. I don't have AUDITING enabled and I am using Oracle 11g.
I have the following query :
SELECT
S.MODULE,
SQL_TEXT ,
S.EXECUTIONS
FROM
SYS.V_$SQL S,
SYS.ALL_USERS U
WHERE
S.PARSING_USER_ID=U.USER_ID
AND UPPER(U.USERNAME) IN ('USERNAME')
AND (UPPER(s.MODULE)='APP.EXE')
ORDER BY S.LAST_LOAD_TIME
But if multiple users running 'APP.EXE' are connected as the same database user, I am not able to tell which OS user executed which query. So I tried joining to the V$SESSION view to get the user details.
SELECT
S.MODULE,SQL_TEXT ,SN.OSUSER, SN.MACHINE, S.EXECUTIONS
FROM
SYS.V_$SQL S,
SYS.ALL_USERS U,
V$SESSION SN
WHERE
S.PARSING_USER_ID=U.USER_ID
AND UPPER(U.USERNAME) IN ('USERNAME')
AND (UPPER(S.MODULE)='APP.EXE')
AND S.SQL_ID=SN.SQL_ID
ORDER BY S.LAST_LOAD_TIME
But this doesn't seem to be working (in my case it didn't return any rows).
So, I have the following questions:
1) How do I get the queries executed by each session?
2) The EXECUTIONS column of V_$SQL seems to be the number of executions across all sessions. How do I know the number of times a particular query was executed by a particular session?
3) How long is a record about a query kept in V_$SQL? When does Oracle remove it from the view?
Thanking you all in advance,
Pradeep
You're probably not going to get the data that you're looking for without doing more configuration (such as enabling auditing) or making some compromises. What is the business problem you're trying to solve? Depending on the problem, we may be able to help you identify the easiest approach to configuring the database to be able to record the information you're after.
Oracle does not attempt to store anywhere how many times a particular user (and particularly not how many times a particular operating system user) executed a particular query. The SQL_ID in V$SESSION only indicates the SQL_ID that the session is currently executing. If, as I'm guessing, this is a client-server application, it is quite likely that this is NULL 99% of the time because the vast majority of the time, the session is not executing any SQL, it's waiting on the user to do something. The PREV_SQL_ID in V$SESSION is the prior SQL statement that was executed-- that at least won't generally be NULL. But it's only going to have one value, it's not going to have a history of the SQL statements executed by that session.
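A sketch of what V$SESSION does give you (only the current and the previous statement per session, no history):
SELECT sid, serial#, status, sql_id, prev_sql_id
FROM   v$session
WHERE  username = 'USERNAME';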
The V$SQL view is a representation of what is in the SQL shared pool. When a SQL statement ages out of the shared pool, it will no longer be in the V$SQL view. How quickly that happens depends on a multitude of factors-- how frequently someone is executing the statement, how frequently new statements are parsed (which generally depends heavily on whether your applications are using bind variables correctly), how big your shared pool is, etc. Generally, that's going to be somewhere between a few minutes and until the database shuts down.
If you are licensed to use the AWR tables and you are interested in approximations rather than perfectly correct answers, you might be able to get the information you're after by looking at some of the AWR tables. For example, V$ACTIVE_SESSION_HISTORY will capture the SQL statement that each session was actively executing each second. Since this is a client-server application, however, that means that the vast majority of the time, the session is going to be inactive, so nothing will be captured. The SQL statements that do happen to get captured for a session, though, will give you some idea about the relative frequency of different SQL statements. Of course, longer-running SQL statements are more likely to be captured as well since they are more likely to be active on a given instant. If query A and B both execute in exactly the same amount of time and a session was captured executing A 5 times and B 10 times in the last hour, you can conclude that B is executed roughly twice as often as A. And if you know the average execution time of a query, the average probability that the query was captured is going to be the number of seconds that the query executes (a query that executes in 0.5 seconds has a 50% chance of getting captured, one that executes in 0.25 seconds has a 25% chance of getting captured) so you can estimate how often a particular session executed a particular query. That is far from an exact number particularly over shorter time-frames and for queries whose actual execution times are more variable.
The data in V$ACTIVE_SESSION_HISTORY view is generally available for a few hours. It then gets sampled down into the DBA_HIST_ACTIVE_SESS_HISTORY table which cuts the amount of data available by an order of magnitude making any estimates much less accurate. But that data is kept for whatever your AWR retention interval is (by default, that's one week though many sites increase it to 30 or 60 days).
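If you do have the licensing, a sketch of the kind of ASH query described above (sample counts per SQL_ID for one session over the last hour; the binds are placeholders):
SELECT sql_id, COUNT(*) AS ash_samples
FROM   v$active_session_history
WHERE  session_id = :sid
AND    session_serial# = :serial
AND    sample_time > SYSTIMESTAMP - INTERVAL '1' HOUR
GROUP  BY sql_id
ORDER  BY ash_samples DESC;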
According to the Oracle documentation:
SQL_ADDRESS - Used with SQL_HASH_VALUE to identify the SQL statement that is currently being executed
SQL_HASH_VALUE - Used with SQL_ADDRESS to identify the SQL statement that is currently being executed
Please find the below link for reference
SQL_ADDRESS and HASH VALUE
Modify the SQL to join on both of them, as below:
SELECT S.MODULE, SQL_TEXT, SN.OSUSER, SN.MACHINE, S.EXECUTIONS
FROM SYS.V_$SQL S, SYS.ALL_USERS U, V$SESSION SN
WHERE S.PARSING_USER_ID = U.USER_ID
AND UPPER(U.USERNAME) IN ('USERNAME')
AND (UPPER(S.MODULE) = 'APP.EXE')
AND SN.sql_hash_value = S.hash_value
AND SN.sql_address = S.address
ORDER BY S.LAST_LOAD_TIME
Try this
SELECT l.Sql_Id
FROM v$session s
JOIN v$session_longops l
ON l.sid = s.Sid
AND l.Serial# = s.Serial#
AND l.Start_Time >= s.Logon_Time
WHERE s.Audsid = Sys_Context('USERENV', 'SESSIONID')
I assume you are interested only in long-running SQLs?