Why do Oracle DDL statements (such as CTAS), after being executed, not show up in the V$SQL view?
How can I get the SQL_ID of such a statement? I want to use the SQL_ID in SQL plan baselines. Thanks.
V$SQL shows just the first 20 characters for CTAS statements. It is a bug in Oracle Database 11g. For more details, see: https://stackoverflow.com/questions/27623646/oracle-sql-text-truncated-to-20-characters-for-create-alter-grant-statements/28087571#28087571
CTAS operations do appear in the v$sql view:
SQL> create table t1 as select * from dba_objects ;
Table created.
SQL> select sql_text,sql_id from v$sql where sql_text like '%create table t1 as select%' ;
SQL_TEXT
--------------------------------------------------------------------------------
SQL_ID
-------------
create table t1 as select * from dba_objects
4j5kv6x7cz5r7
select sql_text,sql_id from v$sql where sql_text like '%create table t1 as selec
t%'
5n4xnjkt3vz3h
SQL plan baselines are part of SPM (SQL Plan Management). From the Oracle documentation:
A SQL plan baseline is a set of accepted plans that
the optimizer is allowed to use for a SQL statement. In the typical
use case, the database accepts a plan into the plan baseline only
after verifying that the plan performs well. In this context, a plan
includes all plan-related information (for example, SQL plan
identifier, set of hints, bind values, and optimizer environment) that
the optimizer needs to reproduce an execution plan.
If you are running a CTAS recurrently, I guess you are doing it in batch mode, i.e. dropping the table and then recreating it afterwards with the CTAS command. I would rather look at what the problem is with the SELECT part of that statement.
But SQL plan baselines are more focused on queries whose plans can be fixed and evolved, because the optimizer does not always choose the best one.
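Once the SQL_ID is known from V$SQL, loading its cached plan into a baseline can be sketched like this. The SQL_ID value is just a placeholder for the one you looked up; note that baselines apply to queries and DML, not to the DDL part of a CTAS:

```sql
-- Sketch: load the cursor-cache plan for a given SQL_ID into a SQL plan baseline.
-- '4j5kv6x7cz5r7' is a placeholder; substitute the SQL_ID you found in V$SQL.
DECLARE
  l_loaded PLS_INTEGER;
BEGIN
  l_loaded := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
                sql_id => '4j5kv6x7cz5r7');
  DBMS_OUTPUT.PUT_LINE('Plans loaded: ' || l_loaded);
END;
/
```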
My SQL script has 10 queries in a BEGIN END block. I need to check the performance of each query separately for a single run.
Example:
when I run the SQL script, I need performance data for each of the 10 queries as soon as it has executed, like:
BEGIN
query1
performance data for query one
query2
performance data for query two
END
In SQL*Plus you can SET TIMING ON to get the elapsed time of each query. It's as simple as this:
set timing on
select * from t23
/
select * from t42
/
select * from t69
/
set timing off
But wall-clock timings are a pretty crude measure of performance. Oracle has more to offer.
If you have a friendly DBA (and aren't they all?), get them to grant you the PLUSTRACE role. With this role you can simply SET AUTOTRACE ON STATISTICS and get useful information on resource consumption for each query. You can also add EXPLAIN to get an execution plan after each query.
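For example, a sketch of the same script with per-query statistics and an execution plan (assuming the PLUSTRACE role has been granted; the table names are the placeholders from above):

```sql
set timing on
set autotrace on explain statistics

select * from t23
/
select * from t42
/

set autotrace off
set timing off
```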
Find out more
I want to return the number of records before and after an operation performed in a stored procedure. I looked up a function that should have worked for returning the number of rows in a table, but it isn't working. Any help?
Similar: Please check this link on DBA Stack Exchange
The procedure only consists of Dynamic SQL (execute immediate commands). The code is too large to paste here (and confidential).
The real motive is that I want to know how many records a table contained before the insert/delete command (in an execute immediate) and how many it contained after the operation.
I want to store the logs of the procedure in another table (a kind-of log table) which keeps a track of the number of rows inserted/deleted from the table being operated on.
e.g.
PROCEDURE_NAME          OP_TYPE                        RUN_DATE   RECORDS_BEFORE   RECORDS_AFTER
Name of the procedure   Type of operation performed               1103929          1112982
The procedure body.
create or replace procedure vector as
begin
  -- select count(*) from some_table
  execute immediate 'delete from some_table
                     where trunc(creation_date) >= trunc(sysdate) - 7';
  execute immediate 'insert into log_table values
                     (''Procedure Name'',''Delete'', sysdate,''....'')';
  -- select count(*) from some_table
  execute immediate 'insert into some_table ....';
  execute immediate 'insert into log_table values
                     (''Procedure Name'',''Insert'', sysdate,''....'')';
  -- select count(*) from some_table
end vector;
Basic requirement: I want the count(*) of some_table to be inserted into the log_table.
What data exactly do you want to get?
If it is the number of rows affected by your command, it will be in SQL%ROWCOUNT after each individual command you execute. (It will not "sum" all the modifications in the procedure; if that is what you need, you'll have to accumulate it manually after each insert/delete/update.)
But if you want the total number of rows in the table, you should run a
SELECT count(*) from TABNAME
before and after the command you executed (with the performance hit that implies).
You can also combine the two: run a count(*) at the beginning of your procedure, use SQL%ROWCOUNT to track the number of rows you modified, and assume the table now holds the initial count minus the deleted rows (plus the inserted ones).
DO REMEMBER that by default Oracle shows you the number of records in the table at the time the count(*) query is executed (after the current transaction's own commands), so the counts you see without using SQL%ROWCOUNT might include concurrent changes. For more information, read about Oracle isolation levels: http://www.oracle.com/technetwork/issue-archive/2005/05-nov/o65asktom-082389.html
In addition, there might be a concurrent change between the time you run the count(*) query and the delete/update statement, so think about the scenarios that can occur in your specific case.
If you want a more detailed answer or a code review, update the question with the relevant part of the procedure and the queries you execute.
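A minimal sketch of the combined approach, using the table from the question (the log_table columns are assumptions matching the layout shown earlier):

```sql
create or replace procedure vector as
  l_before  pls_integer;
  l_deleted pls_integer;
begin
  -- total rows before the operation
  select count(*) into l_before from some_table;

  execute immediate 'delete from some_table
                     where trunc(creation_date) >= trunc(sysdate) - 7';
  l_deleted := SQL%ROWCOUNT;  -- rows affected by the dynamic DELETE

  -- derive the "after" count instead of re-scanning the table
  insert into log_table
    (procedure_name, op_type, run_date, records_before, records_after)
  values
    ('VECTOR', 'Delete', sysdate, l_before, l_before - l_deleted);
end vector;
/
```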
I have a table with 226 million rows that has a varchar2(2000) column. The first 10 characters are indexed using a functional index SUBSTR("txtField",1,10).
I am running a query such as this:
select count(1)
from myTable
where SUBSTR("txtField",1,10) = 'ABCDEFGHIJ';
The value does not exist in the database, so the result is 0.
The explain plan shows an INDEX (RANGE SCAN) operation, as I would expect, with a cost of 4. When I run this query it takes 114 seconds on average.
If I change the query so that it can no longer use the index:
select count(1)
from myTable
where SUBSTR("txtField",1,9) = 'ABCDEFGHI';
The explain plan shows a TABLE ACCESS (FULL) operation, which makes sense, with a cost of 629,000. When I run this query it takes 103 seconds on average.
I am trying to understand how scanning an index can take longer than reading every record in the table and performing the substr function on a field.
Followup:
There are 230M+ rows in the table and the query returns 17 rows; I selected a new value that does exist in the database. Initially I was executing with a value that was not in the database and returned zero rows; it seems to make no difference.
Querying for information on the index yields:
CLUSTERING_FACTOR=201808147
LEAF_BLOCKS=1131660
I am running the query with AUTOTRACE ON and the gather_plan_statistics hint, and will add those results when they are available.
Thanks for all the suggestions.
There are a lot of possibilities.
You need to look at the actual execution plan, though.
You can run the query with the /*+ gather_plan_statistics */ hint, and then execute:
select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));
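Putting the two steps together against the query from the question, a sketch:

```sql
-- 1) Run the statement with runtime statistics collection enabled
select /*+ gather_plan_statistics */ count(1)
from myTable
where SUBSTR("txtField",1,10) = 'ABCDEFGHIJ';

-- 2) Show actual rows, buffers and time per plan step for that cursor
select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));
```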
You should also look into running a trace/tkprof to see what is actually happening - your DBA should be able to assist you with this.
How would you go about proving that two queries are functionally equivalent, eg they will always both return the same result set.
As I had a specific query in mind when I was doing this, I ended up doing as @dougman suggested, over about 10% of the rows of the tables concerned, and comparing the results to ensure there were no out-of-place rows.
The best you can do is compare the 2 query outputs based on a given set of inputs looking for any differences. To say that they will always return the same results for all inputs really depends on the data.
For Oracle, one of the better (if not the best) approaches, and a very efficient one, is here (Ctrl+F "Comparing the Contents of Two Tables"):
http://www.oracle.com/technetwork/issue-archive/2005/05-jan/o15asktom-084959.html
Which boils down to:
select c1,c2,c3,
count(src1) CNT1,
count(src2) CNT2
from (select a.*,
1 src1,
to_number(null) src2
from a
union all
select b.*,
to_number(null) src1,
2 src2
from b
)
group by c1,c2,c3
having count(src1) <> count(src2);
1) Real equivalence proof with Cosette:
Cosette checks (with a proof) whether 2 SQL queries are equivalent, and produces counterexamples when they are not. It's the only way to be absolutely sure, well, almost ;) You can even throw 2 queries at their website and check (formal) equivalence right away.
Link to Cosette:
https://cosette.cs.washington.edu/
Link to an article that gives a good explanation of how Cosette works: https://medium.com/@uwdb/introducing-cosette-527898504bd6
2) Or if you're just looking for a quick practical fix:
Try this stackoverflow answer: [sql - check if two select's are equal]
Which comes down to:
(select * from query1 MINUS select * from query2)
UNION ALL
(select * from query2 MINUS select * from query1)
This query gives you all rows that are returned by only one of the queries.
This sounds to me like an NP-complete problem. I'm not sure there is a surefire way to prove this kind of thing.
This is pretty easy to do.
Let's assume your queries are named a and b.
a
minus
b
should give you an empty set. If it does not, the queries return different sets, and the result set shows you the rows that differ.
then do
b
minus
a
That should also give you an empty set. If both differences are empty, the queries return the same sets.
If either result is not empty, the queries differ in some respect, and the result set shows you the rows that differ.
The DBMS vendors have been working on this for a very, very long time. As Rik said, it's probably an intractable problem, but I don't think any formal analysis on the NP-completeness of the problem space has been done.
However, your best bet is to leverage your DBMS as much as possible. All DBMS systems translate SQL into some sort of query plan. You can use this query plan, which is an abstracted version of the query, as a good starting point (the DBMS will do LOTS of optimization, flattening queries into more workable models).
NOTE: modern DBMS use a "cost-based" analyzer which is non-deterministic across statistics updates, so the query planner, over time, may change the query plan for identical queries.
In Oracle (depending on your version), you can tell the optimizer to switch from the cost based analyzer to the deterministic rule based analyzer (this will simplify plan analysis) with a SQL hint, e.g.
SELECT /*+ RULE */ * FROM yourtable
The rule-based optimizer has been deprecated since 8i but it still hangs around even through 10g (I don't know about 11g). However, the rule-based analyzer is much less sophisticated, so the error rate is potentially much higher.
For further reading of a more generic nature, IBM has been fairly prolific with their query-optimization patents. This one here on a method for converting SQL to an "abstract plan" is a good starting point:
http://www.patentstorm.us/patents/7333981.html
Perhaps you could draw (by hand) out your query and the results using Venn Diagrams, and see if they produce the same diagram. Venn diagrams are good for representing sets of data, and SQL queries work on sets of data. Drawing out a Venn Diagram might help you to visualize if 2 queries are functionally equivalent.
This will do the trick. If this query returns zero rows the two queries are returning the same results. As a bonus, it runs as a single query, so you don't have to worry about setting the isolation level so that the data doesn't change between two queries.
select * from ((<query 1> MINUS <query 2>) UNION ALL (<query 2> MINUS <query 1>))
Here's a handy shell script to do this:
#!/bin/sh
CONNSTR=$1
echo "query 1, no semicolon, EOF (Ctrl-D) to end:"; Q1=`cat`
echo "query 2, no semicolon, EOF (Ctrl-D) to end:"; Q2=`cat`
T="(($Q1 MINUS $Q2) UNION ALL ($Q2 MINUS $Q1))"
echo "select count(*) from $T;" | sqlplus -S -L $CONNSTR
CAREFUL! Functional "equivalence" is often data-dependent: you may "prove" the equivalence of 2 queries by comparing results for many cases and still be wrong once the data changes in a certain way.
For example:
SQL> create table test_tabA
(
col1 number
)
Table created.
SQL> create table test_tabB
(
col1 number
)
Table created.
SQL> -- insert 1 row
SQL> insert into test_tabA values (1)
1 row created.
SQL> commit
Commit complete.
SQL> -- Not exists query:
SQL> select * from test_tabA a
where not exists
(select 'x' from test_tabB b
where b.col1 = a.col1)
COL1
----------
1
1 row selected.
SQL> -- Not IN query:
SQL> select * from test_tabA a
where col1 not in
(select col1
from test_tabB b)
COL1
----------
1
1 row selected.
-- THEY MUST BE THE SAME!!! (or maybe not...)
SQL> -- insert a NULL to test_tabB
SQL> insert into test_tabB values (null)
1 row created.
SQL> commit
Commit complete.
SQL> -- Not exists query:
SQL> select * from test_tabA a
where not exists
(select 'x' from test_tabB b
where b.col1 = a.col1)
COL1
----------
1
1 row selected.
SQL> -- Not IN query:
SQL> select * from test_tabA a
where col1 not in
(select col1
from test_tabB b)
**no rows selected.**
(Once test_tabB contains a NULL, col1 NOT IN (subquery) evaluates to UNKNOWN for every row, because a comparison with NULL is never true; so the NOT IN query suddenly returns nothing.)
You don't.
If you need a high level of confidence that a performance change, for example, hasn't changed the output of a query then test the hell out it.
If you need a really high level of confidence .. then errrm, test it even more.
Massive levels of testing aren't that hard to cobble together for a SQL query. Write a proc which iterates over a large/complete set of possible parameters, calls each query with each set of params, and writes the outputs to respective tables. Compare the two tables and there you have it.
It's not exactly scientific, which I guess was the OP's point, but I'm not aware of a formal method to prove equivalency.