Way to log data causing errors in Oracle materialized view? - error-handling

We created some materialized views that ran fine against a copy of actual app data. The app does not police its own data, and since then some of the users may have been either careless or creative in their data entry. The MView now chokes and dies. Error messages indicate we are getting multiple rows returned from one or more functions.
We have been trying to use EXCEPTIONS, with some success at getting DBMS_OUTPUT for the first row object_id that causes (one of) the functions to fail. It would be better to be able to complete a run of the MView and log the object_ids that cause problems in each function. We haven't succeeded in inserting the exception data into a table.
The platform is Oracle 10gR2. I've been trying to cram DML Error Logging into my head. I understand that it should work for BULK data, and I am assuming that creating a materialized view would qualify. WOULD this work for MViews? Is this the best way?

If you're just trying to refresh the materialized view, I don't know of a way to use DML error logging to capture all the problem rows. On the other hand, you could create a table and use DML error logging when you populate the table to capture all the errors that you would encounter refreshing the materialized view.
Potentially, you could populate this table manually and then create a materialized view on this prebuilt table. That may create problems depending on exactly how the materialized view is being used and what sort of query rewrite is enabled, since the table you built will be missing some of the data from the underlying table (the rows written to the error log).
Create the table and the error log
SQL> create table t (
2 col1 number,
3 col2 number
4 );
Table created.
Elapsed: 00:00:00.00
SQL> ed
Wrote file afiedt.buf
1 begin
2 dbms_errlog.create_error_log( 'T', 'T_ERR' );
3* end;
SQL> /
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.01
SQL> create function f1
2 return varchar2
3 is
4 begin
5 return 'A';
6 end;
7 /
Function created.
Elapsed: 00:00:00.01
Try to insert 10 rows. Three will fail because, whenever LEVEL is a multiple of 3, the string returned by the function cannot be converted to a number.
SQL> insert into t( col1, col2 )
2 select level,
3 (case when mod(level,3) = 0
4 then to_number( f1 )
5 else mod(level,3)
6 end)
7 from dual
8 connect by level <= 10
9 log errors into t_err
10 reject limit unlimited;
7 rows created.
Elapsed: 00:00:00.01
SQL> ed
Wrote file afiedt.buf
1 select ora_err_mesg$, col1, col2
2* from t_err
SQL> /
ORA_ERR_MESG$ COL1 COL2
------------------------------ ---------- ----------
ORA-01722: invalid number 3 0
ORA-01722: invalid number 6 0
ORA-01722: invalid number 9 0
Elapsed: 00:00:00.00
Now, use this prebuilt table to create the materialized view
SQL> ed
Wrote file afiedt.buf
1 create materialized view t
2 on prebuilt table
3 as
4 select 1 col1, 1 col2
5* from dual
SQL> /
Materialized view created.
Elapsed: 00:00:00.11
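Subsequent "refreshes" of the MView are then just manual reloads of the prebuilt table. A minimal sketch, reusing the T, T_ERR, and F1 objects from above:
-- Reload the prebuilt table; DML error logging again diverts the bad
-- rows into T_ERR instead of failing the whole load.
delete from t;

insert into t( col1, col2 )
select level,
       (case when mod(level,3) = 0
             then to_number( f1 )
             else mod(level,3)
        end)
  from dual
connect by level <= 10
  log errors into t_err
  reject limit unlimited;

commit;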

Related

Oracle: How can I keep two tables in sync using a trigger that gets activated based on a timestamp field?

I want to do a one-time import/export to copy data from TableA to TableB. However, after the one-time import of data, I will need to keep the tables in sync using a trigger until we are ready to use the new table.
Is it possible to write a trigger which copies data from one table to another depending on the value of a timestamp field in TableA, or else using a condition on systimestamp? A trigger on the attribute is preferred, but if that is not possible, I can go with systimestamp.
I am a newbie to Oracle and would be very grateful if someone could help me come up with such a trigger.
TableA
CREATE TABLE user.tableA (
attr1 ... NOT NULL,
attr2 ...,
...,
attrN TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT tableA_PK PRIMARY KEY (attr1) ENABLE
)
user.tableB is the same as user.tableA
Need to come up with a trigger to copy inserts from TableA to TableB.
The code below needs work:
CREATE TRIGGER user.sync_from_A_to_B
AFTER INSERT ON user.tableA
FOR EACH ROW
BEGIN
if :new.attrN >= to_timestamp('2021-07-31 00:00:00', 'YYYY-MM-DD HH24:MI:SS') -- or, alternatively, a condition on systimestamp
then
INSERT INTO user.tableB
(
attr1,
attr2,
...,
attrN
)
VALUES
(
:new.attr1,
:new.attr2,
:new....,
:new.attrN
);
end if;
END;
/
A convenient way to sync two tables with native features is to create a materialized view with fast refresh on commit. Essentially it is a table plus a query definition that fills that table; Oracle then updates the target table whenever a refresh occurs (on commit, in your case). Later you can drop the materialized view but keep the underlying table (removing the query definition and any dependency on the base table) to rebase your application onto it.
SQL> set echo on
SQL>
SQL> create table t_a (
2 id primary key,
3 val,
4 str
5 ) as
6 select level, level, chr(32 + level)
7 from dual
8 connect by level < 4;
Table T_A created.
SQL> select * from t_a;
ID VAL STR
---------- ---------- ----
1 1 !
2 2 "
3 3 #
SQL> create materialized view log on t_a
2 with primary key /*To enable incremental updates*/;
Materialized view log T_A created.
SQL> create materialized view t_b
2 refresh on commit fast
3 as
4 select * from t_a;
Materialized view T_B created.
SQL> select * from t_b;
ID VAL STR
---------- ---------- ----
1 1 !
2 2 "
3 3 #
SQL> insert into t_a values(10, 11, 'W');
1 row inserted.
SQL> update t_a set val = 100 where id = 1;
1 row updated.
SQL> select * from t_b;
ID VAL STR
---------- ---------- ----
1 1 !
2 2 "
3 3 #
SQL> /*Still no changes*/
SQL> commit;
Commit complete.
SQL> select * from t_b;
ID VAL STR
---------- ---------- ----
2 2 "
3 3 #
1 100 !
10 11 W
SQL> /*Looks ok, we're moving to target table*/
SQL> drop materialized view t_b preserve table;
Materialized view T_B dropped.
SQL> drop table t_a;
Table T_A dropped.
SQL> select * from t_b;
ID VAL STR
---------- ---------- ----
2 2 "
3 3 #
1 100 !
10 11 W
I am guessing that a materialized view will not satisfy your need. True, it is much better than trigger-enforced synchronization, but the crux of the question seems to be "until we are ready to use the new table".
This implies that at some future time the source table (TableA) will be discontinued (dropped?). If so, then at that time the materialized view refresh becomes invalid.
You, however, do not need an after row trigger, but an after statement trigger. That trigger would insert directly from TableA to TableB.
create trigger user.sync_from_a_to_b
after insert on user.tablea
begin
insert into user.tableb
(
attr1,
attr2,
...,
attrn
)
select
attr1,
attr2,
...,
attrn
from tablea
where attrn > (select max(attrn) from tableb);
end;
/
The primary issue you face is that synchronization will require processing deletes and updates as well. Perhaps your best bet is a two-phase approach: use a materialized view until you are ready to cut over to your new table, then do your one-time cutover directly from the MV to your new table, as sketched below.
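A minimal sketch of that cutover, assuming the T_B materialized view from the answer above (names are illustrative):
-- Stop writes to the base table, do one final complete refresh, then
-- keep the container table and drop only the MV metadata.
exec dbms_mview.refresh( 'T_B', 'C' )

drop materialized view t_b preserve table;

-- T_B now remains as an ordinary table for the application to use.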

DML inside a function call

I have an old client application backed by an Oracle database for persistence. As its interface, the client software only allows calling functions and procedures. I have nearly full access to the database, i.e., I can define functions and procedures. Because of the interface, only functions can return values, and I cannot use the OUT parameter option of procedures.
Now I simply want to read a value from a table:
SELECT value FROM myTable WHERE id = 42;
And increase the value afterwards:
UPDATE myTable SET value = value + 1 WHERE id = 42;
I could use a function for the select statement and a procedure for the update and call both successively. The problem here is that there are no transactions on the client side, so between the select and the update another thread could get wrong values.
So my question is, how can I use both calls in a transaction without using transactions...
Tried Approaches:
Use anonymous PL/SQL Blocks -> the syntax is not supported by the client.
Put both calls in a single function -> DML is not allowed in a select statement.
PRAGMA AUTONOMOUS_TRANSACTION -> I heard it is a bad thing and should not be used.
You can do DML inside a function, as demonstrated below, but I stress: take heed of the other comments. Look at using a sequence (even multiple sequences) instead, because doing DML inside a function is generally a bad idea: the number of executions of a function call (if called from SQL) is not deterministic. There are also scalability issues under high volume, and in a multi-user environment you need to handle locking/serialization, otherwise you'll have multiple sessions getting the same integer value returned.
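A minimal sketch of the sequence alternative (MY_COUNTER and NEXT_VALUE are illustrative names; in 10g a sequence value must be read via SELECT ... FROM dual inside PL/SQL):
create sequence my_counter start with 1 increment by 1;

create or replace function next_value return number is
  retval number;
begin
  -- NEXTVAL is atomic: concurrent sessions never receive the same value,
  -- with no locking and no autonomous transaction required (gaps are possible).
  select my_counter.nextval into retval from dual;
  return retval;
end;
/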
So...after all that, you still want to head down this path :-(
SQL> create table t ( x int );
Table created.
SQL> insert into t values (0);
1 row created.
SQL>
SQL> create or replace
2 function f return int is
3 pragma autonomous_transaction;
4 retval int;
5 begin
6 update t
7 set x = x + 1
8 returning x into retval;
9 commit;
10 return retval;
11 end;
12 /
Function created.
SQL>
SQL> select f from dual;
F
----------
1
1 row selected.
SQL> select * from t;
X
----------
1
1 row selected.
SQL> select f from dual;
F
----------
2
1 row selected.
SQL> select * from t;
X
----------
2
1 row selected.
SQL> select f from dual;
F
----------
3
1 row selected.
SQL> select * from t;
X
----------
3
1 row selected.

Setting NLS_SORT variable for a single select only

Good day,
my customer uses an application that was initially designed for MSSQL, which probably does case-insensitive searches by default. But the customer uses Oracle and hence needs some extra tweaking.
So the question is: How can I tell Oracle to make a given SELECT LIKE-Statement search case-insensitive with the following limitations?
ALTER SESSION cannot be used individually (by trigger: maybe)
Other queries from the same session must not be affected
The SELECT-statement cannot be altered
I know about the possibility of setting NLS_SORT at the system level, but this would basically kill performance, as the existing indexes could no longer be used.
You can use DBMS_ADVANCED_REWRITE to rewrite the SQL into a case-insensitive version.
Subtly changing queries like this can be confusing and can make troubleshooting and tuning difficult. The package also has some limitations that may make it impractical, such as not supporting bind variables.
1. Sample Schema
SQL> drop table test1;
Table dropped.
SQL> create table test1(a varchar2(100));
Table created.
SQL> insert into test1 values ('case INSENSITIVE');
1 row created.
SQL> commit;
Commit complete.
2. The query is initially case-sensitive and matches 0 rows
SQL> select count(*) total from test1 where a like '%case insensitive%';
TOTAL
----------
0
3. Create rewrite equivalence - add a LOWER function
SQL> begin
2 sys.dbms_advanced_rewrite.declare_rewrite_equivalence(
3 name => 'case_insensitive_1',
4 source_stmt => q'[select count(*) total from test1 where a like '%case insensitive%']',
5 destination_stmt => q'[select count(*) total from test1 where lower(a) like '%case insensitive%']',
6 validate => false
7 );
8 end;
9 /
PL/SQL procedure successfully completed.
4. Now the same query is case-insensitive and matches 1 row
SQL> alter session set query_rewrite_integrity = trusted;
Session altered.
SQL> select count(*) total from test1 where a like '%case insensitive%';
TOTAL
----------
1
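If the rewrite needs to be removed later, the same package provides a drop call; a minimal sketch:
begin
  sys.dbms_advanced_rewrite.drop_rewrite_equivalence(
    name => 'case_insensitive_1'
  );
end;
/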

How can I write a PL/SQL procedure to copy tables and contents from another account

I need to write a PL/SQL procedure to create tables that match the ones in another account (I have access to that account). They need to have the same columns and types. Also, they need to be filled with the same data.
Help me!
EDIT:
SQL> CREATE OR REPLACE PROCEDURE MakeTables
2 AS
3 BEGIN
4 EXECUTE IMMEDIATE
5 'CREATE TABLE Table1 AS (SELECT * FROM ANOTHER_ACCT.Table1);
6 CREATE TABLE Table2 AS (SELECT * FROM ANOTHER_ACCT.Table2);
7 CREATE TABLE Table3 AS (SELECT * FROM ANOTHER_ACCT.Table3);
8 CREATE TABLE Table4 AS (SELECT * FROM ANOTHER_ACCT.Table4)';
9 END;
10 /
Procedure created.
But when I run this I get this error:
SQL> BEGIN
2 MakeTables;
3 END;
4 /
BEGIN
*
ERROR at line 1:
ORA-00911: invalid character
ORA-06512: at "BS135.MAKETABLES", line 4
ORA-06512: at line 2
When you say another "account", do you mean another "user/schema"? If so, this can be simple: go read/google "oracle create table as select". This lets you create a table from a select statement, so you could issue a statement such as
create table new_table as select * from other_schema.old_table
You don't need any PL/SQL unless you want to automate the process for creating many tables; then you could query the data dictionary as a driver, as sketched below.
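Incidentally, the ORA-00911 above comes from the semicolons inside the EXECUTE IMMEDIATE string: dynamic SQL takes exactly one statement, with no trailing semicolon. A minimal sketch of the dictionary-driven version, assuming the source schema is literally named ANOTHER_ACCT and you have SELECT privileges on its tables:
create or replace procedure MakeTables
as
begin
  -- One EXECUTE IMMEDIATE per statement; no semicolon inside the string.
  for t in (select table_name from all_tables where owner = 'ANOTHER_ACCT')
  loop
    execute immediate 'create table ' || t.table_name ||
                      ' as select * from ANOTHER_ACCT.' || t.table_name;
  end loop;
end;
/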
(also, please read on how to ask proper questions here: https://stackoverflow.com/questions/how-to-ask )

force subquery resolution first

I'm creating a query which uses 2 embedded server functions multiple times.
Problem: the functions search through a decently large table, and they take a long time to execute.
Goal: Use a subquery as if it were a table so that I can reference columns without running the function to generate the column more than once.
Example Pseudocode:
Select general.column1, general.column2, general.column1-general.column2
from (select package.function1('I take a long time') column1,
package.function2('I take even longer') column2,
normal_column
from bigtable) general;
When I run my code, general.column1 refers back to the function call that defines column1, not to data already returned by it (which is ultimately what I'm after), so the function is evaluated more than once.
I'm fairly new to SQL, so any help is appreciated and if you need more info, I'll do my best to provide it.
Thanks!
I suggest you use subquery factoring. The first subquery will be executed only once and then used throughout the rest of the query.
WITH function_result AS
 (SELECT package.function1('I take a long time') column1
       , package.function2('I take even longer') column2
    FROM dual)
SELECT function_result.column1
     , function_result.column2
     , function_result.column1 - function_result.column2
     , bigtable.normal_column
  FROM bigtable
 CROSS JOIN function_result
In general, what you want to do in this case is take advantage of scalar subquery caching, i.e. put:
Select general.column1, general.column2, general.column1-general.column2
from (select (select package.function1('I take a long time') from dual) column1,
(select package.function2('I take even longer') from dual) column2,
normal_column
from bigtable) general;
Declaring the function as DETERMINISTIC also helps, if it really is deterministic.
A small example:
SQL> create or replace function testfunc(i varchar2)
2 return varchar2
3 is
4 begin
5 dbms_application_info.set_client_info(userenv('client_info')+1 );
6 return 'hi';
7 end;
8 /
Function created.
Now let's test a call to the function like you have:
SQL> exec dbms_application_info.set_client_info(0);
PL/SQL procedure successfully completed.
SQL> set autotrace traceonly
SQL> select *
2 from (select testfunc(owner) a
3 from all_objects);
57954 rows selected.
SQL> select userenv('client_info') from dual;
USERENV('CLIENT_INFO')
----------------------------------------------------------------
57954
The function was called 57954 times (once per row). Now let's use scalar subquery caching:
SQL> exec dbms_application_info.set_client_info(0);
PL/SQL procedure successfully completed.
SQL> select *
2 from (select (select testfunc(owner) from dual) a
3 from all_objects);
57954 rows selected.
SQL> select userenv('client_info') from dual;
USERENV('CLIENT_INFO')
----------------------------------------------------------------
178
178 calls instead of 57k! In your case you've only shown a literal input, with nothing varying per row; if that is really the case, the number of calls after using scalar caching should be 1.
If we add DETERMINISTIC:
SQL> create or replace function testfunc(i varchar2)
2 return varchar2 deterministic
3 is
4 begin
5 dbms_application_info.set_client_info(userenv('client_info')+1 );
6 return 'hi';
7 end;
8 /
Function created.
SQL> exec dbms_application_info.set_client_info(0);
PL/SQL procedure successfully completed.
SQL> select *
2 from (select (select testfunc(owner) from dual) a
3 from all_objects);
57954 rows selected.
SQL> select userenv('client_info') from dual;
USERENV('CLIENT_INFO')
----------------------------------------------------------------
55
Now down to 55. In 11g we have RESULT_CACHE, which we can use in place of DETERMINISTIC; that would reduce the calls on subsequent runs to 0, as sketched below.
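A minimal sketch of that 11g variant (assumes 11g or later; the result cache is shared across calls and sessions):
create or replace function testfunc(i varchar2)
return varchar2 result_cache
is
begin
  -- Same counter as above; on repeat runs the cached result is served
  -- and this body is not executed at all.
  dbms_application_info.set_client_info(userenv('client_info')+1 );
  return 'hi';
end;
/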