PL/SQL embedded insert into table that may not exist

I much prefer using this 'embedded' style of insert in a PL/SQL block (as opposed to EXECUTE IMMEDIATE-style dynamic SQL, where you have to escape quotes, etc.).
-- a contrived example
PROCEDURE CreateReport( customer IN VARCHAR2, reportdate IN DATE )
IS
BEGIN
  -- drop table, create table with explicit column list
  CreateReportTableForCustomer;
  INSERT INTO TEMP_TABLE
  VALUES ( customer, reportdate );
END;
/
The problem here is that Oracle checks whether TEMP_TABLE exists and has the correct number of columns, and throws a compile error if it doesn't exist.
So I was wondering if there's any way around that? Essentially I want to use a placeholder for the table name to trick Oracle into not checking whether the table exists.
EDIT:
I should have mentioned that a user is able to execute any 'report' (as above), i.e. a mechanism that will execute an arbitrary query but always write to TEMP_TABLE (in the user's schema). Thus each time the report proc is run it drops TEMP_TABLE and recreates it with, most probably, a different column list.

You could use a dynamic SQL statement to insert into the maybe-existent temp_table, and then catch and handle the exception that occurs when the table doesn't exist.
Example:
execute immediate 'INSERT INTO '||TEMP_TABLE_NAME||' VALUES ( :customer, :reportdate )' using customer, reportdate;
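To catch the case where the table doesn't exist, you can trap ORA-00942 ("table or view does not exist") around the dynamic insert. A minimal sketch, assuming temp_table_name, customer and reportdate are in scope:
DECLARE
  e_table_missing EXCEPTION;
  PRAGMA EXCEPTION_INIT( e_table_missing, -942 );
BEGIN
  EXECUTE IMMEDIATE 'INSERT INTO ' || temp_table_name
                 || ' VALUES ( :customer, :reportdate )'
    USING customer, reportdate;
EXCEPTION
  WHEN e_table_missing THEN
    NULL;  -- table doesn't exist yet; handle however is appropriate
END;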
Note that having the table name vary in a dynamic SQL statement is not ideal, so if you can ensure the table name stays the same, that would be best.

Maybe you should be using a global temporary table (GTT). These are permanent table structures that hold temporary data for an Oracle session. Many different sessions can insert data into the same GTT, and each will only be able to see their own data. The data is automatically deleted either on COMMIT or when the session ends, according to the GTT's definition.
You create the GTT (once only) like this:
create global temporary table my_gtt
( customer number, report_date date )
on commit delete rows;  -- or "on commit preserve rows", as applicable
Then your programs can just use it like any other table - the only difference being it always begins empty for your session.
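For instance, a quick sketch against the my_gtt definition above:
INSERT INTO my_gtt ( customer, report_date ) VALUES ( 42, SYSDATE );
SELECT * FROM my_gtt;  -- only this session's rows are visible
COMMIT;                -- with "on commit delete rows", the GTT is now empty again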

Using GTTs is much preferable to dropping/recreating tables on the fly - if your application needs a different structure for each report, I strongly suggest you work out all the different structures each report needs and create a separate GTT for each, instead of creating ordinary tables at runtime.
That said, if this is just not feasible (and I've seen good examples when it's not, e.g. in a system that supports a wide range of ad-hoc requests from users), you'll have to go with the EXECUTE IMMEDIATE approach.

Related

Temp table doesn't store updated values

I've been trying to create a temp table and update it, but when I go to view the temp table it doesn't show any of the updated rows:
declare global temporary table hierarchy (
  code varchar(5),
  description varchar(30)
);
INSERT INTO session.hierarchy
SELECT code, description
FROM table1
WHERE code like '_....';
SELECT *
FROM session.hierarchy;
This is a frequently asked question.
When using a DGTT with Db2 (declare global temporary table), you need to know that the default is to discard all rows after a COMMIT. That is the reason the table appears to be empty after you insert - the rows got deleted if autocommit is enabled. If that is not what you want, use the on commit preserve rows clause when declaring the table.
It is also very important to use the with replace option when the table is declared inside stored procedures; this is often the friendliest for development and testing, and it is not the default. Otherwise, if the same session attempts to repeat the declaration of the DGTT, the second and subsequent attempts will fail because the DGTT already exists.
It can also be interesting for problem determination sometimes to use on rollback preserve rows but that is less often used.
When using a DGTT, one of the main advantages is that you can arrange for the population of the table (inserts, updates) to be unlogged, which can give a great performance boost if you have millions of rows to add to the DGTT.
The suggestion is therefore:
declare global temporary table ... ( )...
not logged
on commit preserve rows
with replace;
For DPF installations, also consider using distribute by hash (...) for best performance.
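Putting those clauses together for the asker's table (a sketch; the column list is taken from the question):
declare global temporary table session.hierarchy (
  code varchar(5),
  description varchar(30)
)
not logged
on commit preserve rows
with replace;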

Delete tables in BigQuery Cloud Console

I am trying to delete data from Bigquery tables and facing a challenge. At the moment only one date partitioned table drops/deletes at a time. Based on some research and docs on google, I understand I need to use DML operations.
Below are the commands I used for deletion; the first two don't work.
1. delete from bigquery-project.dataset1.table1_*
2. drop table bigquery-project.dataset1.table1_*;
3. delete from bigquery-project.dataset1.table1_* where _table_suffix='20211117';
The third query works for me and it deletes only for that particular date.
For queries 1 and 2, I got an exception saying “Illegal operation (write) on meta-table bigquery-project.dataset1.table1_”.
How would I go about deleting over 300 date partitioned tables in one go?
Thanks in advance.
You can go the route of creating a stored procedure to generate your statements in this scenario
CREATE OR REPLACE PROCEDURE so_test.so_delete_table()
BEGIN
  DECLARE fqtn STRING;
  FOR record IN
  (
    SELECT CONCAT(table_catalog, '.', table_schema, '.', table_name) AS tn
    FROM so_test.INFORMATION_SCHEMA.TABLES
    WHERE table_name LIKE 'test_delete_%'
  )
  DO
    SET fqtn = record.tn;
    -- the backticks protect names containing dashes, e.g. in the project id
    -- EXECUTE IMMEDIATE FORMAT('TRUNCATE TABLE `%s`', fqtn);  -- remove rows only
    EXECUTE IMMEDIATE FORMAT('DROP TABLE `%s`', fqtn);         -- drop the table entirely
  END FOR;
END;

CALL so_test.so_delete_table();
In the above, I query for the tables I would like to remove, then pass each name to the appropriate statement. In your scenario I could not tell whether you wanted to remove the records or the entire table, so I included logic for both.
This could also be modified fairly simply to take in a table prefix and pass it to the FOR loop's WHERE clause, as sketched below.
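For example, a hypothetical variant that takes the prefix as a parameter (using BigQuery's STARTS_WITH in the WHERE clause):
CREATE OR REPLACE PROCEDURE so_test.so_delete_table_by_prefix(prefix STRING)
BEGIN
  DECLARE fqtn STRING;
  FOR record IN
  (
    SELECT CONCAT(table_catalog, '.', table_schema, '.', table_name) AS tn
    FROM so_test.INFORMATION_SCHEMA.TABLES
    WHERE STARTS_WITH(table_name, prefix)
  )
  DO
    SET fqtn = record.tn;
    EXECUTE IMMEDIATE FORMAT('DROP TABLE `%s`', fqtn);
  END FOR;
END;

CALL so_test.so_delete_table_by_prefix('test_delete_');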
Alternatively, you could just run the SELECT statement from the FOR loop on its own, copy the results into a sheet, construct the appropriate DROP or TRUNCATE statements there, then copy them back into the console and execute.

SQL Server temp table check performance

IF OBJECT_ID('tempdb..#mytesttable') IS NOT NULL
    DROP TABLE #mytesttable

SELECT id, name
INTO #mytesttable
FROM mytable
Question: is it good practice to check for temp table existence (e.g. OBJECT_ID('tempdb..#mytesttable')) the first time I create this temp table inside a procedure?
What is the best practice in terms of performance?
It is good practice to check whether the table exists, and it doesn't have any measurable performance impact. In any case, temp tables are automatically dropped once procedure execution completes in SQL Server. SQL Server appends a unique number to the temp table name, so if the same procedure executes more than once at the same time it will not cause any issue; each execution runs in its own session.
Yes, it is a good practice. I always do this at the beginning of the routine. In SQL Server 2016+, objects can DIE (Drop If Exists), so you can simplify the call:
DROP TABLE IF EXISTS #mytesttable;
Also, even though your temporary objects are destroyed at the end of the routine, and even though the SQL engine gives unique names to temporary tables, there is still another behavior to consider.
If you name your temporary tables with the same name, then when nested procedure calls are involved (one stored procedure calls another) it is possible to get an error or corrupt your data. This is because a temporary table is visible in the current execution scope - so a #table created in one procedure will be visible in the second, called procedure (which can modify its data or drop it).
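A minimal sketch of that scoping behavior (the procedure names are hypothetical):
CREATE PROCEDURE dbo.inner_proc AS
BEGIN
    -- this modifies the caller's #mytesttable, not a fresh table
    UPDATE #mytesttable SET id = 2;
END
GO

CREATE PROCEDURE dbo.outer_proc AS
BEGIN
    CREATE TABLE #mytesttable (id INT);
    INSERT INTO #mytesttable VALUES (1);
    EXEC dbo.inner_proc;          -- inner_proc sees the caller's #mytesttable
    SELECT id FROM #mytesttable;  -- returns 2
END
GO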

Trouble understanding what to do PLSQL

We're learning cursors in class, and I believe I understand the first problem, which asks:
Set up an inventory table and a transaction table that has sales, returns, and purchases (the transaction table should have a code with S for sales, R for returns and P for purchases). Create scripts to create and insert data into these tables.
I believe I just set up the tables and write some PL/SQL so that I can insert rows into each. The following question is worded a bit more ambiguously, saying:
Using the tables you created in the fourth problem, process the transactions and determine the impact on inventory. Display information that gives the original inventory and the inventory after the sales, returns and purchases have been processed. You need to use cursors.
How would I break this problem down? Thank you for any help.
EDIT: I believe I'm also to be using PROCEDURES with the cursors but still just as lost.
I assume you have a tool to run SQL commands in and you are familiar with how to use it. If not you will need to get that tool first. Something like the sqlplus client if you want to do it all command line style, or something like Oracle's SQL Developer if you want the friendly GUI.
Create the tables. Google "oracle create table examples". These will be DDL statements and do not need a commit. You should create at least two constraints: one for a primary key and one for a foreign key that references the primary key. Google "oracle constraint examples".
Insert data into the tables. Google "oracle insert examples". The constraints should do their job and make sure you don't insert orphaned data. A COMMIT is necessary to make the inserted data permanent.
Create the procedure. Google "oracle stored procedure examples". This will process your transactions. You will use the S, P, or R codes to determine whether to add to or subtract from inventory. You will be using your CURSOR here, most likely with a FOR LOOP. You might use DBMS_OUTPUT.PUT_LINE to output the transaction information. You will find that thinking through the logic flow of the procedure will influence how your tables are designed in step 1. As a result, you may end up dropping and recreating your tables and constraints several times.
Run your procedure. You may want to avoid a commit in the procedure until you have tested it and are getting the results you want back. This way you can execute a ROLLBACK statement to put the data back into the state it was before the procedure ran (edit, run, rollback; edit, run, rollback; etc.)
Output results. If you didn't get all your output requirements done in the procedure, you will need to get that from your tables using the SQL SELECT statement after the procedure runs.
Good luck.
Generally folks do not like to answer these kinds of questions because they are obviously generated by a need to complete a school assignment. I won't complete your assignment, but will answer your question about CURSORS.
Cursors are used in PL/SQL procedures, functions and packages to store a pointer to the result of a query. For example:
-- You cannot use a REF CURSOR directly; you declare a TYPE that is a REF CURSOR
TYPE MyDataCursor IS REF CURSOR;
-- And then declare a variable of that type
pDataOut MyDataCursor;
-- You can then open the cursor variable in code
OPEN pDataOut FOR SELECT * FROM sometable;
This will retrieve data from a table; you can now access that data in several ways. The most common is to fetch each row and store it in a record that matches the table's row structure. For example:
FETCH pDataOut INTO MyDataRow;
The structure would mimic all the data columns in one row of the table, for example:
-- This particular structure is a RECORD type, plus a variable of that type to fetch into
TYPE MyDataRowStructure IS RECORD (
  ID      NUMBER,
  Name    VARCHAR2(50),
  Address VARCHAR2(50),
  City    VARCHAR2(50),
  State   VARCHAR2(2),
  ZIP     VARCHAR2(5)
);
MyDataRow MyDataRowStructure;
Lastly you would want to loop through all the rows of data retrieved from the table and do something with them. For example:
-- You start with the first fetch
FETCH pDataOut INTO MyDataRow;
-- Now loop through all the rows
WHILE pDataOut%FOUND LOOP
  -- Do some stuff here with the current row, then fetch the next one
  FETCH pDataOut INTO MyDataRow;
END LOOP;
This should get you started.
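Putting the pieces together in one runnable anonymous block (a sketch; sometable and its columns are placeholders):
DECLARE
  TYPE MyDataCursor IS REF CURSOR;
  pDataOut MyDataCursor;
  TYPE MyDataRowStructure IS RECORD (
    ID   NUMBER,
    Name VARCHAR2(50)
  );
  MyDataRow MyDataRowStructure;
BEGIN
  OPEN pDataOut FOR SELECT id, name FROM sometable;
  LOOP
    FETCH pDataOut INTO MyDataRow;
    EXIT WHEN pDataOut%NOTFOUND;
    DBMS_OUTPUT.PUT_LINE( MyDataRow.ID || ': ' || MyDataRow.Name );
  END LOOP;
  CLOSE pDataOut;
END;
/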

Can dynamic SQL be called from a trigger in Oracle?

I have a dozen tables for which I want to keep a history of changes. For each one I created a second table with the suffix _HISTO and added the fields modtime, action and user.
At the moment, before I insert, modify or delete a record in these tables, I call (from my Delphi app) an Oracle procedure that copies the current values to the histo table and then performs the operation.
My procedure generates dynamic SQL via DBA_TAB_COLUMNS and then executes the generated statement ( insert into tablename_histo ( fields ) select fields, sysdate, 'action', userid from table_name ).
I was told that I cannot call this procedure from a trigger because it has to select from the table the trigger is triggered on. Is this true? Is it possible to implement what I need?
Assuming you want to maintain history using triggers (rather than any of the other methods of tracking history data in Oracle - Workspace Manager, Total Recall, Streams, Fine-Grained Auditing, etc.), you can use dynamic SQL in the trigger. But the dynamic SQL is subject to the same rules that static SQL is subject to. And even static SQL in a row-level trigger cannot, in general, query the table that the trigger is defined on without generating a mutating table exception.
Rather than calling dynamic SQL from your trigger, however, you can potentially write some dynamic SQL that generates the trigger in the first place using the same data dictionary tables. The triggers themselves would statically refer to :new.column_name and :old.column_name. Of course, you would have to either edit the trigger or re-run the procedure that dynamically creates the trigger when a new column gets added. Since you, presumably, need to add the column to both the main table and the history table, however, this generally isn't too big of a deal.
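A rough sketch of that generation step (MYTABLE and MYTABLE_HISTO are hypothetical names; for brevity the generated trigger records only :OLD values plus a timestamp):
DECLARE
  v_cols VARCHAR2(4000);
  v_vals VARCHAR2(4000);
  v_sql  VARCHAR2(32767);
BEGIN
  -- build the column and value lists from the data dictionary
  FOR c IN ( SELECT column_name
               FROM user_tab_columns
              WHERE table_name = 'MYTABLE'
              ORDER BY column_id )
  LOOP
    v_cols := v_cols || c.column_name || ', ';
    v_vals := v_vals || ':OLD.' || c.column_name || ', ';
  END LOOP;
  -- the trigger itself refers statically to :OLD.column_name
  v_sql := 'CREATE OR REPLACE TRIGGER mytable_histo_trg'
        || ' BEFORE INSERT OR UPDATE OR DELETE ON mytable'
        || ' FOR EACH ROW'
        || ' BEGIN'
        || '   INSERT INTO mytable_histo ( ' || v_cols || 'modtime )'
        || '   VALUES ( ' || v_vals || 'SYSDATE );'
        || ' END;';
  EXECUTE IMMEDIATE v_sql;
END;
/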
Oracle does not allow a trigger to execute a SELECT against the table on which the trigger is defined. If you try it you'll get the dreaded "mutating table" error (ORA-04091), and while there are ways to get around that error, they add a lot of complexity for little value. If you really want to build a dynamic query every time your table is updated (IMO this is a bad idea from a performance standpoint - I find that metadata queries are often slow, but YMMV), it should end up looking something like:
strAction := CASE
               WHEN INSERTING THEN 'INSERT'
               WHEN UPDATING  THEN 'UPDATE'
               WHEN DELETING  THEN 'DELETE'
             END;

INSERT INTO TABLENAME_HISTO
  ( ACTIVITY_DATE, ACTION, MTC_USER,
    old_field1, new_field1, old_field2, new_field2 )
VALUES
  ( SYSDATE, strAction, USERID,
    :OLD.field1, :NEW.field1, :OLD.field2, :NEW.field2 );
Share and enjoy.