Update a Temporary Table before showing? - sql

I'd like to find out how to update a temporary table before I show the query results. This is to avoid making permanent changes to the database.
So far I have the following:
WITH
new_salary AS
(SELECT ID,NAME,DEPT_NAME,SALARY FROM INSTRUCTOR WHERE DEPT_NAME='Comp. Sci.')
SELECT
*
FROM
new_salary
WHERE
DEPT_NAME='Comp. Sci.';
Now here is where it ends. I want to update this temporary table and show the updated version of it, so as to avoid changing the actual database. All my attempts at using the UPDATE clause have failed, so I am kind of dumbfounded :/
This part I am currently trying to do is not part of the homework; I just don't want to have to redo the database over and over.
How would I go about doing this?

I guess you have two options:
1. You make a procedure, which first checks whether it needs to update the table. After calling it you execute the query.
2. You create a pipelined function, which does the checking and returns the data. You could integrate it into the select like this (here the pipelined function is called pipelined_function_name):
select *
from table(pipelined_function_name)
;
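For illustration, here is a rough sketch of the pipelined-function option against the INSTRUCTOR table from the question. The type and function names, the column sizes, and the salary change (a 10% raise standing in for whatever UPDATE you had in mind) are all assumptions:
CREATE OR REPLACE TYPE salary_row AS OBJECT (
  id        VARCHAR2(5),
  name      VARCHAR2(20),
  dept_name VARCHAR2(20),
  salary    NUMBER(8,2)
);
/
CREATE OR REPLACE TYPE salary_tab AS TABLE OF salary_row;
/
CREATE OR REPLACE FUNCTION new_salary_fn RETURN salary_tab PIPELINED IS
BEGIN
  FOR r IN (SELECT id, name, dept_name, salary
              FROM instructor
             WHERE dept_name = 'Comp. Sci.') LOOP
    -- apply the "update" in memory only; the base table is never touched
    PIPE ROW (salary_row(r.id, r.name, r.dept_name, r.salary * 1.10));
  END LOOP;
  RETURN;
END;
/
SELECT * FROM TABLE(new_salary_fn);
The modified values exist only in the rows piped out of the function, so the actual database is never changed.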

Related

Delete tables in BigQuery Cloud Console

I am trying to delete data from BigQuery tables and facing a challenge. At the moment only one date-partitioned table drops/deletes at a time. Based on some research and the Google docs, I understand I need to use DML operations.
Below are the commands that I used for deletion, and they don't work:
1. delete from bigquery-project.dataset1.table1_*
2. drop table bigquery-project.dataset1.table1_*;
3. delete from bigquery-project.dataset1.table1_* where _table_suffix='20211117';
The third query works for me and it deletes only for that particular date.
For queries 1 and 2, I got an exception saying “Illegal operation (write) on meta-table bigquery-project.dataset1.table1_”.
How would I go about deleting over 300 date partitioned tables in one go?
Thanks in advance.
You can go the route of creating a stored procedure to generate your statements in this scenario
CREATE OR REPLACE PROCEDURE so_test.so_delete_table()
BEGIN
  DECLARE fqtn STRING;
  -- loop over every table in the dataset that matches the prefix
  FOR record IN
  (
    SELECT CONCAT(table_catalog, '.', table_schema, '.', table_name) AS tn
    FROM so_test.INFORMATION_SCHEMA.TABLES
    WHERE table_name LIKE 'test_delete_%'
  )
  DO
    SET fqtn = record.tn;
    -- TRUNCATE keeps the tables but removes their records; DROP removes the tables entirely
    -- EXECUTE IMMEDIATE FORMAT('TRUNCATE TABLE `%s`', fqtn);
    EXECUTE IMMEDIATE FORMAT('DROP TABLE `%s`', fqtn);   -- backticks allow for dashes in the project name
  END FOR;
END;

CALL so_test.so_delete_table();
In the above I query for the tables I would like to remove records from, then pass each one to the appropriate statement. In your scenario I could not tell whether you wanted to remove the records or the entire table, so I included logic for both.
This could also be modified fairly simply to take in a table prefix and pass it to the FOR loop's WHERE clause, as in the sketch below.
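A minimal sketch of that parameterized variant, with a made-up procedure name and assuming the same so_test dataset as above:
CREATE OR REPLACE PROCEDURE so_test.so_delete_table_by_prefix(table_prefix STRING)
BEGIN
  DECLARE fqtn STRING;
  FOR record IN
  (
    SELECT CONCAT(table_catalog, '.', table_schema, '.', table_name) AS tn
    FROM so_test.INFORMATION_SCHEMA.TABLES
    WHERE STARTS_WITH(table_name, table_prefix)
  )
  DO
    SET fqtn = record.tn;
    EXECUTE IMMEDIATE FORMAT('DROP TABLE `%s`', fqtn);
  END FOR;
END;

-- e.g. drop all of the table1_* date shards in one call
CALL so_test.so_delete_table_by_prefix('table1_');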
Alternatively you could just run the SELECT statement from the FOR loop on its own, copy the results into a sheet, construct the appropriate DML statements there, and copy them back into the console to execute.

Retrieve Script used in "Create Table As" Statement

We have a table in our Oracle Database that was created from an actual script.
Ex:
Create Table AS (Select * from table).
I was hoping to recover the original script the table was created from, as the data in the table is quite old and this created table needs to be refreshed. This table is created with data from another live table in our database, so if there is a way to refresh this without the original query, I'm all ears. Any solutions are welcomed!
Thanks!
I suppose you could also do a column by column comparison of this table against all others to see which one (if any) matches it. Of course, this would only be a guess.
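One way to read that suggestion is to compare column names and data types through the data dictionary. A rough sketch, with MY_TABLE standing in for your table:
SELECT c2.owner, c2.table_name
  FROM all_tab_columns c1
  JOIN all_tab_columns c2
    ON c2.column_name = c1.column_name
   AND c2.data_type   = c1.data_type
 WHERE c1.table_name = 'MY_TABLE'
   AND c2.table_name <> 'MY_TABLE'
 GROUP BY c2.owner, c2.table_name
HAVING COUNT(*) = (SELECT COUNT(*) FROM all_tab_columns WHERE table_name = 'MY_TABLE');
Any table this returns is only a candidate source; the data itself would still need checking.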
Refreshing it without the original query would require that object to actually be a materialized view instead of a table. Otherwise you are probably left with exploring the logs. Beyond that I doubt there is any way to recover the original select statement used to create that table.
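If the object does turn out to be a materialized view, its defining query is still in the data dictionary; a quick check (MY_TABLE is a placeholder):
SELECT object_type FROM user_objects WHERE object_name = 'MY_TABLE';
-- if one of the rows says MATERIALIZED VIEW, the original SELECT is recoverable:
SELECT query FROM user_mviews WHERE mview_name = 'MY_TABLE';
For a plain table created with CREATE TABLE AS, neither query will help.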

Can dynamic SQL be called from a trigger in Oracle?

I have a dozen tables whose change history I want to keep. For each one I created a second table with the suffix _HISTO and added the fields modtime, action, and user.
At the moment, before I insert, modify or delete a record in these tables, I call (from my Delphi app) an Oracle procedure that copies the current values to the histo table, and then I perform the operation.
My procedure generates dynamic SQL via DBA_TAB_COLUMNS and then executes the generated statement (insert into tablename_histo (fields) select fields, sysdate, 'action', userid from table_name).
I was told that I cannot call this procedure from a trigger because it has to select from the table the trigger is defined on. Is this true? Is it possible to implement what I need?
Assuming you want to maintain history using triggers (rather than any of the other methods of tracking history data in Oracle: Workspace Manager, Total Recall, Streams, Fine-Grained Auditing etc.), you can use dynamic SQL in the trigger. But the dynamic SQL is subject to the same rules that static SQL is subject to. And even static SQL in a row-level trigger cannot in general query the table that the trigger is defined on without generating a mutating table exception.
Rather than calling dynamic SQL from your trigger, however, you can potentially write some dynamic SQL that generates the trigger in the first place using the same data dictionary tables. The triggers themselves would statically refer to :new.column_name and :old.column_name. Of course, you would have to either edit the trigger or re-run the procedure that dynamically creates the trigger when a new column gets added. Since you, presumably, need to add the column to both the main table and the history table, however, this generally isn't too big of a deal.
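A minimal sketch of that generator idea, assuming a history table named <table>_HISTO whose leading columns are MODTIME, ACTION and MODUSER followed by copies of the base table's columns; the table and trigger names are illustrative and only the UPDATE/DELETE before-image is covered:
DECLARE
  v_tab  VARCHAR2(30) := 'MY_TABLE';   -- base table, hypothetical
  v_cols VARCHAR2(4000);
  v_olds VARCHAR2(4000);
BEGIN
  -- build the column list and the matching :OLD.<column> list from the dictionary
  SELECT LISTAGG(column_name, ', ') WITHIN GROUP (ORDER BY column_id),
         LISTAGG(':OLD.' || column_name, ', ') WITHIN GROUP (ORDER BY column_id)
    INTO v_cols, v_olds
    FROM user_tab_columns
   WHERE table_name = v_tab;

  EXECUTE IMMEDIATE
    'CREATE OR REPLACE TRIGGER ' || v_tab || '_histo_trg'
    || ' BEFORE UPDATE OR DELETE ON ' || v_tab
    || ' FOR EACH ROW'
    || ' BEGIN'
    || '   INSERT INTO ' || v_tab || '_HISTO (modtime, action, moduser, ' || v_cols || ')'
    || '   VALUES (SYSDATE,'
    || '           CASE WHEN UPDATING THEN ''UPDATE'' ELSE ''DELETE'' END,'
    || '           USER, ' || v_olds || ');'
    || ' END;';
END;
/
Re-run the block whenever a column is added to both tables; the static trigger it produces never has to query the base table at all.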
Oracle does not allow a trigger to execute a SELECT against the table on which the trigger is defined. If you try it you'll get the dreaded "mutating table" error (ORA-04091), and while there are ways to get around that error they add a lot of complexity for little value. If you really want to build a dynamic query every time your table is updated (IMO this is a bad idea from the standpoint of performance - I find that metadata queries are often slow, but YMMV) it should end up looking something like
strAction := CASE
               WHEN INSERTING THEN 'INSERT'
               WHEN UPDATING  THEN 'UPDATE'
               WHEN DELETING  THEN 'DELETE'
             END;

INSERT INTO TABLENAME_HISTO
  (ACTIVITY_DATE, ACTION, MTC_USER,
   old_field1, new_field1, old_field2, new_field2)
VALUES
  (SYSDATE, strAction, USERID,
   :OLD.field1, :NEW.field1, :OLD.field2, :NEW.field2);
Share and enjoy.

Debugging sub-queries in TSQL Stored Procedure

How do I debug a complex query with multiple nested sub-queries in SQL Server 2005?
I'm debugging a stored procedure and trigger in Visual Studio 2005. I'd like to be able to see what the results of these sub-queries are, as I feel that this is where the bug is coming from. An example query (slightly redacted) is below:
UPDATE
foo
SET
DateUpdated = ( SELECT TOP 1 inserted.DateUpdated FROM inserted )
...
FROM
tblEP ep
JOIN tblED ed ON ep.EnrollmentID = ed.EnrollmentID
WHERE
ProgramPhaseID = ( SELECT ...)
Visual Studio doesn't seem to offer a way for me to Watch the result of the sub query. Also, if I use a temporary table to store the results (temporary tables are used elsewhere also) I can't view the values stored in that table.
Is there any way that I can add a watch or in some other way view these sub-queries? I would love it if there were some way to "Step Into" the query itself, but I imagine that wouldn't be possible.
Ok, first I would be leery of using subqueries in a trigger. Triggers should be as fast as possible, so get rid of any correlated subqueries, which might run row by row instead of in a set-based fashion, and rewrite them as joins. If you only want to update records based on what is in the inserted table, then join to it, and also join to the table you are updating (see the sketch below). Exactly what are you trying to accomplish with this trigger? It might be easier to give advice if we understood the business rule you are trying to implement.
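For instance, the subquery in the posted UPDATE could become a plain join inside the trigger, along these lines (the join column is a guess):
UPDATE f
SET    DateUpdated = i.DateUpdated
FROM   foo f
JOIN   inserted i ON f.EnrollmentID = i.EnrollmentID;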
To debug a trigger this is what I do.
I write a script to do the following (a rough sketch follows the list):
1. Do the actual insert to the table without the trigger on it.
2. Create a temp table named #inserted (and/or one named #deleted).
3. Populate the table as I would expect the inserted table in the trigger to be populated from the insert you do.
4. Add the trigger code (minus the create or alter trigger parts), substituting #inserted every time I reference inserted. (If you plan to run it multiple times until you are ready to use it in a trigger, throw it in an explicit transaction and roll back after checking your results.)
5. Add a query to check the table(s) you are changing with the trigger for the values you wanted to change.
6. Now if you need to add debug statements to see what is happening between steps, you can do so.
7. Run it, making changes until you get the results you want.
8. Once you have the query working as you expect it to, it is easy to take the # signs off inserted and use it to create the body of the trigger.
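A rough sketch of steps 2 through 5, with made-up table and column names; adjust to your own schema:
-- 1. Build a temp table shaped like the trigger's inserted pseudo-table
SELECT * INTO #inserted FROM foo WHERE 1 = 0;          -- structure only, no rows
INSERT INTO #inserted (EnrollmentID, DateUpdated)
VALUES (12345, GETDATE());                             -- the row you would have inserted

BEGIN TRANSACTION;

-- 2. The trigger body, with inserted replaced by #inserted
UPDATE f
SET    DateUpdated = i.DateUpdated
FROM   foo f
JOIN   #inserted i ON f.EnrollmentID = i.EnrollmentID;

-- 3. Check the result, then undo everything
SELECT * FROM foo WHERE EnrollmentID = 12345;
ROLLBACK TRANSACTION;
DROP TABLE #inserted;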
This is what I usually do in this type of scenario:
Print out the exact SQL generated by each subquery, then run each statement in Management Studio as suggested above, and check that the different parts are giving you the data you expect.

rewriting query in PostgreSQL

After a db schema change, what used to be a column is now computed by a stored procedure. Is it possible to make this change seamless for the application programs?
So that when a program sends a query like
SELECT id,
value
FROM table
...it instead gets a result of
SELECT id,
compute_value() AS value
FROM table
I thought I could use a RULE, but it is not possible to create a SELECT rule on an existing table.
So the only other option seems to be to create a new table and a view with the name of the existing one, which, because of the need for INSERT/UPDATE triggers on the view, is too complicated. In that case I'd rather update all the client applications.
If you know you want to return a value, use a function rather than a stored procedure. Then you'd reference it like:
SELECT id,
your_function_name(parameter) AS value
FROM TABLE
There's an example under "SQL Functions on Composite Types" in the documentation.
Creating a view using the statement above is ideal if your application needs the computed value constantly; otherwise I wouldn't bother.
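For concreteness, a minimal sketch of both pieces, using made-up table, column, and function names in the style of that documentation example:
-- hypothetical table standing in for yours
CREATE TABLE items (
    id    integer PRIMARY KEY,
    price numeric,
    qty   integer
);

-- the "computed column": a SQL function taking the whole row as its parameter
CREATE FUNCTION item_value(items) RETURNS numeric AS $$
    SELECT $1.price * $1.qty;
$$ LANGUAGE SQL;

-- callers reference it just like a column expression
SELECT id, item_value(items) AS value FROM items;

-- or wrap it in a view; giving the view the original table's name
-- (after renaming the table) is what makes it seamless for old queries
CREATE VIEW items_with_value AS
SELECT id, item_value(items) AS value FROM items;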