Delete tables in BigQuery Cloud Console - google-bigquery

I am trying to delete data from BigQuery tables and am facing a challenge: at the moment only one date-partitioned table drops/deletes at a time. Based on some research and the Google docs, I understand I need to use DML operations.
Below are the commands that I tried for deletion:
1. delete from bigquery-project.dataset1.table1_*
2. drop table bigquery-project.dataset1.table1_*;
3. delete from bigquery-project.dataset1.table1_* where _table_suffix='20211117';
The third query works for me and it deletes only for that particular date.
For queries 1 and 2, I get an exception saying "Illegal operation (write) on meta-table bigquery-project.dataset1.table1_".
How would I go about deleting over 300 date-partitioned tables in one go?
Thanks in advance.

You can go the route of creating a stored procedure to generate your statements in this scenario:
CREATE OR REPLACE PROCEDURE so_test.so_delete_table()
BEGIN
  DECLARE fqtn STRING;
  FOR record IN
  (
    SELECT CONCAT(table_catalog, '.', table_schema, '.', table_name) AS tn
    FROM so_test.INFORMATION_SCHEMA.TABLES
    WHERE table_name LIKE 'test_delete_%'
  )
  DO
    SET fqtn = record.tn;
    -- use TRUNCATE to keep the tables but remove their rows
    -- EXECUTE IMMEDIATE FORMAT('TRUNCATE TABLE `%s`', fqtn);
    -- use DROP to remove the tables entirely (backticks protect hyphenated project names)
    EXECUTE IMMEDIATE FORMAT('DROP TABLE `%s`', fqtn);
  END FOR;
END;

CALL so_test.so_delete_table();
In the above I query for the tables I would like to act on, then pass each fully qualified name to the appropriate statement. In your scenario I could not tell whether you wanted to remove records or remove the entire tables, so I included logic for both, depending on the scenario.
This could also fairly simply be modified to take in a table prefix and pass that to the FOR loop's WHERE clause, as sketched below.
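For example, a rough sketch of that parameterized variant might look like this (the procedure name and the STARTS_WITH filter are illustrative choices, not something from your setup):

CREATE OR REPLACE PROCEDURE so_test.so_delete_table_by_prefix(table_prefix STRING)
BEGIN
  FOR record IN
  (
    SELECT CONCAT(table_catalog, '.', table_schema, '.', table_name) AS tn
    FROM so_test.INFORMATION_SCHEMA.TABLES
    WHERE STARTS_WITH(table_name, table_prefix)
  )
  DO
    EXECUTE IMMEDIATE FORMAT('DROP TABLE `%s`', record.tn);
  END FOR;
END;

CALL so_test.so_delete_table_by_prefix('table1_');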
Alternatively, you could just run the SELECT statement from the FOR loop on its own, copy the results into a sheet, construct the appropriate statements there, and copy them back into the console to execute.
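A query along these lines can even generate the statements for you (a sketch, assuming the same dataset and prefix as above):

SELECT CONCAT('DROP TABLE `', table_catalog, '.', table_schema, '.', table_name, '`;') AS stmt
FROM so_test.INFORMATION_SCHEMA.TABLES
WHERE table_name LIKE 'test_delete_%';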

Related

Oracle is dropping my table before select statement finishes

At work we have a large Oracle SQL query designed to output two select statements based on analysis and table combining in prior scripts. At the end of these select statements we truncate the temp tables that were created. My issue is that the tables are getting truncated before the select statement has time to run, resulting in zero output for both queries and empty tables that now require the whole process to be run again to populate them correctly. This is something I'm trying to help automate, but I'm stuck on how to get Oracle to wait for the select statement to finish processing before triggering the truncate. Very simply it looks like:
SELECT * FROM temp;
TRUNCATE TABLE temp;
COMMIT;
TRUNCATE is a DDL statement in Oracle, which means it can modify the structure of tables and databases. Instead of using TRUNCATE, why don't you try changing it to a simple DELETE, which is a plain DML statement in Oracle that just changes the data?
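A minimal sketch of that substitution, using the same temp table from the question:

SELECT * FROM temp;
DELETE FROM temp;
COMMIT;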
What you describe can only be the case if the two select statements (one followed by a truncate) are running in separate sessions. Run them in the same session and the problem would be solved, although you may lose out on the performance benefit of running them in parallel.

Trouble understanding what to do in PL/SQL

We're learning cursors in class, and I believe I understand the first problem, which asks:
Set up an inventory table and a transaction table that has sales, returns, and purchases (the transaction table should have a code with S for sales, R for returns and P for purchases). Create scripts to create and insert data into these tables.
I believe I just set up the tables and write some PL/SQL so that I'll be able to insert data into each row. The following question is worded a bit more ambiguously, saying:
Using the tables you created in the fourth problem, process the transactions and determine the impact on inventory. Display information that gives the original inventory and the inventory after the sales, returns and purchases have been processed. You need to use cursors.
How would I break this problem down? Thank you for any help.
EDIT: I believe I'm also supposed to be using PROCEDURES with the cursors, but I'm still just as lost.
I assume you have a tool to run SQL commands in and you are familiar with how to use it. If not, you will need to get such a tool first: something like the SQL*Plus client if you want to do it all command-line style, or Oracle's SQL Developer if you want a friendly GUI.
Create the tables. Google "oracle create table examples". These will be DDL statements and do not need a commit. You should create at least two constraints: one for a primary key and one for a foreign key that references the primary key. Google "oracle constraint examples".
Insert data into the tables. Google "oracle insert examples". The constraints should do their job and make sure you don't insert orphaned data. A COMMIT is necessary to make the inserted data permanent.
Create the procedure. Google "oracle stored procedure examples". This will process your transactions. You will use the S, P, or R codes to determine adding to or subtracting from inventory. You will be using your CURSOR here and most likely a FOR LOOP (see the skeleton sketched after these steps). You might use the DBMS_OUTPUT.PUT_LINE statement to output the transaction information. You will find that thinking through the logic flow of the procedure will influence how your tables are designed in step 1. As a result, you may end up dropping and recreating your tables and constraints several times.
Run your procedure. You may want to avoid a commit in the procedure until you have tested it and are getting the results you want back. This way you can execute a ROLLBACK statement to put the data back into the state it was in before the procedure ran (edit, run, rollback; edit, run, rollback; etc.).
Output results. If you didn't get all your output requirements done in the procedure, you will need to get that from your tables using the SQL SELECT statement after the procedure runs.
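As a rough illustration of the "create the procedure" step, a skeleton might look like the following. The table and column names (inventory, transactions, qty_on_hand, etc.) are assumptions for illustration, not part of your assignment:

CREATE OR REPLACE PROCEDURE process_transactions AS
BEGIN
  -- cursor FOR loop over every transaction row
  FOR t IN (SELECT item_id, trans_code, quantity FROM transactions) LOOP
    IF t.trans_code = 'S' THEN
      -- sale: stock goes down
      UPDATE inventory SET qty_on_hand = qty_on_hand - t.quantity WHERE item_id = t.item_id;
    ELSIF t.trans_code IN ('R', 'P') THEN
      -- return or purchase: stock goes up
      UPDATE inventory SET qty_on_hand = qty_on_hand + t.quantity WHERE item_id = t.item_id;
    END IF;
    DBMS_OUTPUT.PUT_LINE('Processed ' || t.trans_code || ' for item ' || t.item_id);
  END LOOP;
  -- no COMMIT here while testing, so you can ROLLBACK as described in the next step
END;
/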
Good luck.
Generally folks do not like to answer these kinds of questions because they are obviously generated by a need to complete a school assignment. I won't complete your assignment, but will answer your question about CURSORS.
Cursors are used in PL/SQL procedures, functions and packages to store a pointer to the result of a query. For example:
-- You cannot use a REF CURSOR directly; you need to create a variable that is of a REF CURSOR type
-- I use a TYPE to declare a REF CURSOR
TYPE MyDataCursor IS REF CURSOR;
-- And then declare a variable of that type
pDataOut MyDataCursor;
-- You can then use the cursor in code
OPEN pDataOut FOR SELECT * FROM sometable;
This will retrieve data from a table; you can now access that data in several ways. The most common is to fetch each row and store it in a structure that matches the table's row structure. For example:
FETCH pDataOut INTO MyDataRow;  -- MyDataRow is a variable of the record type declared below
The structure would mimic all the data columns in one row of the table, for example:
-- This particular structure is a RECORD type
TYPE MyDataRowStructure IS RECORD (
  ID      NUMBER,
  Name    VARCHAR2(50),
  Address VARCHAR2(50),
  City    VARCHAR2(50),
  State   VARCHAR2(2),
  ZIP     VARCHAR2(5)
);
-- ...and a variable of that type to fetch into
MyDataRow MyDataRowStructure;
Lastly you would want to loop through all the rows of data retrieved from the table and do something with them. For example:
-- You start with the first fetch
FETCH pDataOut INTO MyDataRow;
-- Now loop through all the rows
WHILE pDataOut%FOUND LOOP
  -- Do some stuff here with the current row, then fetch the next row ...
  FETCH pDataOut INTO MyDataRow;
END LOOP;
-- Finally, release the cursor
CLOSE pDataOut;
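Putting those pieces together, a minimal self-contained sketch might look like this (sometable and its id and name columns are assumptions for illustration):

DECLARE
  TYPE MyDataCursor IS REF CURSOR;
  TYPE MyDataRowStructure IS RECORD (
    ID   NUMBER,
    Name VARCHAR2(50)
  );
  pDataOut  MyDataCursor;
  MyDataRow MyDataRowStructure;
BEGIN
  OPEN pDataOut FOR SELECT id, name FROM sometable;
  FETCH pDataOut INTO MyDataRow;
  WHILE pDataOut%FOUND LOOP
    DBMS_OUTPUT.PUT_LINE(MyDataRow.ID || ': ' || MyDataRow.Name);
    FETCH pDataOut INTO MyDataRow;
  END LOOP;
  CLOSE pDataOut;
END;
/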
This should get you started.

Insert or Update user-defined table type in a stored procedure

I'm currently working on a project where I insert or update a lot of data frequently in a remote database. The data volume is around 50 sets of data with 500-800 rows each, all of which go into the same table.
Currently I have a stored procedure that I call for every row to insert or update (simplified for easier reading):
ALTER PROCEDURE stat_memberstat_upsert
    ...
AS
BEGIN
    UPDATE Memberstats ...
    IF (@@ROWCOUNT = 0)
    BEGIN
        INSERT INTO Memberstats ...
    END
END
This works, but as you can see it amounts to a lot of calls to the same stored procedure (worst case around 100,000 calls). I'm looking into user-defined table types, which sound like a good solution because they decrease the number of calls to the database server with a more bulk-like structure. The problem is that when I look at solutions, tutorials and documentation, I find that no one mentions a way to do an insert/update routine with a table type; it's either insert or update.
Is there a way, when working with table types, to do an insert/update call?
Alternatively I have thought about two workaround solutions:
1: Using a cursor
I could use a cursor to iterate through the table type value and call the stat_memberstat_upsert procedure above for each row (sketched below). This will not prevent the many calls to the procedure, but since the calls are made from a local stored procedure, the speed might increase.
How to do ForEach on user defined table type in SQL Server stored procedure? (answer "Why not use a cursor ???")
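A rough sketch of that first workaround, assuming a table type named MemberstatsType with MemberId and StatValue columns (these names are assumptions about your schema):

CREATE PROCEDURE stat_memberstat_upsert_bulk
    @rows dbo.MemberstatsType READONLY
AS
BEGIN
    DECLARE @MemberId INT, @StatValue INT;
    DECLARE row_cursor CURSOR LOCAL FAST_FORWARD FOR
        SELECT MemberId, StatValue FROM @rows;
    OPEN row_cursor;
    FETCH NEXT FROM row_cursor INTO @MemberId, @StatValue;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        EXEC stat_memberstat_upsert @MemberId, @StatValue;  -- the existing per-row upsert
        FETCH NEXT FROM row_cursor INTO @MemberId, @StatValue;
    END
    CLOSE row_cursor;
    DEALLOCATE row_cursor;
END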
2: Pre-validate the data
The second solution is to retrieve the already-inserted rows' primary keys, validate them against the incoming data and sort the rows into two sets, one for inserts and the other for updates, then execute both against the database. This means that I need to encapsulate this in a transaction so the table will not change during the time it takes to validate and execute the insert and update.
Would any of these be a good solution?
Both of those solutions might slow things down further.
You can simply have the procedure insert the rows into a staging table:
ALTER PROCEDURE stat_memberstat_upsert
    ...
AS
BEGIN
    INSERT INTO Memberstats_temp ...
END
and have a batch process which runs at a low-traffic time to update/insert the Memberstats table from the Memberstats_temp table, truncating the temp table afterwards. This wouldn't be a solution if you need real-time updates to the table.
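A rough sketch of what that batch job could look like, assuming Memberstats is keyed on a MemberId column and carries a StatValue column (both names are assumptions about your schema):

-- update rows that already exist in the main table
UPDATE m
SET    m.StatValue = t.StatValue
FROM   Memberstats m
JOIN   Memberstats_temp t ON t.MemberId = m.MemberId;

-- insert rows that do not exist yet
INSERT INTO Memberstats (MemberId, StatValue)
SELECT t.MemberId, t.StatValue
FROM   Memberstats_temp t
WHERE  NOT EXISTS (SELECT 1 FROM Memberstats m WHERE m.MemberId = t.MemberId);

-- clear the staging table for the next batch
TRUNCATE TABLE Memberstats_temp;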

Can dynamic SQL be called from a trigger in Oracle?

I have a dozen tables for which I want to keep a history of the changes. For every one of them I created a second table with the suffix _HISTO and added the fields modtime, action and user.
At the moment, before I insert, modify or delete a record in these tables, I call (from my Delphi app) an Oracle procedure that copies the current values to the _HISTO table and then performs the operation.
My procedure generates dynamic SQL via DBA_TAB_COLUMNS and then executes the generated statement, something like: insert into tablename_histo ( fields ) select fields, sysdate, 'action', userid from table_name
I was told that I cannot call this procedure from a trigger because it has to select from the table the trigger is defined on. Is this true? Is it possible to implement what I need?
Assuming you want to maintain history using triggers (rather than any of the other methods of tracking history data in Oracle: Workspace Manager, Total Recall, Streams, Fine-Grained Auditing, etc.), you can use dynamic SQL in the trigger. But the dynamic SQL is subject to the same rules that static SQL is subject to. And even static SQL in a row-level trigger cannot in general query the table that the trigger is defined on without generating a mutating table exception.
Rather than calling dynamic SQL from your trigger, however, you can potentially write some dynamic SQL that generates the trigger in the first place using the same data dictionary tables. The triggers themselves would statically refer to :new.column_name and :old.column_name. Of course, you would have to either edit the trigger or re-run the procedure that dynamically creates the trigger when a new column gets added. Since you, presumably, need to add the column to both the main table and the history table, however, this generally isn't too big of a deal.
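As a rough sketch of that generation approach, something along these lines could build a history trigger for one table from the data dictionary. The MY_TABLE / MY_TABLE_HISTO names and the modtime/action/moduser columns are assumptions mirroring the question's setup, and this version only copies :old values on update and delete:

DECLARE
  v_cols VARCHAR2(4000);
  v_vals VARCHAR2(4000);
  v_sql  VARCHAR2(32767);
BEGIN
  -- build the column list and the matching :old value list
  FOR c IN (SELECT column_name
              FROM user_tab_columns
             WHERE table_name = 'MY_TABLE'
             ORDER BY column_id) LOOP
    v_cols := v_cols || c.column_name || ', ';
    v_vals := v_vals || ':old.' || c.column_name || ', ';
  END LOOP;

  v_sql := 'CREATE OR REPLACE TRIGGER my_table_histo_trg'
        || ' BEFORE UPDATE OR DELETE ON my_table'
        || ' FOR EACH ROW'
        || ' BEGIN'
        || '   INSERT INTO my_table_histo (' || v_cols || 'modtime, action, moduser)'
        || '   VALUES (' || v_vals || 'SYSDATE,'
        || '     CASE WHEN UPDATING THEN ''UPDATE'' ELSE ''DELETE'' END, USER);'
        || ' END;';
  EXECUTE IMMEDIATE v_sql;
END;
/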
Oracle does not allow a trigger to execute a SELECT against the table on which the trigger is defined. If you try it you'll get the dreaded "mutating table" error (ORA-04091), and while there are ways to get around that error they add a lot of complexity for little value. If you really want to build a dynamic query every time your table is updated (IMO this is a bad idea from the standpoint of performance - I find that metadata queries are often slow, but YMMV) it should end up looking something like
strAction := CASE
               WHEN INSERTING THEN 'INSERT'
               WHEN UPDATING  THEN 'UPDATE'
               WHEN DELETING  THEN 'DELETE'
             END;

INSERT INTO TABLENAME_HISTO
  (ACTIVITY_DATE, ACTION, MTC_USER,
   old_field1, new_field1, old_field2, new_field2)
VALUES
  (SYSDATE, strAction, USERID,
   :OLD.field1, :NEW.field1, :OLD.field2, :NEW.field2);
Share and enjoy.

PL/SQL embedded insert into table that may not exist

I much prefer using this 'embedded' style of insert in a PL/SQL block (as opposed to the execute-immediate style of dynamic SQL, where you have to escape quotes, etc.).
-- a contrived example
PROCEDURE CreateReport( customer IN VARCHAR2, reportdate IN DATE )
IS
BEGIN
  -- drop table, create table with explicit column list
  CreateReportTableForCustomer;
  INSERT INTO TEMP_TABLE
  VALUES ( customer, reportdate );
END;
/
The problem here is that Oracle checks whether temp_table exists and has the correct number of columns, and throws a compile error if it doesn't exist.
So I was wondering if there's any way around that. Essentially I want to use a placeholder for the table name to trick Oracle into not checking whether the table exists.
EDIT:
I should have mentioned that a user is able to execute any 'report' (as above): a mechanism that will execute an arbitrary query but always write to temp_table (in the user's schema). Thus each time the report proc is run, it drops temp_table and recreates it with, most probably, a different column list.
You could use a dynamic SQL statement to insert into the maybe-existent temp_table, and then catch and handle the exception that occurs when the table doesn't exist.
Example:
execute immediate 'INSERT INTO '||TEMP_TABLE_NAME||' VALUES ( :customer, :reportdate )' using customer, reportdate;
Note that having the table name vary in a dynamic SQL statement is not very good, so if you ensure the table names stay the same, that would be best.
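A minimal sketch of the catch-and-handle part, as a nested block inside the procedure (temp_table_name is assumed to be a variable or parameter holding the table name, as in the example above):

DECLARE
  e_table_missing EXCEPTION;
  PRAGMA EXCEPTION_INIT(e_table_missing, -942);  -- ORA-00942: table or view does not exist
BEGIN
  EXECUTE IMMEDIATE
    'INSERT INTO ' || temp_table_name || ' VALUES ( :customer, :reportdate )'
    USING customer, reportdate;
EXCEPTION
  WHEN e_table_missing THEN
    NULL;  -- the table is not there yet: create it here, or skip, as the report requires
END;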
Maybe you should be using a global temporary table (GTT). These are permanent table structures that hold temporary data for an Oracle session. Many different sessions can insert data into the same GTT, and each will only be able to see their own data. The data is automatically deleted either on COMMIT or when the session ends, according to the GTT's definition.
You create the GTT (once only) like this:
create global temporary table my_gtt
  (customer number, report_date date)
  on commit delete rows;   -- or: on commit preserve rows, as applicable
Then your programs can just use it like any other table - the only difference being it always begins empty for your session.
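For instance, usage inside a report procedure could then be as simple as the following sketch, reusing my_gtt as defined above (some_customer_id and some_report_date are just placeholders for whatever values the report supplies):

-- inside the report procedure, instead of a dropped/recreated table:
INSERT INTO my_gtt (customer, report_date) VALUES ( some_customer_id, some_report_date );
-- queries later in the same session see only this session's rows:
SELECT * FROM my_gtt;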
Using GTTs is much preferable to dropping/recreating tables on the fly. If your application needs a different structure for each report, I strongly suggest you work out all the different structures that each report needs, and create separate GTTs as needed by each, instead of creating ordinary tables at runtime.
That said, if this is just not feasible (and I've seen good examples when it's not, e.g. in a system that supports a wide range of ad-hoc requests from users), you'll have to go with the EXECUTE IMMEDIATE approach.