I am trying to run SQL on Oracle Live SQL. I give only one input, but the output is printed multiple times, and I don't know why this is happening.
You have executed it twice. The first time, the table was created and a row was inserted.
The second time, table creation failed but a row was inserted again.
Your SELECT query is correctly showing both rows.
Drop the table and execute the query again.
The SQL state is maintained between runs.
The first time you execute the script, it creates the table, inserts a row into it, and then your SELECT shows the single row.
The second time you execute the script, it tries to create the table but finds that it already exists, raising ORA-00955: name is already used by an existing object. It then inserts the row anyway (so you now have two rows in the table), and your SELECT shows both rows.
If you were to run it again, it would fail to create the table again, insert another row, and show you three rows... and so on.
Instead of re-running the entire script (or adding DROP TABLE statements, as sketched below), what you can do is delete the CREATE TABLE and INSERT statements after they have been run the first time and just edit the SELECT statement to give the output you want.
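For illustration, a minimal sketch of the DROP-first pattern (the table and column names here are assumptions, not your actual script):
DROP TABLE demo_t;  -- clears state from the previous run; the ORA-00942 error on the very first run, before the table exists, can be ignored
CREATE TABLE demo_t (val NUMBER);
INSERT INTO demo_t (val) VALUES (1);
SELECT * FROM demo_t;  -- now always shows exactly one row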
If you want to see the entire history for the session then click on the "Actions" button and then "View Session".
If you want to drop all the changes and re-run the script from the beginning then use "Actions" then "Reset Session".
I am trying to delete data from BigQuery tables and facing a challenge: at the moment only one date-partitioned table drops/deletes at a time. Based on some research and the Google docs, I understand I need to use DML operations.
Below are the commands I used for deletion; the first two don't work:
1. delete from bigquery-project.dataset1.table1_*
2. drop table bigquery-project.dataset1.table1_*;
3. delete from bigquery-project.dataset1.table1_* where _table_suffix='20211117';
The third query works for me, and it deletes only that particular date.
For queries 1 and 2, I get an exception saying "Illegal operation (write) on meta-table bigquery-project.dataset1.table1_".
How would I go about deleting over 300 date-partitioned tables in one go?
Thanks in advance.
You can go the route of creating a stored procedure to generate your statements in this scenario:
CREATE OR REPLACE PROCEDURE so_test.so_delete_table()
BEGIN
  DECLARE fqtn STRING;
  -- collect the fully qualified name of every table matching the prefix
  FOR record IN (
    SELECT CONCAT(table_catalog, '.', table_schema, '.', table_name) AS tn
    FROM so_test.INFORMATION_SCHEMA.TABLES
    WHERE table_name LIKE 'test_delete_%'
  )
  DO
    SET fqtn = record.tn;
    -- TRUNCATE removes only the records; DROP removes the table entirely.
    -- The backticks guard against hyphens in the project name.
    -- EXECUTE IMMEDIATE FORMAT('TRUNCATE TABLE `%s`', fqtn);
    EXECUTE IMMEDIATE FORMAT('DROP TABLE `%s`', fqtn);
  END FOR;
END;
CALL so_test.so_delete_table();
In the above I query for the tables I would like to remove records from, then pass each name to the appropriate statement. I could not tell from your scenario whether you wanted to remove the records or the entire table, so I included logic for both.
This could also be modified fairly simply to take a table prefix as a parameter and pass it to the WHERE clause of the FOR loop, as sketched below.
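A hedged sketch of that parameterized variant (the procedure name is hypothetical, and the so_test dataset is the same assumption as above):
CREATE OR REPLACE PROCEDURE so_test.so_delete_tables_by_prefix(prefix STRING)
BEGIN
  FOR record IN (
    SELECT CONCAT(table_catalog, '.', table_schema, '.', table_name) AS tn
    FROM so_test.INFORMATION_SCHEMA.TABLES
    WHERE STARTS_WITH(table_name, prefix)  -- prefix supplied by the caller
  )
  DO
    EXECUTE IMMEDIATE FORMAT('DROP TABLE `%s`', record.tn);
  END FOR;
END;
CALL so_test.so_delete_tables_by_prefix('table1_');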
Alternatively, you could just run the SELECT statement from the FOR loop on its own, copy the results into a sheet, construct the appropriate DML statements there, then copy them back into the console and execute them.
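You can even have the SELECT do most of the construction for you. Something along these lines, again assuming the so_test dataset (adjust the prefix to match your shards):
SELECT CONCAT('DROP TABLE `', table_catalog, '.', table_schema, '.', table_name, '`;') AS ddl
FROM so_test.INFORMATION_SCHEMA.TABLES
WHERE table_name LIKE 'table1_%';
-- copy the generated statements from the result grid back into the console and run them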
Using Pentaho and Spoon, I have a combination of transforms that:
Creates a new table name variable (TABLENAME)
In a separate transformation I'm attempting to use this table name:
the first step creates a list of column names (Generate Rows step)
the second step uses these rows and the table name from the previous transformation (Table Output step)
When I click the SQL button on the Table Output step, it returns the correct CREATE TABLE statement (although it doesn't replace the placeholder, it does return a CREATE TABLE...).
When I attempt to run these two transformations in-line, it complains that the table with my generated name does not exist. It appears that it must be trying to INSERT into this generated table, even though the SQL button returns a CREATE.
How do I make Pentaho create a table with a generated name stored in a variable?
The Table Output step doesn't create the table. The SQL button is like a help button: it only lets you copy the DDL to create the table and paste it wherever you want to run it; that step doesn't run the CREATE TABLE itself.
You'll need a separate transformation or job to create the table before inserting into it. Jobs have an action to run SQL statements, and there is a similar step for transformations; see the sketch below.
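For example, a job's SQL entry could run something like this hedged sketch; ${TABLENAME} comes from your first transformation, the column list is an assumption, and variable substitution must be enabled in the entry so the placeholder is replaced:
-- run in a Job "SQL" entry before the transformation that does the inserts,
-- with "Use variable substitution?" checked
CREATE TABLE ${TABLENAME} (
  id INTEGER,
  name VARCHAR(255)  -- placeholder columns; use the ones your Generate Rows step produces
);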
I would like to copy an entire table that has about 400,000,000 rows in it, but using a WHERE condition, to another table with the same structure in another database.
I left this command executing overnight:
INSERT INTO db2.tb1 SELECT * from db1.tb1 where column=xxx;
But nothing has been copied since then. What am I doing wrong?
EDIT: When I isolate the execution, this is what happens: [screenshot not included]
EDIT 2: Setting autocommit to 1: [screenshot not included]
I'm having a little trouble understanding functions and triggers in SQL. I didn't post the code of the procedure chkInsertArticle, but let's say it returns NEW if it managed to make the change and NULL if it didn't.
So my question is about the trigger: if I put AFTER INSERT, does that mean the INSERT will complete regardless of the return value? And what happens with the rest of the rows?
Next question: if I put BEFORE INSERT, in what order does the code run?
Thanks!
CREATE TRIGGER ArticleIns
AFTER INSERT ON ListOfArticles
FOR EACH ROW
EXECUTE PROCEDURE chkInsertArticle();
First all BEFORE triggers run in alphabetical order, then the operation is performed, then all AFTER triggers run in alphabetical order.
Each operation sees the result of the previous one as input, and if any trigger returns NULL, processing for that row stops. So if a BEFORE trigger returns NULL, the DML operation won't take place.
This happens independently for each row affected by the triggering DML statement.
So if the trigger runs BEFORE INSERT, the code runs before the data is inserted into the row and before constraints are checked. For example, you might want to add a timestamp before the data is committed to the database.
If it runs AFTER, the data is already present in the table and all constraints have been checked. This is usually where you want to trigger another process based on the row data: update another table, send an e-mail, etc.
In your example, the data will already be in the table before your procedure runs, so any changes your procedure makes to NEW will have no effect; if it needs to modify the row data, that has to happen in a BEFORE trigger.
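To make both points concrete, here is a minimal sketch of the timestamp case (PostgreSQL syntax, since your example uses EXECUTE PROCEDURE; the created_at column and the trigger name are assumptions):
CREATE OR REPLACE FUNCTION set_created_at() RETURNS trigger AS $$
BEGIN
  NEW.created_at := now();  -- modify the row before it is written
  RETURN NEW;               -- returning NULL here instead would silently skip this row's INSERT
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER ArticleTimestamp
BEFORE INSERT ON ListOfArticles
FOR EACH ROW
EXECUTE PROCEDURE set_created_at();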
Is there any type of product where I can write a SQL statement to select from one table and then insert into another database (the other database is out in the cloud)? It also needs to check whether a record already exists and update the row if anything has changed. Then it needs to run every 10-30 minutes to pick up changed or newly added records.
The source database and the destination database have different schemas (if that matters?). I've been looking, but it seems the only products out there are ones that just copy one table into a table with the same schema.
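For what it's worth, the exists-check-then-update part of this is what a standard MERGE (upsert) statement expresses. A hedged sketch with assumed table and column names, since the exact syntax varies by engine and the two databases would need to be reachable from one connection:
MERGE INTO cloud_db.customers AS dst
USING local_db.customers AS src
  ON dst.id = src.id
WHEN MATCHED THEN
  UPDATE SET name = src.name, email = src.email  -- row exists: refresh changed columns
WHEN NOT MATCHED THEN
  INSERT (id, name, email) VALUES (src.id, src.name, src.email);  -- new row: copy it over
-- scheduling this every 10-30 minutes would happen outside SQL (cron, an ETL tool, etc.)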