I'm new to Snowflake and just tried creating a stored procedure that reads deltas from a stream tableA and runs two merge statements into tables B and C, but I keep getting this error message: "Multiple SQL statements in a single API call are not supported; use one API call per statement instead."
Here's my stored procedure. The idea is to call it from a task when SYSTEM$STREAM_HAS_DATA('tableA') returns true.
create or replace procedure write_delta()
returns varchar
language sql as
$$
begin transaction;
-- 1/2. load table B
merge into tableB as dest
using tableA as src
on src.id = dest.id
when not matched
then insert (id, value) values( src.id, src.value);
-- 2/2. load tableC
merge into tableC as dest
using tableA as src
on src.id = dest.id
when not matched
then insert (id, value) values( src.id, src.value);
commit;
$$;
Snowflake SQL stored procedures take one of two forms.
Form 1: A single SQL statement. This is mostly useful when a task needs to execute exactly one statement.
Form 2: A Snowflake Scripting block. The following script shows your code converted to a stored procedure with a scripting block:
create or replace table table1 (id int, value string);
create or replace table tableB like table1;
create or replace table tableC like table1;
create or replace stream tableA on table table1;
insert into table1(id, value) values (1, 'A'), (2, 'B');
create or replace procedure myprocedure()
returns varchar
language sql
as
$$
begin
begin transaction;
-- 1/2. load table B
merge into tableB as dest
using tableA as src
on src.id = dest.id
when not matched
then insert (id, value) values( src.id, src.value);
-- 2/2. load tableC
merge into tableC as dest
using tableA as src
on src.id = dest.id
when not matched
then insert (id, value) values( src.id, src.value);
commit;
end;
$$
;
call myprocedure();
select * from tableB;
select * from tableC;
You can get more information on how to write them here: https://docs.snowflake.com/en/developer-guide/snowflake-scripting/index.html
If you want to execute multiple statements, you'll need to write the stored procedure in Snowflake Scripting, JavaScript, Java, or Python.
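Since the question mentions firing this from a task, here is a minimal sketch of what that task might look like. The warehouse name and schedule are assumptions, not from the original post; note also that unquoted identifiers are stored in uppercase, which is why the stream name is passed as 'TABLEA':

```sql
-- assumption: a warehouse named my_wh exists; adjust the schedule as needed
create or replace task load_deltas
  warehouse = my_wh
  schedule = '5 minute'
  when system$stream_has_data('TABLEA')
as
  call myprocedure();

-- tasks are created suspended; resume to start the schedule
alter task load_deltas resume;
```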
Assuming I have two tables, final_table and table_1, I want to take the newest values from table_1 and, with a trigger on every INSERT ON table_1, insert them into final_table. When I create the trigger function inserttrigger() as shown in the example and create the trigger, I get the newest value repeated once for every row in table_1. How do I write the trigger properly so that I get only the single newest record from table_1?
Doing:
-- Create tables and inserting example values
CREATE TABLE final_table(id INTEGER, value_fin INTEGER);
CREATE TABLE table_1(id INTEGER, value INTEGER);
INSERT INTO table_1 VALUES(1, 200), (2,203), (3, 209);
-- Create Triggerfunction
CREATE OR REPLACE FUNCTION inserttrigger()
RETURNS TRIGGER AS
$func$
BEGIN
INSERT INTO final_table
SELECT latest.id, latest.value
FROM (SELECT NEW.id, NEW.value FROM table_1) AS latest;
RETURN NEW;
END;
$func$ language plpgsql;
-- Create Trigger
CREATE TRIGGER final_table_update BEFORE INSERT ON table_1
FOR EACH ROW EXECUTE PROCEDURE inserttrigger() ;
--Insert example values
INSERT INTO table_1 VALUES(4, 215);
Results in:
SELECT * FROM final_table
id | value_fin
---+----------
 4 |       215
 4 |       215
 4 |       215
But should look like:
id | value_fin
---+----------
 4 |       215
While:
CREATE TRIGGER final_table_update BEFORE INSERT ON table_1
EXECUTE PROCEDURE inserttrigger() ;
Results in:
ERROR: record "new" is not assigned yet
DETAIL: The tuple structure of a not-yet-assigned record is indeterminate.
I would recommend the VALUES() syntax:
CREATE OR REPLACE FUNCTION inserttrigger()
RETURNS TRIGGER AS
$func$
BEGIN
INSERT INTO final_table VALUES(NEW.id, NEW.value);
RETURN NEW;
END;
$func$ language plpgsql;
CREATE TRIGGER final_table_update BEFORE INSERT ON table_1
FOR EACH ROW EXECUTE PROCEDURE inserttrigger();
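With the corrected FOR EACH ROW trigger in place, a quick sanity check (assuming final_table starts out empty) looks like this:

```sql
INSERT INTO table_1 VALUES (5, 220);

-- final_table should now contain exactly one new row: (5, 220)
SELECT * FROM final_table;
```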
Note that you could also get the same behavior with a common table expression and the RETURNING clause, which avoids the need for a trigger:
with t1 as (
    insert into table_1(id, value) values(4, 215)
    returning id, value
)
insert into final_table(id, value_fin)
select id, value from t1;
I am using MariaDB as my database and I am running into some issues creating a trigger. The code is below:
CREATE TRIGGER `trigger` BEFORE INSERT ON `table_1` FOR EACH ROW
BEGIN
IF NOT CONTAINS(`db`.`table_2`.`item`, NEW.item) THEN
INSERT INTO `db`.`table_2` (`item`, `item_2`, `item_3`) VALUES (NEW.item, "foo",
"bar");
END IF;
END
The issue is that table_2 is in the database "db" along with table_1 but when this code is run, it gives me the error below:
SQL Error (1109): Unknown table 'table_2' in field list
I am very confused by this because, from everything I have read, it seems like I should be able to do this. All I am trying to do is have an insert into one table cause an insert into another table when a condition is not met.
I am guessing that you intend:
CREATE TRIGGER `trigger` BEFORE INSERT ON `table_1` FOR EACH ROW
BEGIN
IF NOT EXISTS (SELECT 1 FROM db.table_2 t2 WHERE CONTAINS(t2.item, NEW.item)) THEN
INSERT INTO `db`.`table_2` (`item`, `item_2`, `item_3`)
VALUES (NEW.item, 'foo', 'bar');
END IF;
END;
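One practical note: if you run this in the mysql/MariaDB command-line client, the semicolons inside the BEGIN ... END body will end the CREATE TRIGGER statement prematurely unless you change the statement delimiter first. A sketch:

```sql
DELIMITER $$

CREATE TRIGGER `trigger` BEFORE INSERT ON `table_1` FOR EACH ROW
BEGIN
    -- trigger body as above
END$$

DELIMITER ;
```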
We have a huge Oracle SQL query in our project which is using many views and tables for source data.
Is there any way to find the list of rows fetched from each source table by this big query when I run it?
Basically, what we are trying to do is create the bare minimum number of rows in the source tables so that the outermost big query returns at least a single record.
I have tried to run the smaller queries individually. But it is really time consuming and tedious. So I was wondering if there is a smarter way of doing this.
You can use Fine-Grained Access Control (FGAC, implemented via the DBMS_RLS package). Here is how you might do it:
Step 1: Create a table to hold the list of rowids (i.e., the results you are looking for in this exercise)
CREATE TABLE matt_selected_rowids ( row_id rowid );
Step 2: Create a FGAC handler that will add a predicate whenever a base table is selected.
CREATE OR REPLACE PACKAGE matt_fgac_handler IS
FUNCTION record_rowid ( p_rowid rowid ) RETURN NUMBER DETERMINISTIC;
FUNCTION add_rowid_predicate (d1 varchar2, d2 varchar2 ) RETURN VARCHAR2;
END matt_fgac_handler;
CREATE OR REPLACE PACKAGE BODY matt_fgac_handler IS
FUNCTION record_rowid ( p_rowid rowid ) RETURN NUMBER DETERMINISTIC IS
PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
INSERT INTO matt_selected_rowids( row_id ) values ( p_rowid );
COMMIT;
RETURN -1;
END record_rowid;
FUNCTION add_rowid_predicate (d1 varchar2, d2 varchar2 ) RETURN VARCHAR2 IS
BEGIN
RETURN 'matt_fgac_handler.record_rowid (rowid) = -1';
END add_rowid_predicate;
END matt_fgac_handler;
Step 3: Add a policy to each base table used by your view (you can get the list by using DBA_DEPENDENCIES recursively, or just doing an explain plan and eyeballing it).
E.g.,
CREATE TABLE matt_table ( a number, b varchar2(80) );
CREATE INDEX matt_table_n1 on matt_table(a);
insert into matt_table values (1,'A');
insert into matt_table values (2,'B');
insert into matt_table values (3,'C');
insert into matt_table values (3,'D');
insert into matt_table values (3,'E');
insert into matt_table values (3,'F');
insert into matt_table values (4,'G');
insert into matt_table values (4,'H');
COMMIT;
BEGIN
DBMS_RLS.ADD_POLICY('APPS','MATT_TABLE','record_rowids_policy', NULL, 'matt_fgac_handler.add_rowid_predicate', 'select');
END;
So, at this point, whenever a user selects from my table, Oracle is automatically going to add a condition that calls my record_rowid function for each rowid.
For example:
delete from matt_selected_rowids;
SELECT /*+ INDEX */ * FROM matt_table where a = 2;
-- This gives you the rowids selected...
select r.row_id, o.object_name
from matt_selected_rowids r
left join dba_objects o on o.object_id = dbms_rowid.rowid_object(r.row_id);
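When you're done gathering rowids, you'll likely want to remove the policy so it stops intercepting every query; DBMS_RLS.DROP_POLICY does that:

```sql
BEGIN
  DBMS_RLS.DROP_POLICY('APPS', 'MATT_TABLE', 'record_rowids_policy');
END;
/
```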
I have a list of 100k ids in a file. I want to iterate through these ids:
for each id, check if id is in a table:
If it is, update its updated_date flag
If not, add a new record (id, updated_date)
I have researched and found MERGE clause. The downside is, MERGE requires the ids to be in a table. I am only allowed to create a temporary table if necessary.
Can anyone point me in the right direction? It must be a script that I can run on my database, not in code.
merge into MyTable x
using ('111', '222', all my ids) b
on (x.id = b.id)
when not matched then
insert (id, updated_date) values (b.id, sysdate)
when matched then
update set x.updated_date = sysdate;
EDIT: I am now able to use a temporary table if it's my only option.
Given that you say you can't create a temporary table, one way might be to convert your list of ids into a set of union all'd selects, eg:
123,
234,
...
999
becomes
select 123 id from dual union all
select 234 id from dual union all
...
select 999 id from dual
You could then use that in your merge statement:
merge into MyTable x
using (select 123 id from dual union all
select 234 id from dual union all
...
select 999 id from dual) b
on (x.id = b.id)
when not matched then insert (id, updated_date) values (b.id, sysdate)
when matched then update set x.updated_date = sysdate;
If you've really got 100k ids, it might take a while to parse the statement, however! You might want to split up the queries and have several merge statements.
One other thought - is there not an existing GTT that you could "borrow" to store your data?
If you have access to the file from your Oracle server then you can define an external table, which will allow you to read from the file using SQL.
The syntax is based on SQL*Loader's, and it's probably not something you'd set up for a casual one-off job; it's better suited to a recurring task.
Alternatively you could use SQL*Loader itself to load it into a table, or even ODBC from a Microsoft Access or similar database.
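As a rough sketch of the external-table route (the directory object my_dir and the file name ids.txt are assumptions; a DBA must create the directory object, and it has to point at a directory the database server itself can read):

```sql
CREATE TABLE ext_ids (id NUMBER)
ORGANIZATION EXTERNAL (
    TYPE ORACLE_LOADER
    DEFAULT DIRECTORY my_dir
    ACCESS PARAMETERS (RECORDS DELIMITED BY NEWLINE)
    LOCATION ('ids.txt')
);

-- then merge straight from the file-backed table
MERGE INTO MyTable x
USING (SELECT id FROM ext_ids) b
ON (x.id = b.id)
WHEN NOT MATCHED THEN INSERT (id, updated_date) VALUES (b.id, sysdate)
WHEN MATCHED THEN UPDATE SET x.updated_date = sysdate;
```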
Another option is to run 100,000 inserts. You can make this perform much better by wrapping each batch of 100 or so inserts in an anonymous block, which saves round-trips to the database. So instead of:
insert into tmp values(1);
insert into tmp values(12);
insert into tmp values(145);
insert into tmp values(234);
insert into tmp values(245);
insert into tmp values(345);
....
insert into tmp values(112425);
use ...
begin
insert into tmp values(1);
insert into tmp values(12);
insert into tmp values(145);
insert into tmp values(234);
...
insert into tmp values(245);
end;
/
begin
insert into tmp values(345);
...
insert into tmp values(112425);
end;
/
If it was a regular task I'd definitely go for an external table though.
I have a trigger that I want to insert the same random value into two tables. How do I do this?
CREATE TRIGGER insertTrigger AFTER INSERT ON TableAB
BEGIN
INSERT INTO TableA(id, num) VALUES(RANDOM(), 1);
INSERT INTO TableB(id, num) VALUES(??, 1);
END;
I am not really using Random, but my own custom sqlite function which essentially does the same thing, but I need to remember that value to insert into TableB. How do I do that?
SQLite has no such thing as variables, but you can read the value back from the record you just inserted into the first table:
CREATE TRIGGER insertTrigger
AFTER INSERT ON TableAB
BEGIN
INSERT INTO TableA(id, num) VALUES(RANDOM(), 1);
INSERT INTO TableB(id, num) SELECT id, 1
FROM TableA
WHERE rowid = last_insert_rowid();
END;
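As a quick sanity check (the column name x on TableAB is an assumption; substitute your real schema):

```sql
INSERT INTO TableAB (x) VALUES (1);  -- fires the trigger

-- both tables should now share the same generated id
SELECT (SELECT id FROM TableA ORDER BY rowid DESC LIMIT 1) =
       (SELECT id FROM TableB ORDER BY rowid DESC LIMIT 1);
```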