How to add error handling using SQL in Snowflake

I've written the stored procedure below using SQL in Snowflake. It truncates a table and loads new data into it by first copying the data from a source, applying a bit of processing, and then loading the result into the truncated target table.
I've added nested BEGIN and END blocks, together with IF/ELSE statements, to try to add error handling, but none of them worked. I want to first test whether the COPY succeeded; if so, the code should run the second INSERT statement, which brings the data into staging where we refine it. There I want a second check that verifies the rows were added successfully. Finally, after all the checks pass, we copy into the target table.
CREATE OR REPLACE PROCEDURE DEV_NMC_ITEM_AND_PSYCHOMETRIC_DM.STAGE2B."SP_N_1Test"("STAGE_S3" VARCHAR(16777216), "STAGE_OUTPUT" VARCHAR(16777216))
RETURNS VARCHAR(16777216)
LANGUAGE SQL
EXECUTE AS CALLER
AS '
DECLARE
Stage_nm_s3 STRING;
begin
truncate table "STAGE2A"."T001_IRF_STUDENT_FORM_S3";
execute immediate ''COPY INTO "STAGE2A"."T001_IRF_STUDENT_FORM_S3"
FROM ( select
a bunch of columns
from @stage2a.''||:STAGE_S3||'')
pattern= ''''.*_IRF_.*\\\\.csv''''
file_format = (type=csv, skip_header=1 )'';
begin
Insert into "STAGE2B"."T011_IRF_STUDENT_FORM_V001" (
a bunch of columns
SELECT
a bunch of columns
from "STAGE2A"."V001_IRF_STUDENT_FORM_T001";
begin
execute immediate ''copy into @stage2a.''||:STAGE_OUTPUT||''/T001_IRF_STUDENT_FORM_S3
from (SELECT
a bunch of columns
from "STAGE2B"."T011_IRF_STUDENT_FORM_V001")
file_format = ( format_name = F_CSV type=csv compression = none)
header = True
SINGLE = FALSE
OVERWRITE = TRUE
max_file_size=524288000 '';
return ''Load process completed for IRF_STUDENT_FORM_S3'';
end;
end;
end;
';

I'm afraid you will need to wrap your SQL statements in a JavaScript-syntax stored procedure to use a try/catch block.
Here's some more explanation on that topic: Error handling for stored procedures
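For illustration, here is a minimal sketch of how your three steps could look wrapped in a JavaScript procedure with try/catch. The stage references, table names, and file format name are taken from your procedure; the column lists are elided in the question, so SELECT * stands in for them and the details would need adjusting to your real logic:
CREATE OR REPLACE PROCEDURE STAGE2B."SP_N_1Test_JS"("STAGE_S3" VARCHAR, "STAGE_OUTPUT" VARCHAR)
RETURNS VARCHAR
LANGUAGE JAVASCRIPT
EXECUTE AS CALLER
AS
$$
try {
    // Step 1: reload the raw stage table
    snowflake.execute({sqlText: `TRUNCATE TABLE "STAGE2A"."T001_IRF_STUDENT_FORM_S3"`});
    snowflake.execute({sqlText: `COPY INTO "STAGE2A"."T001_IRF_STUDENT_FORM_S3"
                                 FROM @stage2a.` + STAGE_S3 + `
                                 pattern = '.*_IRF_.*\\\\.csv'
                                 file_format = (type = csv, skip_header = 1)`});

    // Step 2: refine into the staging table (replace * with the real column lists)
    snowflake.execute({sqlText: `INSERT INTO "STAGE2B"."T011_IRF_STUDENT_FORM_V001"
                                 SELECT * FROM "STAGE2A"."V001_IRF_STUDENT_FORM_T001"`});

    // Step 3: unload the refined data to the output stage
    snowflake.execute({sqlText: `COPY INTO @stage2a.` + STAGE_OUTPUT + `/T001_IRF_STUDENT_FORM_S3
                                 FROM (SELECT * FROM "STAGE2B"."T011_IRF_STUDENT_FORM_V001")
                                 file_format = (format_name = F_CSV)
                                 header = TRUE overwrite = TRUE`});

    return 'Load process completed for IRF_STUDENT_FORM_S3';
} catch (err) {
    // any step that fails jumps here; the error text is returned to the caller
    return 'Load failed: ' + err.message;
}
$$;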

Related

How can I print results from a SQL procedure?

I'm writing a procedure to count rows in every table in my database. It so far looks like this:
create or replace procedure count_database_rows()
dynamic result sets 1
P1: begin atomic
DECLARE stmt CHAR(40);--
FOR v1 AS
c1 CURSOR FOR
SELECT TABLE_SCHEMA, TABLE_NAME FROM sysibm.tables
DO
SET stmt = 'SELECT COUNT(*) FROM '||TABLE_SCHEMA||'.'||TABLE_NAME;--
PREPARE s FROM stmt;--
EXECUTE s;--
END FOR;--
end P1
~
however, when I run it:
db2 -ntd~ -f script.sql > dump.csv
all I'm getting is:
DB20000I The SQL command completed successfully.
how can I print all results instead?
Just for demonstration. I assume that it's some educational task, and that it's Db2 for LUW.
For non-DPF Db2 for LUW systems only
--#SET TERMINATOR #
CREATE OR REPLACE FUNCTION COUNT_DATABASE_ROWS()
RETURNS TABLE (P_TABSCHEMA VARCHAR(128), P_TABNAME VARCHAR(128), P_ROWS BIGINT)
BEGIN
DECLARE L_STMT VARCHAR(256);
DECLARE L_ROWS BIGINT;
FOR V1 AS
SELECT TABSCHEMA, TABNAME
FROM SYSCAT.TABLES
WHERE TYPE IN ('T', 'S')
FETCH FIRST 10 ROWS ONLY
DO
SET L_STMT = 'SET ? = (SELECT COUNT(*) FROM "'||V1.TABSCHEMA||'"."'||V1.TABNAME||'")';
PREPARE S FROM L_STMT;
EXECUTE S INTO L_ROWS;
PIPE(V1.TABSCHEMA, V1.TABNAME, L_ROWS);
END FOR;
RETURN;
END#
SELECT * FROM TABLE(COUNT_DATABASE_ROWS())#
For any Db2 for LUW systems
It's a little bit tricky for DPF systems, but doable as well. We have to wrap the code that is not allowed in an inlined compound statement into a stored procedure.
--#SET TERMINATOR #
CREATE OR REPLACE PROCEDURE COUNT_DATABASE_ROWS_DPF(OUT P_DOC XML)
READS SQL DATA
BEGIN
DECLARE L_STMT VARCHAR(256);
DECLARE L_ROWS BIGINT;
DECLARE L_NODE XML;
SET P_DOC = XMLELEMENT(NAME "DOC");
FOR V1 AS
SELECT TABSCHEMA, TABNAME
FROM SYSCAT.TABLES
WHERE TYPE IN ('T', 'S')
FETCH FIRST 10 ROWS ONLY
DO
SET L_STMT = 'SET ? = (SELECT COUNT(*) FROM "'||V1.TABSCHEMA||'"."'||V1.TABNAME||'")';
PREPARE S FROM L_STMT;
EXECUTE S INTO L_ROWS;
SET L_NODE = XMLELEMENT
(
NAME "NODE"
, XMLELEMENT(NAME "TABSCHEMA", V1.TABSCHEMA)
, XMLELEMENT(NAME "TABNAME", V1.TABNAME)
, XMLELEMENT(NAME "ROWS", L_ROWS)
);
SET P_DOC = XMLQUERY
(
'transform copy $mydoc := $doc modify do insert $node as last into $mydoc return $mydoc'
passing P_DOC as "doc", L_NODE as "node"
);
END FOR;
END#
CREATE OR REPLACE FUNCTION COUNT_DATABASE_ROWS_DPF()
RETURNS TABLE (P_TABSCHEMA VARCHAR(128), P_TABNAME VARCHAR(128), P_ROWS BIGINT)
BEGIN ATOMIC
DECLARE L_DOC XML;
CALL COUNT_DATABASE_ROWS_DPF(L_DOC);
RETURN
SELECT *
FROM XMLTABLE ('$D/NODE' PASSING L_DOC AS "D" COLUMNS
TYPESCHEMA VARCHAR(128) PATH 'TABSCHEMA'
, TABNAME VARCHAR(128) PATH 'TABNAME'
, LENGTH BIGINT PATH 'ROWS'
);
END#
-- Usage. Either CALL or SELECT:
CALL COUNT_DATABASE_ROWS_DPF(?)#
SELECT * FROM TABLE(COUNT_DATABASE_ROWS_DPF())#
If your Db2-server runs on Linux/Unix/Windows, you can use the DBMS_OUTPUT.PUT_LINE procedure to send diagnostic output from SQL routines to the console. The idea is that in your routine you assign some text to a variable (for example, the table name and its count) and then call DBMS_OUTPUT.PUT_LINE(...) to make that text appear on the console. The disadvantage of this approach is that the output only appears once the routine has completed. That is often not what you want; sometimes you want to see the row counts as they become available, so also consider the alternative approaches shown below.
To see DBMS_OUTPUT.PUT_LINE output with the Db2 CLP (or db2cmd.exe), you first need to run SET SERVEROUTPUT ON before calling the procedure.
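A minimal sketch of that pattern (the procedure name is made up for illustration; DBMS_OUTPUT.PUT_LINE itself is the built-in module procedure):
--#SET TERMINATOR #
SET SERVEROUTPUT ON#
CREATE OR REPLACE PROCEDURE PRINT_ONE_COUNT()
LANGUAGE SQL
BEGIN
  DECLARE L_ROWS BIGINT;
  SELECT COUNT(*) INTO L_ROWS FROM SYSCAT.TABLES;
  -- the text appears on the console only after the procedure returns
  CALL DBMS_OUTPUT.PUT_LINE('SYSCAT.TABLES row count: ' || VARCHAR(L_ROWS));
END#
CALL PRINT_ONE_COUNT()#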
But for simple stuff like this, a stored procedure might be unsuitable, because you can use the CLP to do the work in two steps after connecting to the database. This is often more convenient for scripting purposes. The idea is that you create a file that generates the real queries; running it with the CLP produces a second file, and executing that second file gives the desired results.
Example
Create a file gen_counts.sql containing the query that generates the real queries. For example, gen_counts.sql might contain:
select 'select count(*) from '||rtrim(tabschema)||'.'||rtrim(tabname)||' with ur;'
from syscat.tables;
Then you can do these steps:
db2 connect to $database
db2 -txf gen_counts.sql > count_queries.sql
db2 -tvf count_queries.sql > count_results.txt
Note that the output file (in this case count_results.txt) is readable via another shell session while the script continues to run. You can also pipe the output to concurrent jobs if required.
However, experienced DBAs might avoid row-counting all tables in this manner, and might instead ensure that runstats are kept up to date for all tables and accept recent estimates of the row counts, which are visible in SYSCAT.TABLES.CARD once runstats have completed. If the stats are up to date, the CARD value is often good enough for many purposes. If exact counts are required, they are in any case only valid as of a specific point in time when the database is live.
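For example, a quick look at the estimated counts (CARD is -1 for tables whose statistics have never been collected):
SELECT TABSCHEMA, TABNAME, CARD
FROM SYSCAT.TABLES
WHERE TYPE = 'T'
ORDER BY CARD DESC;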

Is it possible to pass variable tables through procedures in SQL DEV?

set serveroutput on;
CREATE OR REPLACE PROCEDURE test_migrate
(
--v_into_table dba_tables.schema@dbprd%TYPE,
--v_from_table dba_tables.table@dbprd%TYPE,
v_gid IN NUMBER
)
IS
BEGIN
select * INTO fx.T_RX_TXN_PLAN
FROM fx.T_RX_TXN_PLAN@dbprd
WHERE gid = v_gid;
--and schema = v_into_table
--and table = v_from_table;
COMMIT;
END;
I thought that SELECT * INTO would create a table in the new database from @dbprd. However, the primary issue is just being able to set these as variables; the goal is to EXEC(INTO_Table, FROM_Table, V_GID) to run the above code.
Error(9,19): PLS-00201: identifier 'fx.T_RX_TXN_PLAN' must be declared
Error(10,5): PL/SQL: ORA-00904: : invalid identifier
If your goal is to copy data from a table in "another" database into a table that resides in "this" database (regarding the database link you used), then it is INSERT INTO, not SELECT INTO.
For example:
CREATE OR REPLACE PROCEDURE test_migrate (v_gid in number)
IS
BEGIN
insert into fx.t_rx_txn_plan (col1, col2, ..., coln)
select col1, col2, ..., coln
from fx.t_rx_txn_plan@dbprod
where gid = v_gid;
END;
The last sentence you wrote looks like you want to make it dynamic, i.e. pass the table names and v_gid (whatever that might be; it looks like all tables that should be involved in this process have it). That isn't a simple task.
If you plan to use insert into select * from, that's OK, but not for a production system. What if someone alters a table and adds (or drops) a column or two? Your procedure will automatically fail. The correct way to do it is to enumerate all the columns involved, but that requires fetching data from user_tab_columns (or the all_ or dba_ version of the same), which complicates it even more.
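For completeness, a minimal sketch of such a dynamic variant (the procedure and parameter names are made up, and it keeps select *, so it inherits exactly the fragility just described):
CREATE OR REPLACE PROCEDURE test_migrate_dyn (
    p_into_table IN VARCHAR2,
    p_from_table IN VARCHAR2,
    p_gid        IN NUMBER
)
IS
BEGIN
    -- table names cannot be bound, so they are concatenated; the gid value can be a bind
    EXECUTE IMMEDIATE 'INSERT INTO ' || p_into_table ||
                      ' SELECT * FROM ' || p_from_table || '@dbprd' ||
                      ' WHERE gid = :b_gid'
    USING p_gid;
END;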
Therefore, if you want to move data from here to there, why don't you do it using Data Pump Export & Import? Those utilities are designed for such a purpose, and will do the job better than your procedure. At least, I think so.
It looks like you should be returning a row this way. If so, add an OUT-type parameter to the procedure:
CREATE OR REPLACE PROCEDURE test_migrate(
--v_into_table dba_tables.schema@dbprd%TYPE,
--v_from_table dba_tables.table@dbprd%TYPE,
i_gid IN NUMBER,
o_RX_TXN_PLAN OUT fx.T_RX_TXN_PLAN@dbprd%rowtype
) IS
BEGIN
SELECT *
INTO o_RX_TXN_PLAN
FROM fx.T_RX_TXN_PLAN@dbprd
WHERE gid = i_gid;
--and schema = v_into_table
--and table = v_from_table;
END;
and call the procedure such as
declare
v_rx_txn_plan fx.T_RX_TXN_PLAN@dbprd%rowtype;
v_gid number:=5345;
begin
test_migrate(i_gid => v_gid, o_RX_TXN_PLAN => v_rx_txn_plan);
dbms_output.put_line(v_rx_txn_plan.col1);
dbms_output.put_line(v_rx_txn_plan.col2);
end;
to print out the returned values for some columns of the table. To create a new table from this, the SELECT * INTO ... syntax is not used; rather,
CREATE TABLE T_RX_TXN_PLAN AS
SELECT *
FROM fx.T_RX_TXN_PLAN@dbprd
WHERE ...
is used.
But neither of these cases needs to issue a COMMIT, since no DML exists within them.
To create a table you must use the CREATE TABLE statement, and to use any DDL statement in PL/SQL you have to use EXECUTE IMMEDIATE:
CREATE OR REPLACE PROCEDURE test_migrate
(
v_gid IN NUMBER
)
IS
BEGIN
-- bind variables are not allowed in DDL, so the filter value is concatenated into the statement
EXECUTE IMMEDIATE 'CREATE TABLE FX.T_RX_TXN_PLAN AS
SELECT *
FROM fx.T_RX_TXN_PLAN@dbprd
WHERE gid = ' || v_gid;
END;
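The procedure is then called from an anonymous block, for example (the gid value here is only an illustration):
BEGIN
    test_migrate(5345);
END;
/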

SQL Update Statement based on Procedure in SAP HANA

I'm creating an update statement that generates a SHA256 hash for a table's columns, based on the table's name.
First step: I created a procedure that gets the table's columns, concatenates them all into one column, and formats it into the desired format.
-- Procedure code : Extract table's columns list, concatenate it and format it
Create procedure SHA_PREP (in inp1 nvarchar(20))
as
begin
SELECT concat(concat('hash_sha256(',STRING_AGG(A, ', ')),')') AS Names
FROM (
SELECT concat('to_varbinary(IFNULL("',concat(COLUMN_NAME,'",''0''))')) as A
FROM SYS.TABLE_COLUMNS
WHERE SCHEMA_NAME = 'SCHEMA_NAME' AND TABLE_NAME = :inp1
AND COLUMN_NAME not in ('SHA')
ORDER BY POSITION
);
end;
/* Result of this procedures :
hash_sha256(
to_varbinary("ID"),to_varbinary(IFNULL("COL1",'0')),to_varbinary(IFNULL("COL2",'0')) )
*/
-- Update Statement needed
UPDATE "SCHEMA_NAME"."TABLE_NAME"
SET "SHA" = CALL "SCHEMA_NAME"."SHA_PREP"('SCHEMA_NAME')
WHERE "ID" = 99 -- a random filter
The solution by @SonOfHarpy technically works but has several issues, namely:
unnecessary use of temporary tables
overly complicated string assignment approach
use of fixed system table schema (SYS.TABLE_COLUMNS) instead of PUBLIC synonym
wrong data type and variable name for the input parameter
An improved version of the code looks like this:
create procedure SHA_PREP (in TABLE_NAME nvarchar(256))
as
begin
declare SQL_STR nvarchar(5000);
SELECT
'UPDATE "SCHEMA_NAME"."TABLE_NAME" SET "SHA"= hash_sha256(' || STRING_AGG(A, ', ') || ')'
into SQL_STR
FROM (
SELECT
'TO_VARBINARY(IFNULL("'|| "COLUMN_NAME" ||'",''0''))' as A
FROM TABLE_COLUMNS
WHERE
"SCHEMA_NAME" = 'SCHEMA_NAME'
AND "TABLE_NAME" = :TABLE_NAME
AND "COLUMN_NAME" != 'SHA'
ORDER BY POSITION
);
-- select :sql_str from dummy; -- this is for debugging output only
EXECUTE IMMEDIATE (:SQL_STR);
end;
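The procedure is then called with the target table's name, e.g.:
CALL SHA_PREP('TABLE_NAME');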
By changing the CONCAT functions to the shorter || (double-pipe) operator, the code becomes a lot easier to read as the formerly nested function calls are now simple chained concatenations.
By using SELECT ... INTO variable the whole nonsense with the temporary table can be avoided, again, making the code easier to understand and less prone to problems.
The input parameter name now correctly reflects its meaning and mirrors the HANA dictionary data type for TABLE_NAME (NVARCHAR(256)).
The procedure now consists of two commands (SELECT and EXECUTE IMMEDIATE) that each performs an essential task of the procedure:
Building a valid SQL update command string.
Executing the SQL command.
I removed the useless line-comments but left a debugging statement as a comment in the code, so that the SQL string can be reviewed without having to execute the command.
For that to work, obviously, the EXECUTE... line needs to be commented out and the debugging line has to be uncommented.
What's more worrying than the construction of the solution is its purpose.
It looks as if the SHA column should be used as a kind of shorthand row-data fingerprint. The UPDATE approach certainly handles this, but as an afterthought: the "fingerprinting" only happens whenever the update gets executed.
Also, it takes an essential part of the table design (that the SHA column should contain the fingerprint) away from the table definition.
An alternative to this could be a GENERATED COLUMN:
create table test (aaa int, bbb int);
alter table test add (sha varbinary (256) generated always as
hash_sha256(to_varbinary(IFNULL("AAA",'0'))
, to_varbinary(IFNULL("BBB",'0'))
)
);
insert into test (aaa, bbb) values (12, 32);
select * from test;
/*
AAA BBB SHA
12 32 B6602F58690CA41488E97CD28153671356747C951C55541B6C8D8B8493EB7143
*/
With this, the "generator" approach could be used for table definition/modification time, but all the actual data handling would be automatically done by HANA, whenever values get changed in the table.
Also, no separate calls to the procedure will ever be necessary as the fingerprints will always be current.
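For example, continuing the small test table above, the fingerprint follows any later change without any extra call:
update test set bbb = 33 where aaa = 12;
select * from test;  -- SHA now reflects the new BBB value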
I found a solution that suits my need, but maybe there are other, easier or more suitable approaches:
I added the update statement to my procedure and inserted the whole generated query into a temporary table column, then executed it using EXECUTE IMMEDIATE.
Create procedure SHA_PREP (in inp1 nvarchar(20))
as
begin
/* ********************************************************** */
DECLARE SQL_STR VARCHAR(5000);
-- Create a temporary table to store a query in
create local temporary table #temp1 (QUERY varchar(5000));
-- Insert the desirable query into the QUERY column (Temp Table)
insert into #temp1(QUERY)
SELECT concat('UPDATE "SCHEMA_NAME"."TABLE_NAME" SET "SHA" =' ,concat(concat('hash_sha256(',STRING_AGG(A, ', ')),')'))
FROM (
SELECT concat('to_varbinary(IFNULL("',concat(COLUMN_NAME,'",''0''))')) as A
FROM SYS.TABLE_COLUMNS
WHERE SCHEMA_NAME = 'SCHEMA_NAME' AND TABLE_NAME = :inp1
AND COLUMN_NAME not in ('SHA')
ORDER BY POSITION
);
/* QUERY : UPDATE "SCHEMA_NAME"."TABLE_NAME" SET "SHA" =
hash_sha256(to_varbinary("ID"),to_varbinary(IFNULL("COL1",'0')),to_varbinary(IFNULL("COL2",'0'))) */
SELECT QUERY into SQL_STR FROM "SCHEMA_NAME".#temp1;
-- Executing the query
EXECUTE IMMEDIATE (:SQL_STR);
-- Dropping the temporary table
DROP TABLE "SCHEMA_NAME".#temp1;
/* ********************************************************** */
end;
Any other solutions or improvements are welcome.
Thank you

DB2 Select Statement error after for loop in stored procedure

I've written a stored procedure which uses a for loop to execute a query for a list of views. It generates a dynamic sql statement for each view inside the for loop and then executes it, which inserts output into a declared temporary table.
The for loop works perfectly and runs without errors; however, if I add a select statement after the END FOR; to get the final output from the temporary table, I get the error below. Does anyone have any ideas, please?
Error 16/07/2018 10:43:41 0:00:00.007 DB2 Database Error: ERROR [42601] [IBM][DB2/AIX64] SQL0104N An unexpected token "select *" was found following "1; END FOR; ". Expected tokens may include: "<call>". LINE NUMBER=31. SQLSTATE=42601
SQL Code:
BEGIN
DECLARE SQLTEXT varchar(500);
DECLARE GLOBAL TEMPORARY TABLE SESSION.AS_USAGE_RESULTS(
temp table columns
);
FOR v as cur1 cursor for
select distinct viewname,viewschema
from syscat.VIEWS
DO
SET SQLTEXT = 'Dynamic Insert into temp table here'
PREPARE s1 FROM SQLTEXT;
EXECUTE s1;
END FOR;
select *
from SESSION.AS_USAGE_RESULTS;
DROP TABLE SESSION.AS_USAGE_RESULTS;
END
Your mistake is that if you wish to return a result set from session.as_usage_results, then you must declare a cursor for its SELECT, and open that cursor before the end of the procedure. This is a FAQ. There are examples in the IBM Db2 Server SAMPLES directory and in the Db2 Knowledge Center.
Inside the sproc, you can either use SELECT ... INTO, or use a select within a cursor, or use a SELECT as part of a SET statement.
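For example, with a hypothetical declared variable v_cnt, either of these forms consumes a query inside the procedure without trying to return a result set:
SELECT COUNT(*) INTO v_cnt FROM session.as_usage_results;
SET v_cnt = (SELECT COUNT(*) FROM session.as_usage_results);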
You should not drop the session table in the procedure in case the result-set won't be consumed before the table gets dropped. Either drop the session table elsewhere or use an alternative design.
In your example you don't need cursor cur1, so below I show a stilted, artificial example of what you might mean. It is artificial because you can see that the session table is also redundant for this example, but it shows the use of the cursor for the result set.
--#SET TERMINATOR #
create or replace procedure dynproc1
language sql
specific dynproc1
dynamic result sets 1
BEGIN
DECLARE v_sqltext varchar(2000);
DECLARE c1 cursor with return to client for s1;
DECLARE GLOBAL TEMPORARY TABLE SESSION.AS_USAGE_RESULTS ( viewname varchar(128), viewschema varchar(128) );
insert into session.as_usage_results(viewname, viewschema) select viewname, viewschema from syscat.views;
set v_sqltext = 'select * from session.as_usage_results';
prepare s1 from v_sqltext;
open c1;
END
#
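Calling the procedure then returns the result set to the client, for example from the CLP (using the same # terminator):
CALL dynproc1#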

Update from stored procedure return

I have a stored procedure that I want to run on every row in a table that matches a where clause. The procedure already exists on the server and is used in other places, so it cannot be modified for these changes.
The stored procedure returns a scalar value, and I need to store this value in a column in the table. I've tried using this update:
UPDATE tbl SET tbl.Quantity =
EXEC checkQuantity @ProductID = tbl.ProductID, @Quantity = tbl.Quantity
FROM orders tbl WHERE orderNumber = @orderNumber
But this of course doesn't work. Is there a way to do this without multiple queries, i.e. without reading the line info, running the proc in a loop and then updating the original line?
No, there is no way to do this without multiple queries; this is one of the few scenarios where a cursor or loop is necessary.
Unless, that is, you can replace your stored procedure with a user-defined function, which can run in the context of a single query.
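As a rough sketch of that idea (the function name and body here are assumptions; the real logic would have to replicate whatever checkQuantity does):
CREATE FUNCTION dbo.fn_checkQuantity (@ProductID INT, @Quantity INT)
RETURNS INT
AS
BEGIN
    DECLARE @result INT;
    -- replicate the logic of the existing checkQuantity procedure here
    SET @result = @Quantity;  -- placeholder only
    RETURN @result;
END;
GO

-- the update then becomes a single set-based statement
UPDATE tbl
SET tbl.Quantity = dbo.fn_checkQuantity(tbl.ProductID, tbl.Quantity)
FROM orders tbl
WHERE orderNumber = @orderNumber;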