Converting SQL Server Query to Oracle

I have the SQL Server query shown below and I am trying to get it to work on Oracle. I have done some looking, but I am new to Oracle, so I am not even sure what I should be looking for. I am hoping I can write a query that I can run ad hoc, not necessarily a procedure. The basic concept is the following:
Create a temporary table to hold data to be totaled.
Create a date table to get a list of dates.
Build a dynamic query to insert data into the temporary table.
Execute the dynamic query.
Select the summary from the temporary table.
Drop the temporary table
Please remember, I am new to Oracle and I have done some looking. I know that the variables and aliases must be formatted differently, and I can get the date table, but I am not sure of the proper Oracle way to create and execute dynamic queries where the table name is the dynamic part. I can create the dynamic string with the correct table name but don't know how to execute it. I have seen some examples, but none of them seem to make sense for what I am trying to do.
-- Oracle query for dates
with dt (d) as (
select last_day(add_months(sysdate,-2))+1 + rownum - 1
from all_objects
where rownum <= sysdate-last_day(add_months(sysdate,-2))+1+1
)
select 'insert into #tt (cnt, eem, ers, sts) (
select count(1), eem_id, ers_id, sts_id
from event_error' || to_char(D, 'ddmmyy') || ' eve
group by sts_id, eem_id, ers_id); ' "QRY"
from dt;
What I have done in the past is create a bash script that looped through each date and then summarized the output. This time, however, I am trying to learn something, and I know there has to be a way to do this in SQL on Oracle.
I appreciate any help or assistance and hope I have explained this well enough.
-- Working SQL Server query
-- declare variables
declare @query varchar(max);
-- create temporary table
create table #tt(cnt int, eem int, ers int, sts int);
-- get a list of dates to process
with dt (d) as
(
select dateadd(month, datediff(month, 0, getdate())-1, 0) as d
union all
select dateadd(dd, 1, d)
from dt
where dateadd(dd, 1, d) <= getdate()
)
-- build the dynamic query
select distinct
@query = stuff ((select 'insert into #tt (cnt, eem, ers, sts) (
select count(1), eem_id, ers_id, sts_id
from event_error' + replace(convert(varchar(5), d, 103), '/', '') + right(year(d), 2) + ' (nolock) eve
group by sts_id, eem_id, ers_id); '
from dt for xml path, type).value(N'.[1]',N'nvarchar(max)')
, 1, 1, N'')
from dt;
-- to execute the dynamic query
execute (@query);
-- query the temporary table
select
[Stream] = sts.sts_name,
[Count] = sum(eve.cnt),
[Error Status] = ers.ers_name,
[Error Number] = eem.eem_error_no,
[Error Text] = eem.eem_error_text
from
#tt eve
inner join
event_error_message eem on eem.eem_id = eve.eem
inner join
error_status ers on ers.ers_id = eve.ers
inner join
stream_stage sts on sts.sts_id = eve.sts
group by
sts.sts_name, eem.eem_error_no, eem.eem_error_text, ers.ers_name
order by
sts.sts_name, eem.eem_error_no, ers.ers_name;
-- drop the temporary table
drop table #tt;

So, as I expected, after fighting this all day and finally giving up and asking for help, I have an answer. The query below works; however, if you have improvements or constructive criticism, please share them, as I said I am trying to learn.
-- create the temporary table
create global temporary table my_tt (
cnt number
, sts number
, eem number
, ers number
)
on commit delete rows;
declare
V_TABL_NM ALL_TABLES.TABLE_NAME%TYPE;
V_SQL VARCHAR2(1024);
begin
for GET_TABL_LIST in (
with dt (d) as (
select last_day(add_months(sysdate,-2))+1 + rownum -1
from all_objects
where rownum <= sysdate-last_day(add_months(sysdate,-2))
)
select 'event_error' || to_char(D, 'ddmmyy') TABLE_NAME from dt
) loop
V_TABL_NM := GET_TABL_LIST.TABLE_NAME;
V_SQL := 'insert into my_tt select count(1), sts_id, eem_id, ers_id from ' || V_TABL_NM || ' group by sts_id, eem_id, ers_id';
execute immediate V_SQL;
end loop;
end;
/
-- the slash is important for the above statement to complete
select
sts.sts_name "Stream"
, sum(eve.cnt) "Count"
, ers.ers_name "Error Status"
, eem.eem_error_no "Error Number"
, eem.eem_error_text "Error Text"
from my_tt eve
inner join event_error_message eem
on eem.eem_id = eve.eem
inner join error_status ers
on ers.ers_id = eve.ers
inner join stream_stage sts
on sts.sts_id = eve.sts
group by sts.sts_name, eem.eem_error_no, eem.eem_error_text, ers.ers_name
order by sts.sts_name, eem.eem_error_no, ers.ers_name;
-- drop the temporary table
drop table my_tt purge;
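One possible refinement, offered as a sketch rather than a tested fix: the date-generating CTE above relies on all_objects having at least as many rows as there are days in the range. A CONNECT BY LEVEL row generator against dual avoids that dependency and should be equivalent to the dt CTE used inside the loop:
with dt (d) as (
select last_day(add_months(sysdate,-2))+1 + level - 1
from dual
connect by level <= sysdate-last_day(add_months(sysdate,-2))
)
select 'event_error' || to_char(D, 'ddmmyy') TABLE_NAME from dt;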


SQL Insert Set of Values Optimized

The goal is to create a table with sample data in Teradata for a year. One way to do it is to copy the first 6 entries of similar data and just alter the timestamp 365 times (for my use case).
Not knowing any better, I wrote a procedure:
REPLACE PROCEDURE stackoverflow()
BEGIN
DECLARE i INTEGER DEFAULT 0;
DELETE FROM TestTable;
WHILE i < 365 DO
INSERT INTO TestTable
SELECT
TestName
,Name_DT + i
FROM
(SELECT TOP 6
*
FROM TestTable2
WHERE Name_DT = '2021-12-15') AS sampledata;
SET i = i + 1;
END WHILE;
END;
This works, but is awfully slow. Also the IT department doesn't want us to use procedures. Is there a better way to achieve the same result without a procedure?
The generic way to get repeated data is a CROSS JOIN:
SELECT
TestName
,calendar_date
FROM
( SELECT TOP 6 *
FROM TestTable2
WHERE Name_DT = DATE '2015-12-15'
) AS sampledata
CROSS JOIN
( SELECT calendar_date
FROM sys_calendar.CALENDAR
WHERE calendar_date BETWEEN DATE '2011-12-15'
AND DATE '2011-12-15' + 364
) AS cal
;
In your case there's Teradata's proprietary EXPAND ON syntax to create time series:
SELECT TestName, Begin(pd)
FROM
( SELECT TOP 6 *
FROM TestTable2
WHERE Name_DT = DATE '2015-12-15'
) AS sampledata
-- create one row per day in the range
EXPAND ON PERIOD(Name_DT, Name_DT + 365) AS pd;
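If the goal is still to load TestTable itself (as the original procedure did), the generated rows can feed an INSERT ... SELECT directly. A sketch using the CROSS JOIN form, assuming TestTable has exactly the columns TestName and Name_DT:
INSERT INTO TestTable (TestName, Name_DT)
SELECT
 TestName
 ,calendar_date
FROM
 ( SELECT TOP 6 *
   FROM TestTable2
   WHERE Name_DT = DATE '2021-12-15'
 ) AS sampledata
CROSS JOIN
 ( SELECT calendar_date
   FROM sys_calendar.CALENDAR
   WHERE calendar_date BETWEEN DATE '2021-12-15'
                           AND DATE '2021-12-15' + 364
 ) AS cal;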

Use one column if it exists, another if it doesn't, in SQL Server

I have a number of SQL Server databases (different versions from 2012 to 2019). The schema in each one is very similar but not exactly the same. For example, there's a table ORDERS, which has about 50 columns - and one of those columns is named differently in different databases:
in DB1: select p_user from orders
in DB2: select userpk from orders
Note that I showed two databases above, but there are actually more than 20 - some are DB1 type, the others are DB2 type
I can't do much about these differences - they are historic - and changing the schema to match is not an option.
I want to be able to run the same SQL statement against all of these databases at once. I'd like to write the query in such a way that it would use one column if it exists and another if it doesn't. For example:
select
case
when COL_LENGTH('orders', 'p_user') IS NOT NULL
then
orders.p_user
else
orders.userpk
end
from orders
This unfortunately doesn't work, as SQL Server seems to try to resolve both result expressions regardless of whether the condition is true or false. The same thing happens if I use the IIF function.
If I simply run
select
case
when COL_LENGTH('orders', 'p_user') IS NOT NULL
then
'orders.p_user'
else
'orders.userpk'
end
then I do get the correct string, which means my condition is correct.
How can I formulate the SQL statement to use one or the other column based on whether the first one exists?
If you can't change anything, then your best (and maybe only) option is to use dynamic SQL. A query will only compile if all of its parts can be resolved at compile time (before anything runs) - which is why, e.g., this will not compile:
IF COL_LENGTH('orders', 'p_user') IS NOT NULL
select p_user from orders
ELSE
select userpk as p_user from orders
But this will work:
DECLARE @SQL NVARCHAR(MAX)
IF COL_LENGTH('orders', 'p_user') IS NOT NULL
SET @SQL = 'select p_user from orders'
ELSE
SET @SQL = 'select userpk as p_user from orders'
EXEC (@SQL)
Fix your tables by adding a computed column:
alter table db1..orders
add userpk as (p_user);
(Or standardize on the other name instead.)
Then your queries will just work, without adding unnecessary complication to every query.
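A sketch of how that could be rolled out across the DB1-type databases, guarding the DDL with the same COL_LENGTH check from the question (the EXEC wrapper is purely defensive so the batch parses everywhere, and the column names follow the question):
-- add the computed column only where it is missing (DB2-type databases already have userpk)
IF COL_LENGTH('orders', 'userpk') IS NULL
EXEC ('ALTER TABLE orders ADD userpk AS (p_user);');
go
-- after that, the same statement runs unchanged against every database
select userpk from orders;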
Another option avoids dynamic SQL altogether: convert each row to XML and pull the value of whichever candidate column exists. A demo with three tables whose third column is named differently in each:
create table orders1(colA int, colB int, colABC int);
insert into orders1 values(1, 2, 3);
go
create table orders2(colA int, colB int, colKLM int);
insert into orders2 values(5, 6, 7);
go
create table orders3(colA int, colB int, colXYZ int);
insert into orders3 values(10, 11, 12);
go
select colA, colB, vcolname as [ABC_KLM_XYZ]
from
(
select *,
(select o.* for xml path(''), elements, type).query('
/*[local-name() = ("colABC", "colKLM", "colXYZ")][1]
').value('.', 'int') as vcolname
from orders1 as o
) as src;
select colA, colB, vcolname as [ABC_KLM_XYZ]
from
(
select *,
(select o.* for xml path(''), elements, type).query('
/*[local-name() = ("colABC", "colKLM", "colXYZ")][1]
').value('.', 'int') as vcolname
from orders2 as o
) as src;
select colA, colB, vcolname as [ABC_KLM_XYZ]
from
(
select *,
(select o.* for xml path(''), elements, type).query('
/*[local-name() = ("colABC", "colKLM", "colXYZ")][1]
').value('.', 'int') as vcolname
from orders3 as o
) as src;
go
drop table orders1
drop table orders2
drop table orders3
go
I ended up using dynamic SQL like so:
declare @query nvarchar(1000)
set @query = concat(
'select count(distinct ', (case when COL_LENGTH('orders', 'p_user') IS NOT NULL then 'orders.p_user' else 'orders.userpk' end), ')
from orders'
);
execute sp_executesql @query
This solved my immediate issue.

Adjusting Month Specific SQL Query to Iterate Across all Months Greater than Base Month

I've inherited a query whose parameters specify a single desired month of data to pull. The extract then gets manually appended to the previous month's extract in Excel. I'd like to eliminate the manual portion by adjusting the existing query to iterate across all months greater than a given base month, then (if this is what makes the most sense) unioning the individual "final" outputs.
My attempt was to add the entire block of code for each specific month to the existing code, and then run it together. The idea was that I'd just paste in a new block each new month. I knew this was very inefficient, but I don't have the luxury of learning how to do it efficiently, so if it worked I'd be happy.
I ran into problems because the existing query has two subqueries that are then used to create a final table, and I couldn't figure out how to retain the final table at the end of the code so that it could be referenced in a union later (FWIW, I was attempting to use a SELECT INTO for that final table).
with eligibility_and_customer_type AS
(SELECT DISTINCT ON(sub_id, mbr_sfx_id)
sub_id AS subscriber_id
, mbr_sfx_id AS member_suffix_id
, src_mbr_key
, ctdv.cstmr_typ_cd
, gdv.grp_name
FROM adw_common.cstmr_typ_dim_vw ctdv
JOIN adw_common.mbr_eligty_by_mo_fact_vw
ON ctdv.cstmr_typ_key = mbr_eligty_by_mo_fact_vw.cstmr_typ_key
AND mbr_eligty_yr = '2018'
AND mbr_eligty_mo = '12'
JOIN adw_common.prod_cat_dim_vw
ON prod_cat_dim_vw.prod_cat_key = mbr_eligty_by_mo_fact_vw.prod_cat_key
AND prod_cat_dim_vw.prod_cat_cd = 'M'
JOIN adw_common.mbr_dim_abr
ON mbr_eligty_by_mo_fact_vw.mbr_key = mbr_dim_abr.mbr_key
JOIN consumer.facets_xref_abr fxf
ON mbr_dim_abr.src_mbr_key = fxf.source_member_key
JOIN adw_common.grp_dim_vw gdv
ON gdv.grp_key=mbr_eligty_by_mo_fact_vw.grp_key),
facets_ip as
(select distinct cl.meme_ck
FROM gpgen_cr_ai.cmc_clcl_claim_abr cl
/* LEFT JOIN gpgen_cr_ai.cmc_clhp_hosp_abr ch
ON cl.clcl_id = ch.clcl_id*/
LEFT JOIN gpgen_cr_ai.cmc_cdml_cl_line cd
ON cl.clcl_id = cd.clcl_id
WHERE cd.pscd_id = '21'
/*AND ch.clcl_id IS NULL*/
AND cl.clcl_cur_sts NOT IN ('91','92')
AND cl.clcl_low_svc_dt >= '20181201'
and cl.clcl_low_svc_dt <= '20181231'
group by 1)
select distinct c.meme_ck,
e.cstmr_typ_cd,
'201812' as Yearmo
from facets_ip c
left join eligibility_and_customer_type e
on c.meme_ck = e.src_mbr_key;
The code above has date parameters that get updated when necessary.
The final output would be a version of the final table created above, but with results corresponding to, say, 201801 - present.
If you provide:
the DDL of the underlying tables,
sample data for the underlying tables,
the expected result set, and
the DBMS you are using,
then one would be able to provide the best solution here.
Without knowing them, and since you said you only care about dynamically looping through each month, here is one way you can reuse your code to loop through the months in SQL Server. Please fill in the @StartDate and @EndDate values and provide the proper data types for meme_ck and cstmr_typ_cd.
IF OBJECT_ID ('tempdb..#TempTable', N'U') IS NOT NULL
BEGIN
DROP TABLE #TempTable
END
CREATE TABLE #TempTable
(
meme_ck <ProvideProperDataTypeHere>
,cstmr_typ_cd <ProvideProperDataTypeHere>
,Yearmo VARCHAR(10)
)
DECLARE @StartDate DATE = '<Provide the first day of the start month>'
DECLARE @EndDate DATE = '<Provide the end date inclusive>'
WHILE @StartDate <= @EndDate
BEGIN
DECLARE @MonthEndDate DATE = CASE WHEN DATEADD(DAY, -1, DATEADD(MONTH, 1, @StartDate)) <= @EndDate THEN DATEADD(DAY, -1, DATEADD(MONTH, 1, @StartDate)) ELSE @EndDate END
DECLARE @MonthYear VARCHAR(6) = LEFT(CONVERT(VARCHAR(8), @StartDate, 112), 6)
--This is your code, which I am not touching since I don't know the details behind it. I am just feeding the variables to make it dynamic
;with eligibility_and_customer_type AS
(SELECT DISTINCT ON(sub_id, mbr_sfx_id)
sub_id AS subscriber_id
, mbr_sfx_id AS member_suffix_id
, src_mbr_key
, ctdv.cstmr_typ_cd
, gdv.grp_name
FROM adw_common.cstmr_typ_dim_vw ctdv
JOIN adw_common.mbr_eligty_by_mo_fact_vw
ON ctdv.cstmr_typ_key = mbr_eligty_by_mo_fact_vw.cstmr_typ_key
AND mbr_eligty_yr = CAST(YEAR(@StartDate) AS VARCHAR(10)) -- No need to cast if mbr_eligty_yr is an integer
AND mbr_eligty_mo = CAST(MONTH(@StartDate) AS VARCHAR(10)) -- No need to cast if mbr_eligty_mo is an integer
JOIN adw_common.prod_cat_dim_vw
ON prod_cat_dim_vw.prod_cat_key = mbr_eligty_by_mo_fact_vw.prod_cat_key
AND prod_cat_dim_vw.prod_cat_cd = 'M'
JOIN adw_common.mbr_dim_abr
ON mbr_eligty_by_mo_fact_vw.mbr_key = mbr_dim_abr.mbr_key
JOIN consumer.facets_xref_abr fxf
ON mbr_dim_abr.src_mbr_key = fxf.source_member_key
JOIN adw_common.grp_dim_vw gdv
ON gdv.grp_key=mbr_eligty_by_mo_fact_vw.grp_key),
facets_ip as
(select distinct cl.meme_ck
FROM gpgen_cr_ai.cmc_clcl_claim_abr cl
/* LEFT JOIN gpgen_cr_ai.cmc_clhp_hosp_abr ch
ON cl.clcl_id = ch.clcl_id*/
LEFT JOIN gpgen_cr_ai.cmc_cdml_cl_line cd
ON cl.clcl_id = cd.clcl_id
WHERE cd.pscd_id = '21'
/*AND ch.clcl_id IS NULL*/
AND cl.clcl_cur_sts NOT IN ('91','92')
AND cl.clcl_low_svc_dt BETWEEN @StartDate AND @MonthEndDate
group by 1)
INSERT INTO #TempTable
(
meme_ck
,cstmr_typ_cd
,Yearmo
)
select distinct c.meme_ck,
e.cstmr_typ_cd,
@MonthYear as Yearmo
from facets_ip c
left join eligibility_and_customer_type e
on c.meme_ck = e.src_mbr_key;
SET @StartDate = DATEADD(MONTH, 1, @StartDate)
END
SELECT * FROM #TempTable;
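With this approach the union step from the question is unnecessary, because #TempTable already accumulates every month. As an illustrative check (not part of the original answer), the per-month row counts could be verified like this:
SELECT Yearmo, COUNT(*) AS rows_loaded
FROM #TempTable
GROUP BY Yearmo
ORDER BY Yearmo;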
I don't have enough information on your tables to really create an optimal solution. The solutions I am providing just have a single parameter (the table name); for your case, you will need to pass in an additional parameter for the date filter.
The idea of "looping" is not something you'll need to do in Greenplum. That is common for OLTP databases like SQL Server or Oracle that can't handle big data very well and have to process smaller amounts at a time.
For these example solutions, a table is needed with some data in it.
CREATE TABLE public.foo
(id integer,
fname text,
lname text)
DISTRIBUTED BY (id);
insert into foo values (1, 'jon', 'roberts'),
(2, 'sam', 'roberts'),
(3, 'jon', 'smith'),
(4, 'sam', 'smith'),
(5, 'jon', 'roberts'),
(6, 'sam', 'roberts'),
(7, 'jon', 'smith'),
(8, 'sam', 'smith');
Solution 1: Learn how functions work in the database. Here is a quick example of how it would work.
Create a function that does the Create Table As Select (CTAS) where you pass in a parameter.
Note: You can't execute DDL statements in a function directly so you have to use "EXECUTE" instead.
create or replace function fn_test(p_table_name text) returns void as
$$
declare
v_sql text;
begin
v_sql :='drop table if exists ' || p_table_name;
execute v_sql;
v_sql := 'create table ' || p_table_name || ' with (appendonly=true, compresstype=quicklz) as
with t as (select * from foo)
select * from t
distributed by (id)';
execute v_sql;
end;
$$
language plpgsql;
Execute the function with a simple select statement.
select fn_test('foo3');
Notice how I pass in a table name that will be created when you execute the function.
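The additional filter parameter mentioned above follows the same pattern. Here is a sketch against the same foo table (fn_test2 and the quote_literal call are illustrative, and the filter is on fname only because the sample table has no date column):
create or replace function fn_test2(p_table_name text, p_fname text) returns void as
$$
declare
v_sql text;
begin
v_sql := 'drop table if exists ' || p_table_name;
execute v_sql;
-- fold the filter value into the dynamic CTAS; quote_literal handles the quoting
v_sql := 'create table ' || p_table_name || ' with (appendonly=true, compresstype=quicklz) as
with t as (select * from foo where fname = ' || quote_literal(p_fname) || ')
select * from t
distributed by (id)';
execute v_sql;
end;
$$
language plpgsql;
select fn_test2('foo5', 'jon');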
Solution 2: Use psql variables
Create a SQL file named "test.sql" with the following contents:
drop table if exists :p_table_name;
create table :p_table_name with (appendonly=true, compresstype=quicklz) as
with t as (select * from foo)
select * from t
distributed by (id);
Next, you execute psql and pass in the variable p_table_name.
psql -f test.sql -v p_table_name=foo4
psql:test.sql:1: NOTICE: table "foo4" does not exist, skipping
DROP TABLE
SELECT 8

Error unexpected token in stored procedure Advantage Database SQL

I have a stored procedure that gives me an "unexpected token: ORDER -- expecting semicolon" error when I try to execute it, caused by this statement near the end:
select year from #temp where year is not null ORDER BY year DESC;
If I remove the ORDER BY year DESC, the procedure works correctly.
I've tried every way possible to sort the resulting table in descending order. I'm fairly new to SQL, so I'm sure it's something simple. TIA.
// --------- full stored procedure ------ //
ALTER PROCEDURE GetYearForExhaustCatalog
(
CatCodeString Memo,
Year CHAR ( 4 ) OUTPUT
)
BEGIN
/*
EXECUTE PROCEDURE GetYearForExhaustCatalog('(e.catalogcode= ''2182'')');
EXECUTE PROCEDURE GetYearForExhaustCatalog('');
*/
DECLARE @CatCodeString string;
DECLARE @SQL string;
@CatCodeString = (SELECT CatCodeString FROM __input);
if @CatCodeString IS NULL or @CatCodeString = '' then
select e2.year,
(SELECT top 1 e2.year
FROM eenginecatalog e LEFT JOIN exhaustengine e2 ON e2.app_no=e.app_no)
as year
into #temp
from Exhaustengine e2;
select year from #temp where year is not null
GROUP BY year
ORDER BY year DESC;
else
@SQL =
'select e2.year, '+
'(SELECT top 1 e2.year '+
'FROM eenginecatalog e LEFT JOIN exhaustengine e2 ON e2.app_no=e.app_no and '+
@CatCodeString +' ) '+
'as year '+
'into #temp '+
'from Exhaustengine e2; '+
'select year from #temp where year is not null '+
'GROUP BY year '+
'ORDER BY year DESC ';
execute immediate @SQL;
end;
insert into __output
select year from #temp where year is not null ORDER BY year;
drop table #temp;
END;
It seems that ADS does not like the ORDER BY clause when inserting into the special __output table.
This does not work either:
CREATE PROCEDURE MyProcedure(Year INTEGER OUTPUT)
BEGIN
CREATE TABLE #tmp ("Year" INTEGER);
INSERT INTO #tmp ("Year") VALUES (2019);
INSERT INTO #tmp ("Year") VALUES (2017);
INSERT INTO #tmp ("Year") VALUES (2018);
INSERT INTO
__output
SELECT
"Year"
FROM #tmp
ORDER BY
"Year";
DROP TABLE #tmp;
END;
It fails with the same error message you got:
poQuery: Error 7200: AQE Error: State = 42000; NativeError = 2117; [SAP][Advantage SQL Engine]Unexpected token: ORDER -- Expecting semicolon. -- Location of error in the SQL statement is: 269 (line: 15 column: 1)
As a workaround you can create another temporary table that has the result sorted:
CREATE PROCEDURE MyProcedure(Year INTEGER OUTPUT)
BEGIN
CREATE TABLE #tmp ("Year" INTEGER);
INSERT INTO #tmp ("Year") VALUES (2019);
INSERT INTO #tmp ("Year") VALUES (2017);
INSERT INTO #tmp ("Year") VALUES (2018);
SELECT
*
INTO #sorted
FROM #tmp
ORDER BY
"Year"
;
INSERT INTO
__output
SELECT
"Year"
FROM #sorted;
DROP TABLE #sorted;
DROP TABLE #tmp;
END;
This works without errors and the data is sorted.
The __output table is not the culprit. In standard SQL, ORDER BY is not allowed in subqueries in general. The reason is that sorting a set of rows should have no effect during the internal resolution of a query; it is only useful in the final result. In a pure SQL sense, the only thing that guarantees a sorted result is an ORDER BY on the final result.
If you follow this logic, the alternative is not to try to put the data into the __output table in sorted order, but to sort the final output with the following:
SELECT * FROM (EXECUTE PROCEDURE MyProcedure(inParam)) t ORDER BY t.year

How to execute dynamic SQL in Teradata

Is there any way to submit dynamically generated SQL to Teradata? I've written a query that will create the code to denormalize a table. Right now, I am pulling the code down to my client (SAS) and resubmitting it in a second step. I am not familiar with either Teradata macros or procedures; would something like that work?
To illustrate, I have a table defined like this:
create multiset table MYTABLE
( RECID integer generated always as identity
( start with 1
increment by 1
minvalue -2147483647
maxvalue 2147483647
no cycle )
, SNAP_DATE date format 'YYYY/MM/DD'
, EMAIL_DATE date format 'YYYY/MM/DD'
, FREQ integer
)
unique primary index ( RECID )
The table is populated every day (SNAP_DATE) and is used to monitor changes to an email_date in another table. The following query returns the code that I can run to create my denormalized view:
select RUN_THIS
from (
select RUN_THIS, RN
from (
select 'select EMAIL_DATE ' (varchar(100)) as RUN_THIS
, 0 (int) as RN
) x
union all
select ', sum( case when SNAP_DATE = date '''
|| (SNAP_DATE (format 'yyyy-mm-dd') (char(10)) )
|| ''' then FREQ else 0 end ) as D'
|| (SNAP_DATE (format 'yyyymmdd') (char(8)) ) as RUN_THIS
, row_number() over ( partition by 1 order by SNAP_DATE ) as RN
from ( select distinct SNAP_DATE
from MYTABLE
where SNAP_DATE > current_date - 30) t1
union all
select RUN_THIS, RN
from (
select 'from MYTABLE group by 1 order by 1;' as RUN_THIS
, 10000 as RN
) y
) t
order by RN
I export the result of the above query to a file on my client, then turn around and submit that file back to Teradata. I'm hoping there is some way to store this complete definition in some Teradata object so it can be executed directly.
You may find success putting this in a stored procedure using the DBC.SysExecSQL command.
Here is an overly simplified example of a stored procedure in Teradata. Obviously, in production you would want an error handler defined to address things like invalid database objects. Furthermore, you could return the SQLSTATE as a parameter to test whether the stored procedure completed successfully or not.
CREATE PROCEDURE SYSDBA.CommentDatabase
(
IN P_Database VARCHAR(30),
IN P_Comment VARCHAR(255),
OUT MSG VARCHAR(300)
)
MAIN: --Label
BEGIN
DECLARE P_SQL_TEXT VARCHAR(4000);
SET P_SQL_TEXT='COMMENT ON DATABASE '||P_DATABASE||' AS '''||P_COMMENT||'''';
CALL dbc.SysExecSQL (:P_SQL_TEXT);
SET MSG = 'Database '||P_Database||' commented successfully!';
END;
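A call would then look something like this (the parameter values are illustrative; in BTEQ or SQL Assistant the OUT argument is simply given a name and its value is returned to the client):
CALL SYSDBA.CommentDatabase('Sandbox_DB', 'Team sandbox - refreshed weekly', MSG);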