VBA: translate parsed structure to PostgreSQL

I'm looking to translate a parsed structure to PostgreSQL. Hopefully I am asking this correctly.
Is there code out there to do this already?
For more color, the need arose from this question/answer:
https://dba.stackexchange.com/questions/162784/postgresql-translating-user-defined-calculations-into-executable-calculation-in
Note this part of the question: "Use an off-the-shelf solution that can translate the parsed structure to SQL. Most languages have something that can do this, like SQL::Abstract. If not, you gotta create it."
Edit: We are using PostgreSQL 9.3.5, if it matters.

The query below is probably too complicated, but it does what you need :-)
Parameters:
cmd - the structure you want to parse
op - the allowed operations
tables - a jsonb object for translating table names from the short form to the full one (you probably meant 'b' -> 'bbg' and 'p' -> 'pulls' rather than 'bp' -> 'bbg_pulls'). I ran this query on 9.6 and used jsonb; you can change it to plain json for 9.3.
WITH q AS (
    WITH param AS (
        SELECT '[bp2][-1]/[bp5]'::text AS cmd,
               '+-/%*'::text AS op,
               '{"bp": "bbg_pools"}'::jsonb AS tables
    ), precmd AS (
        SELECT btrim(replace(translate(cmd, '[]', ',,'), ',,', ','), ',') AS precmd
        FROM param
    ), split AS (
        SELECT i,
               split_part(precmd, ',', i) AS part
        FROM (
            SELECT generate_series(1, length(precmd) - length(translate(precmd, ',', '')) + 1) AS i,
                   precmd
            FROM precmd
        ) AS a
    )
    SELECT *,
           CASE
               WHEN part ~ ('^[' || op || ']$') THEN
                   ' ) ' || part || ' ( '
               WHEN tables->>(translate(part, '0123456789', '')) != '' THEN
                   'select val from '::text || (tables->>(translate(part, '0123456789', '0'))) || ' where id = ' || translate(part, translate(part, '0123456789', '0'), '')
               WHEN part ~ '^[-]?[0-9]*$' THEN
                   ' and val_date = (CURRENT_TIMESTAMP + (''' || part || ' day'')::interval)::date '
               ELSE
                   ' ERROR '
           END AS res
    FROM param, precmd, split
    ORDER BY i
)
SELECT 'SELECT (' || string_agg(res, ' ') || ')'
FROM q;
Some explanation (for a better understanding you can try running the query with SELECT * FROM q instead of aggregating).
The param CTE is just your parameters. In precmd I prepare cmd for splitting into parts, and in split I do the splitting.
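To see what the precmd step does on the example cmd, you can run the string functions on their own; the values shown in the comments are worked out by hand, so re-check them on your server:
SELECT translate('[bp2][-1]/[bp5]', '[]', ',,');
-- ,bp2,,-1,/,bp5,
SELECT btrim(replace(translate('[bp2][-1]/[bp5]', '[]', ',,'), ',,', ','), ',');
-- bp2,-1,/,bp5
Each comma-separated part then becomes one row in split.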
The result of the full query is:
SELECT (select val from bbg_pools where id = 2 and val_date = (CURRENT_TIMESTAMP + ('-1 day')::interval)::date ) / ( select val from bbg_pools where id = 5)
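If you want to run the generated statement straight away instead of copy-pasting it, one option is a small plpgsql DO block. This is only a sketch: it assumes a bbg_pools(id, val, val_date) table like the one implied by the generated SELECT, and that val is numeric:
DO $$
DECLARE
    stmt text;
    result numeric;
BEGIN
    -- normally stmt would be filled from the query above (SELECT ... FROM q);
    -- here the generated text is simply pasted in
    stmt := 'SELECT (select val from bbg_pools where id = 2 and val_date = (CURRENT_TIMESTAMP + (''-1 day'')::interval)::date ) / ( select val from bbg_pools where id = 5)';
    EXECUTE stmt INTO result;
    RAISE NOTICE 'calculated value: %', result;
END $$;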

Related

Convert `LISTAGG` to `XMLAGG`

How do I convert a LISTAGG with CASE statements to the XMLAGG equivalent, so as to avoid the concatenation error?
#ECHO ${cols_2 ||32767||varchar2}$ --Declare variable
SELECT LISTAGG( 'MAX(CASE WHEN CATEGORY = '''||CATEGORY||''' THEN "'||"LEVEL"||'" END) AS "'||"LEVEL"||'_'||CATEGORY||'"' , ',' )
WITHIN GROUP( ORDER BY CATEGORY, "LEVEL" DESC )
INTO cols_2
FROM (
SELECT DISTINCT "LEVEL", CATEGORY
FROM temp
);
I tried this and I'm getting an error saying 'missing keyword':
#ECHO ${cols_2 ||32767||varchar2}$ --Declare variable
select rtrim (
xmlagg (xmlelement (e, 'MAX(CASE WHEN CATEGORY = '''||CATEGORY||''' THEN "'||"LEVEL"||'" END) AS "'||LEVEL||'_'||CATEGORY||'"', ',') order by 1,2 desc).extract (
'//text()'),
', ')
INTO cols_2
FROM (
SELECT DISTINCT "LEVEL", CATEGORY
temp
);
I have also tried this and declared cols_2 as a CLOB type:
SELECT DBMS_XMLGEN.CONVERT (
RTRIM (
XMLAGG (XMLELEMENT (
e,
'MAX(CASE WHEN CATEGORY = '''
|| CATEGORY
|| ''' THEN "'
|| "LEVEL"
|| '" END) AS "'
|| "LEVEL"
|| '_'
|| CATEGORY
|| '"',
',')
ORDER BY 1, DESC).EXTRACT('//text()').getclobval(),
', '),
1)
INTO cols_2
FROM (SELECT DISTINCT "LEVEL", CATEGORY
FROM temp);
Yet my issue is not resolved; I'm getting an error while trying to execute it as a procedure, as described in:
Error in concatenation of `LISTAGG` function [not a duplicate question]
You are getting the missing keyword error because you are most likely attempting to run the second query as a standalone query instead of inside a PL/SQL block. When you do that, you have to remove the INTO cols_2 clause. That is your immediate issue and should resolve the error.
Also, based on your prior question, the XML functions will escape your ' and " characters, so you will want to unescape them back to their original characters before using them in your dynamic SQL query, like this:
SELECT DBMS_XMLGEN.CONVERT (
RTRIM (
XMLAGG (XMLELEMENT (
e,
'MAX(CASE WHEN CATEGORY = '''
|| CATEGORY
|| ''' THEN "'
|| "LEVEL"
|| '" END) AS "'
|| "LEVEL"
|| '_'
|| CATEGORY
|| '"',
',')
ORDER BY 1, 2 DESC).EXTRACT ('//text()'),
', '),
1)
--INTO cols_2
FROM (SELECT DISTINCT "LEVEL", CATEGORY
FROM temp);
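For completeness, here is a hedged sketch of how the generated column list is typically used afterwards: select it into a CLOB, build the full pivot statement, and open a cursor over it (on 11g or later, which allows CLOB dynamic SQL). The variable names, the SOME_KEY placeholder, and the GROUP BY are assumptions, not part of the original answer:
DECLARE
    cols_2    CLOB;
    pivot_sql CLOB;
    rc        SYS_REFCURSOR;
BEGIN
    SELECT DBMS_XMLGEN.CONVERT(
               RTRIM(
                   XMLAGG(XMLELEMENT(e,
                              'MAX(CASE WHEN CATEGORY = ''' || CATEGORY
                              || ''' THEN "' || "LEVEL" || '" END) AS "'
                              || "LEVEL" || '_' || CATEGORY || '"',
                              ',')
                          ORDER BY CATEGORY, "LEVEL" DESC
                   ).EXTRACT('//text()').getclobval(),
                   ', '),
               1)
      INTO cols_2
      FROM (SELECT DISTINCT "LEVEL", CATEGORY FROM temp);

    -- shape the final pivot to your report; SOME_KEY is only a placeholder
    pivot_sql := 'SELECT SOME_KEY, ' || cols_2 || ' FROM temp GROUP BY SOME_KEY';
    OPEN rc FOR pivot_sql;
END;
/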

Optimize long running select query against Oracle database

I'm not a DBA expert. We have an existing Oracle query to extract data for a particular day; the problem we have is that if the business volume for a day is extremely large, the query takes 8+ hours and times out. We cannot do optimization inside the database itself, so how do we usually handle an extreme case like this? I've pasted the query below with the content masked to show the SQL structure; I'm looking for advice on how to optimize this query, or any alternative way to avoid the timeout.
WHENEVER SQLERROR EXIT 1
SET LINESIZE 9999
SET ECHO OFF
SET FEEDBACK OFF
SET PAGESIZE 0
SET HEADING OFF
SET TRIMSPOOL ON
SET COLSEP ","
SELECT co.cid
|| ',' || DECODE(co.cid,'xxxxx','xxxx',null,'N/A','xxxxx')
|| ',' || d.name
|| ',' || ti.rc
|| ',' || DECODE(cf.side_id,1,'x',2,'xx',5,'xx','')
|| ',' || cf.Quantity
|| ',' || cf.price
|| ',' || TO_CHAR(time,'YYYY-mm-dd hh24:mi:ss')
|| ',' || DECODE(co.capacity_id,1,'xxxx',2,'xxxx','')
|| ',' || co.type
|| ',' || cf.id
|| ',' || CASE
WHEN (cf.account_id = xxx OR cf.account_id = xxx) THEN SUBSTR(cf.tag, 1, INSTR(cf.tag, '.')-1) || '_' || ti.ric || '_' || DECODE(cf.side_id,1,'xx',2,'xx',5,'xx','')
WHEN INSTR(cf.clientorder_id, '#') > 0 THEN SUBSTR(cf.clientorder_id, 1, INSTR(cf.clientorder_id, '#')-1)
ELSE cf.clientorder_id
END
|| ',' || co.tag
|| ',' || t.description
|| ',' || CASE
WHEN cf.id = xxx THEN 'xxxx'
ELSE (SELECT t.name FROM taccount t WHERE t.account_id = cf.account_id)
END as Account
FROM clientf cf, tins ti, thistory co, tdk d, tra t
WHERE cf.sessiondate = TO_DATE('xxxxxx','YYYYMMDD')
AND cf.orderhistory_id = co.orderhistory_id
AND cf.reporttype_id = 1
AND ti.inst_id = cf.inst_id
AND (ti.rc LIKE '%.xx' or ti.rc LIKE '%.xx' or ti.rc LIKE '%.xx' )
AND d.de_id = t.de_id
AND t.tr_id = co.tr_id
AND nvl(co.type_id,0) <> 3
AND cf.trid not in (SELECT v2.pid FROM port v2 WHERE v2.sessiondate = cf.sessiondate AND v2.exec_id = 4)
ORDER BY co.cid, time, cf.quantity;
I would first talk to the people who need the output of this query and ask them about the report and each individual column. Sometimes some columns are not needed any more, sometimes the whole report. 8+ hours of runtime is a good bargaining point ;-)
Next, I would put the original query to one side and start building a test query from scratch, bit by bit, for instance starting with clientf, taking all of its columns into the WHERE clause:
SELECT *
FROM clientf SAMPLE (0.1) cf
WHERE cf.sessiondate = TO_DATE('xxxxxx','YYYYMMDD')
AND cf.reporttype_id = 1;
If that's ok, I'd increase the sample size up to 99%. If the runtime is already too long, you might suggest an index on clientf.sessiondate (or maybe on clientf.reporttype_id, but that's unlikely to help as it looks like it has too few distinct values).
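If that direction looks promising, the index itself would be something like this (the index names are just placeholders):
CREATE INDEX clientf_sessiondate_ix ON clientf (sessiondate);
-- or, covering both filter columns used above:
CREATE INDEX clientf_sessdate_rpt_ix ON clientf (sessiondate, reporttype_id);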
Once that is done, I'd join the first table:
SELECT *
FROM clientf SAMPLE (0.1) cf
WHERE cf.sessiondate = TO_DATE('xxxxxx','YYYYMMDD')
AND cf.reporttype_id = 1
AND cf.trid NOT IN (SELECT v2.pid
FROM port v2
WHERE v2.sessiondate = cf.sessiondate
AND v2.exec_id = 4);
I'd compare NOT IN and WHERE NOT EXISTS, not expecting much difference.
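For comparison, the NOT EXISTS form of that anti-join would look roughly like this; note that NOT EXISTS treats NULL pid values differently from NOT IN, so check that the row counts still match:
SELECT *
FROM clientf SAMPLE (0.1) cf
WHERE cf.sessiondate = TO_DATE('xxxxxx','YYYYMMDD')
AND cf.reporttype_id = 1
AND NOT EXISTS (SELECT 1
                FROM port v2
                WHERE v2.sessiondate = cf.sessiondate
                AND v2.exec_id = 4
                AND v2.pid = cf.trid);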
Then I'd join the next table (personally preferring ANSI syntax), again starting with a small sample, again adding its columns to the WHERE clause:
SELECT *
FROM clientf SAMPLE (0.1) cf
JOIN thistory co
ON cf.orderhistory_id = co.orderhistory_id
WHERE cf.sessiondate = TO_DATE('xxxxxx','YYYYMMDD')
AND cf.reporttype_id = 1
AND nvl(co.type_id,0) <> 3
AND cf.trid NOT IN (SELECT v2.pid
FROM port v2
WHERE v2.sessiondate = cf.sessiondate
AND v2.exec_id = 4);
I'd play around replacing nvl(co.type_id,0)<>3 with (co.type_id <>3 OR co.type_id IS NULL), monitoring carefully that the result is logically the same.
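In context, that variant is the same sketch as above with only the predicate rewritten:
SELECT *
FROM clientf SAMPLE (0.1) cf
JOIN thistory co
ON cf.orderhistory_id = co.orderhistory_id
WHERE cf.sessiondate = TO_DATE('xxxxxx','YYYYMMDD')
AND cf.reporttype_id = 1
AND (co.type_id <> 3 OR co.type_id IS NULL)
AND cf.trid NOT IN (SELECT v2.pid
                    FROM port v2
                    WHERE v2.sessiondate = cf.sessiondate
                    AND v2.exec_id = 4);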
And so on...

oracle translate function is giving error to convert in number

I have a query where I need to remove the first and last quote from a string in order to use it in an IN clause. When I run the following query:
with t as (
select '1,2,3' x from dual)
select translate(x, ' '||chr(39)||chr(34), ' ' ) from t
it gives the result > 1,2,3
But when I run the following query:
select * from care_topic_templates where care_topic_id in (
with t as (
select '1,2,3' x from dual)
select translate(x, ' '||chr(39)||chr(34), ' ' ) from t
);
it gives this error > ORA-01722: invalid number.
That's because you are comparing an integer id to a string that looks like '1,2,3', and this string cannot be converted to an integer, even after the odd substitutions using translate(). Strings are not lists.
You can do what you want using like and a correlated subquery:
select *
from care_topic_templates
where exists (select 1
from (select '1,2,3' as x from dual) x
where ',' || x || ',' like '%,' || care_topic_id || ',%'
);
Or, in your case:
select *
from care_topic_templates
where exists (select 1
from (select '1,2,3' as x from dual) x
where ',' || translate(x, ' '||chr(39)||chr(34), ' ') || ',' like '%,' || care_topic_id || ',%'
);
This is following the logic of your query. There are other ways to express this logic.
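One other common pattern, assuming Oracle 11g or later, is to split the list into rows and compare numbers to numbers, so nothing forces the whole string through an implicit TO_NUMBER:
select *
from care_topic_templates
where care_topic_id in (
      select to_number(regexp_substr('1,2,3', '[^,]+', 1, level))
      from dual
      connect by level <= regexp_count('1,2,3', '[^,]+')
);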

How to convert two rows into key-value json object in postgresql?

I have a data model like the following which is simplified to show you only this problem (SQL Fiddle Link at the bottom):
A person is represented in the database as a meta table row with a name, plus multiple attributes which are stored in the data table as key-value pairs (key and value are in separate columns).
Expected Result
Now I would like to retrieve all users with all their attributes. The attributes should be returned as a JSON object in a separate column. For example:
name, data
Florian, { "age":23, "color":"blue" }
Markus, { "age":24, "color":"green" }
My Approach
Now my problem is that I couldn't find a way to create a key-value pair in Postgres. I tried the following:
SELECT
name,
array_to_json(array_agg(row(d.key, d.value))) AS data
FROM meta AS m
JOIN (
SELECT d.fk_id, d.key, d.value AS value FROM data AS d
) AS d
ON d.fk_id = m.id
GROUP BY m.name;
But it returns this as data column:
[{"f1":"age","f2":24},{"f1":"color","f2":"blue"}]
Other Solutions
I know there is the crosstab function, which lets me turn the data table into a table with the keys as columns and the values as rows. But this is not dynamic, and I don't know how many attributes a person has in the data table, so this is not an option.
I could also build a JSON-like string from the two row values and aggregate them, but maybe there is a nicer solution.
And no, it is not possible to change the data model, because the real data model is already in use by multiple parties.
SQLFiddle
Check and test out the fiddle I've created for this question:
http://sqlfiddle.com/#!15/bd579/14
Use the aggregate function json_object_agg(key, value):
select
name,
json_object_agg(key, value) as data
from data
join meta on fk_id = id
group by 1;
Db<>Fiddle.
The function was introduced in Postgres 9.4.
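If you are stuck on a release older than 9.4, one fallback along the lines of the "JSON-like string" idea from the question is to assemble the object text with string_agg. This is only a sketch: it quotes every value as a string (so the output would show "23" rather than 23), and it assumes plain scalar values:
select
    name,
    ('{' || string_agg(to_json(key)::text || ':' || to_json(value)::text, ',') || '}')::json as data
from data
join meta on fk_id = id
group by name;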
I found a way to return crosstab data with dynamic columns. Maybe rewriting this will be better to suit your needs:
CREATE OR REPLACE FUNCTION report.usp_pivot_query_amount_generate(
i_group_id INT[],
i_start_date TIMESTAMPTZ,
i_end_date TIMESTAMPTZ,
i_interval INT
) RETURNS TABLE (
tab TEXT
) AS $ab$
DECLARE
_key_id TEXT;
_text_op TEXT = '';
_ret TEXT;
BEGIN
-- SELECT DISTINCT amount type names that will become the result columns
FOR _key_id IN
SELECT DISTINCT at_name
FROM report.company_data_date cd
JOIN report.company_data_amount cda ON cd.id = cda.company_data_date_id
JOIN report.amount_types at ON cda.amount_type_id = at.id
WHERE date_start BETWEEN i_start_date AND i_end_date
AND group_id = ANY (i_group_id)
AND interval_type_id = i_interval
LOOP
-- build function_call with datatype of column
IF char_length(_text_op) > 1 THEN
_text_op := _text_op || ', ' || _key_id || ' NUMERIC(20,2)';
ELSE
_text_op := _text_op || _key_id || ' NUMERIC(20,2)';
END IF;
END LOOP;
-- build query with parameter filters
_ret = '
SELECT * FROM crosstab(''SELECT date_start, at.at_name, cda.amount ct
FROM report.company_data_date cd
JOIN report.company_data_amount cda ON cd.id = cda.company_data_date_id
JOIN report.amount_types at ON cda.amount_type_id = at.id
WHERE date_start between $$' || i_start_date::TEXT || '$$ AND $$' || i_end_date::TEXT || '$$
AND interval_type_id = ' || i_interval::TEXT || '
AND group_id = ANY (ARRAY[' || array_to_string(i_group_id, ',') || '])
ORDER BY date_start'')
AS ct (date_start timestamptz, ' || _text_op || ')';
RETURN QUERY
SELECT _ret;
END;
$ab$ LANGUAGE 'plpgsql';
Call the function to get the string, then execute the string. I think I tried executing it in the function, but it didn't work well.
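A call would then look something like this; all of the argument values here are made up:
SELECT tab
FROM report.usp_pivot_query_amount_generate(ARRAY[1, 2],
                                             '2020-01-01'::timestamptz,
                                             '2021-01-01'::timestamptz,
                                             1);
-- the single row returned is a complete SELECT ... FROM crosstab(...) statement;
-- run that text as a second query (or EXECUTE it from a plpgsql block)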
I ran into the same problem when I needed to update some JSON and remove a few elements in my database. The query below worked well enough for me, as it preserves the string quotes but does not add them to numbers.
select
'{' || substr(x.arr, 3, length(x.arr) - 4) || '}'
from
(
select
replace(replace(cast(array_agg(xx) as varchar), '\"', '"'), '","', ', ') as arr
from
(
select
elem.key,
elem.value,
'"' || elem.key || '":' || elem.value as xx
from
quote q
cross join
json_each(q.detail::json -> 'bQuoteDetail'-> 'quoteHC'->0) as elem
where
elem.key != 'allRiskItems'
) f
) x

Dynamic CTE's as part of a SProc in DB2/400

I'm trying to write a SProc in db2/400 in a V7R2 environment which creates a CTE based on the parameters passed. I then need to perform a recursive query on the CTE.
I'm running into issues creating and executing the dynamic CTE.
According to http://www-01.ibm.com/support/knowledgecenter/ssw_ibm_i_72/db2/rbafzpreph2.htm
the prepare statement does not work with the WITH or SELECT statements directly.
I tried to wrap both the dynamic CTE and dynamic SELECT in a VALUES INTO and manage to successfully prepare the statement. The issue then comes when I try to execute the statement.
I get an error code of SQL0518, which is defined here (Ctrl+F for 'SQL0518' to jump down): http://publib.boulder.ibm.com/iseries/v5r2/ic2924/index.htm?info/rzala/rzalamsg.html (Note: this link is for V5R2, but the error code and text portion of my error match the error listed there exactly, so I'm sure the error code remained the same between versions.)
From the 3 recovery suggestions listed, the second seems unlikely to be the case since my execute is the very next line after my prepare. Suggestion 3 also seems unlikely because there is no use of commit or rollback. So I am inclined to believe suggestion 1 applies to my particular case. However, I do not understand how to take the suggested steps.
If &1 identifies a prepared SELECT or DECLARE PROCEDURE statement, a different prepared statement must be named in the EXECUTE statement.
Am I supposed to have two prepare statements for the same execute? Syntactically how would this look?
Here is the code for my SProc for reference:
CREATE OR REPLACE PROCEDURE DLLIB/G_DPIVOT# (
IN TABLE_NAME CHAR(12) CCSID 37 DEFAULT '' ,
IN PIVOT CHAR(12) CCSID 37 DEFAULT '' ,
IN PIVOTFLD CHAR(12) CCSID 37 DEFAULT '' ,
IN "VALUE" DECIMAL(10, 0) DEFAULT 0 ,
INOUT LIST CHAR(5000) CCSID 37 )
LANGUAGE SQL
SPECIFIC DLLIB/G_DPIVOT#
NOT DETERMINISTIC
READS SQL DATA
CALLED ON NULL INPUT
CONCURRENT ACCESS RESOLUTION DEFAULT
SET OPTION ALWBLK = *ALLREAD ,
ALWCPYDTA = *OPTIMIZE ,
COMMIT = *NONE ,
DECRESULT = (31, 31, 00) ,
DFTRDBCOL = *NONE ,
DYNDFTCOL = *NO ,
DYNUSRPRF = *USER ,
SRTSEQ = *HEX
BEGIN
DECLARE STMT1 VARCHAR ( 1000 ) ;
SET STMT1 = 'WITH DETAILS ( ' || TRIM ( PIVOT ) || ' , ' || TRIM ( PIVOTFLD ) || ' , CURR , PREV ) AS ( ' ||
'SELECT ' || TRIM ( PIVOT ) || ' ,' || TRIM ( PIVOTFLD ) || ',' ||
' ROW_NUMBER ( ) OVER ( PARTITION BY ' || TRIM ( PIVOT ) || ' ORDER BY ' || TRIM ( PIVOTFLD ) || ' ) AS CURR ,' ||
' ROW_NUMBER ( ) OVER ( PARTITION BY ' || TRIM ( PIVOT ) || ' ORDER BY ' || TRIM ( PIVOTFLD ) || ' ) - 1 AS PREV' ||
' FROM ' || TRIM ( TABLE_NAME ) ||
' WHERE ' || TRIM ( PIVOT ) || ' = ' || TRIM ( VALUE ) ||
' GROUP BY ' || TRIM ( PIVOT ) || ' , ' || TRIM ( PIVOTFLD ) || ' )' ||
' VALUES( SELECT MAX ( TRIM ( L '','' FROM CAST ( SYS_CONNECT_BY_PATH ( ' || TRIM ( PIVOTFLD ) || ' , '','' ) AS CHAR ( 5000 ) ) ) )' ||
' FROM DETAILS ' ||
' START WITH CURR = 1 ' ||
' CONNECT BY NOCYCLE ' || TRIM ( PIVOT ) || ' = PRIOR ' || TRIM ( PIVOT ) || ' AND PREV = PRIOR CURR) INTO ?' ;
--SET LIST = STMT1; -- If I execute the value of LIST in interactive SQL everything is as expected (minus the VALUES INTO of course)
PREPARE S1 FROM STMT1 ;
EXECUTE S1 USING LIST; -- If I comment this out I don't get an error, but I also don't get a return value in LIST
END ;
Any assistance is appreciated.
EDIT 1: I am trying to create a SProc (which I will use to create a UDF) which has 5 parameters. I am trying to pivot a single field spanning multiple records so the values are returned as a comma-delimited string. I want to make this dynamic so I can re-use it in many situations. An example call would be: CALL DLLIB.G_DPIVOT#(TABLE, PIVOT, PIVOTFLD, VALUE, LIST); where TABLE is the name of the table I want to pivot, PIVOT is the commonality between records (FK), PIVOTFLD is the field I want to condense to a single string, VALUE is the FK value I want to pivot on, and LIST is the OUT parameter which will contain the resulting string. You can read more about a non-dynamic implementation here: http://www.mcpressonline.com/sql/techtip-combining-multiple-row-values-into-a-single-row-with-sql-in-db2-for-i.html
The use is for when I have a header table which has a one-to-many relationship with another table. I'll then be able to summarize all the values of a particular field in the "many" table based on the PK/FK relationship.
EDIT 2:
Here is a recent attempt in which I think I managed to successfully create the CTE using EXECUTE IMMEDIATE; I'm now trying to just perform a simple select on it. I'm trying to make use of DB2 cursors but get an error at the "C2" on the line DECLARE C2 CURSOR FOR S2;. I don't have much experience with DB2 cursors but believe I am using them in the correct way.
DECLARE STMT1 VARCHAR ( 1000 ) ;
DECLARE STMT2 VARCHAR ( 1000 ) ;
SET STMT1 = 'WITH DETAILS ( ' || TRIM ( PIVOT ) || ' , ' || TRIM ( PIVOTFLD ) || ' , CURR , PREV ) AS ( ' ||
'SELECT ' || TRIM ( PIVOT ) || ' ,' || TRIM ( PIVOTFLD ) || ',' ||
' ROW_NUMBER ( ) OVER ( PARTITION BY ' || TRIM ( PIVOT ) || ' ORDER BY ' || TRIM ( PIVOTFLD ) || ' ) AS CURR ,' ||
' ROW_NUMBER ( ) OVER ( PARTITION BY ' || TRIM ( PIVOT ) || ' ORDER BY ' || TRIM ( PIVOTFLD ) || ' ) - 1 AS PREV' ||
' FROM ' || TRIM ( TABLE_NAME ) ||
' WHERE ' || TRIM ( PIVOT ) || ' = ' || TRIM ( VALUE ) ||
' GROUP BY ' || TRIM ( PIVOT ) || ' , ' || TRIM ( PIVOTFLD ) || ' )';
EXECUTE IMMEDIATE STMT1;
SET STMT2 = "SELECT * FROM DETAILS";
PREPARE S2 FROM STMT2;
DECLARE C2 CURSOR FOR S2;
OPEN C2;
FETCH C2 INTO LIST;
CLOSE C2;
Does anyone see anything wrong with these changes?
Here is the exact error message (excluding suggestion text):
SQL State: 42601
Vendor Code: -104
Message: [SQL0104] Token C2 was not valid. Valid tokens: GLOBAL.
EDIT 3 (Final SProc):
Thanks to @user2338816 for all of the help. See his post for an explanation of the solution, but here is the final SProc for reference:
CREATE PROCEDURE DLLIB/G_DPIVOT# (
IN TABLE_NAME CHAR(12) CCSID 37 DEFAULT '' ,
IN PIVOT CHAR(12) CCSID 37 DEFAULT '' ,
IN PIVOTFLD CHAR(12) CCSID 37 DEFAULT '' ,
IN "VALUE" DECIMAL(10, 0) DEFAULT 0 ,
INOUT LIST CHAR(5000) CCSID 37 )
LANGUAGE SQL
SPECIFIC DLLIB/G_DPIVOT#
NOT DETERMINISTIC
READS SQL DATA
CALLED ON NULL INPUT
CONCURRENT ACCESS RESOLUTION DEFAULT
SET OPTION ALWBLK = *ALLREAD ,
ALWCPYDTA = *OPTIMIZE ,
COMMIT = *NONE ,
DECRESULT = (31, 31, 00) ,
DFTRDBCOL = *NONE ,
DYNDFTCOL = *NO ,
DYNUSRPRF = *USER ,
SRTSEQ = *HEX
BEGIN
DECLARE STMT1 VARCHAR ( 1000 ) ;
DECLARE C1 CURSOR FOR S1 ;
SET STMT1 = 'WITH DETAILS ( ' || TRIM ( PIVOT ) || ' , ' || TRIM ( PIVOTFLD ) || ' , CURR , PREV ) AS ( ' ||
'SELECT ' || TRIM ( PIVOT ) || ' ,' || TRIM ( PIVOTFLD ) || ',' ||
' ROW_NUMBER ( ) OVER ( PARTITION BY ' || TRIM ( PIVOT ) || ' ORDER BY ' || TRIM ( PIVOTFLD ) || ' ) AS CURR ,' ||
' ROW_NUMBER ( ) OVER ( PARTITION BY ' || TRIM ( PIVOT ) || ' ORDER BY ' || TRIM ( PIVOTFLD ) || ' ) - 1 AS PREV' ||
' FROM ' || TRIM ( TABLE_NAME ) ||
' WHERE ' || TRIM ( PIVOT ) || ' = ' || TRIM ( VALUE ) ||
' GROUP BY ' || TRIM ( PIVOT ) || ' , ' || TRIM ( PIVOTFLD ) || ' )' ||
' SELECT MAX ( TRIM ( L '','' FROM CAST ( SYS_CONNECT_BY_PATH ( ' || TRIM ( PIVOTFLD ) || ' , '','' ) AS CHAR ( 5000 ) ) ) ) ' ||
' FROM DETAILS ' ||
' START WITH CURR = 1 ' ||
' CONNECT BY NOCYCLE ' || TRIM ( PIVOT ) || ' = PRIOR ' || TRIM ( PIVOT ) || ' AND PREV = PRIOR CURR' ;
PREPARE S1 FROM STMT1 ;
OPEN C1 ;
FETCH C1 INTO LIST ;
CLOSE C1 ;
END ;
The basic problem is in the EXECUTE. You can't "execute" the prepared SELECT. Instead, you'll need to DECLARE CURSOR for S1 and FETCH rows from the CURSOR. Note that 'executing' a SELECT statement wouldn't actually do anything if it was allowed; it would just "SELECT", so EXECUTE doesn't make much sense. (A SELECT INTO statement can be different, but it's not clear if that's appropriate here.)
It might be possible to OPEN a CURSOR and return a result set rather than FETCHing rows. With more definition of how you actually want to use this, some elaboration should be possible.
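For reference, here is a minimal sketch of that result-set variant on DB2 for i. The procedure name and the statement text are placeholders; the point is only the DYNAMIC RESULT SETS clause plus a cursor declared WITH RETURN over the prepared statement and left open:
CREATE OR REPLACE PROCEDURE DLLIB/G_DPIVOT_RS (
IN TABLE_NAME CHAR(12) )
LANGUAGE SQL
DYNAMIC RESULT SETS 1
BEGIN
DECLARE STMT1 VARCHAR ( 1000 ) ;
DECLARE C1 CURSOR WITH RETURN FOR S1 ;
-- the generated CTE + SELECT string from the question would be built here instead
SET STMT1 = 'SELECT * FROM ' || TRIM ( TABLE_NAME ) ;
PREPARE S1 FROM STMT1 ;
OPEN C1 ; -- leave the cursor open; the caller receives it as a result set
END ;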
Edit:
Second problem:
I've created more readable versions of your original CTE and the CTE in your edited question. The original:
WITH DETAILS ( PIVOT , PIVOTFLD , CURR , PREV ) AS (
SELECT PIVOT , PIVOTFLD ,
ROW_NUMBER ( ) OVER ( PARTITION BY PIVOT ORDER BY PIVOTFLD ) AS CURR ,
ROW_NUMBER ( ) OVER ( PARTITION BY PIVOT ORDER BY PIVOTFLD ) - 1 AS PREV
FROM TABLE_NAME
WHERE PIVOT = VALUE
GROUP BY PIVOT , PIVOTFLD )
VALUES( SELECT MAX ( CAST ( SYS_CONNECT_BY_PATH ( PIVOTFLD , ',' ) AS CHAR ( 5000 ) ) ) )
FROM DETAILS
START WITH CURR = 1
CONNECT BY NOCYCLE PIVOT = PRIOR PIVOT AND PREV = PRIOR CURR) INTO ? ;
You have a VALUES INTO statement after the CTE. AFAIK, that's not valid.
And your edited example:
WITH DETAILS ( PIVOT , PIVOTFLD , CURR , PREV ) AS (
SELECT PIVOT ,
PIVOTFLD ,
ROW_NUMBER ( ) OVER ( PARTITION BY PIVOT ORDER BY PIVOTFLD ) AS CURR ,
ROW_NUMBER ( ) OVER ( PARTITION BY PIVOT ORDER BY PIVOTFLD ) - 1 AS PREV
FROM TABLE_NAME
WHERE PIVOT = VALUE
GROUP BY PIVOT , PIVOTFLD );
Well, it's just a bare CTE that has no associated SELECT referencing it. You do try to PREPARE a SELECT statement later, but the two need to go together. You can't EXECUTE the CTE by itself.
Try putting them together as a single statement and see if a CURSOR can be created over the result. Variable STMT1 would then look something like this:
WITH DETAILS ( PIVOT , PIVOTFLD , CURR , PREV ) AS (
SELECT PIVOT ,
PIVOTFLD ,
ROW_NUMBER ( ) OVER ( PARTITION BY PIVOT ORDER BY PIVOTFLD ) AS CURR ,
ROW_NUMBER ( ) OVER ( PARTITION BY PIVOT ORDER BY PIVOTFLD ) - 1 AS PREV
FROM TABLE_NAME
WHERE PIVOT = VALUE
GROUP BY PIVOT , PIVOTFLD )
SELECT * FROM DETAILS ;
Note that the statement includes the SELECT at the end. The WITH ... clause is followed by the SELECT ... in a single statement that is PREPAREd. The CURSOR would then be OPENed over that statement.
Edit 2:
I have modified a sample CTE that I've had for a while to fit into a stored proc and to return a value. It was compiled and run on my i 6.1 system. The CTE is PREPAREd from a string placed into a VARCHAR, then a CURSOR is opened over it. Rows are FETCHed in a WHILE-loop.
The CTE generates summary rows that are then UNIONed with detail rows from QIWS/QCUSTCDT. The summary is by STATE to provide a sub-total of BALDUE. The WHILE-loop is kind of meaningless; it only shows FETCHing and processing of rows. The only action is to count the number of rows that are not summary rows out of the CTE, which is essentially the same as the number of rows in the base table. The row count is returned in the rowCnt OUT parameter.
The source code is copy/pasted, but comes from two sources. First, the CREATE PROCEDURE statement is taken from iNavigator's 'Run SQL scripts' utility after generating the SQL from the compiled stored procedure. And second, the BEGIN ... END compound statement body is from the original I typed into the iNavigator New-> Procedure function. Although the two would have logical equivalence, I wanted to preserve the actual lines that were input. You can copy/paste the entire source into 'Run SQL Scripts' or go through the utility to create the procedure and only copy/paste the BEGIN ... END compound statement after entering values into the first two tabs of the New-> Procedure function.
I have a schema named SQLEXAMPLE that I build things like this into. You'll need to adjust the schema and procedure names to fit your environment. The QIWS/QCUSTCDT table should exist on nearly all AS/400-series systems.
CREATE PROCEDURE SQLEXAMPLE.CTE_CustCDT (
OUT rowCnt INTEGER )
LANGUAGE SQL
SPECIFIC SQLEXAMPLE.CTECUSTCDT
NOT DETERMINISTIC
READS SQL DATA
CALLED ON NULL INPUT
SET OPTION ALWBLK = *ALLREAD ,
ALWCPYDTA = *OPTIMIZE ,
COMMIT = *NONE ,
CLOSQLCSR = *ENDMOD ,
DECRESULT = (31, 31, 00) ,
DFTRDBCOL = *NONE ,
DYNDFTCOL = *NO ,
DYNUSRPRF = *USER ,
SRTSEQ = *HEX
BEGIN
DECLARE sumRows INTEGER DEFAULT 0 ;
DECLARE cusNum INTEGER ;
DECLARE lstNam CHAR(10) ;
DECLARE state CHAR(2) ;
DECLARE balDue DECIMAL(7, 2) ;
DECLARE stmt1 VARCHAR(512) ;
DECLARE at_end INT DEFAULT 0 ;
DECLARE not_found
CONDITION FOR '02000';
DECLARE c1 CURSOR FOR c1Stmt ;
DECLARE CONTINUE HANDLER FOR not_found
SET at_end = 1 ;
SET stmt1 = 'with t1 As(
SELECT 0 ,''Tot'' , state , sum( balDue )
FROM qiws.qcustcdt
GROUP BY state
ORDER BY state
)
select cusNum , lstNam , state, balDue
from qiws.qcustcdt
union
select *
from t1
order by state FOR FETCH ONLY' ;
PREPARE c1Stmt FROM stmt1 ;
OPEN c1 ;
FETCH C1 INTO cusNum , lstNam , state , balDue ;
WHILE at_end = 0 DO
IF cusNum <> 0 THEN SET sumRows = sumRows + 1; END IF ;
FETCH C1 INTO cusNum , lstNam , state , balDue ;
END WHILE ;
SET rowCnt = sumRows ;
CLOSE c1 ;
END
When the CTE is run by itself in STRSQL, the first few lines of output look like this:
....+....1....+....2....+....3....+....4....+....5....+....6....+....7....+.
CUSNUM LSTNAM STATE BALDUE
475,938 Doe CA 250.00
0 Tot CA 250.00
389,572 Stevens CO 58.75
0 Tot CO 58.75
938,485 Johnson GA 3,987.50
0 Tot GA 3,987.50
846,283 Alison MN 10.00
583,990 Abraham MN 500.00
0 Tot MN 510.00
The summary rows should easily be recognized. And when the stored proc is CALLed from 'Run SQL Scripts', the resulting output is:
Connected to relational database TISI on Tisi as Toml - 090829/Quser/Qzdasoinit
> call SQLEXAMPLE.CTE_CustCDT( 0 )
Return Code = 0
Output Parameter #1 = 12
Statement ran successfully (570 ms)
The QIWS/QCUSTCDT table on that system has 12 rows, and that matches the value returned.
It's not exactly the same as your desired CTE, but it should demonstrate that a dynamic CTE can be used. It also shows how FETCHes might pull rows from the CTE for whatever purpose is needed.