I'm trying to use a pipelined function to save time and reduce redundancy in my queries.
The function in question returns data from a reference table based on some input. Records in the main data table I'm selecting from have multiple columns that all refer to the reference table. The problem I run into is that when I use the pipelined function more than once in a query, I get a "cursor already open" error.
For example:
select xmlelement("genInf", xmlelement("ID", vt.ID),
xmlelement("vID", vt.V_ID),
xmlelement("vNum", vt.V_NUM),
xmlelement("terrDataCode", TERR_CODE.column_value), --data is based on reference table
xmlelement("ABValCode", AB_VAL_CD.column_value), --data is based on reference table
...
from V_TAB vt, table(UTIL.fn_getOvrdValXML(vt.terr_cd_id)) TERR_CODE,
table(UTIL.fn_getOvrdValXML(vt.ab_val_id)) AB_VAL_CD
where vt.ID = in_vID;
This worked fine until I added the second reference to my pipelined function (fn_getOvrdValXML), and I now get the "cursor already open" error.
The pipelined function is very simple:
type t_XMLTab is table of XMLType; --this type is in the spec
....
function fn_getOvrdValXML(in_ID in ovrd.id%type) return t_XMLTab
pipelined is
begin
for r in C_OvrdVal(in_ID) loop
pipe row(r.XMLChunk);
end loop;
return;
end;
The cursor is similarly simple:
cursor C_OvrdVal(in_ID in ovrd.id%type) is
select xmlforest(ID as "valueID", S_VAL as "sValue", U_VAL as "uplValue",
O_VAL as "oValue", O_IND as "oIndicator", F_VAL as "finalValue",
O_RSN as "reason") AS XMLChunk
from ovrd_val xov
where xov.id = in_ID;
Is there a way to work around this, or should I tackle this problem (the problem of having to reference ovrd_val and output an xmlforest in the same way many, many times) differently?
I admit I'm new to pipelined functions, so I'm not 100% sure this is an appropriate use, but it made sense at the time and I'm open to other ideas ;)
If you're using pipelined functions, then you're on 9i minimum, which means you can use the WITH clause:
WITH ovrdValXML AS (
select xov.id,
xmlforest(ID as "valueID", S_VAL as "sValue", U_VAL as "uplValue",
O_VAL as "oValue", O_IND as "oIndicator", F_VAL as "finalValue",
O_RSN as "reason") AS XMLChunk
from ovrd_val xov)
SELECT xmlelement("genInf", xmlelement("ID", vt.ID),
xmlelement("vID", vt.V_ID),
xmlelement("vNum", vt.V_NUM),
xmlelement("terrDataCode", TERR_CODE.column_value), --data is based on reference table
xmlelement("ABValCode", AB_VAL_CD.column_value), --data is based on reference table
...
FROM V_TAB vt
JOIN ovrdValXML terr_code ON terr_code = vt.?
AND terr_code.id = vt.terr_cd_id
JOIN ovrdValXML ab_val_cd ON ab_val_cd = vt.?
AND ab_val_cd.id = vt.ab_val_cd
WHERE vt.id = IN_VID;
Untested, and it's not clear what you're joining to - hence the ? in the join criteria.
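That said, based on the arguments passed to fn_getOvrdValXML in the original query, the join keys appear to be just the id columns, so the FROM clause would presumably (untested assumption) reduce to:
FROM V_TAB vt
JOIN ovrdValXML terr_code ON terr_code.id = vt.terr_cd_id
JOIN ovrdValXML ab_val_cd ON ab_val_cd.id = vt.ab_val_id
WHERE vt.id = IN_VID;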
Have you tried actually closing your cursor inside that pipelined function before piping the row?
OPEN C_OvrdVal(in_ID);
FETCH C_OvrdVal INTO my_chunk_variable;
CLOSE C_OvrdVal;
PIPE ROW (my_chunk_variable);
RETURN;
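For what it's worth, this error usually means the cursor is declared at package level, so the two table() calls in one statement end up sharing (and trying to re-open) the same cursor. A minimal sketch of another workaround, assuming C_OvrdVal currently lives in the package: inline the query in the FOR loop so each invocation gets its own implicit cursor:
function fn_getOvrdValXML(in_ID in ovrd.id%type) return t_XMLTab
pipelined is
begin
-- the implicit cursor is local to this call, so two references to the
-- function in the same query no longer fight over one shared cursor
for r in (select xmlforest(ID as "valueID", S_VAL as "sValue", U_VAL as "uplValue",
O_VAL as "oValue", O_IND as "oIndicator", F_VAL as "finalValue",
O_RSN as "reason") AS XMLChunk
from ovrd_val xov
where xov.id = in_ID) loop
pipe row(r.XMLChunk);
end loop;
return;
end;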
I created two functions that work well as standalone functions.
But when I try to run one of them inside the other, I get an error:
'SQL compilation error: Unsupported subquery type cannot be evaluated'
Can you advise?
Adding the code below:
CASE
WHEN (regexp_substr(ARGUMENTS_JSON,'drives_removed":\\[]') = 'drives_removed":[]') THEN priceperitem(1998, returnitem_add_item(ARGUMENTS_JSON))
WHEN (regexp_substr(ARGUMENTS_JSON,'drives_added":\\[\\]') = 'drives_added":[]') THEN 1
WHEN (regexp_substr(ARGUMENTS_JSON,'from_flavor') = 'from_flavor') THEN 1
WHEN (regexp_substr(ARGUMENTS_JSON,'{}') = '{}') THEN 1
ELSE 'Other'
END as Price_List,
The problem happens with the function 'priceperitem':
if I replace returnitem_add_item with a literal string, it works fine.
Function 1: priceperitem takes a customer number and an item and returns the price for that item from the customer's pricing list.
Function 2: returnitem_add_item parses a string and returns a string.
As an alternative, you may try processing a UDF within a UDF in the following way:
create function x()
returns integer
as
$$
select 1+2
$$
;
set q=x();
create function pi_udf(q integer)
returns integer
as
$$
select q*3
$$
;
select pi_udf($q);
If this does not work out either, the issue might be data specific. Try executing the function with one record, and then add more records to see the difference in behavior. There are certain limitations on using SQL UDFs in Snowflake, hence the 'Unsupported Subquery' error.
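For example, a quick single-record sanity check might look like this (the table name here is a placeholder):
select priceperitem(1998, returnitem_add_item(ARGUMENTS_JSON))
from my_source_table
limit 1;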
I know enough basic SQL to fix some issues or at least understand how it works, but I got confused by this. I figured out that v_in and v_out are some kind of identifiers since they're not tables, but I couldn't find anywhere, here or in the first three pages of Google, what exactly v_in and v_out do.
The most confusing part is:
--some code above
inv_docs.datdoc AS day,
inv_docs.nrdoc AS doc_num,
v_out.article_id AS article_id,
v_out.qty_out AS sent,
--some code below
Where did v_out come from? What is it related to? Also below, when I get to the FROM part:
FROM inv_docs_out v_out
JOIN inv_docs_in v_in ON v_out.lotid = v_in.idlot
Now I have v_out from nowhere, and it's not even connected to inv_docs via a dot (e.g. inv_docs.v_in).
Those are table aliases. For example, if you do
SELECT * FROM table_with_long_name t
you can use t as an alias:
SELECT t.date FROM table_with_long_name t WHERE t.name = 'A Name'
So in your case, the following clause lets you use v_out instead of inv_docs_out and v_in instead of inv_docs_in.
FROM inv_docs_out v_out
JOIN inv_docs_in v_in ON v_out.lotid = v_in.idlot
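Aliases are also what let you reference the same table twice in one query, e.g. in a self-join. A small sketch with a hypothetical employees table:
SELECT e.name, m.name AS manager_name
FROM employees e
JOIN employees m ON e.manager_id = m.id;
Without the aliases e and m there would be no way to tell the two copies of employees apart.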
I asked this question on gis.stackexchange (but since my actual problem seems to be more a DB problem than a GIS one, I am trying my luck here). Here is the question on gis.stackexchange: https://gis.stackexchange.com/questions/256535/postgis-2-3-splitting-multiline-by-points
I have a trigger that loops when inserting a new line, to INSERT the set of split lines into my table, but for some reason I do not get the wanted result: in the example I only get two lines out of three. What am I doing wrong?
Here is the code of the trigger function:
CREATE OR REPLACE FUNCTION public.split_cable()
RETURNS trigger AS
$BODY$
DECLARE compte integer;
DECLARE i integer := 2;
BEGIN
compte = (SELECT count(*) FROM boite WHERE st_intersects(boite.geom, new.geom));
WHILE i < compte LOOP
WITH brs AS (SELECT row_number() over(), boite.geom FROM boite, cable2
WHERE st_intersects(boite.geom, new.geom)
-- here the ORDER BY serve to get the "boite" objects in a specific order
ORDER BY st_linelocatepoint(st_linemerge(new.geom),boite.geom)),
brs2 AS (SELECT st_union(geom) AS geom FROM brs),
cables AS (SELECT (st_dump(st_split(new.geom, brs2.geom))).geom FROM brs2)
INSERT INTO cable2 (geom)
SELECT st_multi(cables.geom) FROM cables WHERE st_startpoint(geom) = (SELECT geom FROM brs WHERE brs.row_number = i);
i = i + 1;
END LOOP;
new.geom = (WITH brs AS (SELECT row_number() over(), boite.geom FROM boite, cable2
WHERE st_intersects(boite.geom, new.geom)
ORDER BY st_linelocatepoint(st_linemerge(new.geom),boite.geom)),
brs2 AS (SELECT st_union(geom) as geom from brs),
cables AS (SELECT (st_dump(st_split(new.geom, brs2.geom))).geom FROM brs2)
SELECT st_multi(cables.geom) FROM cables WHERE st_startpoint(geom) = (SELECT geom FROM brs WHERE brs.row_number = 1));
RETURN new;
END
$BODY$
LANGUAGE plpgsql;
This is a relatively complex query with a lot of moving parts.
My recommendations for debugging it:
Consider splitting the function into smaller functions that are easier to test, and then compose the function from a set of parts you know for sure work as you need them to.
Export intermediate results to an intermediate table, so you can visualise the intermediate result sets easily using a graphical tool and better assess where the data went wrong.
It is possible that the combination of ST_ functions you are using doesn't create the geometries you think it creates; one way to rule this out is to visualise the results of geographical function combinations such as st_dump(st_split(...)).
Perhaps the check st_startpoint(geom) = (SELECT geom FROM brs WHERE brs.row_number = i) could be done as "points near" rather than "exact point"; maybe the points are very near, as in centimeters apart, making them essentially "the same point" without being the exact same point. This is just an assumption though - see the sketch after this list.
Consider sharing more data with StackOverflow, like a small dataset or example, so we can actually run the code! :)
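A minimal sketch of that "points near" idea: inside the loop, the INSERT's exact-match filter could be replaced with ST_DWithin and a tolerance (the tolerance value is an assumption, expressed in the units of your SRID):
INSERT INTO cable2 (geom)
SELECT st_multi(cables.geom)
FROM cables
WHERE ST_DWithin(st_startpoint(cables.geom),
(SELECT geom FROM brs WHERE brs.row_number = i),
0.01); -- e.g. 1 cm for a metric SRID; adjust to your data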
I have a table aps_sections with many integer fields (such as bare_width and worn_width). I also have multiple lookup tables (such as aps_bare_width and aps_worn_width) which contain an ID column and a WEIGHTING column. The ID is recorded in the above columns of the aps_sections table. I need to sum the WEIGHTINGs of the columns in the aps_sections table (whereby the WEIGHTING value comes from the lookup tables). I have successfully managed this using the below SELECT statement.
SELECT aps_sections.ogc_fid,
( aps_bare_width.weighting
+ aps_worn_width.weighting
+ aps_gradient.weighting
+ aps_braiding.weighting
+ aps_pigeon.weighting
+ aps_depth.weighting
+ aps_standing_water.weighting
+ aps_running_water.weighting
+ aps_roughness.weighting
+ aps_surface.weighting
+ aps_dynamic.weighting
+ aps_ex_cond.weighting
+ aps_promotion.weighting
+ aps_level_of_use.weighting) AS calc
FROM row_access.aps_sections,
row_access.aps_bare_width,
row_access.aps_worn_width,
row_access.aps_gradient,
row_access.aps_braiding,
row_access.aps_pigeon,
row_access.aps_depth,
row_access.aps_standing_water,
row_access.aps_running_water,
row_access.aps_roughness,
row_access.aps_surface,
row_access.aps_dynamic,
row_access.aps_ex_cond,
row_access.aps_promotion,
row_access.aps_level_of_use
WHERE aps_bare_width.fid = aps_sections.bare_width
AND aps_worn_width.fid = aps_sections.worn_width
AND aps_gradient.fid = aps_sections.gradient
AND aps_braiding.fid = aps_sections.braiding
AND aps_pigeon.fid = aps_sections.pigeon
AND aps_depth.fid = aps_sections.depth
AND aps_standing_water.fid = aps_sections.standing_water
AND aps_running_water.fid = aps_sections.running_water
AND aps_roughness.fid = aps_sections.roughness
AND aps_surface.fid = aps_sections.surface
AND aps_dynamic.fid = aps_sections.dynamic
AND aps_ex_cond.fid = aps_sections.ex_cond
AND aps_promotion.fid = aps_sections.promotion
AND aps_level_of_use.fid = aps_sections.level_of_use
What I now need to do is create a function that adds the calculated result to the physical_sn_priority column of the aps_sections table. My understanding so far is that my function should look similar to:
CREATE OR REPLACE FUNCTION row_access.aps_weightings()
RETURNS trigger AS
$BODY$
BEGIN
NEW.physical_sn_priority := ;
RETURN NEW;
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
ALTER FUNCTION row_access.aps_weightings()
OWNER TO postgres;
But I don't know what to put after NEW.physical_sn_priority :=. I am a beginner to SQL and to PostgreSQL so I would appreciate any guidance!
While Erwin is (as always) correct that a version would be helpful, I think your answer will be simplest with the SELECT ... INTO construction for PL/pgSQL. (Not the same as SELECT INTO that works like INSERT or CREATE TABLE.)
SELECT ( aps_bare_width.weighting
+ /* obvious deletia */
+ aps_level_of_use.weighting)
INTO NEW.physical_sn_priority
FROM row_access.aps_bare_width,
/* snip */,
row_access.aps_level_of_use
WHERE aps_bare_width.fid = NEW.bare_width
AND /* snip */
aps_level_of_use.fid = NEW.level_of_use;
RETURN NEW;
According to the documentation, the INTO can appear in several other places in the line; I find this simple to understand.
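For completeness, a sketch of how this would sit in the trigger function, following the skeleton from the question (the CREATE TRIGGER statement is an assumption; the comments elide the remaining lookups, which all follow the same pattern):
CREATE OR REPLACE FUNCTION row_access.aps_weightings()
RETURNS trigger AS
$BODY$
BEGIN
SELECT ( aps_bare_width.weighting
+ aps_worn_width.weighting
/* ... the remaining lookup weightings ... */
+ aps_level_of_use.weighting)
INTO NEW.physical_sn_priority
FROM row_access.aps_bare_width,
row_access.aps_worn_width,
/* ... the remaining lookup tables ... */
row_access.aps_level_of_use
WHERE aps_bare_width.fid = NEW.bare_width
AND aps_worn_width.fid = NEW.worn_width
/* ... the remaining join conditions ... */
AND aps_level_of_use.fid = NEW.level_of_use;
RETURN NEW;
END;
$BODY$
LANGUAGE plpgsql;

CREATE TRIGGER trg_aps_weightings
BEFORE INSERT OR UPDATE ON row_access.aps_sections
FOR EACH ROW EXECUTE PROCEDURE row_access.aps_weightings();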
[EDIT]
While this works, on reflection, I think the schema should be revised.
CREATE TYPE weighted_item_t AS ENUM ('bare_width', /* ... */, 'level_of_use');
CREATE TABLE current_weights(item_type weighted_item_t, fid int, current_weight float);
/* key and checks omitted */
/* note, if item_type can be deduced from fid, we don't even need the enum */
CREATE TABLE section_items(section_id int /* FK into aps_sections */,
item_type weighted_item_t, fid int);
Now the queries are going to collapse into simple sums. You need to insert records into section_items before aps_sections, which can be done with deferred constraints in a transaction with or without a stored procedure, depending on how you acquire the data and how much control you have over its format. If (and this is not clear, because it won't change on updates) you want the denormalized total, you can get it with
SELECT SUM(current_weight) INTO NEW.physical_sn_priority
FROM section_items NATURAL JOIN current_weights
WHERE NEW.section_id=section_items.section_id;
This will work out much better if additional weighted characteristics are added at some future date.
Simplified test case
You should present your question with less noise. This shorter query does the job just fine:
SELECT aps_sections.ogc_fid,
( aps_bare_width.weighting
+ aps_worn_width.weighting
+ aps_gradient.weighting) AS calc
FROM row_access.aps_sections,
row_access.aps_bare_width,
row_access.aps_worn_width,
row_access.aps_gradient
WHERE aps_bare_width.fid = aps_sections.bare_width
AND aps_worn_width.fid = aps_sections.worn_width
AND aps_gradient.fid = aps_sections.gradient;
Answer
As #Andrew already advised, the golden way would be to simplify your schema.
If, for some reason, this is not possible, here is an alternative to simplify the addition of many columns (which may or may not be NULL): create this tiny, powerful function:
CREATE OR REPLACE FUNCTION f_sum(ANYARRAY)
RETURNS numeric LANGUAGE sql AS
'SELECT sum(i)::numeric FROM unnest($1) i';
Call:
SELECT f_sum('{2,NULL,7}'::int[]); -- returns 9
It takes a polymorphic array type and returns numeric. Works for any number type that can be summed by sum(). Cast the result if needed.
Simplifies the syntax for summing lots of columns.
NULL values won't break your calculation because the aggregate function sum() ignores those.
In a trigger function this could be used like this:
NEW.physical_sn_priority := f_sum(ARRAY [
COALESCE(NEW.physical_sn_priority, 0) -- add to original value
,(SELECT weighting FROM aps_bare_width x WHERE x.fid = NEW.bare_width)
,(SELECT weighting FROM aps_worn_width x WHERE x.fid = NEW.worn_width)
,(SELECT weighting FROM aps_gradient x WHERE x.fid = NEW.gradient)
...
]);
Since all your joined tables are only good for looking up a single field and are completely independent of each other, you can just as well use individual subqueries. I also went this way because we do not know whether any of the subqueries might return NULL, in which case your original query or Andrew's version would result in no row / no assignment.
Assignment to NEW really only makes sense in a BEFORE trigger on the table aps_sections. This code works BEFORE INSERT (where the row cannot be found in the table yet!) as well as BEFORE UPDATE. You want to use the current values from NEW, not the old row version in the table.
I'm reporting on data from two tables that don't have a sane way to join together. Basically it's inventory in one table, sales in the other, and I'm trying to get the days of inventory on hand by dividing the two. Since I couldn't think of a way to join the tables, I abstracted one query into a database function and called it from the other.
Here is the function definition:
CREATE OR REPLACE FUNCTION avgsales(date, text, text, integer) RETURNS numeric
AS ' SELECT sum(quantity)/(65.0*$4/90.0) as thirty_day_avg
FROM data_867 JOIN drug_info
ON drug_info.dist_ndc = trim(leading ''0'' from data_867.product_ndc)
WHERE
rpt_start_dt>= $1-$4 AND
rpt_end_dt<= $1 AND
drug_info.drug_name = $2 AND
wholesaler_name = $3 '
LANGUAGE SQL;
And here is the report query:
SELECT
(sum("data_852"."za02")/5)/avgsales(date '2010-11-30', 'Semprex D 100ct', 'McKesson', 30) as doh
FROM
"data_852"
JOIN
"drug_info" ON "drug_info"."dist_ndc" = "data_852"."lin03"
JOIN
"wholesaler_info" ON trim("data_852"."isa06") = trim("wholesaler_info"."isa06")
WHERE
(za01 = 'QA'
OR za01 = 'QP'
OR za01 = 'QI')
and "data_852"."xq02">= DATE '2010-11-30'-5
and "data_852"."xq03"<='2010-11-30'
and drug_info.drug_name = 'Semprex D 100ct'
and wholesaler_info.wholesaler_name = 'McKesson'
;
As it is here, it will run in Pentaho Report Designer, but this is hard coded. When I parameterize the values for the WHERE clause, it complains about a syntax error at $1. From looking at the queries that Postgres receives, Pentaho passes the query with its parameters as $1, $2, etc. I think there might be a conflict with the same variable names being used in our function, or maybe it's just a data type problem.
What could be causing this error? Is it possible to use a function like this in the report query? If not, how can I do something similar using the Report Designer?
It is possible.
I am using Postgres 8.4 and RD 3.7
create function ret_p(text)
returns text
as
$$
select $1;
$$ language sql immutable;
Report Designer query:
select * from ret_p(${p_val});
where p_val is the parameter name as defined in RD.
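Applied to your report query, the function call would then look something like this (the parameter names are assumptions - use whatever you defined in RD; the cast guards against the date parameter arriving as text):
SELECT
(sum("data_852"."za02")/5)/avgsales(${p_date}::date, ${p_drug}, ${p_wholesaler}, ${p_days}) as doh
FROM
... -- rest of the query unchanged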