PostgreSQL Nested FOREACH Int Array - sql

What I am trying to do is declare two arrays.
One is for invoices: it contains an invoice id and that invoice's outstanding debt (for example [6238,236.0000]).
The other one is for the client's previously received collections (for example, a client who had 3 collections: [62.000,40.000,10.000]).
My tables look like this.
Invoice Payments (contains all payments for a specific invoice and client)
Total numeric(18,4)
InvoiceId integer
Description character varying
ClientId integer
... plus some other columns that aren't important here.
Invoice
InvoiceId integer
grandtotal numeric(18,4)
ClientId integer
InvoiceDate date
... plus some other columns that aren't important here.
Client Movements
ClientId integer
CreatedDate date
Will numeric(18,4)
... plus some other columns that aren't important here.
I tried to create a function with nested FOREACH statements in PostgreSQL. Eventually this is going to be a trigger function fired after insert, update and delete.
For testing purposes I wrote it as a plain function for now, so I pass _clientid and _clientmoveid in manually.
CREATE FUNCTION invoice_pay(_clientid int,_clientmoveid int) RETURNS int[] AS $$
DECLARE _invoices varchar[] := (select array_agg(DISTINCT '[' || inv.invoiceid || ',' || CAST (inv.grandtotal AS int) || ']') from invoices inv
left join invoicepayments as ip on inv.invoiceid = ip.invoiceid where inv.clientid = _clientid group by inv.invoiceid,inv.grandtotal order by inv.invoicedate);
DECLARE _collections varchar[] := (SELECT array_agg(DISTINCT '[' || clm.will || ']') from clientmovements clm where clm.clientid = _clientid
group by clm.createddate order by clm.createddate desc);
DECLARE _total numeric(18,4);
DECLARE _collectionTotal int;
DECLARE _invoicesArray int[];
BEGIN
/*first delete all the previous payments*/
DELETE FROM InvoicePayments where clientid = _clientid;
/*loop through all invoices and those remaining payments */
/*first array contains invoiceid and invoicegrandtotal [6232,246.0000]*/
FOREACH _invoicesArray SLICE 1 IN ARRAY _invoices
LOOP
/*loop throught all collections that client had. The second array contains only collections from client. [[15],[20],[40]]*/
FOREACH _collectionTotal IN ARRAY _collections
LOOP
/*the total calculation is the if collection is bigger than the invoice grandtotal, close invoice as paid. Other than grandtotal minus collection*/
_total := (case when _collectionsTotal > _invoicesArray[2] then _collectionsTotal else _invoicesArray[2] - collectionsTotal end);
insert into invoicepayments(total,description,clientid,invoiceid)
values(_total,'test',_clientid,_invoicesArray[1]);
END LOOP;
END LOOP;
RETURN _invoices;
END;
$$ LANGUAGE plpgsql;
When I run that function, I get an error like
invalid input syntax for integer: "{6328,236.0000}"
Probably my array's first or second element does not behave the way I think. I access them like _invoicesArray[1]; I want to get 6328 out of _invoicesArray[1], but it gives me that error instead. What should I do for the nested FOREACH, and what am I doing wrong?
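For what it's worth, the error comes from storing bracketed strings like '[6238,236.0000]' in a varchar[] and then reading them back as integers. A minimal, untested sketch of the intended two-dimensional array handling, assuming only the invoices table from the question (the function name is made up and the payment logic is omitted):

```sql
CREATE OR REPLACE FUNCTION invoice_pay_sketch(_clientid int) RETURNS void AS $$
DECLARE
    -- a real 2-D numeric array: each slice is {invoiceid, grandtotal}
    _invoices numeric[] := (
        SELECT array_agg(ARRAY[inv.invoiceid::numeric, inv.grandtotal]
                         ORDER BY inv.invoicedate)
        FROM invoices inv
        WHERE inv.clientid = _clientid);
    _invoice numeric[];  -- receives one {invoiceid, grandtotal} pair per iteration
BEGIN
    FOREACH _invoice SLICE 1 IN ARRAY _invoices
    LOOP
        -- _invoice[1] is the invoice id, _invoice[2] the grand total
        RAISE NOTICE 'invoice % owes %', _invoice[1]::int, _invoice[2];
    END LOOP;
END;
$$ LANGUAGE plpgsql;
```

With SLICE 1 each iteration yields one inner array, so _invoice[1] and _invoice[2] index into a genuine numeric pair instead of a string.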

Related

Returned my cursor in my Oracle PL/SQL function, but not all rows are being returned. Can you only return 1 row in an Oracle PL/SQL function?

Here is my code. I have 2 rows that share the same name, reservation date, and hotel id. I don't understand why, when I execute this function, it gives me the error "exact fetch returns more than requested number of rows" instead of returning both rows from my Reservation table.
I have returned the cursor correctly, I assume, so it should work?
CREATE OR REPLACE FUNCTION findres(cname IN reservation.cust_name%type,
hotelID IN reservation.hotel_id%type,
resdate IN reservation.reserve_date%type)
RETURN reservation.reserve_id%type is
resid reservation.reserve_id%type;
BEGIN
SELECT reserve_id
INTO resid
FROM reservation
WHERE Cust_name = cname
AND Hotel_id = hotelID
AND reserve_date = resdate;
RETURN resid;
EXCEPTION
WHEN no_data_found THEN
dbms_output.put_line('No reservation found');
END;
/
From the documentation, the definition of the into_clause states: the SELECT INTO statement retrieves one or more columns from a single row and stores them in either one or more scalar variables or one record variable.
So the current SELECT statement should be guarded against the case of returning more than one row. The following queries might be alternatives to your current SELECT statement:
SELECT reserve_id
INTO resid
FROM
( SELECT r.*,
ROW_NUMBER() OVER (ORDER BY 0) AS rn
FROM reservation
WHERE Cust_name = cname
AND Hotel_id = hotelID
AND reserve_date = resdate
)
WHERE rn = 1;
If DB version is 12+, then use
SELECT reserve_id
INTO resid
FROM reservation
WHERE Cust_name = cname
AND Hotel_id = hotelID
AND reserve_date = resdate
FETCH NEXT 1 ROW ONLY;
without a subquery, in order to return one row only, considering that you only get duplicates for those columns and have no ordering rules for the data. With these queries there is no need to handle the no_data_found or too_many_rows exceptions.
Update: if your aim is to return all the rows at once, even when there is more than one, then you can use a SYS_REFCURSOR such as
CREATE OR REPLACE FUNCTION findres(cname reservation.cust_name%type,
hotelID reservation.hotel_id%type,
resdate reservation.reserve_date%type)
RETURN SYS_REFCURSOR IS
recordset SYS_REFCURSOR;
BEGIN
OPEN recordset FOR
SELECT reserve_id
FROM reservation
WHERE Cust_name = cname
AND Hotel_id = hotelID
AND reserve_date = resdate;
RETURN recordset;
END;
/
and call it in such a way that
VAR v_rc REFCURSOR
EXEC :v_rc := findres('Avoras',111,date'2020-12-06');
PRINT v_rc
from the SQL Developer's console.
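Outside SQL Developer, the same REF CURSOR can be consumed from an anonymous PL/SQL block, for example (sample values taken from the call above):

```sql
DECLARE
    rc    SYS_REFCURSOR;
    resid reservation.reserve_id%TYPE;
BEGIN
    rc := findres('Avoras', 111, DATE '2020-12-06');
    LOOP
        FETCH rc INTO resid;          -- fetch one reserve_id per iteration
        EXIT WHEN rc%NOTFOUND;        -- stop when the cursor is exhausted
        DBMS_OUTPUT.PUT_LINE('reserve_id = ' || resid);
    END LOOP;
    CLOSE rc;
END;
/
```

This prints every matching reservation id, which is the point of returning a cursor instead of a single scalar.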

How to retrieve the number of copies of a book PLSQL

Hi, I'm trying to write a procedure that retrieves the number of copies of a specific book from a table I have created.
create or replace PROCEDURE book_count(c_isbn IN book_copies.isbn%TYPE)
IS
total number;
BEGIN
SELECT COUNT (*)
FROM book_copies
WHERE isbn = c_isbn;
DBMS_OUTPUT.PUT_LINE('Book count: ' || total);
END book_count;
The error I'm having is that an INTO clause is expected in the SELECT statement, but I can't seem to get it done. Any help is appreciated!
create or replace PROCEDURE book_count(c_isbn IN book_copies.isbn%TYPE)
IS
total number;
BEGIN
SELECT COUNT(*) INTO TOTAL
FROM book_copies
WHERE isbn = c_isbn;
DBMS_OUTPUT.PUT_LINE('Book count: ' || total);
END book_count;
You can't just use plain SQL in a BEGIN ... END block. You need to put as many variables into the INTO clause, immediately after the select list, as there are columns in your SELECT.
You either need to do
SELECT COUNT (*)
into total
FROM book_copies
WHERE isbn = c_isbn;
or
for i in (
SELECT COUNT (*) count
FROM book_copies
WHERE isbn = c_isbn
)
loop
total := i.count;
end loop;
Keep in mind that the first form can, in general, throw the no_data_found and too_many_rows exceptions (although a plain COUNT(*) with no GROUP BY always returns exactly one row), while with the loop you would have to notice such a "mistake" manually.
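To actually see the DBMS_OUTPUT line when testing the procedure, server output has to be enabled first; something like the following (the ISBN value is made up):

```sql
SET SERVEROUTPUT ON

BEGIN
    book_count('978-1-56619-909-4');  -- hypothetical ISBN
END;
/
```

Without SET SERVEROUTPUT ON the procedure runs fine but the "Book count: ..." line is silently discarded.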

Select multiple rows as array

I have a table where two people may have the same name, but separate IDs. I am given the name, and need to request their IDs.
When I use a command like:
SELECT id_num INTO cust_id FROM Customers WHERE name=CName;
If I use this command on the command line (psql), it returns 2 results (for example).
But when I use it as part of an SQL script (PL/pgSQL), it always just grabs the first instance.
I tried selecting into cust_id[], but that produced an error. So what is the proper way to select ALL results and pump them into an array, or is there another easy way to use them?
In the DECLARE section:
DECLARE id_nums bigint[];
and in the body:
id_nums := ARRAY(select id_num from Customers WHERE name = CName);
If you prefer a loop, use
DECLARE cust_id bigint;
FOR cust_id IN select id_num from Customers WHERE name = CName LOOP
your code here
END LOOP;
Read the PL/pgSQL control structures chapter in the PostgreSQL 9.1 docs.
To put data from individual rows into an array, use an array constructor:
DECLARE id_nums int[]; -- assuming id_num is of type int
id_nums := ARRAY (SELECT id_num FROM customers WHERE name = cname);
Or the aggregate function array_agg():
id_nums := (SELECT array_agg(id_num) FROM customers WHERE name = cname);
Or use SELECT INTO for the assignment:
SELECT INTO id_nums
ARRAY (SELECT id_num FROM customers WHERE name = cname);
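Wrapped up as a complete function, the array-constructor approach against the question's table might look like this (the function name and return type are assumptions; an untested sketch):

```sql
CREATE OR REPLACE FUNCTION customer_ids(cname text) RETURNS bigint[] AS $$
DECLARE
    id_nums bigint[];
BEGIN
    -- collect every matching id into a single array ('{}' when nothing matches)
    id_nums := ARRAY(SELECT id_num FROM customers WHERE name = cname);
    RETURN id_nums;
END;
$$ LANGUAGE plpgsql;
```

The caller then gets all matching ids in one value instead of just the first row.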

How to structure SQL - select first X rows for each value of a column?

I have a table with the following type of data:
create table store (
n_id serial not null primary key,
n_place_id integer not null references place(n_id),
dt_modified timestamp not null,
t_tag varchar(4),
n_status integer not null default 0
...
(about 50 more fields)
);
There are indices on n_id, n_place_id, dt_modified and all other fields used in the query below.
This table contains about 100,000 rows at present, but may grow to closer to a million or even more. Yet, for now let's assume we're staying at around the 100K mark.
I'm trying to select rows from this table where one of two conditions is met:
All rows where n_place_id is in a specific subset (this part is easy); or
For all other n_place_id values the first ten rows sorted by dt_modified (this is where it becomes more complicated).
Doing it in one SQL statement seems too painful, so I'm happy with a stored function for this. I have defined my function thus:
create or replace function api2.fn_api_mobile_objects()
returns setof store as
$body$
declare
maxres_free integer := 10;
resulter store%rowtype;
mcnt integer := 0;
previd integer := 0;
begin
create temporary table paid on commit drop as
select n_place_id from payments where t_reference is not null and now()::date between dt_paid and dt_valid;
for resulter in
select * from store where n_status > 0 and t_tag is not null order by n_place_id, dt_modified desc
loop
if resulter.n_place_id in (select n_place_id from paid) then
return next resulter;
else
if previd <> resulter.n_place_id then
mcnt := 0;
previd := resulter.n_place_id;
end if;
if mcnt < maxres_free then
return next resulter;
mcnt := mcnt + 1;
end if;
end if;
end loop;
end;$body$
language 'plpgsql' volatile;
The problem is that
select * from api2.fn_api_mobile_objects()
takes about 6-7 seconds to execute. Considering that this resultset then needs to be joined to 3 other tables, with a bunch of additional conditions and further sorting applied, this is clearly unacceptable.
Well, I still do need to get this data, so either I am missing something in the function or I need to rethink the entire algorithm. Either way, I need help with this.
CREATE TABLE store
( n_id serial not null primary key
, n_place_id integer not null -- references place(n_id)
, dt_modified timestamp not null
, t_tag varchar(4)
, n_status integer not null default 0
);
INSERT INTO store(n_place_id,dt_modified,n_status)
SELECT n,d,n%4
FROM generate_series(1,100) n
, generate_series('2012-01-01'::date ,'2012-10-01'::date, '1 day'::interval ) d
;
WITH zzz AS (
SELECT n_id AS n_id
, rank() OVER (partition BY n_place_id ORDER BY dt_modified) AS rnk
FROM store
)
SELECT st.*
FROM store st
JOIN zzz ON zzz.n_id = st.n_id
WHERE st.n_place_id IN ( 1,22,333)
OR zzz.rnk <=10
;
Update: here is the same self-join construct as a subquery (CTEs are treated a bit differently by the planner):
SELECT st.*
FROM store st
JOIN ( SELECT sx.n_id AS n_id
, rank() OVER (partition BY sx.n_place_id ORDER BY sx.dt_modified) AS zrnk
FROM store sx
) xxx ON xxx.n_id = st.n_id
WHERE st.n_place_id IN ( 1,22,333)
OR xxx.zrnk <=10
;
After much struggle, I managed to get the stored function to return the results in just over 1 second (a huge improvement). The function now looks like this (I added the additional condition, which didn't affect the performance much):
create or replace function api2.fn_api_mobile_objects(t_search varchar)
returns setof store as
$body$
declare
maxres_free integer := 10;
resulter store%rowtype;
mid integer := 0;
begin
create temporary table paid on commit drop as
select n_place_id from payments where t_reference is not null and now()::date between dt_paid and dt_valid
union
select n_place_id from store where n_status > 0 and t_tag is not null group by n_place_id having count(1) <= 10;
for resulter in
select * from store
where n_status > 0 and t_tag is not null
and (t_name ~* t_search or t_description ~* t_search)
and n_place_id in (select n_place_id from paid)
loop
return next resulter;
end loop;
for mid in
select distinct n_place_id from store where n_place_id not in (select n_place_id from paid)
loop
for resulter in
select * from store where n_status > 0 and t_tag is not null and n_place_id = mid order by dt_modified desc limit maxres_free
loop
return next resulter;
end loop;
end loop;
end;$body$
language 'plpgsql' volatile;
This runs in just over 1 second on my local machine and in about 0.8-1.0 seconds on live. For my purpose, this is good enough, although I am not sure what will happen as the amount of data grows.
As a simple suggestion, the way I like to do this sort of troubleshooting is to construct a query that gets me most of the way there, optimize it properly, and then add the necessary PL/pgSQL around it. The major advantage of this approach is that you can optimize based on query plans.
Also, if you aren't dealing with a lot of rows, array_agg() and unnest() are your friends, as they allow you (on Pg 8.4 and later!) to dispense with the temporary-table management overhead and simply construct and query an array of tuples in memory as a relation. It may also perform better if you are just hitting an array in memory instead of a temp table (less planning overhead and less query overhead too).
Also, in your updated query I would look at replacing that final loop with a subquery or a join, allowing the planner to decide when to do a nested-loop lookup and when to try to find a better way.
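As a sketch of that last suggestion, the final per-place loop could be folded into a single window-function query (untested, reusing the temporary paid table and maxres_free = 10 from the function above):

```sql
SELECT s.*
FROM (
    SELECT st.*,
           row_number() OVER (PARTITION BY st.n_place_id
                              ORDER BY st.dt_modified DESC) AS rn
    FROM store st
    WHERE st.n_status > 0
      AND st.t_tag IS NOT NULL
      AND st.n_place_id NOT IN (SELECT n_place_id FROM paid)
) s
WHERE s.rn <= 10;  -- maxres_free
```

Expressed this way, the planner can pick the join strategy itself instead of being forced into one index lookup per place id.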

Use query for SQL alias OR join column names to row values?

I'm working with data in PostgreSQL that uses a data dictionary table to provide descriptions for the column (variable) names of other tables in the dataset. For example:
Table 1:
a00600 | a00900
-------+-------
row 1 | row 1
row 2 | row 2
Data Dictionary (Key) columns:
Variable | Description
---------+------------
a00600 | Total population
a00900 | Zipcode
For reporting purposes, how do I write SQL that performs the following dynamically (without specifying each column name)?
SELECT 'a00600' AS (SELECT Key.Description
WHERE Key.Variable = 'a00600')
FROM Table 1;
I realize there's likely a better way to frame this question/problem, and I'm open to any ideas for accomplishing what I need.
You need to use dynamic SQL with a procedural-language function, usually PL/pgSQL, using EXECUTE inside it.
The tricky part is to define the return type at creation time.
I have compiled a number of solutions in this related answer.
There are lots of related answers on SO already. Search for combinations of terms like [plpgsql] EXECUTE RETURN QUERY [dynamic-sql] quote_ident.
Your approach is commonly frowned upon among database designers.
My personal opinion: I wouldn't go that route. I always use basic, descriptive names. You can always add more décor in your application if needed.
Another way to get the descriptions instead of the actual column names would be to create views (one for every table). This can be automated by generating the views automatically. It looks rather clumsy, but it has the huge advantage that for "complex" queries the resulting query plans will be exactly the same as for the original column names. (Functions joined into complex queries perform badly: the optimiser cannot take them apart, so the resulting behaviour is equivalent to "row at a time".)
Example:
-- tmp schema is only for testing
DROP SCHEMA IF EXISTS tmp CASCADE;
CREATE SCHEMA tmp ;
SET search_path=tmp;
CREATE TABLE thedata
( a00600 varchar
, a00900 varchar
);
INSERT INTO thedata(a00600 , a00900) VALUES
('key1', 'data1')
,('key2', 'data2');
CREATE TABLE thedict
( variable varchar
, description varchar
);
INSERT INTO thedict(variable , description) VALUES
('a00600' , 'Total population')
,('a00900' , 'Zipcode' );
CREATE OR REPLACE FUNCTION create_view_definition(zname varchar)
RETURNS varchar AS
$BODY$
DECLARE
thestring varchar;
therecord RECORD;
iter INTEGER ;
thecurs cursor for
SELECT co.attname AS zname, d.description AS zdesc
FROM pg_class ct
JOIN pg_namespace cs ON cs.oid=ct.relnamespace
JOIN pg_attribute co ON co.attrelid = ct.oid AND co.attnum > 0
LEFT JOIN thedict d ON d.variable = co.attname
WHERE ct.relname = 'thedata'
AND cs.nspname = 'tmp'
;
BEGIN
thestring = '' ;
iter = 0;
FOR therecord IN thecurs LOOP
IF (iter = 0) THEN
thestring = 'CREATE VIEW ' || quote_ident('v'||zname) || ' AS ( SELECT ' ;
ELSE
thestring = thestring || ', ';
END IF;
iter=iter+1;
thestring = thestring || quote_ident(therecord.zname);
IF (therecord.zdesc IS NOT NULL) THEN
thestring = thestring || ' AS ' || quote_ident(therecord.zdesc);
END IF;
END LOOP;
IF (iter > 0) THEN
thestring = thestring || ' FROM ' || quote_ident(zname) || ' )' ;
END IF;
RETURN thestring;
END;
$BODY$ LANGUAGE plpgsql;
CREATE OR REPLACE FUNCTION execute_view_definition(zname varchar)
RETURNS INTEGER AS
$BODY$
DECLARE
meat varchar;
BEGIN
meat = create_view_definition(zname);
EXECUTE meat;
RETURN 0;
END;
$BODY$ LANGUAGE plpgsql;
SELECT create_view_definition('thedata');
SELECT execute_view_definition('thedata');
SELECT * FROM vthedata;
RESULT:
CREATE FUNCTION
CREATE FUNCTION
create_view_definition
---------------------------------------------------------------------------------------------------
CREATE VIEW vthedata AS ( SELECT a00600 AS "Total population", a00900 AS "Zipcode" FROM thedata )
(1 row)
execute_view_definition
-------------------------
0
(1 row)
Total population | Zipcode
------------------+---------
key1 | data1
key2 | data2
(2 rows)
Please note this is only an example. If it were for real, I would at least put the generated views into a separate schema, to avoid name collisions and pollution of the original schema.