PostgreSQL combine multiple EXECUTE statements - sql

I have a PREPARE statement which is being called multiple times using EXECUTE.
To save round trips to the database, we build one big query like:
PREPARE updreturn AS UPDATE myTable SET col1 = 1 WHERE col2 = $1 RETURNING col3;
EXECUTE updreturn(1);
EXECUTE updreturn(2);
....
EXECUTE updreturn(10);
and send it to the database.
However, I get the result for only the last EXECUTE statement.
Is there a way I could store these results in a temporary table and get all the results?

You can use a transaction and a temporary table, and execute three queries:
Query 1: Start a transaction (how depends on what you are using to connect to the database).
Query 2:
-- Create a temporary table to store the returned values
CREATE TEMPORARY TABLE temp_return (
    col3 text
) ON COMMIT DROP;

-- Prepare the statement
PREPARE updreturn AS
WITH u AS (
    UPDATE myTable SET col1 = 1 WHERE col2 = $1 RETURNING col3
)
INSERT INTO temp_return (col3) SELECT col3 FROM u;
EXECUTE updreturn(1);
EXECUTE updreturn(2);
.....
EXECUTE updreturn(10);
-- Deallocate the Statement
DEALLOCATE updreturn;
-- Actually return the results
SELECT * FROM temp_return;
Query 3: Commit the Transaction (see note at Query 1)
Without any other details about your complete scenario I can't tell you more, but you should get the idea.

I think you need a hack for that:
Create a result table to store your results.
Create a trigger BEFORE UPDATE on myTable.
Inside that trigger, add INSERT INTO result VALUES (col3).
So every time a row of myTable is updated, a value will also be inserted into result.
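A rough sketch of that idea in PostgreSQL, reusing the table and column names from the question (the result table definition and the function/trigger names are assumptions):
-- Hypothetical result table; adjust the column type to match col3
CREATE TABLE result (col3 text);

-- Trigger function: logs the new value of col3 for every updated row
CREATE OR REPLACE FUNCTION log_col3() RETURNS trigger AS $$
BEGIN
    INSERT INTO result (col3) VALUES (NEW.col3);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- Fire it for each updated row of myTable
CREATE TRIGGER trg_log_col3
BEFORE UPDATE ON myTable
FOR EACH ROW EXECUTE PROCEDURE log_col3();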

Related

How to combine 2 results from SQL procedure

I wrote a SQL query; it is 300+ lines, and I made it into a procedure.
I want to run this procedure twice with different parameters and then see all the results in one table.
For example:
exec sp_xxxxx 4652,'2022-02-07 00:00:00.000',1
// Returns 2 columns, number of rows can vary
exec sp_xxxxx 4652,'2022-02-14 00:00:00.000',1
// Returns 2 columns, number of rows can vary
When I run these together, I hope to get a result of 4 columns:
// 4 columns, number of rows can vary
I tried OPENROWSET, but it is blocked on our SQL Server.
How can I do this? I would be very happy if you can help.
There's not enough information to provide a demonstrable solution, but the approach should be:
Create temp table #T1(col1, col2)
Create temp table #T2(col1, col2)
Insert into #T1(col1, col2) exec proc
Insert into #T2(col1, col2) exec proc
select t1.col1, t1.col2, t2.col1, t2.col2
from #T1 t1 inner/left/full join #T2 t2 on <criteria>
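A minimal sketch of that approach, assuming (purely for illustration) that each run returns the columns (ID int, Amount decimal) and that the two result sets can be joined on ID; the join type depends on your data:
CREATE TABLE #T1 (ID int, Amount decimal(8,2));
CREATE TABLE #T2 (ID int, Amount decimal(8,2));

-- one temp table per run of the procedure
INSERT INTO #T1 (ID, Amount) EXEC sp_xxxxx 4652, '2022-02-07 00:00:00.000', 1;
INSERT INTO #T2 (ID, Amount) EXEC sp_xxxxx 4652, '2022-02-14 00:00:00.000', 1;

-- side-by-side result: 4 columns
SELECT t1.ID, t1.Amount, t2.ID, t2.Amount
FROM #T1 t1
FULL JOIN #T2 t2 ON t2.ID = t1.ID;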
Also note that prefixing procedures with "sp_" is not recommended; this prefix is reserved by Microsoft for system stored procedures. Choose a different prefix - or no prefix.
Start by creating a table type that matches the output of your procedure.
For example:
CREATE TYPE XxxxxTblType AS TABLE(
    Col1 varchar(10) not null,
    Col2 decimal(8,2) not null
);
This table type could also be used by your procedure.
Then use a variable of that table type to collect the results from both procedure calls, and create a temporary table from that table variable.
declare @Xxxxx XxxxxTblType;

insert into @Xxxxx exec sp_xxxxx 4652,'2022-02-07 00:00:00.000',1;
insert into @Xxxxx exec sp_xxxxx 4652,'2022-02-14 00:00:00.000',1;

select * into #tmpXxxxx from @Xxxxx;
Now you can query the temporary table.
select * from #tmpXxxxx;

How to find affected rows, after an update in SQL

I have a table and a stored procedure. I use the stored procedure to update the table. There are some cursors in the stored procedure, and the SP updates the table. I want to get the rows updated by the stored procedure. I don't want the number of updated rows, I want the updated rows themselves.
I created a temporary table to insert the updated rows into, but I can't get the updated rows. How can I get them?
I am using SQL Server.
If your RDBMS supports it, you can use update returning like this:
sql> update your_table
     set your_field = 'my new value'
     where other_field = 'your condition'
     returning *; -- this returning will return a result set with the modified rows;
                  -- you could also specify a list of columns here if you don't
                  -- want all fields returned
Using a returning clause should work with PostgreSQL, Oracle (where it is RETURNING ... INTO), and others.
If you are using SQL Server (as you've just stated in your question update), you can use output:
sql> update your_table
     set your_field = 'my new value'
     output your_list_of_fields -- a comma-separated list of the columns you want
                                -- to return, e.g. inserted.col1, deleted.col1
     where other_field = 'your condition';
You could use the INSERTED and DELETED virtual or "pseudo" tables which are created for this purpose. In UPDATE statements the virtual tables are accessible using the OUTPUT clause. Here's an example:
drop table if exists #t;
go
create table #t(col_x char(1));
insert #t values('a');

update #t
set col_x='b'
output inserted.col_x as new_val,
       deleted.col_x as old_val;

new_val old_val
------- -------
b       a

Is it possible to pass variable tables through procedures in SQL DEV?

set serveroutput on;

CREATE OR REPLACE PROCEDURE test_migrate
(
    --v_into_table dba_tables.schema#dbprd%TYPE,
    --v_from_table dba_tables.table#dbprd%TYPE,
    v_gid IN NUMBER
)
IS
BEGIN
    select * INTO fx.T_RX_TXN_PLAN
    FROM fx.T_RX_TXN_PLAN#dbprd
    WHERE gid = v_gid;
    --and schema = v_into_table
    --and table = v_from_table;
    COMMIT;
END;
I thought that SELECT * INTO would create a table in the new database from #dbprd. However, the primary issue is just being able to set these as variables; the goal is to EXEC(INTO_Table, FROM_Table, V_GID) to run the above code.
Error(9,19): PLS-00201: identifier 'fx.T_RX_TXN_PLAN' must be declared
Error(10,5): PL/SQL: ORA-00904: : invalid identifier
If your goal is to copy data from a table in "another" database into a table that resides in "this" database (via the database link you used), then it is INSERT INTO ... SELECT, not SELECT INTO.
For example:
CREATE OR REPLACE PROCEDURE test_migrate (v_gid in number)
IS
BEGIN
    insert into fx.t_rx_txn_plan (col1, col2, ..., coln)
    select col1, col2, ..., coln
    from fx.t_rx_txn_plan#dbprd
    where gid = v_gid;
END;
The last sentence you wrote sounds like you want to make it dynamic, i.e. pass the table names and v_gid (whatever that might be; it looks like all tables involved in this process have it). That isn't a simple task.
If you plan to use insert into ... select * from ..., that's OK, but not for a production system. What if someone alters a table and adds (or drops) a column or two? Your procedure will then fail. The correct way is to enumerate all the columns involved, but that requires fetching data from user_tab_columns (or the all_ or dba_ version of it), which complicates it even more.
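For what it's worth, a rough sketch of the dynamic select * shortcut (the procedure name copy_gid_rows and the p_table parameter are made up; the table name cannot be a bind variable, so it is concatenated, while v_gid can be bound because the statement is DML):
CREATE OR REPLACE PROCEDURE copy_gid_rows (p_table IN VARCHAR2, v_gid IN NUMBER)
IS
BEGIN
    -- select * only works while both tables keep identical column lists;
    -- in real code, validate p_table (e.g. with dbms_assert) before concatenating
    EXECUTE IMMEDIATE
        'insert into fx.' || p_table ||
        ' select * from fx.' || p_table || '#dbprd where gid = :g'
        USING v_gid;
END;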
Therefore, if you want to move data from here to there, why don't you do it using Data Pump Export & Import? Those utilities are designed for such a purpose, and will do the job better than your procedure. At least, I think so.
It sounds like you want to return a row. If so, add an OUT parameter to the procedure:
CREATE OR REPLACE PROCEDURE test_migrate(
    --v_into_table dba_tables.schema#dbprd%TYPE,
    --v_from_table dba_tables.table#dbprd%TYPE,
    i_gid IN NUMBER,
    o_rx_txn_plan OUT fx.T_RX_TXN_PLAN#dbprd%rowtype
) IS
BEGIN
    SELECT *
      INTO o_rx_txn_plan
      FROM fx.T_RX_TXN_PLAN#dbprd
     WHERE gid = i_gid;
    --and schema = v_into_table
    --and table = v_from_table;
END;
and call the procedure like this:
declare
    v_rx_txn_plan fx.T_RX_TXN_PLAN#dbprd%rowtype;
    v_gid         number := 5345;
begin
    test_migrate(i_gid => v_gid, o_rx_txn_plan => v_rx_txn_plan);
    dbms_output.put_line(v_rx_txn_plan.col1);
    dbms_output.put_line(v_rx_txn_plan.col2);
end;
to print out the returned values for some columns of the table. To create a new table from this, it is not the SELECT * INTO ... syntax but
CREATE TABLE T_RX_TXN_PLAN AS
SELECT *
FROM fx.T_RX_TXN_PLAN#dbprd
WHERE ...
that is used.
Neither of these cases needs to issue a COMMIT, since no DML exists within them.
To create a table you must use the CREATE TABLE statement, and to use any DDL statement in PL/SQL you have to use EXECUTE IMMEDIATE:
CREATE OR REPLACE PROCEDURE test_migrate
(
    v_gid IN NUMBER
)
IS
BEGIN
    -- bind variables are not allowed in DDL, so the gid value is concatenated in
    EXECUTE IMMEDIATE 'CREATE TABLE fx.T_RX_TXN_PLAN AS
                       SELECT *
                       FROM fx.T_RX_TXN_PLAN#dbprd
                       WHERE gid = ' || v_gid;
END;

How to avoid launching a trigger for every row of a prepared statement?

I have a FOR EACH STATEMENT trigger on table1:
CREATE TRIGGER trg_table1 AFTER INSERT OR DELETE OR UPDATE OF field1 ON table1
FOR EACH STATEMENT EXECUTE PROCEDURE my_trigger_procedure();
I have to update rows of that table as follows:
UPDATE table1 SET field1='abc' WHERE field5=1;
UPDATE table1 SET field1='def' WHERE field5=2;
UPDATE table1 SET field1='ghi' WHERE field5=3;
...
Names and values are simplified for clarity.
Each UPDATE is a separate statement, so the trigger is fired once for every one of them.
To avoid that, I made a prepared statement:
PREPARE my_prep_stmnt AS
UPDATE table1
SET field1 = $1
WHERE field5 = $2;

EXECUTE my_prep_stmnt ('abc', 1);
EXECUTE my_prep_stmnt ('def', 2);
EXECUTE my_prep_stmnt ('ghi', 3);
I was expecting the trigger to be fired only once after the whole batch was done, but no, the trigger is fired for every EXECUTE.
The problem is that the trigger procedure takes time to execute.
Any ideas to work around this?
Provide multiple rows in a subquery with a VALUES expression to make it a single UPDATE statement:
UPDATE table1 t
SET    field1 = u.field1
FROM  (
   VALUES
      ('abc'::text, 1)  -- cast string literal in row 1 to make the type unambiguous
    , ('def', 2)
    , ('ghi', 3)
   ) u (field1, field5)
WHERE  t.field5 = u.field5;
You need to cast to a matching type, since the VALUES expression stands alone and cannot derive column types from the UPDATE like it otherwise would. Unquoted numeric literals default to integer (bigint / numeric if too big).
You can still use a prepared statement (with many parameters); a sketch follows the links below. For a large number of rows, rather switch to bulk loading into a temporary staging table and updating from there:
How to update selected rows with values from a CSV file in Postgres?
More options:
How to UPDATE table from csv file?
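A minimal sketch of the prepared-statement variant mentioned above, reusing the names from the question with a fixed batch size of three rows per call (the statement name my_batch_upd and the batch size are purely illustrative):
PREPARE my_batch_upd (text, int, text, int, text, int) AS
UPDATE table1 t
SET    field1 = u.field1
FROM  (VALUES ($1, $2), ($3, $4), ($5, $6)) u (field1, field5)
WHERE  t.field5 = u.field5;

-- one statement, so a FOR EACH STATEMENT trigger fires only once per batch
EXECUTE my_batch_upd ('abc', 1, 'def', 2, 'ghi', 3);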

Sql update statement, any way of knowing what it actually did?

Typically, I test an update by running a SELECT with the intended WHERE clause and, after verifying that it returns what I expect, copying the WHERE clause into the UPDATE statement and executing it. But is there any way of getting the statement to return what the update actually did, besides '4 rows updated'?
Sure, take a look at the OUTPUT clause of T-SQL:
http://msdn.microsoft.com/en-us/library/ms177564.aspx
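For example, a minimal sketch (the table and column names are made up for illustration):
UPDATE Customer
SET    AcctBalance = 0
OUTPUT deleted.ID,
       deleted.AcctBalance AS OldBalance,
       inserted.AcctBalance AS NewBalance
WHERE  AcctBalance > 5000;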
You could load your records into a temp table/variable in SQL Server:
DECLARE @Temp TABLE(ID INT)

INSERT INTO @Temp (ID)
SELECT ID
FROM Customer
WHERE AcctBalance > 5000

--inspect as needed

UPDATE Customer
SET AcctBalance = 0
WHERE ID IN (SELECT ID FROM @Temp)
That depends on the server and the library you use. In PHP, PDO's exec() returns the number of rows affected by a DELETE or UPDATE statement.