only one AS items needed for language "plpgsql"; - sql

I am trying to run a COPY command inside a stored procedure. This COPY command copies from AWS S3 to a table in AWS Redshift.
This is the COPY command:
copy schema1_ghsheet.ghseet_temp
from
's3://root2/rawfiles/'
iam_role 'arn:aws:iam::743:role/redshift'
csv DELIMITER ',' IGNOREHEADER 1 TRUNCATECOLUMNS;
I am trying to add it to a stored procedure, which now looks like this.
Here I am trying to create a temp table that holds all the data from S3, loaded into the temp table by the COPY command.
CREATE OR REPLACE PROCEDURE proc_test() LANGUAGE plpgsql
AS
'
BEGIN
drop table if exists
schema1_ghsheet.ghseet_temp
;
create table
schema1_ghsheet.ghseet_temp(emp_id int, emp_name varchar(100),hrly_rate int,mod_timestamp timestamp)
;
copy schema1_ghsheet.ghseet_temp
from
''''s3://root2/rawfiles/''''
iam_role ''''arn:aws:iam::743:role/redshift''''
csv DELIMITER ',' IGNOREHEADER 1 TRUNCATECOLUMNS;
drop table if exists
schema1_ghsheet.ghseet_main
;
create table
schema1_ghsheet.ghseet_main
as
select h1.emp_id,h1.emp_name,h1.hrly_rate,h1.mod_timestamp
from schema1_ghsheet.ghseet_hstry h1
inner join (
select emp_id ,emp_name , max(mod_timestamp ) mod_timestamp
from schema1_ghsheet.ghseet_hstry
group by 1,2
) h2
on h1.emp_id=h2.emp_id
and h1.mod_timestamp=h2.mod_timestamp
group by 1,2,3,4
;
END;
'
But this throws the error:
only one AS items needed for language "plpgsql";
So, how do I add a COPY command inside a stored procedure, or do I need to call the COPY command separately?

COPY is an allowed DML statement within a stored procedure.
It looks like the error refers to the AS keyword. As far as I know, the syntax in the documentation requires the script after AS to be wrapped in $$, and the LANGUAGE plpgsql designation should come after the script body. The example given in the documentation is:
CREATE OR REPLACE PROCEDURE test()
AS $$
BEGIN
SELECT 1 a;
END;
$$
LANGUAGE plpgsql
;
/
So you should reorder the script as follows (I cannot test this locally at this time):
CREATE OR REPLACE PROCEDURE proc_test()
AS
$$
BEGIN
drop table if exists
schema1_ghsheet.ghseet_temp
;
create table
schema1_ghsheet.ghseet_temp(emp_id int, emp_name varchar(100),hrly_rate int,mod_timestamp timestamp)
;
copy schema1_ghsheet.ghseet_temp
from
's3://root2/rawfiles/'
iam_role 'arn:aws:iam::743:role/redshift'
csv DELIMITER ',' IGNOREHEADER 1 TRUNCATECOLUMNS;
drop table if exists
schema1_ghsheet.ghseet_main
;
create table
schema1_ghsheet.ghseet_main
as
select h1.emp_id,h1.emp_name,h1.hrly_rate,h1.mod_timestamp
from schema1_ghsheet.ghseet_hstry h1
inner join (
select emp_id ,emp_name , max(mod_timestamp ) mod_timestamp
from schema1_ghsheet.ghseet_hstry
group by 1,2
) h2
on h1.emp_id=h2.emp_id
and h1.mod_timestamp=h2.mod_timestamp
group by 1,2,3,4
;
END;
$$
LANGUAGE plpgsql;
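With $$ quoting in place the embedded single quotes no longer need doubling, and the procedure can be run with a plain CALL (a sketch, assuming the S3 path and IAM role from the question are valid in your account):

```sql
-- Invoke the procedure; the COPY runs inside it as ordinary DML
CALL proc_test();
```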

Related

SQL Function to turn JSON keys into Attribute columns

I have a JSON_TEXT column in my PostgreSQL DB such as this: {"a": "one", "b": "two", "c": "three"}
I would like to create a function that loops through all of the DISTINCT json_object_keys, creates a column for each of the keys, and populates all of the values into their new columns. Pseudocode example:
create or replace function myFunction (input json_text)
returns //not sure as $$//
BEGIN
// foreach(key in input)
// make and return a column populated with its values somehow idk
END; $$
I understand you can hard-code the names of each key and create attributes for them, but I have hundreds of keys, so this won't be feasible for me.
Your request looks like a pivot table with a list of columns that is not defined at run time. You can get your result by dynamically creating a composite type corresponding to the list of JSON keys, and then using the standard JSON function json_populate_record:
CREATE OR REPLACE PROCEDURE create_composite_type(input json) LANGUAGE plpgsql AS $$
DECLARE
column_list text ;
BEGIN
SELECT string_agg(DISTINCT quote_ident(i) || ' text', ',')
INTO column_list
FROM json_object_keys(input) AS i ;
DROP TYPE IF EXISTS composite_type ;
EXECUTE 'CREATE TYPE composite_type AS (' || column_list || ')' ;
END ;
$$ ;
CALL create_composite_type(input) ;
SELECT * FROM json_populate_record( null :: composite_type, input) ;
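For example, with a concrete document (a sketch; note the type is dropped and rebuilt on each call, so it reflects the keys of the last input passed in):

```sql
-- Build the composite type from a sample document, then read the values back
CALL create_composite_type('{"a":"one","b":"two","c":"three"}'::json);

SELECT * FROM json_populate_record(null::composite_type,
                                   '{"a":"one","b":"two","c":"three"}'::json);
--  a  |  b  |  c
-- one | two | three
```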
I found the best answer to this was the following. It is somewhat similar to Eduard's answer, just a slightly different approach.
DO
$$
DECLARE
l_keys text;
BEGIN
DROP TABLE IF EXISTS new_table_name CASCADE;
SELECT string_agg(DISTINCT format('deviceinformation::json ->> %L as %I', jkey, jkey), ', ')
INTO l_keys
FROM old_table_name, json_object_keys(deviceinformation::json) AS t(jkey);
EXECUTE 'CREATE TABLE new_table_name AS SELECT '||l_keys||' FROM old_table_name';
END;
$$;
This took a JSON text column and put each of its keys and the associated values into its own column in a new table, ta-da.

Is it possible to insert a row only to a partition?

I have an already exist table which I want to be divided.
Here is my script for partitions creating:
create or replace function create_partition_and_insert_to_partition_bundle() returns trigger as
$$
declare
partition text;
dt_constraint text;
begin
dt_constraint := format( 'y%sq%s', (select date_part('year', date(new.created))), (select ceil(date_part('month', date(new.created))::float / 3)));
partition := format( 'bundle_%s', dt_constraint);
if not exists(select relname from pg_class where relname = partition) then
execute 'create table '||partition||' (like bundle including all) inherits (bundle)';
end if;
execute 'insert into ' || partition || ' values ( ($1).* )' using new;
return null;
end
$$ language plpgsql;
create trigger create_insert_partition_bundle before insert on bundle for each row execute procedure create_partition_and_insert_to_partition_bundle();
set constraint_exclusion = partition;
When I add a new row, the trigger runs and creates a new partition for a new quart, but the row is also inserted to a parent table bundle. Is it possible to insert only to a partition, and how should I change my script?
Personally, I found inheritance very hard to understand, so I use declarative partitioning.
The following uses declarative partitioning. You don't even need a trigger.
CREATE TABLE bundle (
bundle_id int not null,
created date not null,
misc text
) PARTITION BY RANGE (created);
CREATE TABLE bundle_y2022q1 PARTITION OF bundle
FOR VALUES FROM ('2022-01-01') TO ('2022-04-01');
CREATE TABLE bundle_y2022q2 PARTITION OF bundle
FOR VALUES FROM ('2022-04-01') TO ('2022-07-01');
CREATE TABLE bundle_y2022q3 PARTITION OF bundle
FOR VALUES FROM ('2022-07-01') TO ('2022-10-01');
CREATE TABLE bundle_y2022q4 PARTITION OF bundle
FOR VALUES FROM ('2022-10-01') TO ('2023-01-01');
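With this layout, a plain INSERT on the parent routes each row to the matching partition automatically, and no row is duplicated in the parent (the dates here are illustrative):

```sql
-- Rows are routed by the value of "created"
INSERT INTO bundle VALUES (1, '2022-05-10', 'demo');  -- lands in bundle_y2022q2
INSERT INTO bundle VALUES (2, '2022-11-02', 'demo');  -- lands in bundle_y2022q4

-- Querying the parent sees every row; querying a partition sees only its slice
SELECT * FROM bundle_y2022q2;
```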

Creating stored procedure in redshift

When creating stored procedure in redshift like below:
CREATE OR REPLACE PROCEDURE sp_test()
AS '
BEGIN
TRUNCATE TABLE TABLE_1;
INSERT INTO TABLE_1
SELECT COL1, COL2
FROM TABLE_2
WHERE CONDITION='SAMPLE';
END;
'
LANGUAGE plpgsql;
This gives the error syntax error near 'SAMPLE' because single quotes are already used to delimit the stored procedure body. Also, we cannot replace the single quotes in the INSERT query with double quotes, because Redshift would interpret that as a column identifier.
A few other posts suggest using $$ for the stored procedure, but $$ is not supported in SQL Workbench.
Is there any workaround for this? Thanks.
Have you tried doubling the single quotes in the string?
WHERE CONDITION=''SAMPLE'';
Data Sample
CREATE TABLE t (id int, status text);
INSERT INTO t VALUES (42,'foo');
Procedure
CREATE OR REPLACE PROCEDURE sp_test()
AS'
BEGIN
TRUNCATE TABLE t;
INSERT INTO t
SELECT 8,''new record'';
END;'
LANGUAGE plpgsql;
Test procedure
CALL sp_test();
SELECT * FROM t
id | status
----+------------
8 | new record
(1 row)

PostgreSQL - Shared temp table between functions

I want to know if it is possible to share a temporary table between functions that are called in a "main function", like this:
-- some sub function
create or replace function up_sub_function (str text)
returns table (id int, descr text) as $$
begin
return query select * from temp_table where descr like concat('%', str , '%');
end; $$
language plpgsql;
-- main function
create or replace function up_main_function ()
returns table (id int, descr text) as $$
begin
create temporary table if not exists temp_table (
id int,
descr text
);
insert into temp_table select id, descr from test_table;
return query select * from up_sub_function('a');
end; $$
language plpgsql;
BEGIN;
select * from up_main_function();
drop table temp_table;
COMMIT;
Can you show me the correct way to achieve this? I want to be able to populate a temporary table and then filter rows by calling other functions inside the main function.
Thanks and happy programming! :)
See the documentation: https://www.postgresql.org/docs/current/static/sql-createtable.html
Temp tables are valid for the entire session, that is, for as long as you stay connected to the database.
In your case you only need it during the transaction, so you should create it with ON COMMIT DROP:
create temporary table if not exists temp_table (
id int,
descr text
) ON COMMIT DROP;
Once you created the table you can use it within any function in the current transaction.
You do not need the BEGIN to start the transaction. A transaction is automatically started when the outer function is called.
Nested function calls share the same transaction. So they all see the table.
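A minimal sketch of the idea with illustrative names: one function creates and fills the temp table, another reads it, and both see the same table inside one transaction:

```sql
-- Creates the session-scoped table and fills it; dropped automatically at commit
CREATE OR REPLACE FUNCTION make_scratch() RETURNS void AS $$
BEGIN
    CREATE TEMPORARY TABLE IF NOT EXISTS scratch (id int) ON COMMIT DROP;
    INSERT INTO scratch VALUES (1), (2);
END; $$ LANGUAGE plpgsql;

-- Reads the same table; no parameter passing needed
CREATE OR REPLACE FUNCTION read_scratch() RETURNS bigint AS $$
BEGIN
    RETURN (SELECT count(*) FROM scratch);
END; $$ LANGUAGE plpgsql;

BEGIN;
SELECT make_scratch();
SELECT read_scratch();  -- returns 2
COMMIT;                 -- scratch is dropped here
```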

Teradata: replace an existing table with "CREATE TABLE"

What setting should I change to make Teradata replace existing tables with a CREATE TABLE query?
Currently, if the table exists, an attempt to CREATE it results in an error, so I have to DROP the table before CREATE-ing it.
Thanks.
REPLACE PROCEDURE DROP_IF_EXISTS(IN table_name VARCHAR(60),IN db_name VARCHAR(60))
BEGIN
IF EXISTS(SELECT 1 FROM dbc.tables WHERE databasename=db_name AND tablename=table_name)
THEN
CALL DBC.SysExecSQL('DROP TABLE ' || db_name ||'.'|| table_name);
END IF;
END;
And in your DDL script:
call drop_if_exists('$your_table_name','$your_db_name')
;
database $your_db_name;
create table $your_table_name ...
;