How to dynamically SELECT from manually partitioned table - sql

Suppose I have a table of tenants like so:
CREATE TABLE tenants (
name varchar(50)
)
And for each tenant, I have a corresponding table called {tenants.name}_entities, so for example for tenant_a I would have the following table.
CREATE TABLE tenant_a_entities (
id uuid,
last_updated timestamp
)
Is there a way I can create a query with the following structure? (using create table syntax to show what I'm looking for)
CREATE TABLE all_tenant_entities (
tenant_name varchar(50),
id uuid,
last_updated timestamp
)
--
I do understand this is a strange DB layout; I'm playing around with foreign data in Postgres to federate foreign databases.

Did you consider declarative partitioning for your relational design? List partitioning would fit your case, with PARTITION BY LIST ...
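For reference, a minimal sketch of what that could look like, using the names from the question (declarative partitioning requires Postgres 10 or later):
CREATE TABLE all_tenant_entities (
  tenant_name  varchar(50) NOT NULL,
  id           uuid,
  last_updated timestamp
) PARTITION BY LIST (tenant_name);
-- one partition per tenant instead of a free-standing table
CREATE TABLE tenant_a_entities PARTITION OF all_tenant_entities FOR VALUES IN ('a');
CREATE TABLE tenant_b_entities PARTITION OF all_tenant_entities FOR VALUES IN ('b');
-- querying the parent covers all partitions automatically
SELECT * FROM all_tenant_entities;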
To answer the question at hand:
You don't need the table tenants for the query at all, just the detail tables. And one way or another you'll end up with UNION ALL to stitch them together.
SELECT 'a' AS tenant_name, id, last_updated FROM tenant_a_entities
UNION ALL SELECT 'b', id, last_updated FROM tenant_b_entities
...
You can add the name dynamically, like:
SELECT tableoid::regclass::text, id, last_updated FROM tenant_a_entities
UNION ALL SELECT tableoid::regclass::text, id, last_updated FROM tenant_b_entities
...
See:
Get the name of a row's source table when querying the parent it inherits from
But it's cheaper to add a constant name while building the query dynamically in your case (the first code example) - like this, for example:
SELECT string_agg(format('SELECT %L AS tenant_name, id, last_updated FROM %I'
, split_part(tablename, '_', 2)
, tablename)
, E'\nUNION ALL '
ORDER BY tablename) -- optional order
FROM pg_catalog.pg_tables
WHERE schemaname = 'public' -- actual schema name
AND tablename LIKE 'tenant\_%\_entities';
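For illustration, with tables tenant_a_entities and tenant_b_entities present, the statement this builds would look roughly like:
SELECT 'a' AS tenant_name, id, last_updated FROM tenant_a_entities
UNION ALL SELECT 'b' AS tenant_name, id, last_updated FROM tenant_b_entities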
This assumes tenant names cannot contain _; otherwise you have to do more, since split_part(tablename, '_', 2) only returns the part between the first and second underscore and would cut such a name short.
Related:
Table name as a PostgreSQL function parameter
How to check if a table exists in a given schema
You can wrap it in a custom function to make it completely dynamic:
CREATE OR REPLACE FUNCTION public.f_all_tenant_entities()
RETURNS TABLE(tenant_name text, id uuid, last_updated timestamp)
LANGUAGE plpgsql AS
$func$
BEGIN
RETURN QUERY EXECUTE
(
SELECT string_agg(format('SELECT %L AS tn, id, last_updated FROM %I'
, split_part(tablename, '_', 2)
, tablename)
, E'\nUNION ALL '
ORDER BY tablename) -- optional order
FROM pg_tables
WHERE schemaname = 'public' -- your schema name here
AND tablename LIKE 'tenant\_%\_entities'
);
END
$func$;
Call:
SELECT * FROM public.f_all_tenant_entities();
You can use this set-returning function (a.k.a "table-function") just like a table in most contexts in SQL.
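For example (the 7-day filter is purely hypothetical, to illustrate):
SELECT tenant_name, count(*) AS entity_count
FROM   public.f_all_tenant_entities()
WHERE  last_updated > now() - interval '7 days'
GROUP  BY tenant_name;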
Related:
How to UNION a list of tables retrieved from another table with a single query?
Simulate CREATE DATABASE IF NOT EXISTS for PostgreSQL?
Function to loop through and select data from multiple tables
Note that RETURN QUERY does not allow parallel queries before Postgres 14. The release notes:
Allow plpgsql's RETURN QUERY to execute its query using parallelism (Tom Lane)

Related

Insert into Postgres table only if record exists

I am writing some sql in Postgres to update an audit table. My sql will update the table being audited based on some criteria and then select that updated record to update information in an audit table. This is what I have so far:
DO $$
DECLARE
jsonValue json;
revId int;
template RECORD;
BEGIN
jsonValue = '...Some JSON...'
UPDATE projectTemplate set json = jsonValue where type='InstallationProject' AND account_id IS NULL;
template := (SELECT pt FROM ProjectTemplate pt WHERE pt.type='InstallationProject' AND pt.account_id IS NULL);
IF EXISTS (template) THEN
(
revId := nextval('hibernate_sequence');
insert into revisionentity (id, timestamp) values(revId, extract(epoch from CURRENT_TIMESTAMP));
insert into projectTemplate_aud (rev, revtype, id, name, type, category, validfrom, json, account_id)
VALUES (revId, 1, template.id, template.name, template.type, template.category, template.validfrom, jsonValue, template.account_id);
)
END $$;
My understanding is that template will be undefined if there is nothing in the table that matches that query (and there isn't currently). I want to make it so my query will not attempt to update the audit table if template doesn't exist.
What can I do to update this sql to match what I am trying to do?
You cannot use EXISTS like that; it expects a subquery expression. There are some other issues with your code, too.
This single SQL DML statement with data-modifying CTEs should replace your DO command properly. And faster, too:
WITH upd AS (
UPDATE ProjectTemplate
SET json = '...Some JSON...'
WHERE type = 'InstallationProject'
AND account_id IS NULL
RETURNING *
)
, ins AS (
INSERT INTO revisionentity (id, timestamp)
SELECT nextval('hibernate_sequence'), extract(epoch FROM CURRENT_TIMESTAMP)
WHERE EXISTS (SELECT FROM upd) -- minimal valid EXISTS expression!
RETURNING id
)
INSERT INTO ProjectTemplate_aud
(rev , revtype, id, name, type, category, validfrom, json, account_id)
SELECT i.id, 1 , u.id, u.name, u.type, u.category, u.validfrom, u.json, u.account_id
FROM upd u, ins i;
Inserts a single row into revisionentity if the UPDATE found any rows.
Inserts as many rows into projectTemplate_aud as rows have been updated.
About data-modifying CTEs:
Insert data in 3 tables at a time using Postgres
Aside: I see a mix of CaMeL-case, some underscores, or just lowercased names. Consider legal, lower-case names exclusively (and avoid basic type names as column names). Most importantly, though, be consistent. Related:
Are PostgreSQL column names case-sensitive?
Misnamed field in subquery leads to join

Abbreviate a list in PostgreSQL

How can I abbreviate a list so that
WHERE id IN ('8893171511',
'8891227609',
'8884577292',
'886790275X',
.
.
.)
becomes
WHERE id IN (name of a group/list)
The list really would have to appear somewhere. From the point of view of your code being maintainable and reusable, you could represent the list in a CTE:
WITH id_list AS (
SELECT '8893171511' AS id UNION ALL
SELECT '8891227609' UNION ALL
SELECT '8884577292' UNION ALL
SELECT '886790275X'
)
SELECT *
FROM yourTable
WHERE id IN (SELECT id FROM id_list);
If you have a persistent need to do this, then maybe the CTE should become a bona fide table somewhere in your database.
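A sketch of that, with a hypothetical table name id_list:
CREATE TABLE id_list (
  id text PRIMARY KEY
);
INSERT INTO id_list (id) VALUES
  ('8893171511'),
  ('8891227609'),
  ('8884577292'),
  ('886790275X');
SELECT *
FROM   yourTable
WHERE  id IN (SELECT id FROM id_list);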
Edit: Using the Horse's suggestion, we can tidy up the CTE to the following:
WITH id_list (id) AS (
VALUES
('8893171511'),
('8891227609'),
('8884577292'),
('886790275X')
)
If the list is large, I would create a temporary table and store the list there.
That way you can ANALYZE the temporary table and get accurate estimates.
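A sketch of that approach (the name tmp_ids is arbitrary):
CREATE TEMP TABLE tmp_ids (id text PRIMARY KEY);
INSERT INTO tmp_ids (id) VALUES
  ('8893171511'),
  ('8891227609'),
  ('8884577292'),
  ('886790275X');
ANALYZE tmp_ids;  -- autovacuum cannot analyze temp tables, so do it manually
SELECT t.*
FROM   yourTable t
JOIN   tmp_ids USING (id);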
The temp table and CTE answers already suggested will do.
I just wanted to bring up another approach that works if you use pgAdmin for querying (not sure about Workbench) and represent your data in a "stringy" way.
set setting.my_ids = '8893171511,8891227609';
select current_setting('setting.my_ids');
drop table if exists t;
create table t ( x text);
insert into t select 'some value';
insert into t select '8891227609';
select *
from t
where x = any( string_to_array(current_setting('setting.my_ids'), ',')::text[]);

How do I get rid of inherited SELECTs in Postgres queries?

Postgres 9.4
I guess queries like this are not the best approach in terms of database performance:
SELECT t.name,
t.description,
t.rating,
t.readme,
t.id AS userid,
t.notifications
FROM ( SELECT "user".name,
"user".description,
"user".rating,
"user".readme,
"user".id,
( SELECT array_to_json(array_agg(row_to_json(notifications.*))) AS array_to_json
FROM ( SELECT notification.id,
notification.action_type,
notification.user_id,
notification.user_name,
notification.resource_id,
notification.resource_name,
notification.resource_type,
notification.rating,
notification.owner
FROM notification
WHERE (notification.owner = "user".id)
ORDER BY notification.created DESC) notifications) AS notifications
FROM "user") t
The notifications column contains a JSON array with all the matching rows from the notification table.
How should I rebuild this query to receive the data in the same shape? I suppose I should use JOINs somehow.
I have queries that use more than one such inherited SELECT.
Thank you for your time!
The outermost query only aliases id to userid. You can move the alias to the inner query, and omit the outer query entirely.
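A sketch of what that flattened query could look like:
SELECT u.name,
       u.description,
       u.rating,
       u.readme,
       u.id AS userid,
       (SELECT array_to_json(array_agg(row_to_json(n)))
        FROM  (SELECT id, action_type, user_id, user_name, resource_id,
                      resource_name, resource_type, rating, owner
               FROM   notification
               WHERE  owner = u.id
               ORDER  BY created DESC) n
       ) AS notifications
FROM   "user" u;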
Then you can create a function to create the notification JSON:
create or replace function get_user_notifications(user_id bigint)
returns json language sql as
$$
select array_to_json(array_agg(row_to_json(n)))
from (
select id
, action_type
, ... other columns from notification ...
from notification
-- Use function name to refer to parameter not column
where owner = get_user_notifications.user_id
order by
created desc
) n
$$;
Now you can write the query as:
select id as userid
, ... other columns from "user" ...
, get_user_notifications(id) as notifications
from "user" u;
Which looks a lot better, at the cost of having to maintain Postgres functions.
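Since the question mentions JOINs: as an alternative without a custom function, a LATERAL subquery (Postgres 9.3 or later, so available on 9.4) can produce the same result. A sketch:
SELECT u.name, u.description, u.rating, u.readme, u.id AS userid, n.notifications
FROM   "user" u
LEFT   JOIN LATERAL (
   SELECT array_to_json(array_agg(row_to_json(x))) AS notifications
   FROM  (SELECT id, action_type, user_id, user_name, resource_id,
                 resource_name, resource_type, rating, owner
          FROM   notification
          WHERE  owner = u.id
          ORDER  BY created DESC) x
   ) n ON true;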

Merging two tables into one with the same column names

I use this command to merge 2 tables into one:
CREATE TABLE table1 AS
SELECT name, sum(cnt)
FROM (SELECT * FROM table2 UNION ALL SELECT * FROM table3) X
GROUP BY name
ORDER BY 1;
table2 and table3 are tables with columns named name and cnt, but the result table (table1) has the columns name and sum.
The question is how to change the command so that the result table will have the columns name and cnt?
Have you tried this (note the AS cnt)?
CREATE TABLE table1 AS SELECT name,sum(cnt) AS cnt
FROM ...
In the absence of an explicit name, the output of a function inherits the basic function name in Postgres. You can use a column alias in the SELECT list to fix this - like @hennes already supplied.
If you need to inherit all original columns with name and type (and possibly more) you can also create the table with a separate command:
To copy columns with names and data types only, still use CREATE TABLE AS, but add LIMIT 0:
CREATE TABLE table1 AS
TABLE table2 LIMIT 0; -- "TABLE" is just shorthand for "SELECT * FROM"
To copy (per documentation):
all column names, their data types, and their not-null constraints:
CREATE TABLE table1 (LIKE table2);
... and optionally also defaults, constraints, indexes, comments and storage settings:
CREATE TABLE table1 (LIKE table2 INCLUDING ALL);
... or, for instance, just defaults and constraints:
CREATE TABLE table1 (LIKE table2 INCLUDING DEFAULTS INCLUDING CONSTRAINTS);
Then INSERT:
INSERT INTO table1 (name, cnt)
SELECT ...  -- the SELECT's output column names are ignored; only position matters
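Put together for the tables in the question, that could look like:
CREATE TABLE table1 (LIKE table2);
INSERT INTO table1 (name, cnt)
SELECT name, sum(cnt)
FROM  (SELECT * FROM table2
       UNION ALL
       SELECT * FROM table3) x
GROUP  BY name;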

Oracle SQL - select from view more rows than running select in the view

When I run this SQL, I get 116,463 rows.
select * from appsdisc.appsdisc_phones_gen_v
When I run the select that is in the view definition script, I get 11,702 rows.
I can't figure out why the result set is different.
The view script is as follows.
CREATE OR REPLACE FORCE VIEW APPSDISC.APPSDISC_PHONES_GEN_V
(PARTY_ID, CUSTOMER_ID, CUSTOMER_NUMBER, PHONE_NUMBER, PHONE_TYPE)
AS
SELECT party_id,
customer_id,
customer_number,
phone_number,
phone_type
FROM appsdisc_phones_v pv1
WHERE pv1.phone_type LIKE
DECODE (TRIM (SUBSTR (pv1.attribute14, 1, 4)),
'FR', 'FR T%',
'PHONE')
AND pv1.contact_point_id =
(SELECT MIN (pv2.contact_point_id)
FROM appsdisc_phones_v pv2
WHERE pv2.customer_id = pv1.customer_id
AND pv2.phone_type LIKE
DECODE (
TRIM (SUBSTR (pv1.attribute14, 1, 4)),
'FR', 'FR T%',
'PHONE'));
If you're running the view query exactly as it is, and you are not logged in as APPSDISC, you might be querying your own table (or view), since appsdisc_phones_v isn't prefixed by the schema in the view script. Hopefully this is a development environment and you have an old copy for a valid reason.
Here's a demo of the effect I think you're seeing. As one user (SOUSER1) I can create and populate a table with a view on top of it, and grant access to that view to a different user. Notice I don't need to grant access to the underlying table directly.
create table my_table (id number);
insert into my_table
select level as id from dual connect by level <= 1000;
commit;
create view souser1.my_view as select * from my_table;
grant select on souser1.my_view to souser2;
select count(*) from my_view;
COUNT(*)
----------
1000
select count(*) from my_table;
COUNT(*)
----------
1000
I didn't specify the schema in the select inside the view statement, so it's going to be the same as the view owner, which is SOUSER1 in this case.
Then as a second user (SOUSER2) I can create my own version of the table, with fewer rows. Querying the view still shows the row count from the SOUSER1 table, not my own.
create table my_table (id number);
insert into my_table
select level as id from dual connect by level <= 100;
commit;
select count(*) from souser1.my_view;
COUNT(*)
----------
1000
If I run the query from the original view I'm seeing my own copy of the table, which is smaller, because the table name isn't qualified with the schema name - hence it defaults to my own:
select count(*) from my_table;
COUNT(*)
----------
100
So seeing a different number of rows makes sense as long as there are two versions of the table and you haven't specified which you want to query.
And in my case, if I try to query the other schema's table directly I get an error, since I didn't grant any privileges on that:
select count(*) from souser1.my_table;
SQL Error: ORA-00942: table or view does not exist
00942. 00000 - "table or view does not exist"
But you'd see the same error querying my_table unqualified if you didn't have your own copy of the table, hadn't changed your current schema after logging in, and didn't have a synonym pointing to a table in some other schema.
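If you want to check which objects your login can actually see under that name, a diagnostic sketch (ALL_OBJECTS lists objects you have some privilege on; the object name is taken from the question):
SELECT owner, object_name, object_type
FROM   all_objects
WHERE  object_name = 'APPSDISC_PHONES_V';
-- and what an unqualified name currently resolves against:
SELECT sys_context('USERENV', 'CURRENT_SCHEMA') AS current_schema
FROM   dual;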
Gordon is right: the select should return the same results as the view. Issues such as running the query in a different schema or database, or over a database link, could explain what you are observing. You can see this by running the two SQL commands below in the same database and schema and comparing the values returned by both.
First, confirm the number of rows the view returns with the SQL:
SELECT COUNT(*) FROM APPSDISC.APPSDISC_PHONES_GEN_V;
Then confirm the number of rows the query for the view returns with the SQL:
WITH RESULTS AS (
SELECT party_id,
customer_id,
customer_number,
phone_number,
phone_type
FROM appsdisc_phones_v pv1
WHERE pv1.phone_type LIKE
DECODE (TRIM (SUBSTR (pv1.attribute14, 1, 4)),
'FR', 'FR T%',
'PHONE')
AND pv1.contact_point_id =
(SELECT MIN (pv2.contact_point_id)
FROM appsdisc_phones_v pv2
WHERE pv2.customer_id = pv1.customer_id
AND pv2.phone_type LIKE
DECODE (
TRIM (SUBSTR (pv1.attribute14, 1, 4)),
'FR', 'FR T%',
'PHONE')))
SELECT COUNT(*)
FROM RESULTS
/
Both queries should return the same value. If not, then there is more to this issue than a query returning a different number of rows.