Translating query from Firebird to PostgreSQL

I have a Firebird query that I need to rewrite for PostgreSQL:
SELECT TRIM(RL.RDB$RELATION_NAME), TRIM(FR.RDB$FIELD_NAME), FS.RDB$FIELD_TYPE
FROM RDB$RELATIONS RL
LEFT OUTER JOIN RDB$RELATION_FIELDS FR ON FR.RDB$RELATION_NAME = RL.RDB$RELATION_NAME
LEFT OUTER JOIN RDB$FIELDS FS ON FS.RDB$FIELD_NAME = FR.RDB$FIELD_SOURCE
WHERE (RL.RDB$VIEW_BLR IS NULL)
ORDER BY RL.RDB$RELATION_NAME, FR.RDB$FIELD_NAME
I understand SQL, but I have no idea how to work with system tables like RDB$RELATIONS. It would be really great if someone could help me with this, but even links explaining these tables would be OK.
This query is embedded in C++ code, and when I try to run it:
pqxx::connection conn(serverAddress.str());
pqxx::work trans(conn);
pqxx::result res(trans.exec(/*there is this SQL query*/)); // this call throws the error
it fails with an error saying that:
RDB$RELATIONS doesn't exist.

Postgres stores metadata about its own content in a different way, in what are called system catalogs.
In Firebird, your query basically returns a row for every column of every table in every schema, plus an integer column that maps to the field's datatype.
In Postgres, something similar can be achieved with the system tables in the pg_catalog schema:
SELECT TRIM(c.relname) AS table_name,
       TRIM(a.attname) AS column_name,
       a.atttypid AS field_type
FROM pg_class c
LEFT JOIN pg_attribute a
       ON c.oid = a.attrelid
      AND a.attnum > 0 -- only ordinary columns, without system ones
WHERE c.relkind = 'r' -- only tables
ORDER BY 1, 2
The query above returns the system catalogs as well. If you'd like to exclude them, add another JOIN to pg_namespace and a WHERE clause with pg_namespace.nspname <> 'pg_catalog', because that is the schema where the system catalogs are stored.
If you'd also like to see datatype names instead of their numeric identifiers, add a JOIN to pg_type, as in the sketch below.
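A minimal sketch combining both suggestions (the NOT a.attisdropped condition is an addition here, to skip dropped columns):
SELECT TRIM(c.relname) AS table_name,
       TRIM(a.attname) AS column_name,
       t.typname AS field_type
FROM pg_class c
JOIN pg_namespace n
       ON n.oid = c.relnamespace
LEFT JOIN pg_attribute a
       ON c.oid = a.attrelid
      AND a.attnum > 0 -- only ordinary columns
      AND NOT a.attisdropped -- skip dropped columns
LEFT JOIN pg_type t
       ON t.oid = a.atttypid
WHERE c.relkind = 'r' -- only tables
  AND n.nspname <> 'pg_catalog' -- exclude system catalogs
ORDER BY 1, 2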
The information schema consists of a collection of views. In most cases you don't need the entire SQL query that stands behind a view, so querying the system tables directly will give you better performance. You can inspect the view definitions, though, just to get you started on which tables and conditions form the output.

I think you are looking for the information_schema.
The tables are listed here: https://www.postgresql.org/docs/current/static/information-schema.html
So for example you can use:
select * from information_schema.tables;
select * from information_schema.columns;
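For the original Firebird query, a close information_schema equivalent might look like the sketch below; table_type = 'BASE TABLE' plays the role of RDB$VIEW_BLR IS NULL (tables only), and data_type gives the type name directly instead of a numeric code:
SELECT c.table_name, c.column_name, c.data_type
FROM information_schema.columns c
JOIN information_schema.tables t
       ON t.table_schema = c.table_schema
      AND t.table_name = c.table_name
WHERE t.table_type = 'BASE TABLE' -- tables only, like RDB$VIEW_BLR IS NULL
  AND c.table_schema NOT IN ('pg_catalog', 'information_schema')
ORDER BY c.table_name, c.column_name;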

Related

Greenplum PSQL Format for Dynamic Query

Firstly, thank you in advance for any help with my relatively simple issue below. It's honestly driving me insane!
Simply put, I'm trying to select some metrics on all tables in a schema. However, this specifically includes partitioned tables in Greenplum (which, for those who don't know, have a single parent table named X and then child tables named X_1_prt_3, X_1_prt_4, etc.).
As a result, my query to get the total table size of the single partitioned table X is as follows:
-- Part 1
select cast(sum(sotaidtablesize) as bigint) / 1024 / 1024 as "Table Size (MB)"
from gp_toolkit.gp_size_of_table_and_indexes_disk
where sotaidschemaname = 'Y'
and sotaidtablename like 'X%'
;
This sums up the table size for any table named X or similar, which is effectively what I want. But this is just part of a bigger query; I don't want to hard-code the schema and table. I want it to be:
-- Part 2
where sotaidschemaname = t4.nspname
and sotaidtablename like 't4.relname%'
but that sadly doesn't just work (what a world that would be!). I've tried the following, which I think is close, but I cannot get it to return any value other than NULL:
-- Part 3
and sotaidtablename like quote_literal(format( '%I', tablename )::regclass)
where tablename is a column from another part of the query (I already use this column in another format() call that works correctly, so I know this bit in particular isn't the issue).
Thank you in advance to anyone for any help!
Regards,
Vinny
I find it easier to join on gp_size_of_table_and_indexes_disk.sotaidoid rather than on (sotaidschemaname, sotaidtablename).
For example:
SELECT pg_namespace.nspname AS schema,
       pg_class.relname AS relation,
       pg_size_pretty(sotd.sotdsize::BIGINT) AS tablesize,
       pg_size_pretty(sotd.sotdtoastsize::BIGINT) AS toastsize,
       pg_size_pretty(sotd.sotdadditionalsize::BIGINT) AS othersize,
       pg_size_pretty(sotaid.sotaidtablesize::BIGINT) AS tabledisksize,
       pg_size_pretty(sotaid.sotaididxsize::BIGINT) AS indexsize
FROM pg_class
LEFT JOIN pg_stat_user_tables
       ON pg_stat_user_tables.relid = pg_class.oid
LEFT JOIN gp_toolkit.gp_size_of_table_disk sotd
       ON sotd.sotdoid = pg_class.oid
LEFT JOIN gp_toolkit.gp_size_of_table_and_indexes_disk sotaid
       ON sotaid.sotaidoid = pg_class.oid
LEFT JOIN pg_namespace
       ON pg_namespace.oid = pg_class.relnamespace
WHERE pg_class.relkind = 'r'
  AND relstorage != 'x' -- exclude external tables
  AND pg_namespace.nspname NOT IN ('information_schema', 'madlib', 'pg_catalog', 'gptext')
  AND pg_class.relname NOT IN ('spatial_ref_sys');
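Tying this back to the question: quoting the pattern as 't4.relname%' turns it into a literal string, so it can never match. If you want to keep name matching instead of the OID join, a hedged sketch is to build the pattern with concatenation (the t4 alias and the _1_prt_ child naming come from the question):
-- sketch only: t4 is assumed to expose nspname and relname, as in the question
and sotaidschemaname = t4.nspname
and (sotaidtablename = t4.relname
  or sotaidtablename like t4.relname || '_1_prt_%')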

Redshift: Simple query is leading to nested loop join

I am using a query to fetch the number of rows deleted for a given queryid:
select stl_delete.query,
listagg(distinct svv_table_info."table", ',')
from stl_delete
join svv_table_info on svv_table_info.table_id=stl_delete.tbl
where stl_delete.query=1090750
group by stl_delete.query
The result seems correct.
When I run:
select event,solution from stl_alert_event_log where query = pg_last_query_id();
event solution
================================== ======================================================
Nested Loop Join in the query plan Review the join predicates to avoid Cartesian products
Firstly, why is there a nested loop at all?
And how do I fix the nested loop join here? From what I've found online, the solution is supposed to be a join predicate, but my query already has one.
Even if I remove the listagg and the group by, I still see the issue:
select stl_delete.query,
svv_table_info."table"
from stl_delete
join svv_table_info on svv_table_info.table_id=stl_delete.tbl
where stl_delete.query=1090750
The system view svv_table_info is complex and gathers a lot of information about tables, most of which you are not using. The loop join is inside this view and is needed to produce the in-depth table report.
Your query just needs the name of the table for a given table id. There is a system table that holds this information, will run quicker, and does not produce a loop join: pg_class has the table id in a column called oid and the table name in relname. (FYI: if you select * from pg_class, oid won't show up; you need to select it by name.)
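A minimal sketch of the original query rewritten against pg_class (keeping the query id from the question):
select stl_delete.query,
       listagg(distinct pg_class.relname, ',')
from stl_delete
join pg_class on pg_class.oid = stl_delete.tbl
where stl_delete.query = 1090750
group by stl_delete.query;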
Or you can just live with the alert. This loop join isn't very big in Redshift terms.

Programmatically get all tables of a database owned by a user

I created the following query:
select
is_tables.table_name
from information_schema.tables is_tables
join pg_tables
on is_tables.table_name=pg_tables.tablename
where
is_tables.table_catalog='<mydatabase>'
and is_tables.table_schema<>'information_schema'
and is_tables.table_schema<>'pg_catalog'
and pg_tables.tableowner='<myuser>';
I assume there is no vendor-independent way of querying this. Is this the easiest/shortest SQL query to achieve what I want in PostgreSQL?
I think you're pretty close. Object owners don't seem to appear in the information_schema views, although I might have overlooked them.
select is_tables.table_schema,
is_tables.table_name
from information_schema.tables is_tables
inner join pg_tables
on is_tables.table_name = pg_tables.tablename
and is_tables.table_schema = pg_tables.schemaname
where is_tables.table_catalog = '<mydatabase>'
and is_tables.table_schema <> 'information_schema'
and is_tables.table_schema <> 'pg_catalog'
and pg_tables.tableowner = '<myuser>';
You need to join on both the table name and the schema name. Table names are unique within a schema; they're not unique within a database.
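Incidentally, pg_tables already exposes schemaname, tablename, and tableowner, and like information_schema it is scoped to the current database, so a shorter sketch can skip the join entirely:
select schemaname, tablename
from pg_tables
where schemaname not in ('information_schema', 'pg_catalog')
  and tableowner = '<myuser>';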

Postgres query to find all dependent tables

I want to find all objects (tables, views, ... etc) that have a dependency on a specific table.
What query could I write in Postgres to accomplish this?
You'd need to query the catalog for that. Probably pg_depend:
http://www.postgresql.org/docs/current/static/catalog-pg-depend.html
In case you ever need it, don't miss the convenience type converter regclass, which lets you turn table OIDs and text into relation names like so:
select 'pg_statistics'::regclass; -- 'pg_statistics'
select 2619::regclass; -- 'pg_statistics' too, on my install
# select refclassid::regclass from pg_depend where classid = 'pg_class'::regclass group by refclassid;
refclassid
--------------
pg_namespace
pg_type
pg_class
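For the original question restricted to views, a minimal sketch (my_table is a placeholder; views depend on tables through their rewrite rules, hence the hop through pg_rewrite):
select distinct dependent.relname
from pg_depend d
join pg_rewrite r
       on d.classid = 'pg_rewrite'::regclass
      and d.objid = r.oid
join pg_class dependent
       on dependent.oid = r.ev_class
where d.refobjid = 'my_table'::regclass;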

Getting tables with no rows without counting

I have a huge PostgreSQL database with lots of tables. I want to find all the empty tables without counting rows in each one, for performance reasons (some of the tables have several million rows).
This query will give you an approximate result, without counting any table rows:
SELECT relname
FROM pg_class
JOIN pg_namespace ON pg_class.relnamespace = pg_namespace.oid
WHERE relpages = 0
  AND pg_namespace.nspname = 'public';
This will work best after a VACUUM ANALYZE.
As per http://wiki.postgresql.org/wiki/Slow_Counting, one solution is to first find the tables with a small reltuples estimate via
select relname from pg_class where reltuples < X
and then test only those tables for emptiness, as sketched below.
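A hedged sketch of the emptiness test for one candidate (some_candidate_table is a placeholder):
select not exists (select 1 from some_candidate_table) as is_empty;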
If what you want is to see a table's structure, try pgAdmin:
you can open a table and see its whole structure, e.g. data types, indexes, functions, etc.