POSTGRES - Risk of dropping foreign data wrapper/schema

We had one of the devs create a foreign data wrapper with these commands:
CREATE SERVER serverName FOREIGN DATA WRAPPER postgres_fdw OPTIONS (xxxx);
CREATE USER MAPPING FOR user SERVER serverName OPTIONS (user 'xxxx', password 'xxxx');
CREATE SCHEMA foreign_db;
IMPORT FOREIGN SCHEMA public FROM SERVER serverName INTO foreign_db;
To drop this schema the suggestion was to run:
DROP SCHEMA IF EXISTS foreign_db CASCADE;
DROP USER MAPPING IF EXISTS FOR user SERVER serverName;
DROP SERVER IF EXISTS serverName;
In the spec I see this for CASCADE:
Automatically drop objects (tables, functions, etc.) that are
contained in the schema, and in turn all objects that depend on those
objects
what concerns me is this line:
and in turn all objects that depend on those objects
My question: is there a possibility of dropping anything outside of the foreign_db schema, and if so, how can I check it?
Thank you.

It is possible that the command drops something outside the schema. Consider this:
create schema example;
create table example.my_table (id int);
create view public.my_view as select * from example.my_table;
If the schema is dropped with the cascade option, public.my_view will also be dropped, even though it lives outside the schema. However, the behavior is logical and desirable: the view cannot outlive the table it selects from.
You can check this by executing these commands one by one:
begin;
drop schema example cascade;
rollback;
The schema will not be dropped, and after the DROP you should get something like this:
NOTICE:  drop cascades to 2 other objects
DETAIL:  drop cascades to table example.my_table
drop cascades to view my_view
Alternatively, you can use the system catalog pg_depend; see this answer: How to list tables affected by cascading delete.
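For a quick look specifically at cross-schema view dependencies, a pg_depend query along these lines can help. This is a minimal sketch that only covers views depending on tables (view dependencies are recorded against the view's rewrite rule, hence the join through pg_rewrite); the schema name 'example' reuses the toy case above:

-- Views (in any schema) that depend on tables in schema 'example'
SELECT DISTINCT
       vn.nspname || '.' || v.relname AS dependent_view,
       tn.nspname || '.' || t.relname AS referenced_table
FROM pg_depend d
JOIN pg_rewrite r    ON r.oid = d.objid
JOIN pg_class v      ON v.oid = r.ev_class
JOIN pg_class t      ON t.oid = d.refobjid
JOIN pg_namespace tn ON tn.oid = t.relnamespace
JOIN pg_namespace vn ON vn.oid = v.relnamespace
WHERE d.classid = 'pg_rewrite'::regclass
  AND d.refclassid = 'pg_class'::regclass
  AND tn.nspname = 'example'
  AND v.oid <> t.oid;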

Related

Role permissions don't get assigned to table moved from public schema

I want to create new schemas and move the tables in the public schema into them. But whenever I move a table from the public schema to another schema, the user/role that has usage access on the new schema and its tables (including future tables) isn't able to access the newly moved table.
The table (in public schema):
CREATE TABLE atable(ID INT);
INSERT INTO atable VALUES(1);
INSERT INTO atable VALUES(2);
New user:
create user x_user with login password 'x_user';
New schema:
create schema dw;
Then I grant the user access to the new schema and its tables:
GRANT USAGE ON SCHEMA dw TO x_user;
GRANT USAGE ON ALL SEQUENCES IN SCHEMA dw to x_user;
GRANT SELECT ON ALL TABLES IN SCHEMA dw TO x_user;
For tables added to the schema in the future:
ALTER DEFAULT PRIVILEGES IN SCHEMA dw GRANT SELECT ON TABLES TO x_user;
ALTER DEFAULT PRIVILEGES IN SCHEMA dw GRANT USAGE ON SEQUENCES TO x_user;
Now I change the schema of atable to dw:
ALTER TABLE atable SET SCHEMA dw;
Also, I create another table in the dw schema:
CREATE TABLE dw.btable(id int);
INSERT INTO dw.btable VALUES(3);
INSERT INTO dw.btable VALUES(4);
Now when I connect to the database, using the new user credentials, and run:
SELECT * FROM dw.atable;
I get:
ERROR: permission denied for relation atable
Whereas if I run the same query for btable, which was created in the dw schema, it works.
SELECT * FROM dw.btable;
id
---
3
4
Moving a table from one non-public schema to another also works, but moving a table from the public schema to another schema does not.
What am I doing wrong here?
GRANT ... ON ALL TABLES IN SCHEMA affects only the current contents of the schema.
ALTER DEFAULT PRIVILEGES IN SCHEMA affects tables created in the schema.
Neither of these has any effect when moving tables from one schema to another, and I'm not aware of anything which does.
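In other words, after the move an explicit grant is still needed (using the names from the question):

ALTER TABLE atable SET SCHEMA dw;
GRANT SELECT ON dw.atable TO x_user;  -- must be repeated for every moved table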
It should be possible to automate this by creating an event trigger which fires on any ALTER TABLE command and applies the appropriate GRANT. Unfortunately, while you can write these trigger functions in PL/pgSQL, I don't think it (currently) provides any way to find out what the actual command was; you'd need to either:
Write a C function to inspect the pg_ddl_command structure returned by pg_event_trigger_ddl_commands(), or
Blindly run a GRANT after every ALTER TABLE, regardless of whether or not it was a SET SCHEMA command (see the sketch after this list).
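A rough sketch of the second option; the function and trigger names are made up, and the hard-coded grantee x_user is an assumption (on PostgreSQL 11+ you can write EXECUTE FUNCTION instead of EXECUTE PROCEDURE):

CREATE OR REPLACE FUNCTION regrant_after_alter()
RETURNS event_trigger
LANGUAGE plpgsql
AS $$
DECLARE
    obj record;
BEGIN
    -- Re-grant on every table touched by the ALTER,
    -- whether or not it was actually a SET SCHEMA
    FOR obj IN
        SELECT object_identity
        FROM pg_event_trigger_ddl_commands()
        WHERE object_type = 'table'
    LOOP
        EXECUTE format('GRANT SELECT ON %s TO x_user', obj.object_identity);
    END LOOP;
END;
$$;

CREATE EVENT TRIGGER regrant_on_alter
    ON ddl_command_end
    WHEN TAG IN ('ALTER TABLE')
    EXECUTE PROCEDURE regrant_after_alter();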
A far simpler option - provided that it fits your use case - would be to write a move_table() function which combines the ALTER and GRANT commands.
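A minimal sketch of such a helper; the function name and the hard-coded grantee x_user are assumptions, and a real version would probably take the role as a parameter too:

CREATE OR REPLACE FUNCTION move_table(p_table text, p_dest_schema text)
RETURNS void
LANGUAGE plpgsql
AS $$
BEGIN
    -- Move the table, then immediately restore the expected access
    EXECUTE format('ALTER TABLE %I SET SCHEMA %I', p_table, p_dest_schema);
    EXECUTE format('GRANT SELECT ON %I.%I TO x_user', p_dest_schema, p_table);
END;
$$;

-- Usage: SELECT move_table('atable', 'dw');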

Synonym support on PostgreSQL

How do I create and use synonyms in PostgreSQL as in Oracle? Do I need to create some DB link or anything else? I could not find any good official documentation on this topic.
Edit 1
As of now I have an application with two separate modules that connect to two different Oracle databases; one module needs to access tables of the other, for which we use synonyms over a DB link in Oracle. Now we are migrating the application to PostgreSQL, so we need an equivalent of synonyms.
Edit 2
When I say two different Oracle databases, it can mean two different Oracle instances or two schemas of the same DB; it is configurable in the application, and the application must support both modes.
PostgreSQL version: 9.6.3
Approach 1:
Finally I got it working using the foreign data wrapper postgres_fdw, as below.
I have two databases named dba and dbb; dbb has a table users and I need to access it in dba.
CREATE SERVER myserver FOREIGN DATA WRAPPER postgres_fdw
OPTIONS (host 'localhost', dbname 'dbb', port '5432');
CREATE USER MAPPING FOR postgres
SERVER myserver
OPTIONS (user 'user', password 'password');
CREATE FOREIGN TABLE users (
username char(1))
SERVER myserver
OPTIONS (schema_name 'public', table_name 'users');
Now I can execute all SELECT/UPDATE queries in dba.
Approach 2:
This can be achieved by creating two schemas in the same DB. The steps:
Create two schemas, e.g. app_schema and common_schema.
Grant access:
GRANT CREATE,USAGE ON SCHEMA app_schema TO myuser;
GRANT CREATE,USAGE ON SCHEMA common_schema TO myuser;
Now set the user's search path as below:
alter user myuser set search_path to app_schema,common_schema;
Now tables in common_schema will be visible to myuser. For example, say we have a table user in common_schema and a table app in app_schema; then the queries below will run without any schema qualification:
select * from user;
select * from app;
This is similar to synonyms in Oracle.
Note: the above queries will work on PostgreSQL 9.5.3+.
I think you don't need synonyms in Postgres the way you need them in Oracle, because unlike Oracle there is a clear distinction between a user and a schema in Postgres. It's not a 1:1 relationship, and multiple users can easily use multiple schemas without needing to fully qualify objects (mydb.public.mytable) by exploiting Postgres's search_path feature.
If the tables are supposed to be in a different database in PostgreSQL as well, you'd create a foreign table using a foreign data wrapper.
If you used the Oracle synonym just to avoid having to write atable#dblink, you don't have to do anything in PostgreSQL, because foreign tables look and feel just like local tables in PostgreSQL.
If you use the synonym for some other purposes, you can either set search_path to include the schema where the target table is, or you can create a simple view that just selects everything from the target table.
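For example, a view standing in for a synonym might look like this (the schema and table names are illustrative, reusing Approach 2 above):

-- Application code can now say app_schema.users (or just users, via
-- search_path) while the data actually lives in common_schema
CREATE VIEW app_schema.users AS
    SELECT * FROM common_schema.users;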

Db2 drop schema contents along with schema at once

Is there a query in the DB2 9.7 Control Center with which I can DELETE (DROP) all the contents of my schema, including the schema itself, at once?
My other option is to drop the objects first and then DROP the schema,
but I want to drop the entire schema with all its objects at once.
DROP SCHEMA <schema_name> CASCADE/RESTRICT didn't work for me.
The ADMIN_DROP_SCHEMA procedure is what you're looking for.
The ADMIN_DROP_SCHEMA procedure is used to drop a specific schema and all objects contained in it.
http://publib.boulder.ibm.com/infocenter/db2luw/v9/topic/com.ibm.db2.udb.admin.doc/doc/r0022036.htm
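A typical call looks like the following; the schema and error-table names here are placeholders. If the drop fails, the output parameters point at a table holding the error details:

-- Drop schema MYSCHEMA and everything in it; errors, if any,
-- are written to ERRORSCHEMA.ERRORTABLE
CALL SYSPROC.ADMIN_DROP_SCHEMA('MYSCHEMA', NULL, 'ERRORSCHEMA', 'ERRORTABLE');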
First drop all the tables in the schema.
Then try to delete the schema using
DROP SCHEMA SCHEMA_NAME RESTRICT
webchain.in has a sample Java program that shows how to drop the schema programmatically.
In case DROP SCHEMA fails with the error SQLCODE=-551, SQLSTATE=42501 even after dropping all the tables, try the command
GRANT DBADM ON DATABASE TO USER_NAME

Drop or Delete Schema Postgresql via controller Rails

I have a multi-tenant Rails application with PostgreSQL.
I want to drop a schema (schema name = subdomain) together with the tables in that schema.
Primitive code in the controller:
accounts_controller.rb
def destroy
  @account = Account.find(params[:id])
  conn = ActiveRecord::Base.connection
  conn.execute("DROP SCHEMA " + @account.subdomain)
end
Error message:
ActiveRecord::StatementInvalid in AccountsController#destroy
PG::Error: ERROR: cannot drop schema subdomain1 because other objects depend on it
DETAIL: table articles depends on schema subdomain1
table gambarinfos depends on schema subdomain1
table pages depends on schema subdomain1
table redactor_assets depends on schema subdomain1
table schema_migrations depends on schema subdomain1
table usersekolahs depends on schema subdomain1
HINT: Use DROP ... CASCADE to drop the dependent objects too.
: DROP SCHEMA subdomain1
Any ideas?
Thanks.
Problem solved by adding CASCADE:
conn.execute("DROP SCHEMA " + @account.subdomain + " CASCADE")
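Separately, concatenating the subdomain straight into the SQL string is open to SQL injection if a tenant can influence it. One hedge, sketched here on the PostgreSQL side, is to let format('%I', ...) quote the identifier; the hard-coded tenant name is just for illustration, and the application would pass the real value in:

-- format('%I', ...) double-quotes the identifier when needed,
-- so a hostile subdomain string can't smuggle in extra SQL
DO $$
DECLARE
    tenant text := 'subdomain1';  -- would come from @account.subdomain
BEGIN
    EXECUTE format('DROP SCHEMA IF EXISTS %I CASCADE', tenant);
END
$$;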

Specifying schema for temporary tables

I'm used to seeing temporary tables created with just the hash/number symbol, like this:
CREATE TABLE #Test
(
[Id] INT
)
However, I've recently come across stored procedure code that specifies the schema name when creating temporary tables, for example:
CREATE TABLE [dbo].[#Test]
(
[Id] INT
)
Is there any reason why you would want to do this? If you're only specifying the user's default schema, does it make any difference? Does this refer to the [dbo] schema in the local database or the tempdb database?
It won't make any difference if you are specifying the user's default schema, but if the user's default schema changes, the temporary table will still be created under the dbo schema.
Temp tables are created in tempdb, so (even if you could, as noted by Jim in the comments) it would mean you'd need to maintain the schema in tempdb, and it offers no benefit.
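A quick way to convince yourself, sketched below: whatever schema you write in CREATE TABLE, the temp table lands in tempdb. The LIKE pattern accounts for the padded internal name SQL Server gives temp tables:

CREATE TABLE [dbo].[#Test]
(
[Id] INT
);

-- Temp tables live in tempdb, not the local database
SELECT s.name AS schema_name, t.name AS table_name
FROM tempdb.sys.tables AS t
JOIN tempdb.sys.schemas AS s ON s.schema_id = t.schema_id
WHERE t.name LIKE '#Test%';

DROP TABLE [#Test];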