Postgres: Is there a way to tie a User to a Schema? - sql

In our database we have users: A, B, C.
Each user has its own corresponding schema: A, B, C.
Normally if I wanted to select from a table in one of the schemas I would have to do:
select * from A.table;
My question is:
Is there a way to make:
select * from table
go to the correct schema based on the user that is logged in?

This is the default behavior for PostgreSQL. Make sure your search_path is set correctly.
SHOW search_path;
By default it should be:
search_path
--------------
"$user",public
See PostgreSQL's documentation on schemas for more information. Specifically this part:
You can create a schema for each user with the same name as that user. Recall that the default search path starts with $user, which resolves to the user name. Therefore, if each user has a separate schema, they access their own schemas by default.
If you use this setup then you might also want to revoke access to the public schema (or drop it altogether), so users are truly constrained to their own schemas.
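For example, a minimal sketch of that lockdown (the role name a is illustrative):
-- Stop ordinary users from creating or even using objects in the public schema.
REVOKE CREATE ON SCHEMA public FROM PUBLIC;
REVOKE USAGE ON SCHEMA public FROM PUBLIC;
-- Optionally pin a role to its own schema only.
ALTER ROLE a SET search_path = "$user";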
Update re: your comment:
Here is what happens on my machine, which I believe is what you are after.
skrall=# \d
No relations found.
skrall=# show search_path;
search_path
----------------
"$user",public
(1 row)
skrall=# create schema skrall;
CREATE SCHEMA
skrall=# create table test(id serial);
NOTICE: CREATE TABLE will create implicit sequence "test_id_seq" for serial column "test.id"
CREATE TABLE
skrall=# \d
List of relations
Schema | Name | Type | Owner
--------+-------------+----------+--------
skrall | test | table | skrall
skrall | test_id_seq | sequence | skrall
(2 rows)
skrall=# select * from test;
id
----
(0 rows)
skrall=#

Related

Data extract and import from CSV with foreign keys - Postgresql

I have a multi-tenant database. My requirement is to extract a single tenant's data from one database and insert it into another database.
So I have 2 tables: users and identities.
The users table has a foreign key identity_id connected to the identities table.
There can be many identities and users under a customer.
I am extracting the data to a CSV file and inserting it into the new database from that CSV file.
The primary keys are set to auto-increment, so the users and identities tables generate ids while inserting the data from the CSV.
Table data from existing database
Users table
| id | identity_id |
| --- | ------------|
| 86 | 70 |
| 193 | 127 |
| 223 | 131 |
Identities table
|id |name |email |
|---|------------|-----------------|
|70 |Alon muscle |muscle#test.com |
|131|james |james#james.com |
|127|watson |watson#watson.com|
Now, identity_id is the foreign key in the users table mapping to the identities table.
I am trying to insert the users and identities data into the new database,
so the primary keys will be auto-incremented for users and identities.
The problem comes with the foreign key:
how can I maintain the foreign-key relationship when I have multiple users and identities records?
Well, you did not actually provide details on your tables, i.e. the actual definitions (DDL), nor the CSV contents, which I assume your stage table mirrors. However, with the test data provided and a couple of assumptions, the following demonstrates a method to load your data. The idea is to build a procedure which uses the stage table to load the identities table, then selects the generated id via the supplied email to populate the users table. Assumptions:
email must be unique in identities (at least in lower case).
stage table reflects name and email for identities.
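For reference, here is a minimal sketch of table definitions that the procedure below would work against; the identity columns, the ident_id name and the generated low_email column are my assumptions (they are not given in the question) and need PostgreSQL 12 or later:
-- Hypothetical stage table mirroring the CSV contents.
create table stage (
    name  text,
    email text
);
-- Hypothetical identities table; low_email provides a case-insensitive unique key.
create table identities (
    ident_id  bigint generated always as identity primary key,
    name      text,
    email     text not null,
    low_email text generated always as (lower(email)) stored unique
);
-- Hypothetical users table referencing identities.
create table users (
    id       bigint generated always as identity primary key,
    ident_id bigint references identities(ident_id)
);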
Procedure to load identities and users.
create or replace procedure generate_user_idents()
language sql
as $$
-- Load identities from the stage table, skipping emails that are already present.
insert into identities(name, email)
select name, email
from stage
on conflict (low_email)
do nothing;
-- Populate users with the generated identity ids, matched via the staged emails.
insert into users(ident_id)
select ident.ident_id
from identities ident
where ident.low_email in
      ( select lower(email)
        from stage
      );
$$;
Script to clear and repopulate stage data then load stage to identities and users.
do $$
begin
execute 'truncate table stage';
-- replace the following with your \copy to load stage
insert into stage(name, email)
values ( 'Alon muscle', 'muscle#test.com' )
, ( 'watson', 'watson#watson.com')
, ( 'james', 'james#james.com' );
call generate_user_idents();
end ;
$$;
See the demo here. Since the demo generates the ids, it does not exactly match your provided values, but it is close. As it stands, the procedure would happily generate duplicate users should you fail to clear the stage table or re-enter the same values into it. You have to decide how to handle that.
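One way to guard against the duplicates, sketched here as an assumption rather than something from the demo, is to make the second insert skip identities that already have a user row:
-- Idempotent variant of the users insert: skip identities already linked to a user.
insert into users(ident_id)
select ident.ident_id
from identities ident
where ident.low_email in (select lower(email) from stage)
  and not exists (select 1 from users u where u.ident_id = ident.ident_id);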

How to list schemas of a specific database in Informix using SQL?

What is the query to get a list of schema names in a specific database in Informix?
Schemas are not commonly used in Informix databases and have very little trackability within a database. The CREATE SCHEMA notation is supported because it was part of SQL-89. The AUTHORIZATION clause is used to determine the (default) 'owner' of the objects created with the CREATE SCHEMA statement. There is nothing to stop a single user running the CREATE SCHEMA statement multiple times, either consecutively or at widely different times (in any given database within an Informix instance).
CREATE SCHEMA AUTHORIZATION "pokemon"
CREATE TABLE gizmo (s SERIAL NOT NULL PRIMARY KEY, v VARCHAR(20) NOT NULL)
CREATE TABLE widget(t SERIAL NOT NULL PRIMARY KEY, d DATETIME YEAR TO SECOND NOT NULL)
;
CREATE SCHEMA AUTHORIZATION "pokemon"
CREATE TABLE object (u SERIAL NOT NULL PRIMARY KEY, i INTEGER NOT NULL)
CREATE TABLE "pikachu".complain (C SERIAL NOT NULL PRIMARY KEY, v VARCHAR(255) NOT NULL)
;
After the CREATE SCHEMA statement executes, there is no way of tracking that either pair of these tables were created together as part of the same schema; there's no way to know that "pikachu".complain was part of a CREATE SCHEMA statement executed on behalf of "pokemon". There is no DROP SCHEMA statement that would necessitate such support.
A schema belongs to a user. You can list all available users from the sysusers system catalog:
SELECT username FROM "informix".sysusers;
Since only the DBA and RESOURCE privileges allow a user to issue a CREATE SCHEMA statement, we could restrict the query like so:
SELECT username FROM "informix".sysusers WHERE usertype IN ('D', 'R');
Another solution is to list only the users that have actually created tables; for that, you can query the systables system catalog and list the distinct owners.
SELECT DISTINCT owner FROM "informix".systables;
As commented by @JonathanLeffler, a user could have been granted RESOURCE privileges, have created a table, and then be 'demoted' to CONNECT privileges. The user would still own the table. Hence the second solution is the most accurate.
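If you also want to leave out the catalog tables owned by "informix" itself, a common refinement (assuming a standard Informix instance, where user tables have tabid values of 100 and above) is:
SELECT DISTINCT owner
FROM "informix".systables
WHERE tabid >= 100;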

Dual table issue

Hi. We all know DUAL is a table owned by SYS and that other users have a synonym for it.
But when I fire the command below:
create table dual(x varchar2(1));
it worked; an object named DUAL was created.
When there is already a synonym with the name DUAL, how can we create another object of the same name? Why is Oracle allowing us to do it?
You can create a table named DUAL because tables and public synonyms have a different namespace. For details, see chapter Database Object Names and Qualifiers of Oracle's SQL Language Reference.
EDIT: To illustrate the mechanism:
If you create your own DUAL table as user scott
CREATE TABLE dual (x VARCHAR2(1));
... then it shows up in the data dictionary:
SELECT object_id, object_type, owner, object_name, namespace
FROM dba_objects
WHERE object_name='DUAL';
OBJECT_ID  OBJECT_TYPE  OWNER   OBJECT_NAME  NAMESPACE
      142  TABLE        SYS     DUAL                 1
      143  SYNONYM      PUBLIC  DUAL                 1
    78138  TABLE        SCOTT   DUAL                 1
So, the names are unique per owner and namespace. You cannot add yet another table called DUAL in your schema, and you cannot create a private synonym named DUAL either, but you can create your own synonyms for objects in other schemas.
Please make sure to drop the table again; while your own DUAL exists, even simple statements won't work anymore:
SELECT sysdate FROM DUAL;
--
DROP TABLE dual;
SELECT sysdate FROM DUAL;
01.07.2018
I think DUAL is a distraction here. Of course it's a system table and there are side effects if you mess about with it, so don't. But the question is really about why there is no namespace conflict between any object and a public synonym with the same name. For example, I can create a table named ALL_TABLES or DBMS_OUTPUT in my own schema (if I really want to). Or, I can create a table called MYDEMOTABLE and then create a public synonym MYDEMOTABLE for WILLIAM.MYDEMOTABLE.
But then, what restriction are you expecting to exist? There is already both a table (owned by SYS) and a public synonym (owned by PUBLIC) named DUAL. You can create a third object with the same name, as long as it isn't owned by SYS or PUBLIC.
I think I found a way to demonstrate what is going on (without using DUAL or any other SYS object ;-):
If two users create a table with the same name, the tables end up in their own schema as expected:
CREATE USER user_a IDENTIFIED BY user_a;
CREATE USER user_b IDENTIFIED BY user_b;
CREATE TABLE user_a.foo (x NUMBER);
CREATE TABLE user_b.foo (x NUMBER);
SELECT object_id, object_type, owner, object_name, namespace
FROM dba_objects
WHERE object_name='FOO';
OBJECT_ID  OBJECT_TYPE  OWNER   OBJECT_NAME  NAMESPACE
    78225  TABLE        USER_A  FOO                  1
    78226  TABLE        USER_B  FOO                  1
But when one of them creates a public synonym (as SYS did with its DUAL table), it ends up in a magic schema with the name PUBLIC:
CREATE PUBLIC SYNONYM foo FOR user_a.foo;
SELECT object_id, object_type, owner, object_name, namespace
FROM dba_objects
WHERE object_name='FOO';
OBJECT_ID  OBJECT_TYPE  OWNER   OBJECT_NAME  NAMESPACE
    78225  TABLE        USER_A  FOO                  1
    78226  TABLE        USER_B  FOO                  1
    78156  SYNONYM      PUBLIC  FOO                  1
So, in other words, public synonyms are just synonyms that live in the schema PUBLIC. And you can have only one table, view, sequence, package, synonym with the same name per schema.
There are various schemas in your single database. The default DUAL table you asked about actually belongs to the 'sys' schema, which is a system schema, but your default database schema is dbo; that means the query you executed to create a DUAL table actually created it in your default schema named dbo.
If you want to check this double existence, there are several ways to do so.
You can run queries like "select * from sys.DUAL" and "select * from dbo.DUAL"; you will see two different outputs.
You can check all the schemas with "select * from sys.schemas".
Also, you can check the two tables' schemas by table name:
"select * from sys.tables where name = 'DUAL'";
the result will be two different tables with two different schema ids.
Hope this helps you to understand schema basics.

Mapped relation in postgresql

I have been playing with PostgreSQL for a while now and this one caught my eye. What is a "mapped relation" in PostgreSQL? According to the documentation,
When the name of the on-disk file is zero, it is called a "mapped"
relation whose disk file name is determined by low-level state.
Is it simply a relation that doesn't have a fixed OID to reference it with? Why is it created? What is its significance? Or is it similar to a temp table?
Can someone throw light on this?
https://www.postgresql.org/docs/current/static/storage-file-layout.html
Also, for certain system catalogs including pg_class itself,
pg_class.relfilenode contains zero. The actual filenode number of
these catalogs is stored in a lower-level data structure, and can be
obtained using the pg_relation_filenode() function.
t=# select relfilenode from pg_class where relname = 'pg_class';
relfilenode
-------------
0
(1 row)
t=# select pg_relation_filenode('pg_class');
pg_relation_filenode
----------------------
12712
(1 row)
Now, a little barbarian (yet user-friendly) way to make sure this is the file:
t=# create table very_special_name(i int);
CREATE TABLE
t=# CHECKPOINT; --to actually write to disk
CHECKPOINT
t=# select oid from pg_database where datname='t';
oid
----------
13805223
(1 row)
so we check the readable strings:
-bash-4.2$ strings /pg/data/base/13805223/12712 | grep very_special
very_special_name
The new table's name is indeed in there.
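As an aside that is not part of the original answer, pg_relation_filepath() returns the whole path relative to the data directory in a single call; the output below is illustrative and simply reuses the ids from this session:
t=# select pg_relation_filepath('pg_class');
 pg_relation_filepath
----------------------
 base/13805223/12712
(1 row)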

Copy a table (including indexes) in postgres

I have a postgres table. I need to delete some data from it. I was going to create a temporary table, copy the data in, recreate the indexes and then delete the rows I need. I can't delete data from the original table, because this original table is the source of data. In one case I need to get some results that depend on deleting X; in another case, I'll need to delete Y. So I need all the original data to always be around and available.
However, it seems a bit silly to recreate the table, copy it again and recreate the indexes. Is there any way in postgres to tell it "I want a complete separate copy of this table, including structure, data and indexes"?
Unfortunately PostgreSQL does not have a "CREATE TABLE ... LIKE X INCLUDING INDEXES".
Newer PostgreSQL (since 8.3, according to the docs) can use "INCLUDING INDEXES":
# select version();
version
-------------------------------------------------------------------------------------------------
PostgreSQL 8.3.7 on x86_64-pc-linux-gnu, compiled by GCC cc (GCC) 4.2.4 (Ubuntu 4.2.4-1ubuntu3)
(1 row)
As you can see I'm testing on 8.3.
Now, let's create table:
# create table x1 (id serial primary key, x text unique);
NOTICE: CREATE TABLE will create implicit sequence "x1_id_seq" for serial column "x1.id"
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "x1_pkey" for table "x1"
NOTICE: CREATE TABLE / UNIQUE will create implicit index "x1_x_key" for table "x1"
CREATE TABLE
And see how it looks:
# \d x1
Table "public.x1"
Column | Type | Modifiers
--------+---------+-------------------------------------------------
id | integer | not null default nextval('x1_id_seq'::regclass)
x | text |
Indexes:
"x1_pkey" PRIMARY KEY, btree (id)
"x1_x_key" UNIQUE, btree (x)
Now we can copy the structure:
# create table x2 ( like x1 INCLUDING DEFAULTS INCLUDING CONSTRAINTS INCLUDING INDEXES );
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "x2_pkey" for table "x2"
NOTICE: CREATE TABLE / UNIQUE will create implicit index "x2_x_key" for table "x2"
CREATE TABLE
And check the structure:
# \d x2
Table "public.x2"
Column | Type | Modifiers
--------+---------+-------------------------------------------------
id | integer | not null default nextval('x1_id_seq'::regclass)
x | text |
Indexes:
"x2_pkey" PRIMARY KEY, btree (id)
"x2_x_key" UNIQUE, btree (x)
If you are using PostgreSQL pre-8.3, you can simply use pg_dump with the "-t" option to specify one table, change the table name in the dump, and load it again:
=> pg_dump -t x2 | sed 's/x2/x3/g' | psql
SET
SET
SET
SET
SET
SET
SET
SET
CREATE TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
And now the table is:
# \d x3
Table "public.x3"
Column | Type | Modifiers
--------+---------+-------------------------------------------------
id | integer | not null default nextval('x1_id_seq'::regclass)
x | text |
Indexes:
"x3_pkey" PRIMARY KEY, btree (id)
"x3_x_key" UNIQUE, btree (x)
CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } ] TABLE table_name
    [ (column_name [, ...] ) ]
    [ WITH ( storage_parameter [= value] [, ... ] ) | WITH OIDS | WITHOUT OIDS ]
    [ ON COMMIT { PRESERVE ROWS | DELETE ROWS | DROP } ]
    [ TABLESPACE tablespace ]
    AS query
Here is an example
CREATE TABLE films_recent AS
SELECT * FROM films WHERE date_prod >= '2002-01-01';
The other way to create a new table from the first is to use
CREATE TABLE films_recent (LIKE films INCLUDING INDEXES);
INSERT INTO films_recent
SELECT *
FROM films
WHERE date_prod >= '2002-01-01';
Note that PostgreSQL has a patch out to fix tablespace issues if the second method is used.
There are many answers on the web, one of them can be found here.
I ended up doing something like this:
create table NEW ( like ORIGINAL including all);
insert into NEW select * from ORIGINAL;
This will copy the schema and the data, including indexes, but not triggers or foreign-key constraints.
Note that the sequence behind a serial column is shared with the original table (the copied default still points to the original's sequence), so adding a new row to either table will increment the same counter.
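If you want the copy to stop sharing that counter, one option (a sketch; new_table and id are placeholder names) is to give the copied column its own sequence:
-- Create a dedicated sequence for the copy and point the column default at it.
CREATE SEQUENCE new_table_id_seq OWNED BY new_table.id;
ALTER TABLE new_table
    ALTER COLUMN id SET DEFAULT nextval('new_table_id_seq');
-- Start the new sequence just past the highest id already copied.
SELECT setval('new_table_id_seq', coalesce(max(id), 0) + 1, false) FROM new_table;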
I have a postgres table. I need to
delete some data from it.
I presume that ...
delete from yourtable
where <condition(s)>
... won't work for some reason. (Care to share that reason?)
I was going to create a temporary
table, copy the data in, recreate the
indexes and the delete the rows I
need.
Look into pg_dump and pg_restore. Using pg_dump with some clever options and perhaps editing the output before pg_restoring might do the trick.
Since you are doing "what if"-type analysis on the data, I wonder whether you might be better off using views.
You could define a view for each scenario you want to test based on the negation of what you want to exclude. I.e., define a view based on what you want to INclude. E.g., if you want a "window" on the data where you "deleted" the rows where X=Y, then you would create a view as rows where (X != Y).
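For instance, a minimal sketch of such a scenario view (table and column names are made up for illustration):
-- Scenario view: behaves as if the rows where x = y had been deleted.
CREATE VIEW scenario_without_xy AS
SELECT *
FROM original_table
WHERE x != y;  -- use "x IS DISTINCT FROM y" if rows with NULLs should be kept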
Views are stored in the database (in the System Catalog) as their defining query. Every time you query the view the database server looks up the underlying query that defines it and executes that (ANDed with any other conditions you used). There are several benefits to this approach:
You never duplicate any portion of your data.
The indexes already in use for the base table (your original, "real" table) will be used (as seen fit by the query optimizer) when you query each view/scenario. There is no need to redefine or copy them.
Since a view is a "window" (NOT a snapshot) on the "real" data in the base table, you can add/update/delete on your base table and simply re-query the view scenarios with no need to recreate anything as the data changes over time.
There is a trade-off, of course. Since a view is a virtual table and not a "real" (base) table, you're actually executing a (perhaps complex) query every time you access it. This may slow things down a bit. But it may not. It depends on many issues (size and nature of the data, quality of the statistics in the System Catalog, speed of the hardware, usage load, and much more). You won't know until you try it. If (and only if) you actually find that the performance is unacceptably slow, then you might look at other options. (Materialized views, copies of tables, ... anything that trades space for time.)
A simple way is include all:
CREATE TABLE new_table (LIKE original_table INCLUDING ALL);
Create a new table using a select to grab the data you want, then swap the old table with the new one.
create table mynewone as select * from myoldone where ...
Re-create the indexes after the table swap.
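A sketch of that swap, using the table names from the example above:
BEGIN;
ALTER TABLE myoldone RENAME TO myoldone_backup;
ALTER TABLE mynewone RENAME TO myoldone;
COMMIT;
-- then re-create the needed indexes on the renamed table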