Row Level Security in Postgres on Normalized Tables - sql

The premise
In the documentation, Row Level Security seems great. Based on what I've read, I can now stop creating views like this:
SELECT data.*
FROM data
JOIN user_data
ON data.id = user_data.data_id
AND user_data.role = CURRENT_ROLE
The great part is, Postgres comes up with a great plan for that view: an index scan followed by a hash join on the user_data table, exactly what we want to happen because it's crazy fast. Compare that with my RLS implementation:
CREATE POLICY data_owner
ON data
FOR ALL
TO user
USING (
    (
        SELECT TRUE AS BOOL FROM (
            SELECT data_id FROM user_data WHERE user_role = CURRENT_USER
        ) AS user_data WHERE user_data.data_id = data.id
    ) = true
)
WITH CHECK (TRUE);
This bummer of a policy executes the condition for each row in the data table, instead of optimizing by scoping the query to the rows which our CURRENT_USER has access to, like our view does. To be clear, that means select * from data hits every row in the data table.
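A quick way to see this for yourself (a sketch; it assumes the tables above plus a hypothetical data_for_role view wrapping the join at the top of the question) is to compare the plans:
-- Through the RLS policy: expect a sequential scan on data with the policy's
-- subquery applied as a per-row SubPlan filter.
EXPLAIN (COSTS OFF) SELECT * FROM data;
-- Through a view like the one above (hypothetical name): expect the plan to
-- start from user_data and join, touching only the permitted rows.
EXPLAIN (COSTS OFF) SELECT * FROM data_for_role;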
The question
How do I write a policy with an inner select which doesn't test said select on every row in the target table? Said another way: how do I get RLS to run my policy on the target table before running the actual query on the result?
p.s. I've left this question somewhat vague and fiddle-less, mostly because sqlfiddle hasn't hit 9.5 yet. Let me know if I need to add more color or some gists to get my question across.

PostgreSQL may be able to generate a better plan if you phrase the policy like this:
...
USING (EXISTS
(SELECT data_id
FROM user_data
WHERE user_data.data_id = data.id
AND role = current_user
)
)
You should have a (PRIMARY KEY?) index ON user_data (role, data_id) to speed up nested loop joins.
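For example (a sketch; it assumes user_data has exactly those two columns and no primary key yet):
-- Hypothetical DDL: a composite primary key doubles as the index the nested loop can use.
ALTER TABLE user_data
    ADD CONSTRAINT user_data_pkey PRIMARY KEY (role, data_id);
-- Or, if a primary key isn't appropriate, a plain index works too:
CREATE INDEX user_data_role_data_id_idx ON user_data (role, data_id);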
But I think that it would be a better design to include the permission information in the data table itself, perhaps using the name[] type:
CREATE TABLE data(
id integer PRIMARY KEY,
val text,
acl name[] NOT NULL
);
INSERT INTO data VALUES (1, 'one', ARRAY[name 'laurenz', name 'advpg']);
INSERT INTO data VALUES (2, 'two', ARRAY[name 'advpg']);
INSERT INTO data VALUES (3, 'three', ARRAY[name 'laurenz']);
Then you can use a policy like this:
CREATE POLICY data_owner ON data FOR ALL TO PUBLIC
USING (acl @> ARRAY[current_user::name])
WITH CHECK (TRUE);
ALTER TABLE data ENABLE ROW LEVEL SECURITY;
ALTER TABLE data FORCE ROW LEVEL SECURITY;
When I SELECT, I get only the rows for which I have permission:
SELECT id, val FROM data;
id | val
----+-------
1 | one
3 | three
(2 rows)
You can define a GIN index to support that condition:
CREATE INDEX ON data USING gin (acl _name_ops);

Related

UPDATE two columns with new value under large size table

We have a table like:
mytable (pid, string_value, int_value)
This table has more than 20M rows in total. We now have a feature that needs to mark all the rows in this table as invalid, so we need to update the columns to string_value = NULL and int_value = 0, which indicates an invalid row (we still want to keep the pid, as it is important to us).
So what is the best way?
I use the following SQL:
UPDATE Mytable
SET string_value = NULL,
int_value = 0;
but this query takes more than 4 minutes in my test env. Is there any better way we can improve it?
Updating all the rows can be quite expensive. Often, it is faster to empty the table and reload it.
In generic SQL this looks like:
create table mytable_temp as
select pid
from mytable;
truncate table mytable; -- back it up first!
insert into mytable (pid, string_value, int_value)
select pid, null, 0
from mytable_temp;
The creation of the temporary table may use different syntax, depending on your database.
Updates can take time to complete. Another way of achieving this is to follow these steps (a sketch follows after the list):
Add new columns with the values you need set as the default value.
Drop the original columns.
Rename the new columns to the names of the original columns.
You can then drop the default values on the new columns.
This needs to be tested, as different DBMSs allow different levels of table alteration (i.e. not all DBMSs allow dropping a default or dropping a column).
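A minimal sketch of that approach, assuming a PostgreSQL-style dialect and the column names from the question (the column types are guesses, and whether the ADD COLUMN itself is cheap depends on the DBMS and version):
-- Add replacement columns whose DEFAULT gives every existing row the new value.
ALTER TABLE mytable ADD COLUMN string_value_new text DEFAULT NULL;
ALTER TABLE mytable ADD COLUMN int_value_new integer DEFAULT 0;
-- Drop the originals and move the new columns into their place.
ALTER TABLE mytable DROP COLUMN string_value;
ALTER TABLE mytable DROP COLUMN int_value;
ALTER TABLE mytable RENAME COLUMN string_value_new TO string_value;
ALTER TABLE mytable RENAME COLUMN int_value_new TO int_value;
-- Optionally remove the default afterwards.
ALTER TABLE mytable ALTER COLUMN int_value DROP DEFAULT;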

Create virtual table with rowid only of another table

Suppose I have a table in sqlite as follows:
`name` `age`
"bob" 20 (rowid=1)
"tom" 30 (rowid=2)
"alice" 19 (rowid=3)
And I want to store the result of the following query using minimal storage space:
SELECT * FROM mytable WHERE name < 'm' ORDER BY age
How can I store a virtual table from this resultset that will just give me the ordered resultset? In other words, storing the rowids in an ordered way (in the above it would be 3, 1) without saving all the data into a separate table.
For example, if I stored this information with just the rowid in a sorted order:
CREATE TABLE vtable AS
SELECT rowid from mytable WHERE name < 'm' ORDER BY age;
Then I believe every time I would need to query the vtable I would have to join it back to the original table using the rowid. Is there a way to do this so that the vtable "knows" the content that it has based on the external table (I believe this is referred to as external-content when creating an fts index -- https://sqlite.org/fts5.html#external_content_tables)?
I believe this is referred to as external-content when creating an fts index.
No. A virtual table is created using CREATE VIRTUAL TABLE ...... USING module_name (module_parameters)
Virtual tables are tables that can call a module, thus the USING module_name(module_parameters) is mandatory.
For FTS (Full Text Search) you would have to read the documentation, but it could be something like:
CREATE VIRTUAL TABLE IF NOT EXISTS bible_fts USING FTS3(book, chapter INTEGER, verse INTEGER, content TEXT)
You very likely don't need/want a VIRTUAL table.
CREATE TABLE vtable AS SELECT rowid from mytable WHERE name < 'm' ORDER BY age;
Would create a normal table (if it didn't already exist) that would persist. If you wanted to use it, it would probably only be of use by joining it with mytable. Effectively it would allow a snapshot, but at a cost of at least 4k (one database page) for every snapshot.
I'd suggest a single table for all snapshots that has two columns: a snapshot identifier and the rowid of the snapshotted row. This would probably be far less space consuming.
Basic Example
Consider :-
CREATE TABLE IF NOT EXISTS mytable (
id INTEGER PRIMARY KEY, /* NOTE not using an alias of the rowid may present issues as the id's can change */
name TEXT,
age INTEGER
);
CREATE TABLE IF NOT EXISTS snapshot (id TEXT DEFAULT CURRENT_TIMESTAMP, mytable_map);
INSERT INTO mytable (name,age) VALUES('Mary',21),('George',22);
INSERT INTO snapshot (mytable_map) SELECT id FROM mytable;
SELECT snapshot.id,name,age FROM snapshot JOIN mytable ON mytable.id = snapshot.mytable_map;
Suppose the above is run 3 times with a reasonable interval (a few seconds, so that the snapshot id, i.e. the timestamp, differs each run).
Then you would get 3 snapshots (each with a number of rows, but the same value in the id column within a snapshot): the first with 2 rows, the 2nd with 4 and the last with 6, as each run adds 2 more rows to mytable.
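To read a single snapshot back you would join on the map column and filter on the snapshot id, along these lines (a sketch; the literal timestamp is just a placeholder):
-- Hypothetical usage: restore the rows captured by one particular snapshot.
SELECT mytable.name, mytable.age
FROM snapshot
JOIN mytable ON mytable.id = snapshot.mytable_map
WHERE snapshot.id = '2019-08-21 14:18:41' -- placeholder snapshot id
ORDER BY mytable.age;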

Merge update records in a final table

I have a user table in Hive of the form:
User:
Id String,
Name String,
Col1 String,
UpdateTimestamp Timestamp
I'm inserting data in this table from a file which has the following format:
I/U,Timestamp when record was written to file, Id, Name, Col1, UpdateTimestamp
e.g. for inserting a user with Id 1:
I,2019-08-21 14:18:41.002947,1,Bob,stuff,123456
and updating col1 for the same user with Id 1:
U,2019-08-21 14:18:45.000000,1,,updatedstuff,123457
The columns which are not updated are returned as null.
Now, simple insertion is easy in Hive using LOAD DATA INPATH into a staging table and then ignoring the first two fields from the staging table.
However, how would I go about the update statements? So that my final row in hive looks like below:
1,Bob,updatedstuff,123457
I was thinking to insert all rows in a staging table and then perform some sort of merge query. Any ideas?
Typically with a merge statement your "file" would still be unique on ID and the merge statement would determine whether it needs to insert this as a new record, or update values from that record.
However, if the file is non-negotiable and will always have the I/U format, you could break the process up into two steps, the insert, then the updates, as you suggested.
In order to perform updates in Hive, you will need the users table to be stored as ORC and have ACID enabled on your cluster. For my example, I would create the users table with a cluster key, and the transactional table property:
create table test.orc_acid_example_users
(
id int
,name string
,col1 string
,updatetimestamp timestamp
)
clustered by (id) into 5 buckets
stored as ORC
tblproperties('transactional'='true');
After your insert statements, your Bob record would say "stuff" in col1.
As far as the updates - you could tackle these with an update or merge statement. I think the key here is the null values. It's important to keep the original name, or col1, or whatever, if the staging table from the file has a null value. Here's a merge example which coalesces the staging table's fields. Basically, if there is a value in the staging table, take that, or else fall back to the original value.
merge into test.orc_acid_example_users as t
using test.orc_acid_example_staging as s
on t.id = s.id
and s.type = 'U'
when matched
then update set name = coalesce(s.name,t.name), col1 = coalesce(s.col1, t.col1)
Now Bob will show "updatedstuff".
Quick disclaimer - if you have more than one update for Bob in the staging table, things will get messy. You will need to have a pre-processing step to get the latest non-null values of all the updates prior to doing the update/merge. Hive isn't really a complete transactional DB - it would be preferred for the source to send full user records any time there's an update, instead of just the changed fields only.
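A sketch of such a pre-processing step, assuming the staging table and columns from the example above and reusing the last_value(..., true) idiom to carry the latest non-null value forward:
-- Hypothetical: collapse multiple 'U' rows per id into a single row holding the
-- most recent non-null value of each column, then merge that table instead.
create table test.orc_acid_example_staging_latest as
select id, name, col1, updatetimestamp
from (
    select id,
           last_value(name, true) over (partition by id order by updatetimestamp
               rows between unbounded preceding and unbounded following) as name,
           last_value(col1, true) over (partition by id order by updatetimestamp
               rows between unbounded preceding and unbounded following) as col1,
           updatetimestamp,
           row_number() over (partition by id order by updatetimestamp desc) as rn
    from test.orc_acid_example_staging
    where type = 'U'
) s
where rn = 1;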
You can reconstruct each record in the table using last_value() with the ignore-nulls option:
select h.id,
       coalesce(h.name, last_value(h.name, true) over (partition by h.id order by h.timestamp)) as name,
       coalesce(h.col1, last_value(h.col1, true) over (partition by h.id order by h.timestamp)) as col1,
       update_timestamp
from history h;
You can use row_number() and a subquery if you want the most recent record.
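A sketch of that variant, reusing the query and column names assumed above:
-- Hypothetical: keep only the most recent reconstructed row per id.
select id, name, col1, update_timestamp
from (
    select h.id,
           coalesce(h.name, last_value(h.name, true) over (partition by h.id order by h.timestamp)) as name,
           coalesce(h.col1, last_value(h.col1, true) over (partition by h.id order by h.timestamp)) as col1,
           h.update_timestamp,
           row_number() over (partition by h.id order by h.timestamp desc) as rn
    from history h
) t
where rn = 1;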

Create a unique primary key (hash) from database columns

I have this table which doesn't have a primary key.
I'm going to insert some records into a new table to analyze them, and I'm thinking of creating a new primary key with the values from all the available columns.
If this were a programming language like Java I would:
int hash = column1 * 31 + column2 * 31 + column3*31
Or something like that. But this is SQL.
How can I create a primary key from the values of the available columns? It won't work for me to simply mark all the columns as PK, because what I need to do is compare them with data from another DB table.
My table has 3 numbers and a date.
EDIT What my problem is
I think a bit more of background is needed. I'm sorry for not providing it before.
I have a database (dm) that is updated every day from another db (original source). It has records from the past two years.
Last month (July) the update process broke, and for a month no data was updated into dm.
I manually created a table with the same structure in my Oracle XE, and copied the records from the original source into my db (myxe). I copied only records from July, to create a report needed by the end of the month.
Finally, on Aug 8 the update process was fixed, and the records which had been waiting to be migrated by this automatic process were copied into the database (from original source to dm).
This process cleans the data out of the original source once it has been copied (into dm).
Everything looked fine, but we have just realized that a portion of the records was lost (about 25% of July).
So, what I want to do is use my backup (myxe) and insert all those missing records into the database (dm).
The problems here are:
They don't have a well defined PK.
They are in separate databases.
So I thought that if I could create a unique PK from both tables which gave the same number, I could tell which records were missing and insert them.
EDIT 2
So I did the following in my local environment:
select a.* from the_table@PRODUCTION a, the_table b where
a.idle = b.idle and
a.activity = b.activity and
a.finishdate = b.finishdate
This returns all the rows that are present in both databases (the intersection, I suppose). I've got 2,000 records.
What I'm going to do next is delete them all from the target db and then just insert them all from my db into the target table.
I hope I don't get into something worse :-S
The danger of creating a hash value by combining the 3 numbers and the date is that it might not be unique and hence cannot be used safely as a primary key.
Instead I'd recommend using an autoincrementing ID for your primary key.
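For example, in Oracle that could look like this (a sketch only; the sequence and column names are made up for illustration, and since Oracle XE of that era has no identity columns, a sequence stands in):
-- Hypothetical surrogate key backed by a sequence.
ALTER TABLE the_table ADD (id NUMBER);
CREATE SEQUENCE the_table_seq;
UPDATE the_table SET id = the_table_seq.NEXTVAL;
ALTER TABLE the_table ADD CONSTRAINT the_table_pk PRIMARY KEY (id);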
Just create a surrogate key:
ALTER TABLE mytable ADD pk_col INT
UPDATE mytable
SET pk_col = rownum
ALTER TABLE mytable MODIFY pk_col INT NOT NULL
ALTER TABLE mytable ADD CONSTRAINT pk_mytable_pk_col PRIMARY KEY (pk_col)
or this:
ALTER TABLE mytable ADD pk_col RAW(16)
UPDATE mytable
SET pk_col = SYS_GUID()
ALTER TABLE mytable MODIFY pk_col RAW(16) NOT NULL
ALTER TABLE mytable ADD CONSTRAINT pk_mytable_pk_col PRIMARY KEY (pk_col)
The latter uses GUIDs, which are unique across databases, but they consume more space and are much slower to generate (your INSERTs will be slow).
Update:
If you need to create same PRIMARY KEYs on two tables with identical data, use this:
MERGE
INTO mytable v
USING (
SELECT rowid AS rid, rownum AS rn
FROM mytable
ORDER BY
col1, col2, col3
)
ON (v.rowid = rid)
WHEN MATCHED THEN
UPDATE
SET pk_col = rn
Note that tables should be identical up to a single row (i. e. have same number of rows with same data in them).
Update 2:
For your very problem, you don't need a PK at all.
If you just want to select the records missing in dm, use this one (on the dm side):
SELECT *
FROM mytable@myxe
MINUS
SELECT *
FROM mytable
This will return all records that exist in mytable@myxe but not in mytable on the dm side.
Note that it will collapse duplicates, if there are any.
Assuming that you have ensured uniqueness...you can do almost the same thing in SQL. The only problem will be the conversion of the date to a numeric value so that you can hash it.
Select Table2.SomeFields
FROM Table1 LEFT OUTER JOIN Table2 ON
(Table1.col1 * 31) + (Table1.col2 * 31) + (Table1.col3 * 31) +
((DatePart(year,Table1.date) + DatePart(month,Table1.date) + DatePart(day,Table1.date) )* 31) = Table2.hashedPk
The above query would work for SQL Server, the only difference for Oracle would be in terms of how you handle the date conversion. Moreover, there are other functions for converting dates in SQL Server as well, so this is by no means the only solution.
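On the Oracle side, EXTRACT can stand in for DATEPART. A sketch, reusing the column names from the question's edit and carrying the same caveat that this arithmetic is not collision-free:
-- Hypothetical Oracle-flavoured version of the hash expression above.
SELECT t.*,
       (t.idle * 31) + (t.activity * 31) +
       ((EXTRACT(YEAR FROM t.finishdate) +
         EXTRACT(MONTH FROM t.finishdate) +
         EXTRACT(DAY FROM t.finishdate)) * 31) AS hashed_key
FROM   the_table t;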
And, you can combine this with Quassnoi's SET statement to populate the new field as well. Just use the left side of the Join condition logic for the value.
If you're loading your new table with values from the old table, and you then need to join the two tables, you can only "properly" do this if you can uniquely identify each row in the original table. Quassnoi's solution will allow you to do this, IF you can first alter the old table by adding a new column.
If you cannot alter the original table, generating some form of hash code based on the columns of the old table would work -- but, again, only if the hash codes uniquely identify each row. (Oracle has checksum functions, right? If so, use them.)
If hash code uniqueness cannot be guaranteed, you may have to settle for a primary key composed of as many columns as are required to ensure uniqueness (e.g. the natural key). If there is no natural key, well, I heard once that Oracle provides a rownum for each row of data, could you use that?
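As for checksum functions, Oracle does have ORA_HASH, so a comparison key could be sketched like this (the concatenation format is illustrative only, and collisions remain possible, so treat it as a comparison aid rather than a real primary key):
-- Hypothetical: derive a comparable hash per row from the question's columns.
SELECT ORA_HASH(t.idle || '|' || t.activity || '|' ||
                TO_CHAR(t.finishdate, 'YYYYMMDDHH24MISS')) AS row_hash,
       t.*
FROM   the_table t;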

Complicated/Simple SQL Insert: adding multiple rows

I have a table connecting principals to their roles. I have come upon a situation where I need to add a role for each user. I have a statement SELECT id FROM principals which grabs a list of all the principals. What I want to create is something like the following:
INSERT INTO role_principal(principal_id,role_id)
VALUES(SELECT id FROM principals, '1');
so for each principal, it creates a new record with role_id = 1. I have very little SQL experience, so I don't know if I can do this as simply as I would like to, or if there is some sort of loop feature in SQL that I could use.
Also, this is for a MySQL db (if that matters).
Use the VALUES keyword when you want to insert literal values directly. Omit it and use any SELECT (where the column count and types match) to supply the values instead.
INSERT INTO role_principal(principal_id,role_id)
(SELECT id, 1 FROM principals);
To avoid duplicates, it is useful to add a NOT EXISTS subquery:
INSERT INTO role_principal(principal_id,role_id)
(SELECT id, 1 FROM principals p
WHERE NOT EXISTS
(SELECT * FROM role_principal rp WHERE rp.principal_id=p.id AND role_id=1)
)