How to make a field NOT NULL in a multi-tenant database - sql

This is a multi-tenant app. All records have a client ID to separate client data. Customers can insert their own data into this table and choose for themselves whether a field is nullable or NOT NULL. Therefore, setting the whole column NOT NULL will not work; I need to make a field NOT NULL for a specific client ID.
I am currently querying the database to check whether the value is null: on INSERT, I check if the incoming value is null and, if so, throw an error. I would like the database to do all these checks. Is this possible in a multi-tenant database like this?
Also, I need suggestions for SQL Server, Oracle, and PostgreSQL. Thanks.

With PostgreSQL, at least, you could do this with table inheritance.
You could define an inherited table for this specific client which included the required constraint.
Consider the following example:
psql=> CREATE TABLE a(client INT NOT NULL, id SERIAL, foo TEXT);
CREATE TABLE
psql=> CREATE TABLE b(foo TEXT NOT NULL, CHECK (CLIENT=1) ) INHERITS (a);
NOTICE: moving and merging column "foo" with inherited definition
DETAIL: User-specified column moved to the position of the inherited column.
CREATE TABLE
psql=> INSERT INTO b(client,foo) VALUES (1,'a');
INSERT 0 1
psql=> INSERT INTO b(client,foo) VALUES (1,NULL);
ERROR: null value in column "foo" violates not-null constraint
DETAIL: Failing row contains (1, 2, null).
The table 'b' in this case inherits from 'a' but has a different definition for column 'foo' including a not-null constraint. Also note that I have used a check constraint to ensure that only records for client 1 can go into this table.
For this to work, either your application would have to be updated to insert client records into the correct table, or you would need to write a trigger that does that automatically. Examples of how to do that are given in the manual section on partitioning.
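As a rough sketch, a routing trigger for the example tables above might look like this (the function and trigger names are my own, following the pattern from the partitioning docs):
CREATE OR REPLACE FUNCTION a_insert_router() RETURNS trigger AS $$
BEGIN
    IF NEW.client = 1 THEN
        INSERT INTO b VALUES (NEW.*);  -- redirect client 1 rows to the child table
        RETURN NULL;                   -- suppress the insert into the parent
    END IF;
    RETURN NEW;                        -- all other clients stay in the parent
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER a_insert_router_trg
    BEFORE INSERT ON a
    FOR EACH ROW EXECUTE PROCEDURE a_insert_router();
With this in place, INSERT INTO a(client, foo) VALUES (1, NULL) fails with the not-null violation from 'b', while other clients are unaffected.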
Your application can still make queries against the parent table ('a' from my example) and get the records for all clients, including any in child tables.

You won't be able to do this with a column constraint. I think you're going to have to write a trigger.
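For example, in SQL Server a rough sketch might look like the following (the table and column names are placeholders, not from the question):
CREATE TRIGGER trg_require_foo ON client_data
AFTER INSERT, UPDATE
AS
BEGIN
    -- reject the statement if any affected row for client 1 has a NULL foo
    IF EXISTS (SELECT 1 FROM inserted WHERE client_id = 1 AND foo IS NULL)
    BEGIN
        RAISERROR('foo may not be NULL for this client', 16, 1);
        ROLLBACK TRANSACTION;
    END
END;
Similar row-level triggers can be written in Oracle and PostgreSQL, and if the per-client rules change at runtime, the trigger could look them up in a configuration table instead of hard-coding client 1.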

What is the difference between SERIAL and INT GENERATED ALWAYS AS IDENTITY in PostgreSQL? [duplicate]

To have an integer auto-numbering primary key on a table, you can use SERIAL.
But I noticed the table information_schema.columns has a number of identity_ fields, and indeed, you could create a column with a GENERATED specifier...
What's the difference? Were they introduced with different PostgreSQL versions? Is one preferred over the other?
serial is the "old" implementation of auto-generated unique values that has been part of Postgres for ages; however, it is not part of the SQL standard.
To be more compliant with the SQL standard, Postgres 10 introduced the syntax generated as identity.
The underlying implementation is still based on a sequence, but the definition now complies with the SQL standard. One thing the new syntax allows is preventing an accidental override of the value.
Consider the following tables:
create table t1 (id serial primary key);
create table t2 (id integer primary key generated always as identity);
Now when you run:
insert into t1 (id) values (1);
The underlying sequence and the values in the table are not in sync any more. If you run another
insert into t1 default values;
You will get an error because the sequence was not advanced by the first insert, and now tries to insert the value 1 again.
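One way to bring the sequence back in sync afterwards, as a sketch (pg_get_serial_sequence looks up the sequence behind t1.id):
select setval(pg_get_serial_sequence('t1', 'id'), coalesce(max(id), 1))
from t1;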
With the second table however,
insert into t2 (id) values (1);
Results in:
ERROR: cannot insert into column "id"
DETAIL: Column "id" is an identity column defined as GENERATED ALWAYS.
So you can't accidentally "forget" the sequence usage. You can still force this, using the OVERRIDING SYSTEM VALUE option:
insert into t2 (id) overriding system value values (1);
which still leaves you with a sequence that is out-of-sync with the values in the table, but at least you were made aware of that.
Identity columns have another advantage: they minimize the grants you need to give a role in order to allow inserts.
While a table using a serial column requires the INSERT privilege on the table and the USAGE privilege on the underlying sequence, this is not needed for tables using identity columns. Granting the INSERT privilege is enough.
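As a sketch of the difference (the role name is assumed; t1_id_seq is the name Postgres generates for the serial column above):
-- serial: the inserting role needs the sequence as well
GRANT INSERT ON t1 TO app_user;
GRANT USAGE ON SEQUENCE t1_id_seq TO app_user;
-- identity: the table privilege alone is enough
GRANT INSERT ON t2 TO app_user;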
It is recommended to use the new identity syntax rather than serial.

Include one table's values in multiple other tables and allow FK references

I'm still a relative novice when it comes to designing SQL databases, so apologies if this is something obvious that I'm missing.
I have a few tables of controlled vocabularies for certain values that I'm representing as FKs referencing the controlled vocab tables (there are a few distinct vocabularies I'm trying to represent). My schema specification allows each of these vocabularies to also allow a controlled set of values for "unknown" information (coming from DataCite). Here is an example using a table dates that must specify a date_type, which should be either a value from date_types or unknown_values. I have a few more tables with this model as well, each with their own specific controlled vocabularies, but which should also allow values from unknown_values. So the values in unknown_values should be shared among many tables of controlled vocabularies with a structure similar to date_types.
CREATE TABLE dates (
date_id integer NOT NULL PRIMARY KEY autoincrement ,
date_value date NOT NULL DEFAULT CURRENT_DATE ,
date_type text NOT NULL ,
FOREIGN KEY ( date_type ) REFERENCES date_types( date_type )
);
CREATE TABLE date_types (
date_type text NOT NULL PRIMARY KEY ,
definition text
);
CREATE TABLE unknown_values (
code text NOT NULL PRIMARY KEY ,
definition text
);
INSERT INTO date_types (date_type, definition)
VALUES
('type_a', 'The first date type'),
('type_b', 'The second date type');
INSERT INTO unknown_values (code, definition)
VALUES
(':unac', 'Temporarily inaccessible'),
(':unal', 'Unallowed, suppressed intentionally'),
(':unap', 'Not applicable, makes no sense'),
(':unas', 'Value unassigned (e.g., Untitled)'),
(':unav', 'Value unavailable, possibly unknown'),
(':unkn', 'Known to be unknown (e.g., Anonymous, Inconnue)'),
(':none', 'Never had a value, never will'),
(':null', 'Explicitly and meaningfully empty'),
(':tba', 'To be assigned or announced later'),
(':etal', 'Too numerous to list (et alia)');
My first thought was a view that creates a union of date_types and unknown_values, but you cannot make FK references onto a view, so that's not suitable.
The "easiest" solution would be to duplicate the values from unknown_values in each controlled vocabulary table (date_types etc.), but this feels incorrect to have duplicated values.
I also thought about a single table for all the controlled vocabularies with a third field (something like vocabulary_category with values like 'date'), so all my tables could reference that one table, but then I would likely need a function and a CHECK constraint to ensure that the value has the right "category". This feels inelegant and messy.
I'm stumped about the best way to proceed, or what to search for to find help. I can't imagine this is too rare of a requirement, but I can't seem to find any solutions online. My target DB is SQLite, but I'd be interested in solutions that would be possible in PostgreSQL as well.
What you are requesting is the ability for an FK to have an optional referenced table. As you discovered, neither Postgres nor SQLite provides this option (afaik no other RDBMS does either). Postgres at least offers a workaround; I do not know if it is doable in SQLite. You need to:
drop the NOT NULL constraint on the currently defined FK column
add an FK column referencing the unknown_values table
add a CHECK constraint requiring that exactly one of date_type and the new FK column is null; see the num_nulls function.
Changes you need (see demo):
alter table dates
alter column date_type
drop not null;
alter table dates
add unknown_value text
references unknown_values(code);
alter table dates
add constraint one_null
check (num_nulls(date_type, unknown_value ) = 1);
Note: Postgres does not support the autoincrement keyword. The same is accomplished using an identity column, generated always as identity (for older versions use serial).
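For what it's worth, the same idea seems expressible in SQLite, where a boolean expression evaluates to 0 or 1. Since SQLite cannot add constraints to an existing table, this sketch recreates the table instead:
CREATE TABLE dates (
    date_id INTEGER PRIMARY KEY,
    date_value TEXT NOT NULL DEFAULT CURRENT_DATE,
    date_type TEXT REFERENCES date_types(date_type),
    unknown_value TEXT REFERENCES unknown_values(code),
    -- exactly one of the two vocabulary columns must be set
    CHECK ((date_type IS NULL) + (unknown_value IS NULL) = 1)
);
INSERT INTO dates (date_type) VALUES ('type_a');      -- ok
INSERT INTO dates (unknown_value) VALUES (':unav');   -- ok
INSERT INTO dates (date_type, unknown_value)
VALUES ('type_a', ':unav');                           -- fails the CHECK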

Combine results from multiple views and add identifier

I have a view that selects and joins data from several tables.
I have this same view in multiple databases on different servers. (it's part of the same application installed on various servers)
What I'm trying to do is use the 'Export Data' wizard to create an SSIS package that copies the data from these views to a single data warehouse database.
However, since I can't guarantee that there won't be identical rows in the views, I want to add an ID column in the data warehouse db. But I can't seem to get it to work.
Usually when you want to add an autoincrement ID, one simply inserts 'NULL' into that column. So I've added a 'NULL' value to the select of the view. And I've added an ID column to the destination table, with Identity and auto-increment on.
However, when I run the Export Data wizard, it gives an error
'The value violated the integrity constraints for the column.'
Does anyone have an idea how to combine data from different views on different db servers and add a unique identifier in the destination table?
Cheers, CJ
You can't insert NULL into the primary key. Just don't insert anything into the ID column if you have it set as auto-incrementing. Example:
CREATE TABLE test_table
(
ID int IDENTITY(1,1) PRIMARY KEY,
test varchar(255) NOT NULL
);
INSERT INTO test_table (test)
VALUES ('some value');
And ID will be set to 1 for this record.
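Applied to the warehouse scenario, the idea is the same: leave the identity column out of the insert entirely. A sketch with assumed view and column names:
INSERT INTO warehouse_table (col1, col2)
SELECT col1, col2 FROM server1_view
UNION ALL
SELECT col1, col2 FROM server2_view;
-- the identity ID column numbers the combined rows by itself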

How to add unique column in sybase?

I have a sybase db table in which i need to add a new column.
The conditions: The column must not allow nulls and be unique.
What is the ALTER TABLE SQL to achieve this?
EDIT:
It is a varchar type column. Yes, the table is empty as of now, but when filled it is ensured that unique values will be filled in.
I tried executing
alter table Test add Name varchar not null unique
I get an error saying a default value must be specified because NOT NULL is given.
But I want to add a unique constraint, so do I really need to specify a default?
Thanks
Unique values are specified as part of an index on the column, not in the column definition itself.
Try:
alter table Test add Name varchar(100) default '' not null
create unique index index_name_unique on Test (Name)
The ASE reference manual can help with more detail.
Once a table has been created, Sybase ASE does not allow adding a NOT NULL column directly unless a default is specified for the column. However, if the table is still empty you can do the following -
First add the new column as a NULL column to the table using alter table command -
alter table Test add Name varchar(100) null
Once this has been done, try modifying the same column Name in the table Test using the alter table script -
alter table Test modify Name varchar(100) NOT NULL
and you will see that you are able to modify the Name column to a NOT NULL column using these steps. This works because at this point the Sybase server checks the data: since there are no rows in the table, nothing can violate the NOT NULL constraint, and the column is made NOT NULL. Hence we are able to skip the default constraint.
In case some data is already present in the table Test, you need one more step between steps 1 and 2 to fill the existing rows with values for the new column, via an update script for the previous data, and then follow step 2.
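That in-between step could look roughly like this, assuming the table has an integer key column id from which distinct values can be built:
update Test
set Name = 'name_' + convert(varchar(20), id)
where Name is null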
To make the column allow only unique values, you need to add a unique constraint using the following syntax -
alter table Test add constraint UK1 unique(Name)

sqlite and 'constraint failed' error while select and insert at the same time

I'm working on a migration function. It reads data from the old table and inserts it into the new one. All of this runs in a background thread with low priority.
My steps in pseudo code.
sqlite3_prepare_stmt (select statement)
sqlite3_prepare_stmt (insert statement)
while (sqlite3_step (select statement) == SQLITE_ROW)
{
get data from select row results
sqlite3_bind select results to insert statement
sqlite3_step (insert statement)
sqlite3_reset (insert statement)
}
sqlite3_reset (select statement)
I always get a 'constraint failed' error on sqlite3_step (insert statement). Why does it happen and how can I fix it?
UPD: As I understand it, this happens because the background thread uses a DB handle that was opened in the main thread. Checking that guess now.
UPD2:
sqlite> select sql from sqlite_master where tbl_name = 'tiles';
CREATE TABLE tiles('pk' INTEGER PRIMARY KEY, 'data' BLOB, 'x' INTEGER, 'y' INTEGER, 'z' INTEGER, 'importKey' INTEGER)
sqlite> select sql from sqlite_master where tbl_name = 'tiles_v2';
CREATE TABLE tiles_v2 (pk int primary key, x int, y int, z int, layer int, data blob, timestamp real)
It probably means your insert statement is violating a constraint in the new table. Could be a primary key constraint, a unique constraint, a foreign key constraint (if you're using PRAGMA foreign_keys = ON;), and so on.
You fix that either by dropping the constraint, correcting the data, or dropping the data. Dropping the constraint is usually a Bad Thing, but that depends on the application.
Is there a compelling reason to copy data one row at a time instead of as a set?
INSERT INTO new_table
SELECT column_list FROM old_table;
If you need help identifying the constraint, edit your original question, and post the output of these two SQLite queries.
select sql from sqlite_master where tbl_name = 'old_table_name';
select sql from sqlite_master where tbl_name = 'new_table_name';
Update: Based on the output of those two queries, I see only one constraint--the primary key constraint in each table. If you haven't built any triggers on these tables, the only constraint that can fail is the primary key constraint. And the only way that constraint can fail is if you try to insert two rows that have the same value for 'pk'.
I suppose that could happen in a few different ways.
The old table has duplicate values in the 'pk' column.
The code that does your migration alters or injects a duplicate value before inserting data into your new table.
Another process, possibly running on a different computer, is inserting or updating data without your knowledge.
Other reasons I haven't thought of yet. :-)
You can determine whether there are duplicate values of 'pk' in the old table by running this query.
select pk
from old_table_name
group by pk
having count(*) > 1;
You might consider trying to manually migrate the data using INSERT INTO ... SELECT ... If that fails, add a WHERE clause to reduce the size of the set until you isolate the bad data.
I have found troubleshooting foreign key constraint errors with sqlite to be difficult, particularly on large data sets. However, the following approach helps identify the offending relation.
disable foreign key checking: PRAGMA foreign_keys = 0;
execute the statement that caused the error - in my case it was an INSERT of 70,000 rows with 3 different foreign key relations.
re-enable foreign key checking: PRAGMA foreign_keys = 1;
identify the foreign key errors: PRAGMA foreign_key_check(table-name);
In my case, it showed the 13 rows with invalid references.
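Put together as a single SQLite session (using tiles_v2 from the question as a stand-in for the table name):
PRAGMA foreign_keys = 0;
-- run the failing statement here, e.g. the big INSERT
PRAGMA foreign_keys = 1;
PRAGMA foreign_key_check(tiles_v2);
-- each result row names the table, rowid, referenced table, and FK index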
Just in case anyone lands here looking for the "Constraint failed" error message: make sure your Id column's type is INTEGER, not INTEGER (0, 15) or something similar.
Background
If your table has a column named Id, with type INTEGER and set as primary key, SQLite treats it as an alias for the built-in column RowId. This column works like an auto-increment column. In my case, this column was working fine until some table designer (probably the one created by the SQLite guys for Visual Studio) changed the column type from INTEGER to INTEGER (0, 15), and all of a sudden my application started throwing a Constraint failed exception.
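A quick illustration of the aliasing behaviour, and of why the exact type name matters (the table names are just examples):
CREATE TABLE items (Id INTEGER PRIMARY KEY, name TEXT);
INSERT INTO items (name) VALUES ('first');   -- Id aliases rowid and is assigned 1
CREATE TABLE items2 (Id INT PRIMARY KEY NOT NULL, name TEXT);
INSERT INTO items2 (name) VALUES ('first');  -- fails: INT (not INTEGER) gets no auto value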
Just to make it clearer: make sure you use the correct data type. I was using int instead of integer when creating my tables, like this:
id int primary key not null
and I was stuck on the constraint problem for hours. Just make sure to enter the data type correctly when creating the database.
One reason this error occurs is inserting duplicate data into a unique column. Check all your unique constraints and keys for duplicated data.
Today I had a similar error: CHECK constraint failed: profile
First I read up on what a constraint is.
Constraints are integrity rules that define conditions restricting what data a column may contain during inserts, updates, or deletes. There are two kinds: column-level and table-level constraints. Column-level constraints can be applied only to a specific column, whereas table-level constraints can be applied to the whole table.
Then I figured out that I needed to check how the database was created; because I used sqlalchemy with the sqlite3 dialect, I had to check the table schema. The schema shows the actual database structure.
sqlite3 >>> .schema
CREATE TABLE profile (
id INTEGER NOT NULL,
...
forum_copy_exist BOOLEAN DEFAULT 'false',
forum_deleted BOOLEAN DEFAULT 'true',
PRIMARY KEY (id),
UNIQUE (id),
UNIQUE (login_name),
UNIQUE (forum_name),
CHECK (email_verified IN (0, 1)),
UNIQUE (uid),
CHECK (forum_copy_exist IN (0, 1)),
CHECK (forum_deleted IN (0, 1))
So here I found that the boolean CHECK allows only 0 or 1, while I had used the string 'false' as the column default. Each time no value was supplied for forum_copy_exist or forum_deleted, 'false' was inserted, and because 'false' is not a valid value, the CHECK failed and the row was not inserted.
So changing database defaults to:
forum_copy_exist BOOLEAN DEFAULT '0',
forum_deleted BOOLEAN DEFAULT '0',
solved the issue.
In PostgreSQL I think false is a valid value, so it depends on how the database schema is created.
Hope this will help others in the future.