Constrain a table to have only one row

What's the cleanest way to constrain a SQL table to allow it to have no more than one row?
This related question discusses why such a table might exist, but not how the constraint should be implemented.
So far I have only found hacks involving a unique key column that is constrained to have a specific value, e.g. ALWAYS_0 TINYINT NOT NULL PRIMARY KEY DEFAULT (0) CONSTRAINT CHECK_ALWAYS_0 CHECK (ALWAYS_0 = 0). I am guessing there is probably a cleaner way to do it.
The ideal solution would be portable SQL, but a solution specific to MS SQL Server or PostgreSQL would also be useful.

The cleanest way (I think) would be an ON INSERT trigger that throws an exception (thus preventing the row from being inserted). This also gives the client app a chance to recover gracefully.
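For illustration, a minimal SQL Server-flavoured sketch of that idea; the table and trigger names are hypothetical and the code is untested:
CREATE TRIGGER trg_MyOneRowTable_OneRowOnly
ON MyOneRowTable
AFTER INSERT
AS
BEGIN
    -- If the insert would leave more than one row, raise an error and undo it
    IF (SELECT COUNT(*) FROM MyOneRowTable) > 1
    BEGIN
        RAISERROR ('MyOneRowTable may contain at most one row.', 16, 1);
        ROLLBACK TRANSACTION;
    END
END;
The rollback fails the surrounding transaction, which is what gives the client app the chance to handle it gracefully.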

I just solved the same problem on SQL Server 2008 by creating a table with a computed column and putting the primary key on that column:
CREATE TABLE MyOneRowTable (
[id] AS (1) PERSISTED NOT NULL CONSTRAINT pk_MyOneRowTable PRIMARY KEY,
-- rest of the columns go here
);

Use GRANT/DENY to remove permissions so that no one can insert into the table after adding the one row.
Your DBA will still be able to insert, but the DBA should only be running schema changes that are reviewed, so this should not be a problem in practice.
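For example (a hedged sketch; the role name is made up, and sysadmin/dbo principals are not affected):
DENY INSERT ON dbo.MyOneRowTable TO app_users;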


SQL Server getting values for CHECK

Is there a way to get the values for a CHECK Constraint
Example
CONSTRAINT TheCollumn CHECK (TheCollumn IN('One','Two','Three') )
I want to get the 'One', 'Two', 'Three' from a query, which I can then use to populate a dropdown without having to retype the values in the dropdown list.
I think you want a foreign key constraint and a reference table:
create table refTheColumn (
name varchar(255) primary key
);
. . .
constraint fk_thecolumn foreign key (theColumn) references refTheColumn(name);
Then you can populate the list with the reference table.
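A quick hedged sketch of how that plays out, using the same names as above:
-- Seed the allowed values once
insert into refTheColumn (name) values ('One');
insert into refTheColumn (name) values ('Two');
insert into refTheColumn (name) values ('Three');
-- The dropdown query then becomes trivial
select name from refTheColumn;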
It's a bad idea, but here's the general approach:
USE tempdb
CREATE TABLE #tmp (v varchar(50));
ALTER TABLE #tmp ADD CONSTRAINT TheCollumn CHECK(v IN ('One', 'Two', 'Three'));
GO
SELECT definition FROM sys.check_constraints WHERE Name = 'TheCollumn'
Would output:
([v]='Three' OR [v]='Two' OR [v]='One')
You'd have to parse that in code (parsing it in SQL would be very challenging and unwise). A foreign key, as Gordon Linoff suggests, is definitely cleaner and easier to work with.
More reasons why this is a bad idea:
If the check constraint were defined differently, SQL Server may store it differently (hence Damien_The_Unbeliever's point about needing a SQL parser). For example, it might use AND clauses, or it might reference a function (consider [v] = right(SomeOtherColumn, 5), which you would then have to interpret).
The sys views (sys.check_constraints) could change in future versions and aren't considered a reliable way to access this data, so your code might not survive a SQL Server upgrade (whereas a reference table would). Even worse, an upgrade that changes the behaviour being relied on might not throw an exception; it might instead introduce a bug that is difficult to track down or reproduce across environments (e.g. prod is upgraded but dev is not).

Create autoserial column in informix

Is it possible to create an autoserial index in order 1, 2, 3, 4... in Informix, and what would be the syntax? I have a query and some of my timestamps are identical, so I was unable to query using a timestamp variable. Thanks!
These are the commands that I ran to add an id field to an existing table, while logged in to the dbaccess console.
alter table my_table add id integer before some_field;
create sequence myseq;
update my_table set id = myseq.nextval;
drop sequence myseq;
alter table my_table modify (id serial not null);
Thanks to @ricardo-henriques for pointing me in the right direction. These commands will allow you to apply the approach explained in his answer to your database.
That would be the SERIAL data type.
You can use, as @RET mentioned, the SERIAL data type.
Next you will struggle with the fact that you can't add a SERIAL column to an existing table. Ways to work around it:
Add an INTEGER column, populate with sequential numbers and then alter the column to SERIAL.
Unload the data to a file, drop the table and recreate it with the new column.
Create a new table with the new column, populate the new table with the data from the old, drop the old and rename the new (see the sketch after this list).
...
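For illustration, a rough Informix sketch of the third option; all names are hypothetical and the code is untested:
create table my_table_new (
    id serial not null,
    some_field integer
);
-- Inserting 0 into a SERIAL column asks Informix to assign the next value
insert into my_table_new (id, some_field)
    select 0, some_field from my_table;
drop table my_table;
rename table my_table_new to my_table;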
Bear in mind that the values in a SERIAL column are not guaranteed to be unique on their own, so you have to create a unique index, a primary key, or a unique constraint on the column to prevent duplicates.
Other notes to be aware of:
- A primary key doesn't allow NULLs; a unique index or unique constraint does (as long as there is only one such record), so you should specify NOT NULL in the column definition.
- If you use a primary key or a unique constraint, you can create a foreign key that references it.
- With a primary key or unique constraint, uniqueness is validated at the end of the DML statement; with a unique index it is validated row by row.
It seems this is your first contact with Informix; welcome. Yes, it can be a little hard at the beginning. Just remember:
Always search before asking, really search.
When in doubt, or when you've reached a dead end, ask away.
Try to trim down your scenario and build the simplest test case you can; this will not only help us to help you, but you will also get practice and in some cases find the solution by yourself.
When an error is involved, always give the error code; Informix reports at least one error code and sometimes an ISAM error too.
Kind regards.

Avoiding a two step insert in SQL

Let's say I have a table defined as follows:
CREATE TABLE SomeTable
(
P_Id int PRIMARY KEY IDENTITY,
CompoundKey varchar(255) NOT NULL,
)
CompoundKey is a string with the primary key P_Id concatenated to the end, like Foo00000001, which comes from "Foo" + 00000001. At the moment, insertions into this table happen in two steps.
Insert a dummy record with a placeholder string for CompoundKey.
Update the CompoundKey column with the generated compound key.
I'm looking for a way to avoid the 2nd update entirely and do it all with one insert statement. Is this possible? I'm using MS SQL Server 2005.
p.s. I agree that this is not the most sensible schema in the world, and this schema will be refactored (and properly normalized) but I'm unable to make changes to the schema for now.
You could use a computed column; change the schema to read:
CREATE TABLE SomeTable
(
P_Id int PRIMARY KEY IDENTITY,
CompoundKeyPrefix varchar(255) NOT NULL,
CompoundKey AS CompoundKeyPrefix + CAST(P_Id AS VARCHAR(10))
)
This way, SQL Server will automagically give you your compound key in a new column, and will automatically maintain it for you. You may also want to look into the PERSISTED keyword for computed columns, which causes SQL Server to materialise the value in the data files rather than computing it on the fly. You can also add an index on the column should you so wish.
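For instance, a hedged variation of the schema above with the computed column persisted and indexed (untested):
CREATE TABLE SomeTable
(
    P_Id int PRIMARY KEY IDENTITY,
    CompoundKeyPrefix varchar(255) NOT NULL,
    -- PERSISTED stores the computed value in the data pages instead of recomputing it on each read
    CompoundKey AS CompoundKeyPrefix + CAST(P_Id AS VARCHAR(10)) PERSISTED
);
CREATE INDEX IX_SomeTable_CompoundKey ON SomeTable (CompoundKey);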
A trigger would easily accomplish this.
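For what it's worth, a hedged sketch of such a trigger; it assumes the INSERT supplies just the prefix (e.g. 'Foo') in CompoundKey, and the names follow the question (untested):
CREATE TRIGGER trg_SomeTable_CompoundKey
ON SomeTable
AFTER INSERT
AS
BEGIN
    -- Append the zero-padded identity value to whatever prefix was inserted
    UPDATE s
    SET CompoundKey = i.CompoundKey + RIGHT('00000000' + CAST(i.P_Id AS VARCHAR(10)), 8)
    FROM SomeTable s
    JOIN inserted i ON i.P_Id = s.P_Id;
END;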
This is simply not possible.
The "next ID" doesn't exist, and thus cannot be read to build the CompoundKey, until the row is inserted.
Now, if you were sourcing your autonumbers from somewhere else you could, but I don't think that's a good answer to your question.
Even if you use triggers, an UPDATE is still executed; you just don't execute it manually.
You can obscure the population of the CompoundKey, but at the end of the day it's still going to be an UPDATE.
I think your safest bet is just to make sure the UPDATE is in the same transaction as the INSERT, or to use a trigger. But, for the academic argument of it, an UPDATE still occurs.
Two things:
1) If you end up using two steps, you must use a transaction! Otherwise other processes may see the database in an inconsistent state (i.e. see the record without its CompoundKey).
2) I would refrain from trying to paste the Id onto the end of CompoundKey in a transaction, trigger, etc. It is much cleaner to do it at the output when you need it, e.g. in queries (select concat(CompoundKey, Id) as CompoundKeyId ...). If you need it as a foreign key in other tables, just use the pair (CompoundKey, Id).

Limit column value to 0 or to be unique on insert

I have a table where an int column should either be set to zero or to a value which does not already exist in the table. Can I prevent inserting non-zero duplicate values in such a column with a CHECK CONSTRAINT, or should I use a BEFORE INSERT trigger? In case I could do this with both, which design is better?
From the .NET Windows Forms application we are using a global transaction scope to wrap the save, and in both cases I would like the insert to fail and the transaction to roll back completely, so I don't know if I should put the rollback inside the trigger; that's why I would rather try with a check if possible.
Database: SQL 2008
Thanks.
See the link in Andriy M's comment; it mentions a concept new in SQL Server 2008: the filtered index...
CREATE UNIQUE INDEX indexName ON tableName(columnName) INCLUDE (otherColumns) WHERE columnName <> 0
This will create an index of unique items that are not 0.
Any attempt to insert a duplicate non-zero value will violate the uniqueness of the index and cause an error.
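A hedged, concrete example (the table and column names are made up):
CREATE TABLE Items
(
    Item_Id int IDENTITY PRIMARY KEY,
    GroupCode int NOT NULL DEFAULT 0
);
-- Uniqueness is enforced only for rows where GroupCode is non-zero
CREATE UNIQUE INDEX UX_Items_GroupCode ON Items (GroupCode) WHERE GroupCode <> 0;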
Why are you using zero instead of NULL? If you had it as NULL, then the DB would handle it for you easily via a nullable unique constraint.
Check constraints, when used properly, prevent bad data; they do not change the bad data to good. For that reason, I would aim for a trigger instead. If you could get around the need for a 0 by using NULL, you could use a unique constraint, but supplying the answer would still be the job of a trigger regardless.
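If you do go the trigger route, here is a rough SQL Server sketch (same hypothetical names as the example above; the rollback fails the surrounding transaction, as the question requires):
CREATE TRIGGER trg_Items_UniqueNonZero
ON Items
AFTER INSERT, UPDATE
AS
BEGIN
    -- Reject the statement if any non-zero value now appears more than once
    IF EXISTS (
        SELECT GroupCode FROM Items
        WHERE GroupCode <> 0
        GROUP BY GroupCode
        HAVING COUNT(*) > 1
    )
    BEGIN
        RAISERROR ('Non-zero GroupCode values must be unique.', 16, 1);
        ROLLBACK TRANSACTION;
    END
END;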

Any way to enforce numeric primary key size limit in sql?

I'd like to create a table which has an integer primary key limited between 000 and 999. Is there any way to enforce this 3-digit limit within the SQL?
I'm using sqlite3.
Thanks.
SQLite supports two ways of doing this:
Define a CHECK constraint on the primary key column:
CREATE TABLE mytable (
mytable_id INTEGER PRIMARY KEY CHECK (mytable_id BETWEEN 0 AND 999)
);
Create a trigger on the table that aborts any INSERT or UPDATE that attempts to set the primary key column to a value you don't want.
CREATE TRIGGER mytable_pk_enforcement
BEFORE INSERT ON mytable
FOR EACH ROW
WHEN NEW.mytable_id NOT BETWEEN 0 AND 999
BEGIN
SELECT RAISE(ABORT, 'primary key out of range');
END;
If you use an auto-assigned primary key, as shown above, you may need to run the trigger AFTER INSERT instead of BEFORE INSERT, because the primary key value may not have been generated yet at the time the BEFORE trigger executes.
You may also need to write a trigger on UPDATE to prevent people from changing the value outside the range. Basically, the CHECK constraint is preferable if you use SQLite 3.3 or later.
note: I have not tested the code above.
You may be able to do so using a CHECK constraint.
But,
CHECK constraints are supported as of version 3.3.0. Prior to version 3.3.0, CHECK constraints were parsed but not enforced.
(from here)
So unless your SQLite 3 is actually version 3.3.0 or later, this probably won't work.
jmisso, I would not recommend reusing primary keys that have been deleted. You can create data integrity problems that way if the rows in other tables that reference that key were not deleted first (one reason to always enforce foreign key relationships in a database is to prevent orphaned data like this). Do not do this unless you are positive that you have no orphaned data that might get attached to the new record.
Why would you even want to limit the primary key to 1,000 possible values? What happens when you need 1,500 records in the table? This doesn't strike me as a very good thing to even be trying to do.
What about pre-populating the table with the 1,000 rows at the start? Toggle the available rows with some kind of 1/0 column like Is_Available or similar. Then don't allow inserts or deletes, only updates. Under this scenario your app only has to be coded for updates.
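A rough SQLite sketch of that idea (all names are hypothetical; the recursive CTE needs a reasonably recent SQLite, otherwise populate the rows from application code):
CREATE TABLE slots (
    slot_id INTEGER PRIMARY KEY CHECK (slot_id BETWEEN 0 AND 999),
    is_available INTEGER NOT NULL DEFAULT 1,
    payload TEXT
);
-- Fill all 1,000 rows once
WITH RECURSIVE seq(n) AS (
    SELECT 0 UNION ALL SELECT n + 1 FROM seq WHERE n < 999
)
INSERT INTO slots (slot_id) SELECT n FROM seq;
-- "Claim" a slot by updating instead of inserting
UPDATE slots
SET is_available = 0, payload = 'some value'
WHERE slot_id = (SELECT MIN(slot_id) FROM slots WHERE is_available = 1);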