Limit column value to 0 or to be unique on insert - SQL

I have a table where an int column should either be set to zero or to a value which does not already exist in the table. Can I prevent inserting non-zero duplicate values in such a column with a CHECK constraint, or should I use a BEFORE INSERT trigger? In case both would work, which design is better?
From the .NET Windows Forms application we use a global transaction scope to wrap the save. In both cases I would like the insert to fail and the transaction to roll back completely, so I don't know whether I should put the rollback inside the trigger; that's why I would rather try a check if possible.
Database: SQL Server 2008
Thanks.

See the link in Andriy M's comment; it mentions a feature new to SQL Server 2008: filtered indexes.
CREATE UNIQUE INDEX indexName ON tableName (columns) INCLUDE (includeColumns) WHERE columnName <> 0
This will create an index of unique items that are not 0.
Any attempt to insert a duplicate non-zero value will breach the uniqueness of the index and cause an error.
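For a concrete sketch, assuming a hypothetical table Widgets whose GroupCode column must be zero or unique:

CREATE TABLE Widgets
(
WidgetId int IDENTITY PRIMARY KEY,
GroupCode int NOT NULL
);

CREATE UNIQUE INDEX UX_Widgets_GroupCode ON Widgets (GroupCode) WHERE GroupCode <> 0;

INSERT INTO Widgets (GroupCode) VALUES (0);
INSERT INTO Widgets (GroupCode) VALUES (0); -- succeeds: zero rows fall outside the filtered index
INSERT INTO Widgets (GroupCode) VALUES (7);
INSERT INTO Widgets (GroupCode) VALUES (7); -- fails with a duplicate key error

Because the failure surfaces as an ordinary constraint error on the failing command, the transaction scope in the application rolls back as usual; no rollback logic is needed in the database.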

Why are you using zero instead of NULL? If the column were nullable, the database could handle this for you via a unique constraint - though note that SQL Server allows only one NULL per unique constraint, so to permit many "empty" rows you would still need the filtered-index approach above.

Check constraints, when used properly, prevent bad data. They do not change bad data into good data. For that reason, I would aim for a trigger instead. If you can get around the need for a 0 by storing NULL instead, you could use a unique constraint, but supplying the value would be the job of a trigger regardless.
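A minimal sketch of the trigger approach, assuming hypothetical column names Id and Value on a table MyTable; the RAISERROR plus ROLLBACK aborts the surrounding transaction, so no extra rollback handling is needed in the application:

CREATE TRIGGER trg_ZeroOrUnique
ON MyTable
AFTER INSERT
AS
BEGIN
    -- Reject the insert if any inserted non-zero value already exists on another row
    IF EXISTS (SELECT 1
               FROM inserted i
               JOIN MyTable t ON t.Value = i.Value AND t.Id <> i.Id
               WHERE i.Value <> 0)
    BEGIN
        RAISERROR('Duplicate non-zero value.', 16, 1);
        ROLLBACK TRANSACTION;
    END
END;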

Related

Is it possible to create a Set as a table?

On the coding side of things, a Set data structure has three distinctive characteristics:
1. Every item in the Set is unique
2. The elements have no ordering
3. Adding an element that already exists in the Set is essentially a no-op
(2) is easy enough in a SQL table, and (1) can be achieved by putting a unique constraint on the column(s) in question, but I wonder about (3). If you try to insert a value which is already there into a table constrained by a unique index, it will error out. Is there any way to design a table in SQL Server to ignore that error and just silently do nothing? Or does it have to be handled client-side, catching that error and ignoring it?
You understand how to handle (1) and (2).
For (3), you just need to implement an instead of trigger. If the value is already in the table, then the trigger would do nothing (not attempt an insert).
You can read about instead of triggers in the documentation.
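A sketch of such a trigger, using a hypothetical single-column set table MySet(Element):

CREATE TABLE MySet
(
Element int NOT NULL CONSTRAINT UQ_MySet_Element UNIQUE
);
GO

CREATE TRIGGER trg_MySet_Insert
ON MySet
INSTEAD OF INSERT
AS
BEGIN
    -- Insert only the elements not already present; duplicates silently become a no-op
    INSERT INTO MySet (Element)
    SELECT DISTINCT i.Element
    FROM inserted i
    WHERE NOT EXISTS (SELECT 1 FROM MySet m WHERE m.Element = i.Element);
END;

The DISTINCT also collapses duplicates within a single multi-row INSERT, which a plain NOT EXISTS check against the table alone would miss.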

SQL Server: Attempting to Insert a Duplicate Record Costs an Id

I have the following table set up:
Id int pk, unique not null
Name varchar(50) not null
Other columns not relevant to this issue
With an index set up on Name to be unique and non-clustered.
The setup does EXACTLY what I want - that is, it only inserts new rows whose Name doesn't already exist in the table, and throws an error if the new row is a duplicate.
I might be nit-picky about it, but every attempt to add a duplicate will cause SQL Server to skip the next Id that would have been assigned had the new row not been a duplicate.
Is there a way to prevent this with some setting, without the need to query for existence first before deciding to insert or deny?
No, there is no setting to prevent the identity value from incrementing on a failed insert.
Like you suggest, you can mitigate this by checking for a duplicate before performing the insert - I would do this not just to keep the identity from incrementing, but also to keep SQL Server from raising errors as a matter of course.
However, there may be other exceptional circumstances that would cause an insert to fail... so if gaps in the Ids pose more than an aesthetic problem, an identity column might not be the best solution for what you're trying to solve.
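A common shape for that pre-check (table, column, and variable names here are just illustrative) folds the existence test into the INSERT itself:

INSERT INTO MyTable (Name)
SELECT @NewName
WHERE NOT EXISTS (SELECT 1 FROM MyTable WHERE Name = @NewName);

Two concurrent sessions can still both pass the NOT EXISTS test before either inserts, so the unique index remains the real safeguard - and in that rare race an Id can still be burned.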

Avoiding a two-step insert in SQL

Let's say I have a table defined as follows:
CREATE TABLE SomeTable
(
P_Id int IDENTITY PRIMARY KEY,
CompoundKey varchar(255) NOT NULL
)
CompoundKey is a string with the primary key P_Id concatenated to the end, like Foo00000001, which comes from "Foo" + 00000001. At the moment, insertions into this table happen in two steps:
1. Insert a dummy record with a placeholder string for CompoundKey.
2. Update the CompoundKey column with the generated compound key.
I'm looking for a way to avoid the 2nd update entirely and do it all with one insert statement. Is this possible? I'm using MS SQL Server 2005.
p.s. I agree that this is not the most sensible schema in the world, and this schema will be refactored (and properly normalized) but I'm unable to make changes to the schema for now.
You could use a computed column; change the schema to read:
CREATE TABLE SomeTable
(
P_Id int IDENTITY PRIMARY KEY,
CompoundKeyPrefix varchar(255) NOT NULL,
CompoundKey AS CompoundKeyPrefix + CAST(P_Id AS VARCHAR(10))
)
This way, SQL Server will automagically give you your compound key in a new column, and will automatically maintain it for you. You may also want to look into the PERSISTED keyword for computed columns, which causes SQL Server to materialise the value in the data files rather than computing it on the fly. You can also add an index against the column should you so wish.
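If the key must be zero-padded as in the Foo00000001 example, the computed column can do the padding as well; a sketch assuming a fixed width of eight digits:

CompoundKey AS CompoundKeyPrefix + RIGHT('00000000' + CAST(P_Id AS VARCHAR(10)), 8) PERSISTED

This expression is deterministic, so it qualifies for PERSISTED and for indexing.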
A trigger would easily accomplish this.
This is simply not possible.
The "next ID" doesn't exist and thus cannot be read to fulfill the UPDATE until the row is inserted.
Now, if you were sourcing your autonumbers from somewhere else you could, but I don't think that's a good answer to your question.
Even if you use triggers, an UPDATE is still executed; you just don't execute it manually.
You can obscure the population of the CompoundKey, but at the end of the day it's still going to be an UPDATE.
I think your safest bet is just to make sure the UPDATE is in the same transaction as the INSERT, or to use a trigger. But for the sake of the academic argument, an UPDATE still occurs.
Two things:
1) If you end up using two steps, you must use a transaction! Otherwise other processes may see the database in an inconsistent state (i.e. see the record without its CompoundKey).
2) I would refrain from trying to paste the Id onto the end of CompoundKey in a transaction, trigger etc. It is much cleaner to do it at the output if you need it, e.g. in queries (SELECT CompoundKey + CAST(Id AS varchar(10)) AS CompoundKeyId ...). If you need it as a foreign key in other tables, just use the pair (CompoundKey, Id).

Constrain a table to have only one row

What's the cleanest way to constrain a SQL table to allow it to have no more than one row?
This related question discusses why such a table might exist, but not how the constraint should be implemented.
So far I have only found hacks involving a unique key column that is constrained to have a specific value, e.g. ALWAYS_0 TINYINT NOT NULL PRIMARY KEY DEFAULT (0) CONSTRAINT CHECK_ALWAYS_0 CHECK (ALWAYS_0 = 0). I am guessing there is probably a cleaner way to do it.
The ideal solution would be portable SQL, but a solution specific to MS SQL Server or Postgres would also be useful.
The cleanest way (I think) would be an ON INSERT trigger that throws an exception (thus preventing the row from being inserted). This also gives the client app a chance to recover gracefully.
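A sketch of that trigger in T-SQL - SQL Server spells it AFTER INSERT, and the table name here is hypothetical:

CREATE TRIGGER trg_OnlyOneRow
ON ConfigTable
AFTER INSERT
AS
BEGIN
    -- If the insert brought the table above one row, reject it
    IF (SELECT COUNT(*) FROM ConfigTable) > 1
    BEGIN
        RAISERROR('ConfigTable must contain exactly one row.', 16, 1);
        ROLLBACK TRANSACTION;
    END
END;

The client receives the error and can recover gracefully, while the offending insert is rolled back.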
I just solved the same problem on SQL Server 2008 by creating a table with a computed column and putting the primary key on that column:
CREATE TABLE MyOneRowTable (
[id] AS (1) PERSISTED NOT NULL CONSTRAINT pk_MyOneRowTable PRIMARY KEY,
-- rest of the columns go here
);
Use permissions (DENY, or simply never GRANT INSERT) so that no one can insert into the table after adding the one row.
Your DBA will still be able to insert, but the DBA should only be running schema changes, which are reviewed, so this should not be a problem in practice.
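A sketch of the permissions approach (the table name is hypothetical):

DENY INSERT, DELETE ON MyOneRowTable TO public;

Regular users can then only UPDATE the single row; members of the sysadmin role bypass permission checks, which is the caveat about the DBA noted above.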

Any way to enforce numeric primary key size limit in SQL?

I'd like to create a table which has an integer primary key limited between 000 and 999. Is there any way to enforce this 3-digit limit within the SQL?
I'm using sqlite3.
Thanks.
SQLite supports two ways of doing this:
Define a CHECK constraint on the primary key column:
CREATE TABLE mytable (
mytable_id INTEGER PRIMARY KEY CHECK (mytable_id BETWEEN 0 AND 999)
);
Create a trigger on the table that aborts any INSERT or UPDATE that attempts to set the primary key column to a value you don't want.
CREATE TRIGGER mytable_pk_enforcement
BEFORE INSERT ON mytable
FOR EACH ROW
WHEN NEW.mytable_id NOT BETWEEN 0 AND 999
BEGIN
SELECT RAISE(ABORT, 'primary key out of range');
END;
If you use an auto-assigned primary key, as shown above, you may need to run the trigger AFTER INSERT instead of BEFORE INSERT. The primary key value may not be generated yet at the time the BEFORE trigger executes.
You may also need to write a trigger on UPDATE to prevent people from changing the value outside the range. Basically, the CHECK constraint is preferable if you use SQLite 3.3 or later.
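For completeness, the companion UPDATE trigger would follow the same pattern (untested, like the code above):

CREATE TRIGGER mytable_pk_enforcement_upd
BEFORE UPDATE ON mytable
FOR EACH ROW
WHEN NEW.mytable_id NOT BETWEEN 0 AND 999
BEGIN
SELECT RAISE(ABORT, 'primary key out of range');
END;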
note: I have not tested the code above.
You may be able to do so using a CHECK constraint.
But,
CHECK constraints are supported as of version 3.3.0. Prior to version 3.3.0, CHECK constraints were parsed but not enforced.
(from here)
So unless your SQLite 3 is version 3.3 or later, this probably won't work.
jmisso, I would not recommend reusing primary keys that have been deleted. You can create data integrity problems that way if other tables that might contain that key were not cleaned up first (one reason to always set up foreign key relationships in a database: to prevent orphaned data like this). Do not do this unless you are positive that you have no orphaned data that might get attached to the new record.
Why would you even want to limit the primary key to 1000 possible values? What happens when you need 1500 records in the table? This doesn't strike me as a very good thing to even be trying to do.
What about pre-populating the table with the 1000 rows at the start? Toggle the available rows with some kind of 1/0 column like Is_Available or similar. Then don't allow inserts or deletes, only updates. Under this scenario your app only has to be coded for updates.
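A sketch of that design in SQLite (table and column names are hypothetical; the 1000 rows are inserted once up front and never inserted or deleted again):

CREATE TABLE slots (
slot_id INTEGER PRIMARY KEY CHECK (slot_id BETWEEN 0 AND 999),
is_available INTEGER NOT NULL DEFAULT 1,
payload TEXT
);

-- "insert": claim the lowest free slot
UPDATE slots
SET is_available = 0, payload = 'some data'
WHERE slot_id = (SELECT MIN(slot_id) FROM slots WHERE is_available = 1);

-- "delete": release the slot
UPDATE slots SET is_available = 1, payload = NULL WHERE slot_id = 42;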