I want to add something to a table (a trigger?) so that exactly one row per ID has a specific value in a specific column. If a statement were run that made this not the case, an exception should be thrown and the insert rolled back.
Let's take this schema.
ID Current Value
1 Y 0
1 N 0
1 N 2
2 Y 2
And the constraint I want is that, for each ID, exactly one row has a Current value of 'Y'.
Therefore, these statements would not execute and would return an appropriate error:
insert into table values (1,'Y',1);
insert into table values (3,'N',2);
update table set current = 'N' where ID = 1;
I have two questions:
Is it a good idea to code this kind of constraint logic into your table, or is that best saved for the applications that manipulate the data? Why?
How can it be done? What kind of tool does Oracle provide to create a constraint like this?
It's best if you can specify it in a declarative fashion rather than procedurally (e.g. using triggers), especially because triggers, without some kind of locking algorithm, will NOT work anyway when concurrent sessions insert into or update the table at the same time.
In this instance, the simplest solution is a unique, function-based index, e.g.:
CREATE UNIQUE INDEX only_one_current ON thetable
(CASE WHEN Current = 'Y' THEN ID END);
The expression is NULL whenever Current = 'N', and index entries whose keys are all NULL are not stored, which means the uniqueness check only applies to rows where Current = 'Y'.
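For example, with the sample data above, giving ID 1 a second 'Y' row would now fail (thetable is the placeholder table name from the index definition):
insert into thetable values (1, 'Y', 1);
-- fails: ORA-00001 unique constraint (ONLY_ONE_CURRENT) violated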
I think what you are looking for is just a unique constraint.
You can add it using the statement below, so that only a unique combination of (ID, Current) can exist in the table:
ALTER TABLE table_name ADD CONSTRAINT constraint_name UNIQUE (ID, Current);
I am trying to create a new table in SQL Developer that has four columns. One column holds a numerical value called ORG_ID; this ORG_ID can be the same across multiple entries in the table. Another column, DEFAULT_FLAG, contains only a Y or N character denoting whether the row is the default entry in the table for that ORG_ID.
I am trying to create a CHECK on the DEFAULT_FLAG column that makes sure there is only one entry with a Y among all entries with the same ORG_ID. Here is an example of what it would look like:
xxxx|xxxx|ORG_ID|DEFAULT_FLAG
xxxx|xxxx|123456| Y
xxxx|xxxx|123456| N
xxxx|xxxx|987654| Y
xxxx|xxxx|567495| Y
In the above table, the second entry for ORG_ID 123456 would need to be rejected if Y was inserted as the DEFAULT_FLAG.
I'm a little new to SQL, so I've done my research and it seems I need a constraint with a check on the column. I tried writing my own, but it did not work; the code is below.
default_flag varchar(1)
constraint one_default Check(ORG_ID AND DEFAULT_FLAG != "Y"),
This is too long for a comment.
You are trying to use a check constraint for something it is not designed for. You have an org_id. You should have an organizations table that uses this id as its primary key.
Then the flag you want to store should be in the organizations table. Voila! The flag is only stored once, and you don't need to worry about keeping it in sync between different rows.
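One way to realize that advice is a sketch like the following (all names here are assumptions, not taken from the question):
CREATE TABLE organizations (
    org_id           NUMBER PRIMARY KEY,
    default_entry_id NUMBER   -- the single default entry for this org, or NULL
);
Because each organization is one row, it can only ever point to one default entry.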
Create a unique index for all ORG_ID entries with a 'Y', so each ORG_ID can only have one row with a 'Y':
create unique index idx on mytable(case when default_flag = 'Y' then org_id end)
I think a technically better solution than the one from Thorsten Kettner, but using the same idea, is
CREATE UNIQUE INDEX ON mytable(org_id)
WHERE default_flag = 'Y';
But let me also suggest that a table organization_defaults with two columns, one holding the ID of an organization and the other the ID of a row in mytable, is a better approach, as suggested in the comments on the OP.
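A sketch of that alternative (mytable's key column is an assumption, since the question only shows placeholder columns):
CREATE TABLE organization_defaults (
    org_id     NUMBER PRIMARY KEY,   -- at most one default per organization
    mytable_id NUMBER NOT NULL       -- the row in mytable that is the default
);
The primary key on org_id enforces the "one default per organization" rule declaratively; foreign keys to the organizations table and to mytable can be added as needed.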
I have a table with a SERIAL ID as primary key.
As you know, a serial ID increments itself automatically, and I need this feature in my table.
ID | info
---------
1 | xxx
2 | xxx
3 | xxx
For ordering purposes, I want to insert a row between 1 and 2, giving the new row an ID of 2 and having the other IDs automatically shift up to 3 and 4. If I execute such a query, I get a duplicate key error.
Is there a way to make it possible, maybe changing the SERIAL ID to some other type?
What you are describing is not what most people would consider an ID, which should be a permanent and arbitrary identifier for which an auto-increment column is just a convenient way of creating unique values. You couldn't use a value that kept changing as a foreign key, for example, so you might well want both columns.
However, the task you've described is easily achieved with just an ordinary Integer column, let's call it "position", since that seems a more logical label for this behaviour.
The algorithm is simple:
Make a space for the new value by shifting all existing elements up one place.
Insert your new element.
In SQL, that would look something like this, to insert at position 42:
UPDATE items SET position=position + 1 WHERE position >= 42;
INSERT INTO items ( position, name ) VALUES ( 42, 'Answer' );
You could wrap this up in an SQL function on the server, and wrap it in a transaction to prevent concurrent inserts messing each other up.
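For example, the two steps could be wrapped up in a small PL/pgSQL function like this (the function and parameter names are made up for illustration); call it inside a transaction as described above:
CREATE FUNCTION insert_at_position(p_position integer, p_name text)
RETURNS void AS $$
BEGIN
    -- shift everything at or above the target position up by one
    UPDATE items SET position = position + 1 WHERE position >= p_position;
    -- then insert the new element into the gap
    INSERT INTO items (position, name) VALUES (p_position, p_name);
END;
$$ LANGUAGE plpgsql;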
Note that by default, a PRIMARY KEY or UNIQUE constraint on the position column may be violated during the update, as changes to each row are validated separately. To get around this, you can use a "deferrable constraint"; even in "immediate" mode, it is only checked at the end of the statement, so the update will not violate it.
CONSTRAINT uq_position UNIQUE (position) DEFERRABLE INITIALLY IMMEDIATE
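Putting the pieces together, a table definition along these lines matches the snippets above (a sketch, not the exact demo):
CREATE TABLE items (
    position serial,        -- auto-incrementing default; uniqueness comes from the constraint below
    name     text NOT NULL,
    CONSTRAINT uq_position UNIQUE (position) DEFERRABLE INITIALLY IMMEDIATE
);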
Note also that a serial column doesn't have to be unique, so you could still have the default value be an auto-increment. However, the sequence won't notice you inserting extra values manually, so you need to reset it after a manual insert:
SELECT setval(
pg_get_serial_sequence('items', 'position'),
( SELECT max(position) FROM items )
);
Here is a live demo putting it all together. (SQLFiddle seems to have a bug which isn't dropping/resetting the sequence, making the id values look rather odd.)
Let's say I have the following Categories table:
Category MinValue MaxValue
A 1 2
B 3 9
C 10 0
Above I'm using 0 to indicate no maximum. These values will be configurable by end users. They will be able to add and remove categories, and modify the max and min values. Is there any sort of a constraint I can place on the table to ensure that no two ranges overlap?
This table will be modified using a web application, so I could pre-validate changes to the table using JavaScript; even an algorithm to prevent duplicates might therefore suffice.
Maybe I'm missing the obvious here, but I don't think this is easy in Oracle.
I've seen solutions using a materialized view that:
contains the overlaps from the Categories table,
is refreshed on commit, and
has a check constraint that it must not contain any rows. This can be achieved by having a "rownum" column in the materialized view and a check constraint that this column's value is always 0.
The check constraint on the materialized view will then be violated on commit if a user enters any overlapping data.
You'll need to write your front end to allow for exceptions to be raised by Oracle on commit and to present an appropriate message to the user.
In recent versions of PostgreSQL, by comparison, this is very easy with exclusion constraints.
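For instance, a PostgreSQL sketch of the Categories table with such a constraint might look like this (it stores NULL instead of 0 for "no maximum" and treats the ranges as inclusive):
CREATE TABLE categories (
    category  text PRIMARY KEY,
    minvalue  integer NOT NULL,
    maxvalue  integer,   -- NULL means no maximum
    EXCLUDE USING gist (int4range(minvalue, maxvalue, '[]') WITH &&)
);
Any insert or update that would produce two overlapping ranges is then rejected by the database itself.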
I don't think that you can do it with a constraint, but you should be able to create a before insert/update trigger and use raise_application_error to abort the insert if it violates the conditions.
Something like...
DECLARE l_overlaps INTEGER;   -- row-level BEFORE INSERT OR UPDATE trigger body
BEGIN
    SELECT COUNT(*) INTO l_overlaps FROM yourtable
    WHERE :new.minvalue < maxvalue AND :new.maxvalue > minvalue;
    IF l_overlaps > 0 THEN
        raise_application_error(-20000, 'Range overlaps an existing category');
    END IF;
END;
I asked two questions at once in my last thread, and the first has been answered. I decided to mark the original thread as answered and repost the second question here. Link to original thread if anyone wants it:
Handling SQL Server concurrency issues
Suppose I have a table with a field which holds foreign keys for a second table. Initially records in the first table do not have a corresponding record in the second, so I store NULL in that field. Now at some point a user runs an operation which will generate a record in the second table and have the first table link to it. If two users simultaneously try to generate the record, a single record should be created and linked to, and the other user receives a message saying the record already exists. How do I ensure that duplicates are not created in a concurrent environment?
The steps I need to carry out are:
1) Look up x number of records in table A
2) Perform some business logic that prepares a single row which is inserted into table B
3) Update the records selected in step 1) to point to the newly created record in table B
I can use scope_identity() to retrieve the primary key of the newly created record in table B, so I don't need to worry about the new record being lost due to simultaneous transactions. However I need to eliminate the possibility of concurrently executing processes resulting in a duplicate record in table B being created.
In SQL Server 2008, this can be handled with a filtered unique index:
CREATE UNIQUE INDEX ix_MyIndexName ON MyTable (FkField) WHERE FkField IS NOT NULL
This will require all non-null values to be unique, and the database will enforce it for you.
The 2005 way of simulating a unique filtered index for constraint purposes is
CREATE VIEW dbo.EnforceUnique
WITH SCHEMABINDING
AS
SELECT FkField
FROM dbo.TableB
WHERE FkField IS NOT NULL
GO
CREATE UNIQUE CLUSTERED INDEX ix ON dbo.EnforceUnique(FkField)
Connections that update the base table will need to have the correct SET options, but unless you are using non-default options this will be the case anyway in SQL Server 2005 (ARITHABORT used to be the problem one in 2000).
Using a computed column
ALTER TABLE MyTable ADD
OneNonNullOnly AS ISNULL(FkField, -PkField)
CREATE UNIQUE INDEX ix_OneNullOnly ON MyTable (OneNonNullOnly);
Assumes:
FkField is numeric
no clash of FkField and -PkField values
Decided to go with the following:
1) Begin transaction
2) UPDATE tableA SET foreignKey = -1 OUTPUT inserted.id INTO #tempTable
FROM (business logic)
WHERE foreignKey is null
3) If @@rowcount > 0 Then
3a) Create record in table 2.
3b) Capture ID of newly created record using scope_identity()
3c) UPDATE tableA set foreignKey = IdOfNewRecord FROM tableA INNER JOIN #tempTable ON tableA.id = #tempTable.id
Since I write a junk value into the foreign key field in step 2), those rows are locked and no concurrent transaction will touch them. The first transaction is free to create the record. After that transaction is committed, the blocked transaction will execute the update query, but won't capture any of the original rows because the WHERE clause only considers NULL foreignKey fields. If no rows are returned (@@rowcount = 0), the current transaction exits without creating the record in table B and returns some sort of error message to the client (e.g. "Error: Record already exists").
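Put together, a rough T-SQL sketch of those steps might look like this (tableA, tableB and the column names are placeholders carried over from the steps above; the business-logic filter and table B's columns are hypothetical):
BEGIN TRANSACTION;
CREATE TABLE #tempTable (id int);
UPDATE tableA
SET    foreignKey = -1                 -- junk value: claims and locks these rows
OUTPUT inserted.id INTO #tempTable
WHERE  foreignKey IS NULL;             -- plus whatever business-logic filter applies
IF @@ROWCOUNT > 0
BEGIN
    INSERT INTO tableB (someColumn) VALUES ('example');   -- hypothetical column/value
    DECLARE @newId int = SCOPE_IDENTITY();
    UPDATE a
    SET    a.foreignKey = @newId
    FROM   tableA a
    INNER JOIN #tempTable t ON a.id = t.id;
    COMMIT TRANSACTION;
END
ELSE
    ROLLBACK TRANSACTION;              -- another session already created the record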
Given the following simple table structure (SQL Server 2008), I want to be able to maintain uniqueness of my numerical sequence column, but I want to be able to update that value for any given record(s).
CREATE TABLE MYLIST(
ID int NOT NULL IDENTITY(1, 1)
, TITLE varchar(50) NOT NULL
, SEQUENCE int NOT NULL
, CONSTRAINT pk_mylist_id PRIMARY KEY(ID)
, CONSTRAINT uq_mylist_sequence UNIQUE(SEQUENCE)
);
My interface allows me to jumble up the order of the items and I will know prior to doing the update which non-overlapping sequence they should all be in, but how do I perform the update without being confronted with a violation of the unique constraint?
For instance, say I have these records:
ID TITLE SEQUENCE
1 APPLE 1
2 BANANA 2
3 CHERRY 3
And I want to update their sequence numbers to the following:
ID TITLE SEQUENCE
1 APPLE 3
2 BANANA 1
3 CHERRY 2
But really I could be dealing with a couple dozen items. The sequence numbers must not overlap. I've thought about trying to use triggers or temporarily disabling the constraint, but that would seem to create more issues. I am using C# and LINQ-to-SQL, but am open to strictly database solutions.
The obvious way is to write the update as a single statement. SQL Server checks the unique constraint at the end of the statement, so intermediate duplicates are irrelevant.
Writing row by row does not make sense and is what causes the problem you have.
You can change this to write the new values into a temp table (or table variable) and then "flush" the results at the end in a single UPDATE, even checking uniqueness over the temp table first.
DECLARE #NewSeq TABLE (ID int, NewSeq int)
INSERT #NewSeq (ID, NewSeq) VALUES (1, 3)
INSERT #NewSeq (ID, NewSeq) VALUES (2, 1)
INSERT #NewSeq (ID, NewSeq) VALUES (3, 2)
UPDATE M
SET    SEQUENCE = N.NewSeq
FROM   MYLIST M
JOIN   @NewSeq N ON M.ID = N.ID
You could assign them the negative of their correct value, then after all updates have occurred, do a final update where you set SEQUENCE = -SEQUENCE.
It is not very efficient, but since you say you only have a couple dozen items, I doubt the impact would be noticeable. I am also assuming that you can use negative numbers as "magic values" to indicate temporarily mis-assigned values.
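Using the APPLE/BANANA/CHERRY rows above, that would look something like this:
UPDATE MYLIST SET SEQUENCE = -3 WHERE ID = 1;   -- APPLE's target is 3
UPDATE MYLIST SET SEQUENCE = -1 WHERE ID = 2;   -- BANANA's target is 1
UPDATE MYLIST SET SEQUENCE = -2 WHERE ID = 3;   -- CHERRY's target is 2
UPDATE MYLIST SET SEQUENCE = -SEQUENCE WHERE SEQUENCE < 0;   -- flip the signs in one go
The negative values never collide with the existing positive ones, and the final single-statement flip lands every row on its correct value.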
If you really have to follow that workflow of inserting without knowing the right order and then having to come back with an update later to set the right order, I'd say your best option is to get rid of the unique constraint because it is causing you more problems than it is worth. Of course, only you know how much that unique constraint is "worth" to your application.