I have a table with a column position, which has UNIQUE and NOT NULL constraints.
I need to move the selected table item up or down,
so I take the selected index and swap the two indexes.
Then I save both items to the DB.
Whenever I try to save the first item, I get a UNIQUE constraint violation,
because the item's index is already there in the DB.
One possibility is to use a temporary index, swap, and then save; I think that works.
But is there any other way to achieve this requirement?
If you do the update in one Update statement, it'll work fine.
create table t (id number primary key);
insert into t values (1);
insert into t values (2);
commit;
update t set id = case when id = 1 then 2 else 1 end
where id in (1,2);
The easiest way would be to use a temporary value, like you say, because the constraint will not let you have two rows with the same value at any point in time.
You can probably derive a temporary value that is itself unique by basing it on the original value and looking at what kind of data you cannot normally have. For example, negative numbers might work.
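For example, a sketch of the negative-number swap, reusing the table t from the answer above (this assumes negative ids never occur in real data):
update t set id = -id where id in (1, 2);  -- park both rows on free negative values
update t set id = 2 where id = -1;         -- land each row on its final value
update t set id = 1 where id = -2;
commit;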
Other than that, you could declare the constraint as deferrable. Then it won't be enforced until the end of your transaction. But that is probably a bit too much effort/impact.
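A minimal sketch of that, assuming Oracle syntax and a made-up items table (pos standing in for the question's position column); note an existing constraint cannot be altered to deferrable, it has to be created that way:
create table items
(
  id  number primary key,
  pos number,
  constraint items_pos_uq unique (pos) deferrable initially immediate
);
insert into items values (1, 1);
insert into items values (2, 2);
commit;

set constraint items_pos_uq deferred;
update items set pos = 2 where id = 1;  -- transient duplicate is tolerated
update items set pos = 1 where id = 2;
commit;                                 -- uniqueness is checked here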
If the field in question is really only used for sorting (and not for object identity), you could consider dropping the uniqueness altogether. You can use a unique primary key as a tie-breaker if necessary.
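For instance, reusing the items table from the sketch above, with the uniqueness on pos dropped:
select * from items order by pos, id;  -- id breaks ties between equal pos values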
I know this sounds crazy (and if I had designed the database I would have done it differently), but I actually want to force a duplicate key on an insert. I'm working with a database that was designed to have columns as NOT NULL PKs that hold the same value for every row. The records-keeping software I'm working with is somehow able to insert dups into these columns for every one of its records.

I need to copy data from a column in another table into one column on this one. Normally I would just try to insert into that column only, but the PKs are set to NOT NULL, so I have to put something in them, and the way the table is set up, that something has to be the same thing for every record. This should be impossible, but the company that made the records-keeping software made it work somehow. I was wondering if anyone knows how this could be done?
P.S. I know this is normally not a good idea at all. So please just include suggestions for how this could be done regardless of how crazy it is. Thank you.
A SQL Server primary key has to be unique and NOT NULL. So, the column you're seeing duplicate data in cannot be the primary key on its own. As urlreader suggests, it must be part of a composite primary key with one or more other columns.
How to tell what columns make up the primary key on a table: in Enterprise Manager, expand the table and then expand Columns. The primary key columns will have a "key" symbol next to them. You'll also see "PK" in the column description, like this:
MyFirstIDColumn (PK, int, not null)
MySecondIDColumn (PK, int, not null)
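If you'd rather not use Enterprise Manager, a metadata query can list the primary key columns as well; a sketch using the standard INFORMATION_SCHEMA views:
SELECT kcu.COLUMN_NAME
FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS tc
JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE kcu
    ON kcu.CONSTRAINT_NAME = tc.CONSTRAINT_NAME
   AND kcu.TABLE_NAME = tc.TABLE_NAME
WHERE tc.TABLE_NAME = 'MyTable'
  AND tc.CONSTRAINT_TYPE = 'PRIMARY KEY';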
Once you know which columns make up the primary key, simply ensure that you are inserting a unique combination of data into those columns. So, for my sample table above, that would be:
INSERT INTO MyTable (MyFirstIDColumn, MySecondIDColumn) VALUES (1,1) --SUCCEED
INSERT INTO MyTable (MyFirstIDColumn, MySecondIDColumn) VALUES (1,2) --SUCCEED
INSERT INTO MyTable (MyFirstIDColumn, MySecondIDColumn) VALUES (1,1) --FAIL because of duplicate (1,1)
INSERT INTO MyTable (MyFirstIDColumn, MySecondIDColumn) VALUES (1,3) --SUCCEED
More on primary keys:
http://msdn.microsoft.com/en-us/library/ms191236%28v=sql.105%29.aspx
I need help with constraints in SQL Server. The situation: OrderID is a foreign key, not a primary key, so there are multiple rows with the same OrderID. For each OrderID on the table, the bit field can only be 1 for one of those rows; it should be 0 for all other rows with the same OrderID. Any new record coming in with 1 in the bit field should be rejected if there is already a row with that OrderID that has the bit field set to 1. Any ideas?
CREATE UNIQUE INDEX UQ_UnnamedTable_OrderID ON UnnamedTable (OrderID) WHERE UnnamedBitField = 1
It's called a Filtered Index. If you're on a pre-2008 version of SQL Server, you can implement a poor man's equivalent of a filtered index using an indexed view:
CREATE VIEW UnnamedView
WITH SCHEMABINDING
AS
SELECT OrderID From UnnamedSchema.UnnamedTable WHERE UnnamedBitField=1
GO
CREATE UNIQUE CLUSTERED INDEX UQ_UnnamedView_OrderID ON UnnamedView (OrderID)
You can't really do it as a constraint, since SQL Server only supports column constraints and row constraints. There's no (non-fudging) way to write a constraint that deals with all values in the table.
You could normalize the schema further, which helps you avoid hunting for the already-set bit and lets you use a join instead. You need to remove the bit field and create a new table, say X, containing OrderID and the primary key of your table, with the primary key of X being all those fields.
This means that when you insert, you insert into your original table and into X if and only if you would have set the bit to 1 in your table. The insert will fail if there is already a row in X, which is as if there was already an original row with the bit set to 1.
The downside is that this takes up more space than your schema, but it is easier to maintain, as you can't get to the equivalent of having two rows with the bit set to 1.
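A sketch of what X could look like, with hypothetical names; note that OrderID on its own must be unique in X, since that is what rejects a second "active" row for the same order:
CREATE TABLE X
(
    OrderID    int NOT NULL,  -- the value that may only be "active" once
    OrderRowID int NOT NULL,  -- the original table's primary key
    CONSTRAINT PK_X PRIMARY KEY (OrderID, OrderRowID),
    CONSTRAINT UQ_X_OrderID UNIQUE (OrderID)
);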
The only way to do that is to subclass the parent table. You didn't mention it, but a common reason for this pattern is to represent one unique active row from the set of all rows with the same common key value. Let's assume your bit field represents the active Orders...
Then I would create a separate table called ActiveOrders, which will only contain the one row with the bit field set to 1
Create Table ActiveOrders (OrderId int Not Null Primary Key)
and the other table with all the rows in it, with its own unique Primary Key OrderId:
Create Table AllOrders
(
    OrderId Integer Not Null Primary Key,
    ActiveOrderId Integer Not Null,
    -- all other data fields go here
    Constraint FK_AllOrders2ActiveOrder
        Foreign Key (ActiveOrderId) References ActiveOrders(OrderId)
)
You now no longer even need the bit field, as the presence of the row in the ActiveOrders table identifies it as the active Order... To get only the active Orders (the ones that in your schema would have the bit field set to 1), just join the two tables.
I agree with the other answers: if you can change the schema, then do that. But if not, I think something like this will do.
CREATE FUNCTION fnMyCheck
(@id INT)
RETURNS INT
AS
BEGIN
DECLARE @i INT
SELECT @i = COUNT(*)
FROM YourTable
WHERE FkCol = @id
AND BitCol = 1
RETURN @i
END
GO
ALTER TABLE YourTable
ADD CONSTRAINT ckMyCheck CHECK (dbo.fnMyCheck(FkCol) <= 1)
but there are problems that can come from using a UDF in a check constraint, such as this:
Edit to add comment regarding problems with this 'solution':
There are more straightforward issues than what you've linked to.
INSERT INTO YourTable(FkCol,BitCol) VALUES (1,1),(1,0)
followed by
UPDATE YourTable SET BitCol=1
succeeds and leaves two rows with FkCol=1 and BitCol=1
I have two tables which I insert into using JDBC, for example parcelsTable and filesTable. And I have some cases:
1. INSERT a new row into both tables.
2. INSERT a new row only into parcelsTable.
TABLES:
DROP TABLE parcelsTable;
CREATE TABLE parcelsTable (
num serial PRIMARY KEY,
parcel_name text,
filestock_id integer
);
DROP TABLE filesTable;
CREATE TABLE filesTable (
num serial PRIMARY KEY,
file_name text,
files bytea
);
I want to set parcelsTable.filestock_id = filesTable.num when I INSERT into both tables, using a TRIGGER.
Is that possible? How do I know that I inserted into both tables?
You don't need to use a trigger to get the foreign key value in this case. Since you have it set as serial, you can access the latest value using currval. Run something like this from your app:
insert into filesTable (file_name, files) select 'f1', 'asdf';
insert into parcelsTable (parcel_name, filestock_id) select 'p1', currval('filesTable_num_seq');
Note that this should only be used when inserting one record at a time to grab individual key values from currval. I'm using the default sequence name of table_column_seq, which you should be able to use unless you've explicitly declared something different.
I would also recommend explicitly declaring nullability and the relationship:
CREATE TABLE parcelsTable (
...
filestock_id integer NULL REFERENCES filesTable (num)
);
Here is a working demo at SqlFiddle.
This might not be an answer, but it may be what you need. I am making this an answer instead of a comment because I need the space.
I don't know if you can have a trigger on two tables. Typically this is not needed: as in your case, either you are creating a parent record and a child record, or you are just creating a child record of an existing record.
So, typically, if you need a trigger when creating both, it is sufficient to put the trigger on the parent record.
I don't think you can do what you need: populate the foreign key with the parent record's primary key in the same transaction. I think you will have to provide the foreign key in the INSERT for parcelsTable.
You will end up leaving this NULL when you are creating a record in the parcelsTable at times when you are not creating a record in filesTable. So I think you will want to set the foreign key in the INSERT statement.
The only idea I've got for now is that you can create a function that does an indirect insert into the tables. Then you can have whatever conditions you need, and it works with parallel inserts too.
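A sketch of such a function in PL/pgSQL, using the tables from the question (the function name and the use of RETURNING are my own choices):
create function insert_parcel(
    p_name text,
    f_name text default null,
    f_data bytea default null
) returns integer language plpgsql as $$
declare
    fid integer;
    pid integer;
begin
    -- insert the file row only when file data was supplied
    if f_name is not null then
        insert into filesTable (file_name, files)
        values (f_name, f_data)
        returning num into fid;
    end if;

    -- fid stays NULL when no file row was created
    insert into parcelsTable (parcel_name, filestock_id)
    values (p_name, fid)
    returning num into pid;

    return pid;
end;
$$;

-- case 1: both tables; case 2: parcel only
select insert_parcel('p1', 'f1', 'asdf');
select insert_parcel('p2');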
It's far from the ideal situation, but I need to fix a database by appending the number "1" to the PK identity column, which has FK relations to four other tables. I'm basically making a four-digit number a five-digit number. I need to maintain the relations. I could store the number in a var, do a SET query and append the 1, and do that for each table...
Is there a better way of doing this?
You say you are using an identity data type for your primary key, so before you update the numbers you will have to SET IDENTITY_INSERT <table> ON, and then turn it off again after the update.
As long as you have cascading updates set for your relations the other tables should be updated automatically.
EDIT: As it's not possible to change an identity value, I guess you have to export the data, set the new identity values (+10000), and then import your data again.
Anyone have a better suggestion...
Consider adding another field to the PK instead of extending the length of the PK field. Your new field will have to cascade to the related tables, like a field length increase would, but you get to retain your original PK values.
My suggestion is:
Stop writing to the tables.
Copy the tables to new tables with the new PK (see the sketch after this list).
Rename the old tables to backup names.
Rename the new tables to the original table name.
Count the rows in all the tables and double check your work.
Continue using the tables.
Changing a PK after the fact is not fun.
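The copy step might look like this, with hypothetical names, adding 10000 to prepend the digit 1 to a four-digit key:
SELECT id + 10000 AS id, col1, col2
INTO dbo.MyTable_new
FROM dbo.MyTable;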
If the column in question has an identity property on it, it gets complicated. This is more-or-less how I'd do it:
Back up your database.
Put it in single user mode. You don't need anybody mucking around whilst you do the surgery.
Execute the ALTER TABLE statements necessary to
disable the primary key constraint on the table in question
disable all triggers on the table in question
disable all foreign key constraints referencing the table in question.
Clone your table, giving it a new name and a column-for-column identical definition. Don't bother with any triggers, indices, foreign keys or other constraints. Omit the identity property from the table's definition.
Create a new 'map' table that will map your old id values to the new value:
create table dbo.pk_map
(
  old_id int not null primary key clustered ,
  new_id int not null unique nonclustered
)
Populate the map table:
insert dbo.pk_map
select old_id = old.id ,
       new_id = f( old.id )  -- f(x) is the desired transform, e.g. old.id + 10000
from dbo.tableInQuestion old
Populate your new table, giving the primary key column the new value:
insert dbo.tableInQuestion_NEW
select id = map.new_id ,
       ...
from dbo.tableInQuestion old
join dbo.pk_map map on map.old_id = old.id
Truncate the original table: TRUNCATE TABLE dbo.tableInQuestion. This should work safely, since you've disabled all the triggers and foreign key constraints. (Note that TRUNCATE TABLE is refused when foreign keys reference the table, even disabled ones; in that case fall back to DELETE dbo.tableInQuestion.)
Execute SET IDENTITY_INSERT dbo.tableInQuestion ON.
Reload the original table:
insert dbo.tableInQuestion ( id , ... )  -- an explicit column list is required while IDENTITY_INSERT is ON
select id , ...
from dbo.tableInQuestion_NEW
Execute SET IDENTITY_INSERT dbo.tableInQuestion OFF
Execute drop table dbo.tableInQuestion_NEW. We're all done with it.
Execute DBCC CHECKIDENT( 'dbo.tableInQuestion' , RESEED ) to get the identity counter back in sync with the data in the table.
Now, use the map table to propagate the changed primary key column down the line. Depending on your E-R model, this can get complicated as foreign keys referencing the updated column may themselves be part of a composite primary key.
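For each referencing table, the propagation can be a single joined UPDATE against the map table; a sketch with hypothetical child table and column names:
update child
set    parent_id = map.new_id
from   dbo.childTable child
join   dbo.pk_map map on map.old_id = child.parent_id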
When you're all done, start re-enabling the constraints and triggers you disabled. Make sure you do this using the WITH CHECK option. Fix any problems thus uncovered.
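For example, to re-enable and re-validate a foreign key (hypothetical names):
alter table dbo.childTable with check check constraint FK_childTable_tableInQuestion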
Finally, drop the map table, and clear the single user flag and bring your system(s) back online.
Piece of cake! (or something.)
Consider this approach:
Reset the identity seed to 10000 + the current seed.
Set identity insert on
Insert into the table from the values in the table and add 10000 to the identity column on the way.
EX:
Set Identity_Insert Table On
Insert Table (identity, column1, column2)
Select identity + 10000, column1, column2
From Table
Where identity < original max identity value
After the insert you know the identity is exactly 10000 more than the original.
Update the foreign keys by adding 10000.
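A sketch of that last step for one child table (hypothetical names):
Update ChildTable
Set ParentId = ParentId + 10000
Where ParentId < 10000  -- only rows still pointing at the old four-digit keys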
I am using SQL Server 2005. I want to constrain the values in a column to be unique, while allowing NULLS.
My current solution involves a unique index on a view like so:
CREATE VIEW vw_unq WITH SCHEMABINDING AS
SELECT Column1
FROM MyTable
WHERE Column1 IS NOT NULL
GO
CREATE UNIQUE CLUSTERED INDEX unq_idx ON vw_unq (Column1)
Any better ideas?
Using SQL Server 2008, you can create a filtered index.
CREATE UNIQUE INDEX AK_MyTable_Column1 ON MyTable (Column1) WHERE Column1 IS NOT NULL
Another option is a trigger to check uniqueness, but this could affect performance.
The calculated column trick is widely known as a "nullbuster"; my notes credit Steve Kass:
CREATE TABLE dupNulls (
pk int identity(1,1) primary key,
X int NULL,
nullbuster as (case when X is null then pk else 0 end),
CONSTRAINT dupNulls_uqX UNIQUE (X,nullbuster)
)
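A quick demonstration of how the nullbuster behaves:
INSERT INTO dupNulls (X) VALUES (NULL);  -- ok
INSERT INTO dupNulls (X) VALUES (NULL);  -- still ok: nullbuster differs (pk)
INSERT INTO dupNulls (X) VALUES (5);     -- ok
INSERT INTO dupNulls (X) VALUES (5);     -- fails: duplicate (5, 0)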
Pretty sure you can't do that, as it violates the purpose of uniques.
However, this person seems to have a decent work around:
http://sqlservercodebook.blogspot.com/2008/04/multiple-null-values-in-unique-index-in.html
It is possible to use filter predicates to specify which rows to include in the index.
From the documentation:
WHERE <filter_predicate>: Creates a filtered index by specifying which rows to include in the index. The filtered index must be a nonclustered index on a table. Creates filtered statistics for the data rows in the filtered index.
Example:
CREATE TABLE Table1 (
NullableCol int NULL
)
CREATE UNIQUE INDEX IX_Table1 ON Table1 (NullableCol) WHERE NullableCol IS NOT NULL;
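With that index in place, NULLs can repeat because they fall outside the filter, while duplicate non-NULL values are still rejected:
INSERT INTO Table1 (NullableCol) VALUES (NULL);  -- ok
INSERT INTO Table1 (NullableCol) VALUES (NULL);  -- ok
INSERT INTO Table1 (NullableCol) VALUES (1);     -- ok
INSERT INTO Table1 (NullableCol) VALUES (1);     -- fails: duplicate key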
Strictly speaking, a unique nullable column (or set of columns) can be NULL (or a record of NULLs) only once, since having the same value (and this includes NULL) more than once obviously violates the unique constraint.
However, that doesn't mean the concept of "unique nullable columns" is invalid; to actually implement it in any relational database, we just have to bear in mind that these databases are meant to be normalized to work properly, and normalization usually involves the addition of several (non-entity) extra tables to establish relationships between the entities.
Let's work a basic example considering only one "unique nullable column"; it's easy to expand it to more such columns.
Suppose we have the information represented by a table like this:
create table the_entity_incorrect
(
id integer,
uniqnull integer null, /* we want this to be "unique and nullable" */
primary key (id)
);
We can do it by keeping uniqnull apart and adding a second table to establish a relationship between uniqnull values and the_entity (rather than having uniqnull "inside" the_entity):
create table the_entity
(
id integer,
primary key(id)
);
create table the_relation
(
the_entity_id integer not null,
uniqnull integer not null,
unique(the_entity_id),
unique(uniqnull),
/* primary key can be both or either of the_entity_id or uniqnull */
primary key (the_entity_id, uniqnull),
foreign key (the_entity_id) references the_entity(id)
);
To associate a value of uniqnull to a row in the_entity we need to also add a row in the_relation.
For rows in the_entity where no uniqnull value is associated (i.e. the ones we would put NULL in for the_entity_incorrect), we simply do not add a row to the_relation.
Note that values for uniqnull will be unique across all of the_relation, and also notice that for each row in the_entity there can be at most one row in the_relation, since the primary and foreign keys on it enforce this.
Then, if a value of 5 for uniqnull is to be associated with the_entity id 3, we need to:
start transaction;
insert into the_entity (id) values (3);
insert into the_relation (the_entity_id, uniqnull) values (3, 5);
commit;
And, if an id value of 10 for the_entity has no uniqnull counterpart, we only do:
start transaction;
insert into the_entity (id) values (10);
commit;
To denormalize this information and obtain the data a table like the_entity_incorrect would hold, we need to:
select
id, uniqnull
from
the_entity left outer join the_relation
on
the_entity.id = the_relation.the_entity_id
;
The "left outer join" operator ensures all rows from the_entity will appear in the result, putting NULL in the uniqnull column when no matching columns are present in the_relation.
Remember, any effort spent for some days (or weeks or months) in designing a well normalized database (and the corresponding denormalizing views and procedures) will save you years (or decades) of pain and wasted resources.