SQL Server 2005 UNIQUE constraint on multiple columns while creating a table

I am working with a SQL Server 2005 database and I am facing a problem.
I am creating a table like this:
CREATE TABLE CONT_UNIQUE
(
NUM INT,
BRANCH VARCHAR(10),
PIN INT,
CONSTRAINT CON UNIQUE(NUM,BRANCH,PIN)
)
This means I am adding a unique constraint across all the columns in my table. But when inserting values, it seems to treat only NUM as UNIQUE and allows duplicate values for BRANCH and PIN.
Below are my two insert queries.
INSERT INTO CONT_UNIQUE VALUES(1, 'MP', 123) -> works fine
INSERT INTO CONT_UNIQUE VALUES(2, 'MP', 123) -> should throw an error, since 'MP' and 123 are already present
Note:
CREATE TABLE CONT_UNIQUE
(
NUM INT UNIQUE ,
BRANCH VARCHAR(10) UNIQUE,
PIN INT UNIQUE
)
this works perfectly as expected.
Kindly let me know what is the problem with my queries.

You have created a single constraint that ensures no two rows have the same values in all 3 columns.
You want three separate constraints, one on NUM, one on BRANCH and one on PIN.
CREATE TABLE CONT_UNIQUE
(
NUM INT,
BRANCH VARCHAR(10),
PIN INT,
CONSTRAINT CON UNIQUE(NUM),
CONSTRAINT CON2 UNIQUE(BRANCH),
CONSTRAINT CON3 UNIQUE(PIN)
)
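With the three separate constraints in place, the second insert from the question now fails on the duplicate BRANCH and PIN values alone. A quick sanity check, reusing the question's sample data:
INSERT INTO CONT_UNIQUE VALUES(1, 'MP', 123) -- succeeds
INSERT INTO CONT_UNIQUE VALUES(2, 'MP', 124) -- fails: 'MP' violates CON2
INSERT INTO CONT_UNIQUE VALUES(3, 'XY', 123) -- fails: 123 violates CON3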

You have created a unique constraint on the combination of all 3 columns, not on each column separately. What I mean is: you cannot insert the value 1, 'MP', 123 again, but you can insert 1, 'MP', 12 or 1, 'MP', 13 into the table.
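A minimal sketch of that behaviour against the question's original table:
INSERT INTO CONT_UNIQUE VALUES(1, 'MP', 123) -- succeeds
INSERT INTO CONT_UNIQUE VALUES(1, 'MP', 123) -- fails: the exact (NUM, BRANCH, PIN) combination already exists
INSERT INTO CONT_UNIQUE VALUES(1, 'MP', 12) -- succeeds: the combination differs in PIN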

That won't throw an error because the unique constraint is on all 3 columns.
I think you want this as well / instead:
... CONSTRAINT only_two_columns UNIQUE (branch, pin) ...
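Expanded into a full table definition, that might look like this (a sketch; only the constraint name only_two_columns is taken from the snippet above):
CREATE TABLE CONT_UNIQUE
(
NUM INT,
BRANCH VARCHAR(10),
PIN INT,
CONSTRAINT only_two_columns UNIQUE (BRANCH, PIN)
)
With this in place, INSERT INTO CONT_UNIQUE VALUES(2, 'MP', 123) fails after (1, 'MP', 123), which is the behaviour the question expected.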

From all of your replies I learnt that:
1) A unique key works on the unique combination rather than enforcing individual uniqueness.
Ex: UNIQUE(Column1, Column2) means the combination of Column1 and Column2 should not repeat, but individual values can repeat.
2) If we want unique values in each column, then we need to add UNIQUE to each column while creating the table.
Ex: Num int unique, Branch varchar(10) unique... etc., so that each column will have unique values.
Previously I thought UNIQUE(Col1, Col2) was the same as "Col1 int unique, Col2 int unique", which is why I asked the question.
Once again, thanks to all of you for your support in solving my query. :)
Thanks
Mahesh

Related

No auto increment in my foreign key

I'm working on my school project and I want the subject id of any student to be incremented automatically as a foreign key. I am showing you the example below as a prototype. There are two tables; when I try to insert data into the second table, I get an error (it says it is necessary to insert a value for another field, the id of the first table).
CREATE DATABASE database1;
USE database1;
CREATE TABLE table1
(
IdTable1 INT NOT NULL IDENTITY,
NOM VARCHAR(30),
PRIMARY KEY(IdTable1)
);
--auto increment is working here
INSERT INTO table1
VALUES ('data1Table1'), ('data2Table1'), ('data3Table1');
--auto increment is working here just with the primary key
CREATE TABLE table2
(
IdTable2 INT not null IDENTITY,
IdTable1 INT,
dataTable2 varchar(30),
primary key(IdTable2),
constraint fk_tbl1 foreign key(IdTable1) references Table1
);
--necessary to add id field
INSERT INTO table2
VALUES ('data1Table2'), ('data2Table2'), ('data3Table2');
You should always (as a "best practice") define the columns you want to insert data into - that way, you can specify those that you have to provide values for, and let SQL Server handle the others. And for the foreign key, you have to explicitly provide a value - there's no "auto-magic" (or "auto-incrementing") way of associating a child row with its parent table - you have to provide the value in your INSERT statement.
So change your code to:
-- explicitly *specify* the NOM column here!
INSERT INTO table1 (NOM)
VALUES ('data1Table1'), ('data2Table1'), ('data3Table1');
-- again: explicitly *specify* the columns you want to insert into!
INSERT INTO table2 (IdTable1, dataTable2)
VALUES (1, 'data1Table2'), (2, 'data2Table2'), (3, 'data3Table2');
You need to insert the value of the foreign key with respect to table 1; and if you enter a value that is not present in the table 1 column acting as the referenced key, it will not be accepted either.
Simply put:
the primary key and foreign key values should match;
the foreign key can't hold a value that table 1's primary key does not contain.
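A quick sketch of both cases, assuming the corrected inserts above have filled table1 with ids 1 through 3:
INSERT INTO table2 (IdTable1, dataTable2) VALUES (1, 'ok'); -- succeeds: table1 has a row with IdTable1 = 1
INSERT INTO table2 (IdTable1, dataTable2) VALUES (99, 'orphan'); -- fails: fk_tbl1 is violated, no such parent row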

Proper way to make a relation between multiple rows of a single table

I've got the following situation: I want to connect multiple records from one table with some kind of relation. A record could have no connection to the others, or could be connected to one or more of them. There is no hierarchy in this relation.
For example:
CREATE TABLE x
(
x_id SERIAL NOT NULL PRIMARY KEY,
data VARCHAR(10) NOT NULL
);
I've thought of two ideas:
Make a new column in this table which will contain some relationId. It won't reference anything. When a new record is inserted, I will generate a new relationId and put it there. If I want to connect another record with this one, I will simply give it the same relationId.
Example:
CREATE TABLE x
(
x_id NUMBER(19, 0) NOT NULL PRIMARY KEY,
data VARCHAR(10) NOT NULL,
relation_id NUMBER(19, 0) NOT NULL
);
insert into x values (nextval, 'blah', 1);
insert into x values (nextval, 'blah2', 1);
It will connect these two rows.
pros:
very easy
easy queries to get all records connected to a particular record (see the sketch after this list)
no overhead
cons:
the hibernate entity will contain only relationId, no collection of related records (or maybe it's possible somehow?)
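For example, fetching everything connected to a given record under option 1 is a single self-join (a sketch, assuming the relation_id column from the example above):
-- all records sharing a relation with the row whose x_id = 1
SELECT other.*
FROM x me
JOIN x other ON other.relation_id = me.relation_id
AND other.x_id <> me.x_id
WHERE me.x_id = 1;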
Make a separate join table, and connect rows with many-to-many relation. Join table would contain two column with ids, so one entry would connect two rows.
Example:
CREATE TABLE x
(
x_id SERIAL NOT NULL PRIMARY KEY,
data VARCHAR(10) NOT NULL
);
CREATE TABLE bridge_x
(
x_id1 NUMBER(19, 0) NOT NULL REFERENCES x (x_id),
x_id2 NUMBER(19, 0) NOT NULL REFERENCES x (x_id),
PRIMARY KEY(x_id1, x_id2)
);
insert into x values (1, 'blah');
insert into x values (2, 'blah2');
insert into bridge_x values (1, 2);
insert into bridge_x values (2, 1);
pros:
normalized relation
easy hibernate entity mapping, with a collection containing the related records
cons:
overhead (with multiple connected rows, every pair must be inserted)
What is the best way to do this? Is there any other way than these two?
The best way in my experience is to use normalization as you've said in your second option. What you are looking for here is to create a foreign key.
So if you use the example you've given in example 2 and then apply the following SQL statement, you will create a relational database that can have 0 to many relations.
ALTER TABLE `bridge_x` ADD CONSTRAINT `fk_1` FOREIGN KEY (`x_id1`) REFERENCES `x`(`x_id`) ON DELETE NO ACTION ON UPDATE NO ACTION;
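With the bridge table in place, pulling every record connected to a given row is again a single join (a sketch against the question's example data, where rows 1 and 2 are connected):
SELECT x.*
FROM bridge_x
JOIN x ON x.x_id = bridge_x.x_id2
WHERE bridge_x.x_id1 = 1;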

Primary key without duplicated values

I'm trying (and succeeding) to create a primary key in Redshift:
create table my_table(id int,
primary key(id));
insert into my_table values
(1),
(1),
(20);
select count(*) from my_table
3
but it allows me to insert duplicated values.
As far as I know, a primary key should contain unique values.
Did I do something wrong?
You can find your answer here:
How to create an Index in Amazon Redshift
One of the answers mentions your problem with the primary key.
Create an identity key which will act as an auto_increment surrogate key. This will serve both purposes: uniquely identifying the records in the table and preventing insertion of duplicate values.
Let's create a dummy table:
create table scratchpad.test_1
(id bigint identity(1,1),
name varchar(10));
insert into scratchpad.test_1(name)
values
('a'),('b'),('c'),('d');
select * from scratchpad.test_1;
The id column acts as a primary key. Deleting any record from the table does not impact the sequencing of other values and the id column can be used to uniquely identify the subsequent row.
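Since Redshift declares but does not enforce PRIMARY KEY constraints, the identity column is also handy for cleaning up duplicates after the fact. A sketch, assuming a duplicate is any row that repeats an existing name:
-- keep only the lowest id for each duplicated name
delete from scratchpad.test_1
where id not in (
select min(id)
from scratchpad.test_1
group by name
);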

Set based insert into two tables with 1 to 0-1 relation

I have two tables, the first has a primary key that is an identity, the second has a primary key that is not, but that key has a foreign key constraint back to the first table's primary key.
If I am inserting one record at a time I can use the Scope_Identity to get the value for the pk just inserted in table 1 that I want to insert into the second table.
My problem is I have many records coming from selects I want to insert in both tables, I've not been able to think of a set based way to do these inserts.
My current solution is to use a cursor, insert in the first table, get key using scope_identity, insert into second table, repeat.
Am I missing a non-cursor solution?
Yes. Look up the OUTPUT clause in Books Online.
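A minimal sketch of the OUTPUT approach, borrowing the Test1/Test2 schema from the answer below and assuming the incoming rows sit in a hypothetical temp table #source(natural_key, data_col): capture the identity values generated by the first insert in a table variable, then join back to the source rows for the second insert.
DECLARE @map TABLE (surrogate_key INT, natural_key CHAR(10));
-- first insert; OUTPUT records each generated identity next to its natural key
INSERT INTO Test1 (natural_key)
OUTPUT INSERTED.surrogate_key, INSERTED.natural_key INTO @map
SELECT natural_key FROM #source;
-- second insert; the join recovers which surrogate key belongs to which source row
INSERT INTO Test2 (surrogate_key, data_col)
SELECT m.surrogate_key, s.data_col
FROM #source AS s
INNER JOIN @map AS m ON m.natural_key = s.natural_key;
This works because the OUTPUT clause can return both the IDENTITY value and a column that identifies the source row.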
I had this problem just this week: someone had introduced a table with a meaningless surrogate key into a schema where natural keys are used. No doubt I'll fix this soon :) Until then, I'm working around it by creating a table of data to INSERT from: this could be a permanent or temporary base table or a derived table (see below), which should suit your desire for a set-based solution anyhow. Use a join between this table and the table with the IDENTITY column, on the natural key, to find out the auto-generated values. Here's a brief example:
CREATE TABLE Test1
(
surrogate_key INTEGER IDENTITY NOT NULL UNIQUE,
natural_key CHAR(10) NOT NULL CHECK (natural_key NOT LIKE '%[^0-9]%') UNIQUE
);
CREATE TABLE Test2
(
surrogate_key INTEGER NOT NULL UNIQUE
REFERENCES Test1 (surrogate_key),
data_col INTEGER NOT NULL
);
INSERT INTO Test1 (natural_key)
SELECT DT1.natural_key
FROM (
SELECT '0000000001', 22
UNION ALL
SELECT '0000000002', 55
UNION ALL
SELECT '0000000003', 99
) AS DT1 (natural_key, data_col);
INSERT INTO Test2 (surrogate_key, data_col)
SELECT T1.surrogate_key, DT1.data_col
FROM (
SELECT '0000000001', 22
UNION ALL
SELECT '0000000002', 55
UNION ALL
SELECT '0000000003', 99
) AS DT1 (natural_key, data_col)
INNER JOIN Test1 AS T1
ON T1.natural_key = DT1.natural_key;

How to create a unique index on a NULL column?

I am using SQL Server 2005. I want to constrain the values in a column to be unique, while allowing NULLS.
My current solution involves a unique index on a view like so:
CREATE VIEW vw_unq WITH SCHEMABINDING AS
SELECT Column1
FROM MyTable
WHERE Column1 IS NOT NULL
CREATE UNIQUE CLUSTERED INDEX unq_idx ON vw_unq (Column1)
Any better ideas?
Using SQL Server 2008, you can create a filtered index.
CREATE UNIQUE INDEX AK_MyTable_Column1 ON MyTable (Column1) WHERE Column1 IS NOT NULL
Another option is a trigger to check uniqueness, but this could affect performance.
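Such a trigger might look like this (a sketch; the trigger name is illustrative, MyTable and Column1 are from the question):
CREATE TRIGGER trg_MyTable_unique_Column1
ON MyTable
AFTER INSERT, UPDATE
AS
BEGIN
    -- undo the change if any non-NULL Column1 value now appears more than once
    IF EXISTS (SELECT Column1 FROM MyTable
               WHERE Column1 IS NOT NULL
               GROUP BY Column1
               HAVING COUNT(*) > 1)
    BEGIN
        RAISERROR('Duplicate non-NULL value in Column1', 16, 1)
        ROLLBACK TRANSACTION
    END
END
Every insert or update then rescans the column, which is where the performance cost comes from.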
The calculated column trick is widely known as a "nullbuster"; my notes credit Steve Kass:
CREATE TABLE dupNulls (
pk int identity(1,1) primary key,
X int NULL,
nullbuster as (case when X is null then pk else 0 end),
CONSTRAINT dupNulls_uqX UNIQUE (X,nullbuster)
)
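With the nullbuster in place, multiple NULLs coexist while duplicate non-NULL values are rejected:
INSERT INTO dupNulls (X) VALUES (NULL) -- succeeds, nullbuster = pk
INSERT INTO dupNulls (X) VALUES (NULL) -- succeeds, a different pk gives a different nullbuster
INSERT INTO dupNulls (X) VALUES (1) -- succeeds, nullbuster = 0
INSERT INTO dupNulls (X) VALUES (1) -- fails: (1, 0) already exists in the unique constraint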
Pretty sure you can't do that, as it violates the purpose of uniques.
However, this person seems to have a decent work around:
http://sqlservercodebook.blogspot.com/2008/04/multiple-null-values-in-unique-index-in.html
It is possible to use filter predicates to specify which rows to include in the index.
From the documentation:
WHERE <filter_predicate>
Creates a filtered index by specifying which rows to include in the index. The filtered index must be a nonclustered index on a table. Creates filtered statistics for the data rows in the filtered index.
Example:
CREATE TABLE Table1 (
NullableCol int NULL
)
CREATE UNIQUE INDEX IX_Table1 ON Table1 (NullableCol) WHERE NullableCol IS NOT NULL;
Strictly speaking, a unique nullable column (or set of columns) can be NULL (or a record of NULLs) only once, since having the same value (and this includes NULL) more than once obviously violates the unique constraint.
However, that doesn't mean the concept of "unique nullable columns" is invalid; to actually implement it in any relational database, we just have to bear in mind that these databases are meant to be normalized to work properly, and normalization usually involves the addition of several (non-entity) extra tables to establish relationships between the entities.
Let's work a basic example considering only one "unique nullable column"; it's easy to expand it to more such columns.
Suppose we have the information represented by a table like this:
create table the_entity_incorrect
(
id integer,
uniqnull integer null, /* we want this to be "unique and nullable" */
primary key (id)
);
We can do it by putting uniqnull apart and adding a second table to establish a relationship between uniqnull values and the_entity (rather than having uniqnull "inside" the_entity):
create table the_entity
(
id integer,
primary key(id)
);
create table the_relation
(
the_entity_id integer not null,
uniqnull integer not null,
unique(the_entity_id),
unique(uniqnull),
/* primary key can be both or either of the_entity_id or uniqnull */
primary key (the_entity_id, uniqnull),
foreign key (the_entity_id) references the_entity(id)
);
To associate a value of uniqnull to a row in the_entity we need to also add a row in the_relation.
For rows in the_entity where no uniqnull value is associated (i.e. the ones for which we would put NULL in the_entity_incorrect), we simply do not add a row to the_relation.
Note that values for uniqnull will be unique for all the_relation, and also notice that for each value in the_entity there can be at most one value in the_relation, since the primary and foreign keys on it enforce this.
Then, if a uniqnull value of 5 is to be associated with the_entity id 3, we need to:
start transaction;
insert into the_entity (id) values (3);
insert into the_relation (the_entity_id, uniqnull) values (3, 5);
commit;
And, if an id value of 10 for the_entity has no uniqnull counterpart, we only do:
start transaction;
insert into the_entity (id) values (10);
commit;
To denormalize this information and obtain the data a table like the_entity_incorrect would hold, we need to:
select
id, uniqnull
from
the_entity left outer join the_relation
on
the_entity.id = the_relation.the_entity_id
;
The "left outer join" operator ensures all rows from the_entity will appear in the result, putting NULL in the uniqnull column when no matching columns are present in the_relation.
Remember, any effort spent for some days (or weeks or months) in designing a well normalized database (and the corresponding denormalizing views and procedures) will save you years (or decades) of pain and wasted resources.