How to create a table in SQL with a table-level unique key and a table-level identity seed - sql-server-2012

I want a table where Column1 and Column2 must be unique, along with an auto identity on Column1:
Example:
Create table Foo (Id int, Name varchar(50));
The Id column must auto-increment by itself.
Data will be passed like:
A
A
B
B
Data is to be inserted like below:
1 A
2 A
1 B
2 B
How can I achieve the same?

When you're using MySQL and can live with a table using the MyISAM engine, there's built-in functionality for this:
For MyISAM tables, you can specify AUTO_INCREMENT on a secondary column in a multiple-column index. In this case, the generated value for the AUTO_INCREMENT column is calculated as MAX(auto_increment_column) + 1 WHERE prefix=given-prefix. This is useful when you want to put data into ordered groups.
CREATE TABLE animals (
    grp ENUM('fish','mammal','bird') NOT NULL,
    id MEDIUMINT NOT NULL AUTO_INCREMENT,
    name CHAR(30) NOT NULL,
    PRIMARY KEY (grp,id)
) ENGINE=MyISAM;

INSERT INTO animals (grp,name) VALUES
    ('mammal','dog'),('mammal','cat'),
    ('bird','penguin'),('fish','lax'),('mammal','whale'),
    ('bird','ostrich');

SELECT * FROM animals ORDER BY grp,id;
Which returns:
+--------+----+---------+
| grp    | id | name    |
+--------+----+---------+
| fish   |  1 | lax     |
| mammal |  1 | dog     |
| mammal |  2 | cat     |
| mammal |  3 | whale   |
| bird   |  1 | penguin |
| bird   |  2 | ostrich |
+--------+----+---------+
In this case (when the AUTO_INCREMENT column is part of a multiple-column index), AUTO_INCREMENT values are reused if you delete the row with the biggest AUTO_INCREMENT value in any group. This happens even for MyISAM tables, for which AUTO_INCREMENT values normally are not reused.
Source: MySQL Reference Manual, "Using AUTO_INCREMENT".
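The question itself is tagged sql-server-2012, where IDENTITY cannot restart per group. A minimal sketch of one common workaround (not part of the answer above; SurrogateId and GroupSeq are hypothetical names) is to keep a plain surrogate IDENTITY and compute the per-group number at insert time with ROW_NUMBER():

-- Sketch only: per-group numbering in SQL Server via ROW_NUMBER().
CREATE TABLE Foo
(
    SurrogateId int IDENTITY(1,1) PRIMARY KEY,
    GroupSeq    int NOT NULL,
    Name        varchar(50) NOT NULL,
    CONSTRAINT UQ_Foo_GroupSeq_Name UNIQUE (GroupSeq, Name)
);

-- Number each incoming Name within its group, continuing from any rows
-- already stored for that Name (not safe under heavy concurrent inserts).
INSERT INTO Foo (GroupSeq, Name)
SELECT ROW_NUMBER() OVER (PARTITION BY v.Name ORDER BY (SELECT NULL))
       + ISNULL((SELECT MAX(f.GroupSeq) FROM Foo f WHERE f.Name = v.Name), 0),
       v.Name
FROM (VALUES ('A'), ('A'), ('B'), ('B')) AS v(Name);

Selecting from Foo ordered by Name, GroupSeq then returns 1 A, 2 A, 1 B, 2 B as requested.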

Related

Update column of a table with new foreign key of associated table

Let's say I have a Persons table and a Books table that are associated.
PERSONS TABLE                     BOOKS TABLE
uid | userCode | name             id | name     | owner
---------------------------       ------------------------
1   | 0xc!     | john             1  | book foo | 0xc!
2   | li5$     | doe              2  | foo book | li5$
3   | 1y&t     | temp             3  | ddia     | 0xc!
Currently persons.userCode serves as the primary key and hence the foreign key on associated tables. I would like to change the primary key of the persons table to persons.uid, so now I want the books table to look like this:
PERSONS TABLE                     BOOKS TABLE
uid | userCode | name             id | name     | owner
---------------------------       ------------------------
1   | 0xc!     | john             1  | book foo | 1
2   | li5$     | doe              2  | foo book | 2
3   | 1y&t     | temp             3  | ddia     | 1
Dropping and adding the new primary key constraint shouldn't be a problem. However, how do I go about updating the entire books.owner column with the new primary key if I have over 10,000 rows in the books table?
You need to drop/disable the current foreign key (which lives on the BOOKS table), rewrite the column, and re-add the key pointing at the new primary key. You may also need to look up the actual name of that foreign key constraint before dropping it.
ALTER TABLE "PERSONS"
DROP CONSTRAINT "primary_fkey"
UPDATE BOOKS bk SET owner=(SELECT uid FROM PERSONS WHERE userCode = bk.owner);
ALTER TABLE "PERSONS"
ADD CONSTRAINT "primary_fkey"
FOREIGN KEY ("uid")
REFERENCES BOOKS("owner")
ON UPDATE CASCADE;
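As a sanity check (an added suggestion, not part of the original answer), you can verify that every rewritten owner resolves to an existing uid before re-adding the constraint:

-- Should return no rows once the update has succeeded.
SELECT bk.id, bk.owner
FROM BOOKS bk
WHERE NOT EXISTS (SELECT 1 FROM PERSONS p WHERE p.uid = bk.owner);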

Supertype & Subtypes and one to one relationship

I have the following supertype/subtype tables in SQL Server. The supertype is Doctor and the subtypes are Paediatrician, Orthopedic and Dentist:
create table Doctor
(
    DoctorID int primary key,
    Name varchar(100)
    -- add some other common attributes (ones that all doctors have) here.
)

create table Paediatrician
(
    PaediatricianId int primary key,
    DoctorID int foreign key references Doctor(DoctorID)
    -- add some other attributes related to Paediatrician here.
)

create table Orthopedic
(
    OrthopedicId int primary key,
    DoctorID int foreign key references Doctor(DoctorID)
    -- add some other attributes related to Orthopedic here.
)

create table Dentist
(
    DentistId int primary key,
    DoctorID int foreign key references Doctor(DoctorID)
    -- add some other attributes related to Dentist here.
)
My business logic is that a doctor can be a Paediatrician, a Dentist or an Orthopedic, but cannot be more than one of the subtypes. Based on the above design this is not enforced: I can create a Doctor with Id = 1 and then go to the Dentist and Orthopedic tables and assign a DoctorId value of 1 in both. How do I enforce it so that a doctor can be present in only one subtype table?
I would arrange this a bit differently. I would have 3 tables: a Doctor table (like you already have), a Specialist table and a SpecialistAttribute table.
The Doctor table contains all the doctors' info, easy.
The Specialist table contains your SpecialistTypeID and SpecialistDescription, etc.
Your 3 example specialists would each be a row in this table.
The SpecialistAttribute table contains all the attributes needed for the specialists. In your Doctor table, you have a foreign key to look up the SpecialistTypeID, so there can be only one per doctor; the specialist type in turn has a number of SpecialistAttribute rows it can link to.
The other benefit of organising your data this way is that if you need to add any specialist roles or attributes, you don't need to change the structure of your database, just add more rows.
Doctor Table
| ID | Name   | Specialist_FK |
-------------------------------
| 1  | Smith  | 2             |
| 2  | Davies | 3             |
| 3  | Jones  | 3             |

Specialist Table
| ID | Speciality    |
----------------------
| 1  | Paediatrician |
| 2  | Orthopedic    |
| 3  | Dentist       |

SpecialistAttribute Table
| ID | SpecialityID (FK) | Description          | Other      |
---------------------------------------------------------------
| 1  | 1                 | Paediatrician Info 1 | Other Info |
| 2  | 1                 | Paediatrician Info 2 | Other Info |
| 3  | 2                 | Orthopedic Info 1    | Other Info |
| 4  | 2                 | Orthopedic Info 2    | Other Info |
| 5  | 3                 | Dentist Info 1       | Other Info |
| 6  | 3                 | Dentist Info 2       | Other Info |
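A minimal DDL sketch of that layout (an added illustration; table and column names follow the example tables above, data types are assumptions):

CREATE TABLE Specialist
(
    ID int PRIMARY KEY,
    Speciality varchar(50) NOT NULL
);

CREATE TABLE Doctor
(
    ID int PRIMARY KEY,
    Name varchar(100) NOT NULL,
    Specialist_FK int NOT NULL FOREIGN KEY REFERENCES Specialist(ID)  -- exactly one speciality per doctor
);

CREATE TABLE SpecialistAttribute
(
    ID int PRIMARY KEY,
    SpecialityID int NOT NULL FOREIGN KEY REFERENCES Specialist(ID),
    Description varchar(200),
    Other varchar(200)
);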
There is no built-in constraint or feature in SQL Server to handle this. You need to write custom logic for it, either in a stored procedure or a trigger.
You can write a stored procedure that is responsible for the inserts into these tables. Before inserting, it validates whether the doctor ID already exists in any of the other subtype tables; if it does, a custom error is raised, otherwise the procedure inserts the record into the respective table, as sketched below.
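A hedged sketch of such a procedure for the Dentist table (procedure name and parameters are assumptions; the same pattern would apply to the other subtypes):

CREATE PROCEDURE dbo.InsertDentist
    @DentistId int,
    @DoctorID  int
AS
BEGIN
    SET NOCOUNT ON;

    -- Reject the insert if this doctor already has a row in another subtype.
    IF EXISTS (SELECT 1 FROM Paediatrician WHERE DoctorID = @DoctorID)
       OR EXISTS (SELECT 1 FROM Orthopedic WHERE DoctorID = @DoctorID)
    BEGIN
        RAISERROR('DoctorID %d already belongs to another subtype table.', 16, 1, @DoctorID);
        RETURN;
    END;

    INSERT INTO Dentist (DentistId, DoctorID)
    VALUES (@DentistId, @DoctorID);
END;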

SQL Server Conditional Foreign Key

I have two tables in my SQL Server database, Foo and Bar. Table Foo is like so:
+-------+
| Foo   |
+-------+
| Id    |
| Type  |
| Value |
+-------+
The table has values like:
+----+--------+-----------+
| Id | Type   | Value     |
+----+--------+-----------+
| 1  | Status | New       |
| 2  | Status | Old       |
| 3  | Type   | Car       |
| 4  | State  | Inventory |
| 5  | State  | Sold      |
+----+--------+-----------+
The table Bar is like so:
+----------+
| Bar      |
+----------+
| Id       |
| TypeId   |
| StatusId |
| StateId  |
+----------+
Here TypeId, StatusId and StateId are all foreign-keyed to the Foo table. But I want to put a condition on each foreign key so that it can only key to the Foo ids related to its type. For example, the TypeId column can ONLY foreign key to id 3 in the Foo table, and the StatusId column can ONLY foreign key to ids 1 or 2. I know there is a check function in SQL Server, but I'm unsure how to use it correctly. I tried to do something like this:
CREATE TABLE TEST.dbo.Bar
(
    Id int PRIMARY KEY NOT NULL IDENTITY,
    TypeId int NOT NULL CHECK (Type='Type'),
    CONSTRAINT FK_Bar_Foo_Type FOREIGN KEY (TypeId) REFERENCES Foo (Id, Type)
)
CREATE UNIQUE INDEX Bar_Id_uindex ON TEST.dbo.Bar (Id)
But this didn't work. What am I doing wrong?
The check constraints you are referring to are used to limit the values stored in a column, key or non-key. For example, if you don't want a column to hold a negative value (let's say it's a price column, and a price is never negative), you would use a CHECK constraint.
To better understand the concept of primary and foreign keys:
A primary key uniquely identifies each record in a table.
A foreign key is a value in one table that is a unique identifier (and can also be a primary key) in another table. This means a foreign key can repeat many times in the table in which it is a foreign key, while it is guaranteed to be unique in the table it comes from (the table that gives meaning to it).
Now, coming to your question: you probably need the concept of composite keys. A composite key is a group of two or more columns that together uniquely identify a record. You cannot restrict a single-column foreign key in the way you are intending to, because that defeats the very purpose of a key; handle that kind of restriction on the stored values at the application layer instead of the database layer.
Looking at the problem in this manner will conceptually resolve some design flaws with your tables as well.
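As an added sketch of the composite-key idea (not from the original answer; column and constraint names are assumptions): give Foo a UNIQUE constraint on (Id, Type), add a fixed discriminator column per reference in Bar, and point each composite foreign key at that pair.

-- Assumes Foo already exists; a UNIQUE key on (Id, Type) lets composite
-- foreign keys target both columns.
ALTER TABLE Foo ADD CONSTRAINT UQ_Foo_Id_Type UNIQUE (Id, Type);

CREATE TABLE Bar
(
    Id int IDENTITY PRIMARY KEY,
    TypeId int NOT NULL,
    TypeKind varchar(10) NOT NULL DEFAULT 'Type' CHECK (TypeKind = 'Type'),      -- use the same data type as Foo.Type
    StatusId int NOT NULL,
    StatusKind varchar(10) NOT NULL DEFAULT 'Status' CHECK (StatusKind = 'Status'),
    StateId int NOT NULL,
    StateKind varchar(10) NOT NULL DEFAULT 'State' CHECK (StateKind = 'State'),
    -- Each composite FK can only match Foo rows whose Type equals the fixed
    -- discriminator, so TypeId can only reference 'Type' rows, and so on.
    CONSTRAINT FK_Bar_Foo_Type   FOREIGN KEY (TypeId,   TypeKind)   REFERENCES Foo (Id, Type),
    CONSTRAINT FK_Bar_Foo_Status FOREIGN KEY (StatusId, StatusKind) REFERENCES Foo (Id, Type),
    CONSTRAINT FK_Bar_Foo_State  FOREIGN KEY (StateId,  StateKind)  REFERENCES Foo (Id, Type)
);

With this in place, inserting a Bar row whose TypeId points at a Foo row of a different Type fails the foreign key check.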

postgresql update column with least value of column from another table based on condition

I'm trying to run an update query on the column answer_date of a table P. I want to fill each row of answer_date in P with the unique date from the create_date column of H where P.ID1 matches H.ID1 and where P.acceptance_id is not null.
The query takes a long while to run, so I checked the interim changes in answer_date, but the entire column is still empty, as it was when it was created.
B-tree indexes exist on all the mentioned columns.
Is there something wrong with the query?
UPDATE P
SET answer_date = subquery.date
FROM (SELECT DISTINCT H.create_date as date
FROM H, P
where H.postid=P.acceptance_id
) AS subquery
WHERE P.acceptance_id is not null;
Table schema is as follows:
Table "public.P"
Column | Type | Modifiers | Storage | Stats target | Description
-----------------------+-----------------------------+-----------+----------+--------------+-------------
id | integer | not null | plain | |
acceptance_id | integer | | plain | |
answer_date | timestamp without time zone | | plain | |
Indexes:
"posts_pkey" PRIMARY KEY, btree (id)
"posts_accepted_answer_id_idx" btree (acceptance_id) WITH (fillfactor='100')
and
Table "public.H"
Column | Type | Modifiers | Storage | Stats target | Description
-------------------+-----------------------------+-----------+----------+--------------+-------------
id | integer | not null | plain | |
postid | integer | | plain | |
create_date | timestamp without time zone | not null | plain | |
Indexes:
"H_pkey" PRIMARY KEY, btree (id)
"ph_creation_date_idx" btree (create_date) WITH (fillfactor='100')
Table P has 70 million rows and H has 220 million rows.
The Postgres version is 9.6.
The hardware is a Windows laptop with 8 GB of RAM.
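For reference, a correlated form of the intended update (a sketch added here, not part of the original question; it assumes H.postid is the column that matches P.acceptance_id, as in the posted query): joining H inside the FROM clause lets each row of P pick up the create_date of its own matching H row, instead of the single uncorrelated subquery above.

UPDATE P
SET    answer_date = H.create_date
FROM   H
WHERE  H.postid = P.acceptance_id
  AND  P.acceptance_id IS NOT NULL;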

Need a unique constraint on a table based on 3 columns where 2 columns must have the same value

I am looking to create a unique constraint on a table based on 3 columns where 2 columns must have the same value. For example:
| cat | 4 | 5 |
| dog | 4 | 7 |
| cat | 4 | 7 |   <-- allowed since cat and 4 are the same and the 3rd column is different
| cat | 5 | 1 |   <-- NOT allowed because cat needs to have 4 in the second column
| cat | 4 | 5 |   <-- NOT allowed since all 3 columns are the same as the first record
Is there any way to constrain this in SQL Server?
To make this work, you would have to redesign your tables and normalize them to look like this:
Animal
------
AnimalId   int (pk)
AnimalName varchar   [your 1st column goes here]
SomeNumber int       [your 2nd column goes here]

YourOriginalTable
-----------------
AnimalId        int (fk)
SomeOtherNumber int   [your 3rd column goes here]
With this table structure, you can now define the following 2 unique constraints to restrict the values the way you want:
Animal (AnimalName)
YourOriginalTable (AnimalId, SomeOtherNumber)
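A minimal DDL sketch of those two constraints (an added illustration; data types and constraint names are assumptions):

CREATE TABLE Animal
(
    AnimalId int IDENTITY PRIMARY KEY,
    AnimalName varchar(50) NOT NULL,              -- your 1st column
    SomeNumber int NOT NULL,                      -- your 2nd column
    CONSTRAINT UQ_Animal_Name UNIQUE (AnimalName) -- 'cat' can only ever pair with one number
);

CREATE TABLE YourOriginalTable
(
    AnimalId int NOT NULL FOREIGN KEY REFERENCES Animal(AnimalId),
    SomeOtherNumber int NOT NULL,                 -- your 3rd column
    CONSTRAINT UQ_Animal_Other UNIQUE (AnimalId, SomeOtherNumber)  -- no duplicate (animal, 3rd column) pairs
);

With this structure, ('cat', 5, 1) is rejected because 'cat' already maps to 4 in Animal, and a second ('cat', 4, 5) is rejected by the unique pair constraint.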