I have a set of parent-child tables (one-to-many relationships). I'm building the tables and have some doubts about the use of PKs and auto-increment.
The parent table has an auto-increment PK and stores the sales ticket header. One record here means one ticket.
The child table stores the ticket details. One record here is one line item on the ticket (e.g. Coke, Mars bar, etc.).
I understand that the PK for the child table should have 2 fields:
The parent table's PK
A number that makes the line item unique within the ticket
If I use IDENTITY for that second number, it will not "restart" when the parent's PK changes.
I'll show it with an example:
A) What SQL does
Parent table
Col1 Col2
1 1000
2 2543
3 3454
Note: Col1 is IDENTITY
Child Table
Col1 Col2 Col3
1 1 Coke
1 2 Mars Bar
2 3 Sprite
3 4 Coke
3 5 Sprite
3 6 Mars Bar
Note: Col1 is taken from Parent Table; Col2 is IDENTITY
B) What I want to achieve
Parent table is the same as above
Child Table
Col1 Col2 Col3
1 1 Coke
1 2 Mars Bar
2 1 Sprite
3 1 Coke
3 2 Sprite
3 3 Mars Bar
Note: Col1 is taken from Parent Table; Col2 resets after change in Col1; Col1 composed with Col2 are unique.
Does SQL Server implement this use of keys, or do I need to code it myself?
Just as an example:
create table dbo.tOrders (
OrderID int not null identity primary key,
CustomerID int not null
);
create table dbo.tOrderPos (
OrderID int not null foreign key references dbo.tOrders,
OrderPosNo int null,
ProductID int null
);
create clustered index ciOrderPos on dbo.tOrderPos
(OrderID, OrderPosNo);
go
create trigger dbo.trInsertOrderPos on dbo.tOrderPos for insert
as begin
update opo
set OrderPosNo = isnull(opo2.MaxOrderPosNo,0) + opo.RowNo
from (select OrderID, OrderPosNo,
RowNo = row_number() over (partition by OrderID order by (select 1))
from dbo.tOrderPos opo
where OrderPosNo is null) opo
cross apply
(select MaxOrderPosNo = max(opo2.OrderPosNo)
from dbo.tOrderPos opo2
where opo2.OrderID = opo.OrderID) opo2
where exists (select * from inserted i where i.OrderID = opo.OrderID);
end;
go
declare @OrderID1 int;
declare @OrderID2 int;
insert into dbo.tOrders (CustomerID) values (11);
set @OrderID1 = scope_identity();
insert into dbo.tOrderPos (OrderID, ProductID)
values (@OrderID1, 1), (@OrderID1, 2), (@OrderID1, 3);
insert into dbo.tOrders (CustomerID) values (12);
set @OrderID2 = scope_identity();
insert into dbo.tOrderPos (OrderID, ProductID)
values (@OrderID2, 4), (@OrderID2, 5);
insert into dbo.tOrderPos (OrderID, ProductID)
values (@OrderID1, 6);
select * from dbo.tOrderPos;
go
drop trigger dbo.trInsertOrderPos;
drop table dbo.tOrderPos;
drop table dbo.tOrders;
go
The difficulty has been to allow multiple inserts and delayed inserts.
HTH
Another option is using an instead-of-trigger:
create trigger dbo.trInsertOrderPos on dbo.tOrderPos instead of insert
as begin
insert into dbo.tOrderPos
(OrderID, OrderPosNo, ProductID)
select OrderID,
OrderPosNo =
isnull( (select max(opo.OrderPosNo)
from dbo.tOrderPos opo
where opo.OrderID = i.OrderID), 0) +
row_number() over (partition by OrderID order by (select 1)),
ProductID
from inserted i;
end;
Unfortunately it doesn't seem possible to declare OrderPosNo as "not null", because multi-row inserts would lead to a duplicate key. Therefore I couldn't use a primary key and used a clustered index instead.
You don't have a one-to-many relationship.
You have a many-to-many relationship.
A parent can have many items.
A coke can belong to more than one parent.
You want three tables. The in-between table is sometimes called a junction table.
http://en.wikipedia.org/wiki/Junction_table
Note: In the wiki article they only show two columns in the junction table; I believe a best practice is for that table to also have its own unique auto-incrementing field.
Note: The two joining fields are usually made a unique index.
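As a rough sketch of that three-table layout (all table and column names below are made up for illustration, not taken from the question):
create table dbo.Ticket (
    TicketID    int identity primary key,
    TicketTotal money not null
);
create table dbo.Product (
    ProductID   int identity primary key,
    ProductName varchar(50) not null
);
-- Junction table: its own auto-incrementing key, plus a unique index on the two joining fields.
create table dbo.TicketProduct (
    TicketProductID int identity primary key,
    TicketID        int not null foreign key references dbo.Ticket (TicketID),
    ProductID       int not null foreign key references dbo.Product (ProductID)
);
create unique index UX_TicketProduct on dbo.TicketProduct (TicketID, ProductID);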
You will have to code this logic yourself. You might make the task easier by implementing it through triggers and using window functions (row_number() over (partition by parent_id order by ...)).
You can also let the primary key be simply an identity column (the parent_id doesn't have to be part of the PK), and have a "Sequence_Num" column to keep track of the int that you want to reset with each parent_id. You can even do this and still set a clustered index on the parent_id / sequence_num cols.
IMHO the 2nd option is better because it allows more flexibility without any major drawback. It also makes the window function easier to write, because you can order by the surrogate key (the identity column) to preserve the insert order when regenerating the sequence_num values. In both cases you have to manage the sequencing of your "sequence_num" column yourself.
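A minimal sketch of that second option, with made-up names (dbo.TicketHeader as the parent; sequence_num is maintained by your own trigger or application code):
-- Parent table (illustrative).
create table dbo.TicketHeader (
    ticket_id int identity(1,1) not null primary key
);
-- Child table: surrogate identity PK, plus a sequence_num you maintain yourself.
create table dbo.TicketLine (
    ticket_line_id int identity(1,1) not null primary key nonclustered,
    ticket_id      int not null foreign key references dbo.TicketHeader (ticket_id),
    sequence_num   int not null,
    product_name   varchar(50) not null
);
-- Clustered and unique on (ticket_id, sequence_num) so the pair stays a candidate key.
create unique clustered index CX_TicketLine on dbo.TicketLine (ticket_id, sequence_num);
go
-- Regenerating sequence_num per parent is easy: ordering by the surrogate key
-- preserves the original insert order.
with numbered as (
    select ticket_line_id,
           row_number() over (partition by ticket_id order by ticket_line_id) as new_seq
    from dbo.TicketLine
)
update l
set l.sequence_num = n.new_seq
from dbo.TicketLine l
join numbered n on n.ticket_line_id = l.ticket_line_id;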
Related
I want to UPDATE or INSERT a record in PostgreSQL using INSERT ... ON CONFLICT ..., because there will be more updates than inserts. However, I have an auto-incrementing id column defined with SERIAL, and the id gets incremented every time the statement attempts an INSERT, even when it ends up doing an UPDATE. That's not what I want: I want the id column to increase only on an actual INSERT, so that all ids stay in order.
The table is created like this
CREATE TABLE IF NOT EXISTS table_name (
id SERIAL PRIMARY KEY,
user_id varchar(30) NOT NULL,
item_name varchar(50) NOT NULL,
code_uses bigint NOT NULL,
UNIQUE(user_id, item_name)
)
And the query I used was
INSERT INTO table_name
VALUES (DEFAULT, 'some_random_id', 'some_random_name', 1)
ON CONFLICT (user_id, item_name)
DO UPDATE SET code_uses = table_name.code_uses + 1;
Thanks :)
Upserts in PostgreSQL do exactly what you described.
Consider this table and records
CREATE TABLE t (id SERIAL PRIMARY KEY, txt TEXT);
INSERT INTO t (txt) VALUES ('foo'),('bar');
SELECT * FROM t ORDER BY id;
id | txt
----+-----
1 | foo
2 | bar
(2 rows)
Using upserts the id will only increment if a new record is inserted
INSERT INTO t VALUES (1,'foo updated'),(3,'new record')
ON CONFLICT (id) DO UPDATE SET txt = EXCLUDED.txt;
SELECT * FROM t ORDER BY id;
id | txt
----+-------------
1 | foo updated
2 | bar
3 | new record
(3 rows)
EDIT (see comments): this is the expected behaviour of a serial column, since it is nothing but a fancy way to use a sequence. Long story short: with upserts, gaps are inevitable. If you're worried the value might become too big, use bigserial instead and let PostgreSQL do its job.
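If the size of the counter is the only worry, here is a sketch of the table from the question using bigserial (same layout, just a bigint-backed sequence, so gaps from upsert attempts are harmless in practice):
CREATE TABLE IF NOT EXISTS table_name (
    id bigserial PRIMARY KEY,
    user_id varchar(30) NOT NULL,
    item_name varchar(50) NOT NULL,
    code_uses bigint NOT NULL,
    UNIQUE (user_id, item_name)
);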
Related thread: serial in postgres is being increased even though I added on conflict do nothing
I need help with the insert statements for a plethora of tables in our DB.
New to SQL - just basic understanding
Summary:
Table1
Col1 Col2 Col3
1 value1 value1
2 value2 value2
3 value3 value3
Table2
Col1 Col2 Col3
4 value1 value1
5 value2 value2
6 value3 value3
Multiple tables use the same sequence of auto-generated primary keys when a user creates a static data record from the GUI.
However, what I'm looking for is a script to upload static data from one environment to the other.
Example from one of the tables:
Insert into RULE (PK_RULE,NAME,RULEID,DESCRIPTION)
values
(4484319,'TESTRULE',14,'TEST RULE DESCRIPTION')
How do I design my insert statement so that it reads the last value in the PK column (4484319 here) and automatically inserts 4484320, without mentioning that value explicitly?
Note: Our DB has hundreds of thousands of records.
I think there's something similar to (SELECT MAX(ID) + 1 FROM MyTable) which could potentially solve my problem but I don't know how to use it.
Multiple tables use the same sequence of auto-generated primary keys when user creates a static data record from the GUI.
Generally, multiple tables sharing a single sequence of primary keys is a poor design choice. Primary keys only need to be unique per table. If they need to be unique globally there are better options such as UUID primary keys.
Instead, one gives each table their own independent sequence of primary keys. In MySQL it's id bigint auto_increment primary key. In Postgres you'd use bigserial. In Oracle 12c it's number generated as identity.
create table users (
id number generated by default on null as identity,
name varchar2(100) not null
);
create table things (
id number generated by default on null as identity,
description varchar2(100) not null
);
Then you insert into each, leaving off the id, or setting it null. The database will fill it in from each sequence.
insert into users (name) values ('Yarrow Hock'); -- id 1
insert into users (id, name) values (null, 'Reaneu Keeves'); -- id 2
insert into things (description) values ('Some thing'); -- id 1
insert into things (id, description) values (null, 'Shiny stuff'); -- id 2
If your schema is not set up with auto incrementing, sequenced primary keys, you can alter the schema to use them. Just be sure to set each sequence to the maximum ID + 1. This is by far the most sane option in the long run.
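For example, for the RULE table from the question, something along these lines seeds a dedicated sequence just past the current maximum and makes it the column default (the sequence name is an assumption; the DEFAULT seq.NEXTVAL part needs Oracle 12c or later):
DECLARE
  v_start NUMBER;
BEGIN
  -- Start the new sequence at MAX(PK_RULE) + 1.
  SELECT NVL(MAX(PK_RULE), 0) + 1 INTO v_start FROM RULE;
  EXECUTE IMMEDIATE 'CREATE SEQUENCE rule_seq START WITH ' || v_start;
  -- Let inserts that omit PK_RULE pick up the next value automatically.
  EXECUTE IMMEDIATE 'ALTER TABLE RULE MODIFY (PK_RULE DEFAULT rule_seq.NEXTVAL)';
END;
/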
If you really must draw from a single source for all primary keys, create a sequence and use that.
create sequence master_seq
start with ...
Then get the next key with nextval.
insert into rule (pk_rule, name, ruleid, description)
values (master_seq.nextval, 'TESTRULE', 14, 'TEST RULE DESCRIPTION')
Such a sequence goes up to 1,000,000,000,000,000,000,000,000,000 which should be plenty.
The INSERT and UPDATE statements in Oracle have a ...RETURNING...INTO... clause on them which can be used to return just-inserted values. When combined with a trigger-and-sequence generated primary key (Oracle 11 and earlier) or an identity column (Oracle 12 and up) this lets you get back the most-recently-inserted/updated value.
For example, let's say that you have a table TABLE1 defined as
CREATE TABLE TABLE1 (ID1 NUMBER
GENERATED ALWAYS AS IDENTITY
PRIMARY KEY,
COL2 NUMBER,
COL3 VARCHAR2(20));
You then define a function which inserts data into TABLE1 and returns the new ID value:
CREATE OR REPLACE FUNCTION INSERT_TABLE1(pCOL2 NUMBER, vCOL3 VARCHAR2)
RETURN NUMBER
AS
nID NUMBER;
BEGIN
INSERT INTO TABLE1(COL2, COL3) VALUES (pCOL2, vCOL3)
RETURNING ID1 INTO nID;
RETURN nID;
END INSERT_TABLE1;
which gives you an easy way to insert data into TABLE1 and get the new ID value back.
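A quick hypothetical usage from an anonymous PL/SQL block (assumes SERVEROUTPUT is on):
DECLARE
  nNewID NUMBER;
BEGIN
  -- Insert a row and capture the generated ID returned by the function.
  nNewID := INSERT_TABLE1(42, 'EXAMPLE ROW');
  DBMS_OUTPUT.PUT_LINE('New ID1 = ' || nNewID);
END;
/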
dbfiddle here
I'm beginning to learn how to write triggers with this basic database.
It's also my very first database.
Schema
Team:
TeamID int PK (TeamID int IDENTITY(0,1) CONSTRAINT TeamID_PK PRIMARY KEY)
TeamName nvarchar(100)
History:
HistoryID int PK (HistoryID int IDENTITY(0,1) CONSTRAINT HistoryID_PK PRIMARY KEY)
TeamID int FK REF Team(TeamID)
WinCount int
LoseCount int
My trigger: when a new team is inserted, it should insert a new history row with that team id
CREATE TRIGGER after_insert_Player
ON Team
FOR INSERT
AS
BEGIN
INSERT INTO History (TeamID, WinCount, LoseCount)
SELECT DISTINCT i.TeamID
FROM Inserted i
LEFT JOIN History h ON h.TeamID = i.TeamID
AND h.WinCount = 0 AND h.LoseCount = 0
END
Executing it returns:
The select list for the INSERT statement contains fewer items than the insert list. The number of SELECT values must match the number of INSERT columns.
Please help, thanks. I'm using SQL Server.
The error text is the best guide here; it is quite clear.
You are trying to insert a single value (i.TeamID) into three columns (TeamID, WinCount, LoseCount).
Include values for WinCount and LoseCount in the SELECT as well.
Note: I think the structure of the History table needs revisiting; arguably WinCount and LoseCount should be derived as expressions rather than stored as actual columns.
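A minimal sketch of the corrected trigger, assuming each newly inserted team should simply get one History row starting at zero wins and zero losses:
CREATE TRIGGER after_insert_Player
ON Team
FOR INSERT
AS
BEGIN
    -- Three values in the SELECT to match the three columns in the insert list.
    INSERT INTO History (TeamID, WinCount, LoseCount)
    SELECT i.TeamID, 0, 0
    FROM Inserted i;
END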
When you specify insert columns, you are declaring which columns you will be filling. In your case, the SELECT after the INSERT returns only one column (the team id).
You either have to modify the insert list to contain only one column, or modify the SELECT so that it returns three values matching the insert list.
If you specify the columns that values have to be inserted into (using INSERT ... SELECT), the SELECT statement has to return the same number of columns as the insert list. Also make sure they are of compatible data types, or you might face conversion issues.
I have a parent table that represents a document of sorts, with each record in the table having n child records in a child table. Each child record can have n grandchild records. These records are in a published state. When the user wants to modify a published document, we need to clone the parent and all of its children and grandchildren.
The table structure looks like this:
Parent
CREATE TABLE [ql].[Quantlist] (
[QuantlistId] INT IDENTITY (1, 1) NOT NULL,
[StateId] INT NOT NULL,
[Title] VARCHAR (500) NOT NULL,
CONSTRAINT [PK_Quantlist] PRIMARY KEY CLUSTERED ([QuantlistId] ASC),
CONSTRAINT [FK_Quantlist_State] FOREIGN KEY ([StateId]) REFERENCES [ql].[State] ([StateId])
);
Child
CREATE TABLE [ql].[QuantlistAttribute]
(
[QuantlistAttributeId] INT IDENTITY (1, 1),
[QuantlistId] INT NOT NULL,
[Narrative] VARCHAR (500) NOT NULL,
CONSTRAINT [PK_QuantlistAttribute] PRIMARY KEY ([QuantlistAttributeId]),
CONSTRAINT [FK_QuantlistAttribute_QuantlistId] FOREIGN KEY ([QuantlistId]) REFERENCES [ql].[Quantlist]([QuantlistId])
)
Grandchild
CREATE TABLE [ql].[AttributeReference]
(
[AttributeReferenceId] INT IDENTITY (1, 1),
[QuantlistAttributeId] INT NOT NULL,
[Reference] VARCHAR (250) NOT NULL,
CONSTRAINT [PK_QuantlistReference] PRIMARY KEY ([AttributeReferenceId]),
CONSTRAINT [FK_QuantlistReference_QuantlistAttribute] FOREIGN KEY ([QuantlistAttributeId]) REFERENCES [ql].[QuantlistAttribute]([QuantlistAttributeId])
)
In my stored procedure, I pass in the QuantlistId I want to clone as @QuantlistId. Since the QuantlistAttribute table has a foreign key to Quantlist, I can easily clone that as well.
INSERT INTO [ql].[Quantlist] (
[StateId],
[Title]
) SELECT
1,
Title
FROM [ql].[Quantlist]
WHERE QuantlistId = @QuantlistId
SET @ClonedId = SCOPE_IDENTITY()
INSERT INTO ql.QuantlistAttribute(
QuantlistId
,Narrative)
SELECT
@ClonedId,
Narrative
FROM ql.QuantlistAttribute
WHERE QuantlistId = @QuantlistId
The trouble comes down to the AttributeReference. If I cloned 30 QuantlistAttribute records, how do I clone the records in the reference table and match them up with the new records I just inserted in to the QuantlistAttribute table?
INSERT INTO ql.AttributeReference(
QuantlistAttributeId,
Reference)
SELECT
QuantlistAttributeId,
Reference
FROM ql.AttributeReference
WHERE ??? I don't have a key to go off of for this.
I thought I could do this with a temporary linking table that holds the old attribute ids along with the new attribute ids. I don't know how to go about inserting the old attribute ids into a temp table along with their new ones. Inserting the existing attributes, by QuantlistId, is easy enough, but I can't figure out how to link the correct new and old ids together so that the AttributeReference table can be cloned correctly. If I could link the new and old QuantlistAttribute ids, I could join on that temp table and work out how to restore the relationship of the newly cloned references to the newly cloned attributes.
Any help on this would be awesome. I've spent the last day and a half trying to figure this out with no luck :/
Please excuse some of the SQL inconsistencies. I rewrote the SQL quickly, trimming out a lot of additional columns, related tables and constraints that weren't needed for this question.
Edit
After doing a little digging around, I found that OUTPUT might be useful for this. Is there a way to use OUTPUT to map the QuantlistAttributeId records I just inserted, to the QuantlistAttributeId they originated from?
You can use OUTPUT to get the inserted rows.
You can insert the data into QuantlistAttribute based on the order of ORDER BY c.QuantlistAttributeId ASC
Have a temp table / table variable with 3 columns:
an id identity column
new QuantlistAttributeId
old QuantlistAttributeId.
Use OUTPUT to insert new identity values of QuantlistAttribute into a temp table/table variable.
The new IDs are generated in the same order as c.QuantlistAttributeId
Use row_number(), ordered by QuantlistAttributeId, to match the old and new QuantlistAttributeIds (based on row_number() and the id column of the table variable), and update the old QuantlistAttributeId values in the table variable.
Use the temp table and join with AttributeReference and insert records in one go.
Note:
The ORDER BY during the INSERT INTO ... SELECT, and the ROW_NUMBER() used to find the matching old QuantlistAttributeId, are required because, looking at your question, there seems to be no other logical key to map old and new records together.
Query for above Steps
DECLARE @ClonedId INT, @QuantlistId INT = 0
INSERT INTO [ql].[Quantlist] (
[StateId],
[Title]
) SELECT
1,
Title
FROM [ql].[Quantlist]
WHERE QuantlistId = @QuantlistId
SET @ClonedId = SCOPE_IDENTITY()
--Define a table variable to store the new QuantlistAttributeID and use it to map with the Old QuantlistAttributeID
DECLARE @temp TABLE(id int identity(1,1), newAttrID INT, oldAttrID INT)
INSERT INTO ql.QuantlistAttribute(
QuantlistId
,Narrative)
--New QuantlistAttributeId are created in the same order as old QuantlistAttributeId because of ORDER BY
OUTPUT inserted.QuantlistAttributeId, NULL INTO @temp (newAttrID, oldAttrID)
SELECT
@ClonedId,
Narrative
FROM ql.QuantlistAttribute c
WHERE QuantlistId = @QuantlistId
--This is required to keep new ids generated in the same order as old
ORDER BY c.QuantlistAttributeId ASC
;WITH CTE AS
(
SELECT c.QuantlistAttributeId,
--Use ROW_NUMBER to get matching id which is same as the one generated in #temp
ROW_NUMBER()OVER(ORDER BY c.QuantlistAttributeId ASC) id
FROM ql.QuantlistAttribute c
WHERE QuantlistId = @QuantlistId
)
--Update the old value in @temp
UPDATE T
SET oldAttrID = CTE.QuantlistAttributeId
FROM @temp T
INNER JOIN CTE ON T.id = CTE.id
INSERT INTO ql.AttributeReference(
QuantlistAttributeId,
Reference)
SELECT
T.NewAttrID,
Reference
FROM ql.AttributeReference R
--Use OldAttrID to join with ql.AttributeReference and insert NewAttrID
INNER JOIN @temp T
ON T.oldAttrID = R.QuantlistAttributeId
Hope this helps.
If I have a table
Table
{
ID int primary key identity,
ParentID int not null foreign key references Table(ID)
}
how does one insert the first row into the table?
From a business-logic point of view, the NOT NULL constraint on ParentID should not be dropped.
In SQL Server, a simple INSERT will do:
create table dbo.Foo
(
ID int primary key identity,
ParentID int not null foreign key references foo(ID)
)
go
insert dbo.Foo (parentId) values (1)
select * from dbo.Foo
results in
ID ParentID
----------- -----------
1 1
If you're trying to insert a value that will be different from your identity seed, the insertion will fail.
UPDATE:
The question is not too clear on what the context is (i.e. is the code supposed to work in a live production system or just a DB setup script) and from the comments it seems hard-coding the ID might not be an option. While the code above should normally work fine in the DB initialization scripts where the hierarchy root ID might need to be known and constant, in case of a forest (several roots with IDs not known in advance) the following should work as intended:
create table dbo.Foo
(
ID int primary key identity,
ParentID int not null foreign key references foo(ID)
)
go
insert dbo.Foo (parentId) values (IDENT_CURRENT('dbo.Foo'))
Then one could query the last identity as usual (SCOPE_IDENTITY, etc.). To address @usr's concerns, the code is in fact transactionally safe, as the following example demonstrates:
insert dbo.Foo (parentId) values (IDENT_CURRENT('dbo.Foo'))
insert dbo.Foo (parentId) values (IDENT_CURRENT('dbo.Foo'))
insert dbo.Foo (parentId) values (IDENT_CURRENT('dbo.Foo'))
select * from dbo.Foo
select IDENT_CURRENT('dbo.Foo')
begin transaction
insert dbo.Foo (parentId) values (IDENT_CURRENT('dbo.Foo'))
rollback
select IDENT_CURRENT('dbo.Foo')
insert dbo.Foo (parentId) values (IDENT_CURRENT('dbo.Foo'))
select * from dbo.Foo
The result:
ID ParentID
----------- -----------
1 1
2 2
3 3
currentIdentity
---------------------------------------
3
currentIdentity
---------------------------------------
4
ID ParentID
----------- -----------
1 1
2 2
3 3
5 5
If you need to use an explicit value for the first ID, when you insert your first record, you can disable the checking of the IDENTITY value (see: MSDN: SET IDENTITY_INSERT (Transact-SQL)).
Here's an example that illustrates this:
CREATE TABLE MyTable
(
ID int PRIMARY KEY IDENTITY(1, 1),
ParentID int NOT NULL,
CONSTRAINT MyTable_ID FOREIGN KEY (ParentID) REFERENCES MyTable(ID)
);
SET IDENTITY_INSERT MyTable ON;
INSERT INTO MyTable (ID, ParentID)
VALUES (1, 1);
SET IDENTITY_INSERT MyTable OFF;
WHILE @@IDENTITY <= 5
BEGIN
INSERT INTO MyTable (ParentID)
VALUES (@@IDENTITY);
END;
SELECT *
FROM MyTable;
IF OBJECT_ID('MyTable') IS NOT NULL
DROP TABLE MyTable;
It seems like the NOT NULL constraint does not hold for the root node of the tree. It simply does not have a parent. So the assumption that ParentID is NOT NULL is broken from the beginning.
I suggest you make it nullable and add a filtered unique index on ParentID to enforce that there is only one row where it is NULL:
create unique nonclustered index ... on T (ParentID) where (ParentID IS NULL)
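Here is a minimal, self-contained sketch of that design (the table and index names are illustrative):
create table dbo.Node (
    ID int identity primary key,
    ParentID int null foreign key references dbo.Node (ID)
);
-- Filtered unique index: at most one row may have ParentID = NULL, i.e. one root.
create unique nonclustered index UX_Node_SingleRoot
    on dbo.Node (ParentID)
    where ParentID is null;
insert dbo.Node (ParentID) values (null); -- the root
insert dbo.Node (ParentID) values (1);    -- a child, assuming the root received ID 1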
It is hard to enforce a sound tree structure in SQL Server: you can get multiple roots, for example, or cycles in the graph. It is hard to validate all of that, and it is unclear whether it is worth the effort. It might well be, depending on the specific case.