Creating a primary key with a static element and an auto-incrementing element without making a composite primary key - SQL

The issue
I have two tables. Both will have auto-incrementing ID columns, which will act as their respective primary keys.
Table 1 has its ID column auto-incrementing, beginning with 1 and increasing by 1 for each entry. For reference, I have used IDENTITY(1,1) for this.
I would like Table 2's ID column to behave in the same way, but with a static text/number prefix, i.e. M1, M2, M3 or M00001, M00002, M00003, etc.
All of the resources I have found seem to involve the use of a composite primary key; I would like to avoid this.
Additional info
From my reading I have come away with the impression that this method may not be the best or right way to differentiate the primary keys of multiple tables within a database. However, I am struggling to find resources/examples on the best or most common methods to do this. I have explored composite keys, calculated fields, UUIDs and Hi/Lo algorithm IDs, and I'm not sure which is the right way to proceed. For context, this is not a big or complicated database.
Even just a link to a good resource on this issue will help me greatly.

In the second table, instead of modifying the identity column, you can leave it as-is and add a second, computed column holding the prefixed value. Below is a complete example of such a column (I'm using a table variable, but it looks the same in a normal table).
declare @tab as table
(
    id int primary key identity(1, 1),
    id2 as concat('M', format(id, '00000#')),
    someColumn nvarchar(10)
)

insert into @tab (someColumn) values
(N'test 1'), (N'test 2'), (N'test 3');

select * from @tab;
The result of the last query is:
id          id2       someColumn
----------- --------- ----------
1           M000001   test 1
2           M000002   test 2
3           M000003   test 3
As you can see, id2 is automatically calculated based on the value of the id column.
If you want id2 to be physically stored, you have to modify the column definition by adding PERSISTED, as in the code below.
declare @tab as table
(
    id int primary key identity(1, 1),
    id2 as concat('M', format(id, '00000#')) PERSISTED,
    someColumn nvarchar(10)
)
You can find more info about computed columns in the SQL Server documentation.
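For reference, since the answer notes the definition looks the same in a normal table, the equivalent permanent-table version might be (the table name is illustrative):
create table Table2
(
    id int primary key identity(1, 1),
    id2 as concat('M', format(id, '00000#')) PERSISTED,
    someColumn nvarchar(10)
);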

There is no built-in solution for this.
But the following workaround, based on a separate counter table, can be used:
IF Not Exists(Select * From sys.tables t where t.name = 'ids')
Begin
    Create Table ids(id int)
    Insert into ids values(0)
End
Go

IF Not Exists(Select * From sys.tables t where t.name = 'tbltextIdentity')
    Create Table tbltextIdentity (id varchar(50) primary key, someColumn nvarchar(10))
Go

--insert into tbltextIdentity values('M000001','Test1')
UPDATE ids SET id = id + 1
OUTPUT concat('M', format(INSERTED.id, '00000#')), 'Test1' INTO tbltextIdentity
Go

Select * From tbltextIdentity
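One note on this pattern: the single UPDATE ... OUTPUT statement increments the counter and emits the new prefixed id atomically, so concurrent callers should not receive duplicate ids. A minimal sketch of wrapping it in a stored procedure (the procedure and parameter names are illustrative, not from the original answer):
CREATE PROCEDURE InsertWithTextId
    @someColumn nvarchar(10)
AS
BEGIN
    -- Increment the counter and insert the prefixed id plus the payload
    -- in a single atomic statement
    UPDATE ids
    SET id = id + 1
    OUTPUT concat('M', format(INSERTED.id, '00000#')), @someColumn
    INTO tbltextIdentity (id, someColumn);
END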

Related

Add relation with fixed column value

I'd like to create a 'conditional' (foreign key) relation between 3 tables. In my case, it's like this (of course it's rather more complex, but I've stripped it down to demonstrate the problem situation):
Table [ItemTable]
Column int Id (PK)
Column str ItemName
Table [ItemGroup]
Column int Id (PK)
Column str GroupName
Table [Settings]
Column int Id (PK)
Column str RefersTo ('I' means item, 'G' means item group)
Column int Reference (foreign key depending on 'RefersTo')
The goal now is to create relations with constraints like this:
Settings.Reference refers to ItemTable.Id when Settings.RefersTo equals 'I'
Settings.Reference refers to ItemGroup.Id when Settings.RefersTo equals 'G'
No relation in case if RefersTo is empty (so no constraint in this situation)
It sounds like a refer-here-or-there relation, but I don't know how to achieve this with MS SQL. I usually use the graphical designer in Management Studio to create and modify table definitions.
Any help is appreciated. Thank you in advance.
Foreign keys don't have filter clauses in their definition. But you can do this using computed columns:
create table Settings (
    . . .
    reference_i as (case when refersto = 'I' then reference end) persisted,
    reference_g as (case when refersto = 'G' then reference end) persisted,
    constraint fk_settings_reference_item
        foreign key (reference_i) references ItemTable(id),
    constraint fk_settings_reference_group
        foreign key (reference_g) references ItemGroup(id)
);
This is not a good design, and if you can, it would be better to change it as @VojtěchDohnal already suggested.
If you cannot change it, you could use an AFTER INSERT trigger to check whether the value of Reference exists in the correct table, depending on the current value of RefersTo, and if not, stop the insert and throw an error. But using triggers is also not the best way performance-wise.
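A minimal sketch of such a trigger, assuming the table names from the question (the trigger name is illustrative):
CREATE TRIGGER trg_Settings_CheckReference
ON Settings
AFTER INSERT, UPDATE
AS
BEGIN
    -- Reject rows whose Reference does not exist in the table
    -- indicated by their RefersTo value
    IF EXISTS (
        SELECT 1 FROM inserted i
        WHERE (i.RefersTo = 'I' AND NOT EXISTS (SELECT 1 FROM ItemTable t WHERE t.Id = i.Reference))
           OR (i.RefersTo = 'G' AND NOT EXISTS (SELECT 1 FROM ItemGroup g WHERE g.Id = i.Reference))
    )
    BEGIN
        RAISERROR('Reference does not match the table indicated by RefersTo.', 16, 1);
        ROLLBACK TRANSACTION;
    END
END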
You cannot use an indexed view (which would have been the best option, since it would be schema-bound and would pick up all new or deleted values from your items or groups), because your sources are two different tables: you would need a UNION to generate the full list of possible values, and indexed views have the limitation that the SELECT statement in the view definition must not contain UNION.
The last option: you could use an additional table where you keep all possible IDs for each table (Type ('I', 'G') and Val (IDs from ItemTable for 'I', IDs from ItemGroup for 'G')), and then make a composite foreign key refer to this new table.
The drawback is that in this case you would need to keep track of changes in both the ItemTable and ItemGroup tables and update the newly created table accordingly (for newly inserted or deleted values), which is not so nice when it comes to maintenance.
For this last scenario the code would be something like:
CREATE TABLE ItemTable (Id INT PRIMARY KEY IDENTITY(1,1), ItemName VARCHAR(100))
CREATE TABLE ItemGroup (Id INT PRIMARY KEY IDENTITY(1,1), GroupName VARCHAR(100))
CREATE TABLE Settings (Id INT PRIMARY KEY IDENTITY(1,1), RefersTo CHAR(1), Reference int)
INSERT INTO ItemTable (ItemName) values ('TestItemName1'), ('TestItemName2'), ('TestItemName3'), ('TestItemName4')
INSERT INTO [ItemGroup] (GroupName) values ('Group1'), ('Group2')
SELECT * FROM ItemTable
SELECT * FROM ItemGroup
SELECT * FROM Settings
CREATE TABLE ReferenceValues (Type char(1), Val INT, PRIMARY KEY (Type, Val))
INSERT INTO ReferenceValues
SELECT 'I' as Type, i.Id as Val
FROM dbo.ItemTable i
UNION
SELECT 'G' as Type, g.Id as Val
FROM dbo.ItemGroup as g
ALTER TABLE dbo.Settings
ADD FOREIGN KEY (RefersTo, Reference) REFERENCES dbo.ReferenceValues(Type, Val);
INSERT INTO Settings (RefersTo, Reference)
VALUES ('I', 1) -- will work
INSERT INTO Settings (RefersTo, Reference)
VALUES ('G', 4) -- will not work
After thinking it over, I came to the conclusion to discard the whole one-column-multi-relation idea.
Answer accepted: good or bad idea aside, the implementation as desired is not possible :)
Thank you all for your answers and comments!

SQL Server Constraint (Limit bit field based on a foreign key)

I need help with constraints in SQL Server. The situation is: for each OrderID (a foreign key, not a primary key, so there are multiple rows with the same value), the bit field can only be 1 for one of those rows; it should be 0 for all other rows with the same OrderID. Any new record coming in with 1 in the bit field should be rejected if there is already a row with that OrderID whose bit field is set to 1. Any ideas?
CREATE UNIQUE INDEX UQ_UnnamedTable_OrderID ON UnnamedTable (OrderID) WHERE UnnamedBitField = 1
It's called a Filtered Index. If you're on a pre-2008 version of SQL Server, you can implement a poor man's equivalent of a filtered index using an indexed view:
CREATE VIEW UnnamedView
WITH SCHEMABINDING
AS
SELECT OrderID From UnnamedSchema.UnnamedTable WHERE UnnamedBitField=1
GO
CREATE UNIQUE CLUSTERED INDEX UQ_UnnamedView_OrderID ON UnnamedView (OrderID)
You can't really do it as a constraint, since SQL Server only supports column constraints and row constraints. There's no (non-fudging) way to write a constraint that deals with all values in the table.
You could more fully normalize the schema, which would let you enforce the rule with a join instead of hunting for the already-set bit. Remove the bit field and create a new table, say X, containing OrderID and the primary key of your table, with OrderID as the primary key of X (so each OrderID can appear at most once).
This means that when you insert, you insert into your original table, and into X if and only if you would have set the bit to 1 in your table. The insert will fail if there is already a row in X, exactly as if there were already an original row with the bit set to 1.
The downside is that this takes up more space than your schema, but it is easier to maintain, as you can't end up with the equivalent of two rows having the bit set to 1.
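A minimal sketch of that normalized design (all names are illustrative, since the original table is unnamed):
-- The original table, minus the bit field
CREATE TABLE UnnamedTable (
    RowID int PRIMARY KEY,
    OrderID int NOT NULL
)

-- X: contains (OrderID, RowID) only for rows that would have had the bit set to 1;
-- the primary key on OrderID alone guarantees at most one such row per order
CREATE TABLE X (
    OrderID int NOT NULL PRIMARY KEY,
    RowID int NOT NULL FOREIGN KEY REFERENCES UnnamedTable (RowID)
)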
The only way to do that is to subclass the parent table. You didn't mention it, but a common reason for this pattern is to represent the one unique active row from the set of all rows with the same common key value. Let's assume your bit field represents the active orders...
Then I would create a separate table called ActiveOrders, which will contain only the rows with the bit field set to 1:
Create Table ActiveOrders(OrderId int Primary Key Not Null)
and the other table with all the rows in it, with its own unique primary key OrderId:
Create Table AllOrders
(
    OrderId int Primary Key Not Null,
    ActiveOrderId int Not Null,
    [All other data fields],
    Constraint FK_AllOrders2ActiveOrder
        Foreign Key(ActiveOrderId) references ActiveOrders(OrderId)
)
You now no longer even need the bit field, as the presence of a row in the ActiveOrders table identifies it as the active order. To get only the active orders (the ones that in your scheme would have had the bit field set to 1), just join the two tables, as sketched below.
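A sketch of that join, assuming (per the text above) that a row is active when its OrderId appears in ActiveOrders:
-- Returns only the active orders
SELECT a.*
FROM AllOrders a
INNER JOIN ActiveOrders act
    ON act.OrderId = a.OrderId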
I agree with the other answers: if you can change the schema, then do that. If not, I think something like this will do.
CREATE FUNCTION fnMyCheck
(@id INT)
RETURNS INT
AS
BEGIN
    DECLARE @i INT

    SELECT @i = COUNT(*)
    FROM YourTable
    WHERE FkCol = @id
      AND BitCol = 1

    RETURN @i
END
GO

ALTER TABLE YourTable
ADD CONSTRAINT ckMyCheck CHECK (dbo.fnMyCheck(FkCol) <= 1)
but there are known problems that can come from using a UDF in a check constraint, such as this:
Edit to add comment regarding problems with this 'solution':
There are more straightforward issues than what you've linked to.
INSERT INTO YourTable(FkCol,BitCol) VALUES (1,1),(1,0)
followed by
UPDATE YourTable SET BitCol=1
succeeds and leaves two rows with FkCol=1 and BitCol=1

SQL - Field Grouping and temporary data restructuring

I would like to apologize first for my title, because I understand it may be technically incorrect.
I currently have a database with multiple tables, 4 of them are relevant in this example.
FORMS
FIELDS
ENTRIES
VALUES
Below is a shortened version of the tables
Create table Form_Master
(
    form_id int identity primary key,
    form_name varchar(255),
    form_description varchar(255),
    form_create_date date
)

Create table Field_Master
(
    field_id int identity primary key,
    form_ID int foreign key references Form_Master(form_id),
    field_name varchar(255),
    type_ID int
)

Create table Entry_Master
(
    entry_id int identity primary key,
    entry_date date,
    form_id int foreign key references Form_Master(form_id)
)

Create table Value_Master
(
    value_id int identity primary key,
    value varchar(255),
    field_id int foreign key references Field_Master(field_id),
    entry_id int foreign key references Entry_Master(entry_id)
)
The purpose of these tables is to create a dynamic method of capturing and retrieving information - a form is a table, a field is a column, an entry is a row, and a value is a cell.
Currently, when I am retrieving information from a form, I create a temporary table with columns as defined in Field_Master, then select all entries linked to the form and the values linked to those entries, and insert them into the temporary table I have just created.
The reason for the temporary table is to restructure the data into an organised format and display it in a DataGridView.
My problem is one of performance: creating the table as described above becomes slower once a form has more than 20 fields or more than 100 linked entries.
My questions are:
Is there a way to select the data directly from field_master in the format of the temporary table mentioned above?
Do you think I should re-think my database design?
Is there an easier method to do what I am trying to do?
Any input will be appreciated. I do know how to use Google; however, in this instance I am not sure exactly what to look for, so even a keyword would be nice.
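For what it's worth, the usual search keywords for this design are EAV (entity-attribute-value) and PIVOT. A minimal sketch of reading one form's entries in row/column shape with a static PIVOT (the field names here are illustrative; for arbitrary forms, the column list would have to be built with dynamic SQL):
SELECT p.entry_id, p.[Name], p.[Email]
FROM
(
    -- One row per (entry, field, value) for the chosen form
    SELECT e.entry_id, f.field_name, v.value
    FROM Entry_Master e
    JOIN Value_Master v ON v.entry_id = e.entry_id
    JOIN Field_Master f ON f.field_id = v.field_id
    WHERE e.form_id = 1
) AS src
PIVOT
(
    MAX(value) FOR field_name IN ([Name], [Email])
) AS p;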

Will multiple insert requests to the same table with direct queries and a stored procedure cause collisions?

Multiple users can call a stored procedure (SP) that will make some changes to mytable in SQL Server. This SP should insert some rows into mytable, which has a reference to itself through the parentid column.
CREATE TABLE mytable(
    id int identity(1,1) primary key,
    name varchar(20) not null,
    parentId int not null foreign key references mytable(id)
)
In order to insert a row into such a table, according to other posts, I have 2 ways:
Allow NULL in the parentid column via ALTER TABLE mytable alter column parentid int null;, insert the row, update its parentid, and then disallow NULL again.
Allow identity inserts via set identity_insert mytable on, insert a dummy row with id=-1 and parentid=-1, insert the correct row with a reference to -1, update its parentid to SCOPE_IDENTITY(), and in the end set identity_insert back off.
The case:
Assume I take the 2nd way. The SP managed to set identity_insert mytable on, BUT hasn't yet finished executing the rest of the SP. At this time, there are other INSERT requests (NOT through the SP) to the mytable table, like INSERT INTO mytable(name, parentid) VALUES('theateist', -1). No id is specified because they assume that identity_insert is off and therefore id is auto-incremented.
The Question:
Will this cause errors while inserting, because identity_insert is ON during this period and ids are no longer auto-generated, so an explicit id is required? If yes, wouldn't it be better to use the 1st way?
Thank you
identity_insert is a per-connection setting - you won't affect other connections/statements running against this table.
I definitely wouldn't suggest going the first way if it can be avoided, since it could impact other users of the table - e.g. some other connection could do a broken insert (parentid = null) while the column definition allows it, and then your stored proc will break. Also, setting the column back to NOT NULL forces a full table scan, so this won't work well as the table grows.
If you did stick with method 2, you've still got an issue with what happens if two connections run this stored proc simultaneously - they'll both want to insert the -1 row, at different times, and delete it also. You'll have conflicts.
I'm guessing the problem you're having is inserting the "roots" of the tree(s): since they have no parent, you're attempting to make them self-referencing. I'd instead make the roots have a permanently null parentid. If there are some other key column(s), these can be used in a filtered index or indexed view to ensure that only one root exists for each key.
Imagine that we're building some form of family trees, and ignoring most of the realities of such beasts (such as most families requiring children to have two parents):
CREATE TABLE People (
PersonID int IDENTITY(1,1) not null,
Surname varchar(30) not null,
Forename varchar(30) not null,
ParentID int null,
constraint PK_People PRIMARY KEY (PersonID),
constraint FK_People_Parents FOREIGN KEY (ParentID) references People (PersonID)
)
CREATE UNIQUE INDEX IX_SoleFamilyRoot ON People (Surname) WHERE (ParentID is null)
This ensures that, within each family (as identified by the surname), exactly one person has a null ParentID. Hopefully, you can modify this example to fit your model.
On SQL Server 2005 and earlier, you have to use an indexed view instead.
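A sketch of that pre-2008 indexed-view equivalent of IX_SoleFamilyRoot, following the same pattern as the UnnamedView example earlier (the view and index names are illustrative):
CREATE VIEW FamilyRoots
WITH SCHEMABINDING
AS
-- One row per family root; uniqueness of Surname among roots is
-- enforced by the unique clustered index below
SELECT Surname FROM dbo.People WHERE ParentID IS NULL
GO
CREATE UNIQUE CLUSTERED INDEX IX_FamilyRoots ON FamilyRoots (Surname)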

Set based insert into two tables with 1 to 0-1 relation

I have two tables: the first has a primary key that is an identity column; the second has a primary key that is not, but that key has a foreign key constraint back to the first table's primary key.
If I am inserting one record at a time, I can use SCOPE_IDENTITY() to get the value of the PK just inserted into table 1 that I want to insert into the second table.
My problem is that I have many records coming from selects that I want to insert into both tables, and I've not been able to think of a set-based way to do these inserts.
My current solution is to use a cursor: insert into the first table, get the key using SCOPE_IDENTITY(), insert into the second table, repeat.
Am I missing a non-cursor solution?
Yes. Look up the OUTPUT clause in Books Online.
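A minimal sketch of the idea (table and column names are illustrative, not from the question). A plain INSERT ... OUTPUT can only reference columns of the inserted rows, so when the second table also needs data from the source rows, the usual workaround is MERGE with an always-false match condition, whose OUTPUT clause can reference the source:
DECLARE @map TABLE (new_id int, data_col int);

MERGE INTO Table1 AS t
USING (SELECT natural_key, data_col FROM SourceData) AS s
    ON 1 = 0  -- never matches, so every source row is inserted
WHEN NOT MATCHED THEN
    INSERT (natural_key) VALUES (s.natural_key)
OUTPUT inserted.id, s.data_col INTO @map;  -- capture identity value + source column

-- Second insert uses the captured mapping, no cursor needed
INSERT INTO Table2 (table1_id, data_col)
SELECT new_id, data_col FROM @map;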
I had this problem just this week: someone had introduced a table with a meaningless surrogate key into a schema where natural keys are used. No doubt I'll fix this soon :) Until then, I'm working around it by creating a table of data to INSERT from: this could be a permanent or temporary base table, or a derived table (see below), which should suit your desire for a set-based solution anyhow. Use a join between this table and the table with the IDENTITY column, on the natural key, to find out the auto-generated values. Here's a brief example:
CREATE TABLE Test1
(
surrogate_key INTEGER IDENTITY NOT NULL UNIQUE,
natural_key CHAR(10) NOT NULL CHECK (natural_key NOT LIKE '%[^0-9]%') UNIQUE
);
CREATE TABLE Test2
(
surrogate_key INTEGER NOT NULL UNIQUE
REFERENCES Test1 (surrogate_key),
data_col INTEGER NOT NULL
);
INSERT INTO Test1 (natural_key)
SELECT DT1.natural_key
FROM (
SELECT '0000000001', 22
UNION ALL
SELECT '0000000002', 55
UNION ALL
SELECT '0000000003', 99
) AS DT1 (natural_key, data_col);
INSERT INTO Test2 (surrogate_key, data_col)
SELECT T1.surrogate_key, DT1.data_col
FROM (
SELECT '0000000001', 22
UNION ALL
SELECT '0000000002', 55
UNION ALL
SELECT '0000000003', 99
) AS DT1 (natural_key, data_col)
INNER JOIN Test1 AS T1
ON T1.natural_key = DT1.natural_key;