Can you access the auto increment value in MySQL within one statement? - sql

I have a MySQL database which contains a table of users. The primary key of the table is 'userid', which is set to be an auto increment field.
What I'd like to do when I insert a new user into the table is to reuse the value that the auto increment generates for the 'userid' field in a different field, 'default_assignment'.
e.g.
I'd like a statement like this:
INSERT INTO users (`username`, `default_assignment`) VALUES ('barry', value_of_auto_increment_field())
so when I create user 'Barry' and the 'userid' is generated as 16 (for example), I also want the 'default_assignment' to get the same value of 16.
Is there any way to achieve this please?
Thanks!
Update:
Thanks for the replies. The default_assignment field isn't redundant. The default_assignment can reference any user within the users table. When creating a user I already have a form that allows the selection of another user as the default_assignment, however there are cases where it needs to be set to the same user, hence my question.
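For reference, a minimal sketch of the table layout implied by the question (the column types here are assumptions, not taken from the original post):
CREATE TABLE users (
  userid INT UNSIGNED NOT NULL AUTO_INCREMENT,  -- auto increment primary key
  username VARCHAR(100) NOT NULL,
  default_assignment INT UNSIGNED NULL,         -- references users.userid, possibly the new row itself
  PRIMARY KEY (userid)
);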
Update:
OK, I've tried out the trigger suggestion but still can't get this to work. Here's the trigger I've created:
CREATE TRIGGER default_assignment_self BEFORE INSERT ON `users`
FOR EACH ROW BEGIN
SET NEW.default_assignment = NEW.userid;
END;
When inserting a new user, however, the default_assignment is always set to 0.
If I manually set the userid then the default_assignment does get set to the userid.
Therefore the auto increment value is clearly generated after the BEFORE INSERT trigger runs.

There's no need to create another table, and MAX() can disagree with the table's AUTO_INCREMENT value (for example after rows are deleted). Do this instead:
CREATE TRIGGER trigger_name BEFORE INSERT ON tbl FOR EACH ROW
BEGIN
DECLARE next_id INT;
SET next_id = (SELECT AUTO_INCREMENT FROM information_schema.TABLES WHERE TABLE_SCHEMA=DATABASE() AND TABLE_NAME='tbl');
SET NEW.field = next_id;
END
I declare the next_id variable because it is usually needed for something else as well (*), but you could assign NEW.field = (SELECT ...) directly:
CREATE TRIGGER trigger_name BEFORE INSERT ON tbl FOR EACH ROW
BEGIN
SET NEW.field=(SELECT AUTO_INCREMENT FROM information_schema.TABLES WHERE TABLE_SCHEMA=DATABASE() AND TABLE_NAME='tbl');
END
Also, if the SELECT returns a string field, you can CAST the value:
CREATE TRIGGER trigger_name BEFORE INSERT ON tbl FOR EACH ROW
BEGIN
SET NEW.field=CAST((SELECT aStringField FROM information_schema.TABLES WHERE TABLE_SCHEMA=DATABASE() AND TABLE_NAME='tbl') AS UNSIGNED);
END
(*) To auto-name an image:
SET NEW.field = CONCAT('image_', next_id, '.gif');
(*) To create a hash:
SET NEW.field = CONCAT( MD5( next_id ) , MD5( FLOOR( RAND( ) *10000000 ) ) );
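One caveat worth adding (my note, not part of the original answer): on MySQL 8.0 and later the AUTO_INCREMENT column in information_schema.TABLES is served from cached statistics, so a trigger like the above may read a stale value unless the session disables that cache, for example:
SET SESSION information_schema_stats_expiry = 0;  -- make information_schema.TABLES report the current AUTO_INCREMENT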

Try this:
INSERT INTO users (default_assignment) VALUES (LAST_INSERT_ID()+1);

Seeing that LAST_INSERT_ID() wouldn't work in this case, yes, a trigger would be the only way to accomplish that.
I do ask myself though: What do you need this functionality for? Why do you store the user's ID twice? Personally, I don't like storing redundant data like this, and I'd probably solve it in application code by making that ominous default_assignment column nullable and falling back to the user's ID in my application code whenever default_assignment is NULL.
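To sketch that application-side idea in SQL instead (assuming default_assignment is made nullable), reads could simply fall back to the user's own id:
SELECT userid, username,
       COALESCE(default_assignment, userid) AS effective_assignment  -- NULL means "assigned to self"
FROM users;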

Actually I just tried to do the same thing as was suggested above, but it seems MySQL doesn't generate the inserted ID before the row actually gets committed, so NEW.userid will always return 0 in a BEFORE INSERT trigger.
The above also won't work unless it is a BEFORE INSERT trigger, since you can't update values in an AFTER INSERT trigger.
From a MySQL forum post it seems the only way to handle this is to use an additional table as a sequence, so that your trigger can pull the value from an external source.
CREATE TABLE `lritstsequence` (
`idsequence` int(11) NOT NULL auto_increment,
PRIMARY KEY (`idsequence`)
) ENGINE=InnoDB;
CREATE TABLE `lritst` (
`id` int(10) unsigned NOT NULL auto_increment,
`bp_nr` decimal(10,0) default '0',
`descr` varchar(128) default NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `dir1` (`bp_nr`)
) ENGINE=InnoDB;
DELIMITER $$
DROP TRIGGER /*!50032 IF EXISTS */ `lritst_bi_set_bp_nr`$$
CREATE TRIGGER `lritst_bi_set_bp_nr` BEFORE INSERT ON `lritst`
FOR EACH ROW
BEGIN
DECLARE secuencia INT;
INSERT INTO lritstsequence (idsequence) VALUES (NULL);
SET secuencia = LAST_INSERT_ID();
SET NEW.id = secuencia;
SET NEW.bp_nr = secuencia;
END;$$
DELIMITER ;
INSERT INTO lritst (descr) VALUES ('test1');
INSERT INTO lritst (descr) VALUES ('test2');
INSERT INTO lritst (descr) VALUES ('test3');
SELECT * FROM lritst;
Result:
id bp_nr descr
------ ------ ------
1 1 test1
2 2 test2
3 3 test3
This was copied from forums.mysql.com/read.php?99,186171,186241#msg-186241 but I'm not allowed to post links yet.

The only way I found to solve this problem without an extra table is to calculate the next number yourself and put it in the fields required.
CREATE TABLE `Temp` (
`id` int(11) NOT NULL auto_increment,
`value` varchar(255) ,
PRIMARY KEY (`id`)
) ENGINE=InnoDB;
CREATE TRIGGER temp_before_insert BEFORE INSERT ON `Temp`
FOR EACH ROW
BEGIN
DECLARE m INT;
SELECT IFNULL(MAX(id), 0) + 1 INTO m FROM Temp;
SET NEW.value = m;
-- NOT NEEDED but to be safe that no other record can be inserted in the meanwhile
SET NEW.id = m;
END;
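A quick usage sketch of this trigger (single session only; under concurrent inserts the MAX(id)+1 read can race, as discussed elsewhere on this page):
INSERT INTO `Temp` (value) VALUES ('placeholder');  -- value is overwritten by the trigger
SELECT id, value FROM `Temp`;                       -- both columns now hold the same number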

Basically, the solution is as Resegue said.
But if you want it in one statement, you will use one of the below ways:
1. One long statement:
INSERT INTO `t_name`(field) VALUES((SELECT AUTO_INCREMENT FROM information_schema.TABLES WHERE TABLE_SCHEMA=DATABASE() AND TABLE_NAME='t_name'))
or, for text with a number:
INSERT INTO `t_name`(field) VALUES(CONCAT('Item No. ',CONVERT((SELECT AUTO_INCREMENT FROM information_schema.TABLES WHERE TABLE_SCHEMA=DATABASE() AND TABLE_NAME='t_name') USING utf8)))
It looks clearer in PHP:
$pre_name='Item No. ';
$auto_inc_id_qry = "(SELECT AUTO_INCREMENT FROM information_schema.TABLES
WHERE TABLE_SCHEMA=DATABASE() AND TABLE_NAME='$table')";
$new_name_qry = "CONCAT('$pre_name',CONVERT($auto_inc_id_qry USING utf8))";
mysql_query("INSERT INTO `$table`(title) VALUES($new_name_qry)");
2. Using a function (not tested yet):
CREATE FUNCTION next_auto_inc(tbl_name TINYTEXT) RETURNS INT
READS SQL DATA
BEGIN
DECLARE next_id INT;
SELECT AUTO_INCREMENT FROM information_schema.TABLES
WHERE TABLE_SCHEMA=DATABASE() AND TABLE_NAME=tbl_name INTO next_id;
RETURN next_id;
END
INSERT INTO users (`username`, `default_assignment`)
VALUES ('barry', next_auto_inc('users'))

$ret = $mysqli->query("SELECT Auto_increment FROM information_schema.tables WHERE table_schema = DATABASE() AND table_name = 'users'");
while ($row = mysqli_fetch_array($ret)) {
$user_id=$row['Auto_increment'];
}

You can do this reliably using a simple subquery:
INSERT INTO users (`username`, `default_assignment`)
SELECT 'barry', Auto_increment FROM information_schema.tables WHERE TABLE_SCHEMA=DATABASE() AND TABLE_NAME='users'

I tested the above trigger idea with 10 concurrent threads doing inserts and got over 1000 cases of 2 or 3 duplicates after ~25k inserts.
DROP TABLE IF EXISTS test_table CASCADE;
CREATE TABLE `test_table` (
`id` INT NOT NULL AUTO_INCREMENT,
`update_me` VARCHAR(36),
`otherdata` VARCHAR(36) NOT NULL,
PRIMARY KEY (`id`)
)
ENGINE = InnoDB
DEFAULT CHARSET = utf8
COMMENT 'test table for trigger testing';
delimiter $$
DROP TRIGGER IF EXISTS setnum_test_table;
$$
CREATE TRIGGER setnum_test_table
BEFORE INSERT ON test_table FOR EACH ROW
-- SET OLD.update_me = CONCAT(NEW.id, 'xyz');
BEGIN
DECLARE next_id INT;
SET next_id = (SELECT AUTO_INCREMENT FROM information_schema.TABLES WHERE TABLE_SCHEMA=DATABASE() AND TABLE_NAME='test_table' LOCK IN SHARE MODE );
-- SET NEW.update_me = CONCAT(next_id, 'qrst');
SET NEW.update_me = next_id;
END
$$
delimiter ;
-- SELECT AUTO_INCREMENT FROM information_schema.TABLES WHERE TABLE_SCHEMA=DATABASE() AND TABLE_NAME='test_table'
INSERT INTO test_table (otherdata) VALUES ('hi mom2');
SELECT count(*) FROM test_table;
SELECT * FROM test_table;
-- select count(*) from (
select * from (
SELECT count(*) as cnt ,update_me FROM test_table group by update_me) q1
where cnt > 1
order by cnt desc
I used 10 of:
while true ; do echo "INSERT INTO test_table (otherdata) VALUES ('hi mom2');" | mysql --user xyz testdb ; done &
And ran the last query to watch for duplicates
example output:
'3', '4217'
'3', '13491'
'2', '10037'
'2', '14658'
'2', '5080'
'2', '14201'
...
Note that 'LOCK IN SHARE MODE' didn't change anything; with and without it, duplicates appeared at about the same rate. It seems that MySQL AUTO_INCREMENT doesn't work like Postgres' nextval() and is NOT concurrency safe.

I know this post is from 2010, but I couldn't find a good solution.
I've solved this by creating a separate table that holds the counters. When I need to generate a unique identifier for a column I just call a stored proc:
CREATE DEFINER=`root`@`localhost` PROCEDURE `IncrementCounter`(in id varchar(255))
BEGIN
declare x int;
-- begin;
start transaction;
-- Get the last counter value and lock the record for update.
select Counter+1 from tabel.Counters where CounterId=id into x for update;
-- check if given counter exists and increment value, otherwise create it.
if x is null then
set x = 1;
insert into tabel.Counters(CounterId, Counter) values(id, x);
else
update tabel.Counters set Counter = x where CounterId = id;
end if;
-- select the new value and commit the transaction
select x;
commit;
END
The 'for update' clause locks the row in the counters table. This prevents multiple threads from creating duplicates.
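For completeness, a hedged sketch of the counters table this procedure assumes (the column names are inferred from the procedure body) and of a typical call:
CREATE TABLE tabel.Counters (
  CounterId VARCHAR(255) NOT NULL PRIMARY KEY,
  Counter INT NOT NULL
) ENGINE=InnoDB;
CALL IncrementCounter('users');  -- returns the next value for the 'users' counter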

Related

T-SQL Trigger - Audit Column Change

Given a simple table with an ID, what is the correct way to audit a column being changed? I am asking after looking at various answers which do not seem to work.
Here is what I have:
Create Table Tbl_Audit
(
AuditId int identity(1,1) not null,
Tbl_Id int not null,
Tbl_Old_ColumnValue varchar(255),
Tbl_New_ColumnValue varchar(255)
)
GO
Create Trigger Tr_Tbl_ColumnChanged on Tbl
after insert, update
As
begin
if(update(ColumnName))
begin
insert into tbl_audit
(
Tbl_Id,
Tbl_Old_ColumnValue,
Tbl_New_ColumnValue
)
select
tbl.PKId,
tbl.ColumnName,
i.ColumnName
from
Tbl tbl join
inserted i
on tbl.PKId = i.PKId
end
end
What I see is thousands of rows where Tbl_Old_ColumnValue = Tbl_New_ColumnValue, which is not what I want.
I would expect to run:
select top 10 * from tbl_audit where Tbl_Old_ColumnValue !=Tbl_New_ColumnValue
But this returns no results.
In order to get results of columns that actually changed, I need to run a very expensive query:
select top 10
old.AuditId,
old.Tbl_Old_ColumnValue,
new.Tbl_Old_ColumnValue as [Tbl_New_ColumnValue]
from tbl_audit [old]
join Tbl_Audit [new]
on [old].Tbl_Id = [new].Tbl_Id and [old].AuditId != [new].AuditId
where [old].Tbl_Old_ColumnValue != [new].Tbl_Old_ColumnValue
Results:
AuditId Tbl_Id Tbl_Old_ColumnValue Tbl_New_ColumnValue
10051 1 old_value old_value
10052 1 new_value new_value
But that doesn't produce what I expect:
AuditId Tbl_Id Tbl_Old_ColumnValue Tbl_New_ColumnValue
10057 1 old_value Some New Value
Oddly, if I modify the column directly via SSMS using:
update Tbl set Tbl.ColumnValue = 'Some New Value'
I see what I expect from my trigger:
AuditId Tbl_Id Tbl_Old_ColumnValue Tbl_New_ColumnValue
10057 1 old_value Some New Value
What am I doing wrong?
Also, how do I eliminate auditing of rows where update(ColumnName) is effectively false, i.e. the ColumnName (even if being set) is not audited when it is being set to the previous/old value?
update(ColumnName) doesn't mean that the value has changed, just that that column was involved in the insert/update - and it will always be involved in an insert. You need to compare the old and new values using inserted and deleted e.g.
insert into tbl_audit
(
Tbl_Id,
Tbl_Old_ColumnValue,
Tbl_New_ColumnValue
)
select
i.PKId,
d.ColumnName,
i.ColumnName
from
inserted i
left join deleted d on d.PKId = i.PKId
-- For an insert, d.PKId is null since there are no records in deleted
where d.PKId is null
-- Change from null to value
or (i.ColumnName is null and d.ColumnName is not null)
-- Change from value to null
or (i.ColumnName is not null and d.ColumnName is null)
-- Change in value
or i.ColumnName <> d.ColumnName;
You can potentially simplify the null check using coalesce and a suitable value which will never actually occur in your data.
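For example, the three null/value comparisons above could collapse to something like this (the sentinel string is an assumption and must never occur in your real data):
where d.PKId is null
   or coalesce(i.ColumnName, '~none~') <> coalesce(d.ColumnName, '~none~');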
The documentation is actually pretty good on all this.
And if the column is not always included in an update, then the update(ColumnName) test is still worth doing because it speeds up the trigger, and triggers should be as fast as possible. Personally I short circuit out early e.g. if not update(ColumnName) return;
Obviously you need to adapt that logic to handle all the columns you are auditing.

Enumerate the multiple rows in a multi-update Trigger

I have something like the table below:
CREATE TABLE updates (
id INT PRIMARY KEY IDENTITY (1, 1),
name VARCHAR (50) NOT NULL,
updated DATETIME
);
And I'm updating it like so:
INSERT INTO updates (name, updated)
VALUES
('fred', '2020-11-11'),
('fred', '2020-11-11'),
...
('bert', '2020-11-11');
I need to write an after update trigger that enumerates all the name(s) that were added and adds each one to another table, but I can't work out how to enumerate them.
EDIT: Thanks to those who pointed me in the right direction; I know very little SQL.
What I need to do is something like this
foreach name in inserted
look it up in another table and
retrieve a count of the updates a 'name' has done
add 1 to the count
and update it back into the other table
I can't get to my laptop at the moment, but presumably I can do something like:
BEGIN
SET @count = (SELECT UCount from OTHERTAB WHERE name = ins.name)
SET @count = @count + 1
UPDATE OTHERTAB SET UCount = @count WHERE name = ins.name
SELECT ins.name
FROM inserted ins;
END
and that would work for each name in the update?
Obviously I'll have to read up on set based SQL processing.
Thanks all for the help and pointers.
Based on your edits you would do something like the following... set based is a mindset, so you don't need to compute the count in advance (in fact you can't). It's not clear whether you are counting in the same table or another table - but I'm sure you can work it out.
Points:
Use the Inserted table to determine what rows to update
Use a sub-query to calculate the new value if it's a second table, taking into account the possibility of null
If you are really using the same table, then this should work
BEGIN
UPDATE OTHERTAB SET
UCount = COALESCE(UCount,0) + 1
WHERE [name] in (
SELECT I.[name]
FROM Inserted I
);
END;
If however you are using a second table then this should work:
BEGIN
UPDATE OTHERTAB SET
UCount = COALESCE((SELECT UCount+1 from OTHERTAB T2 WHERE T2.[name] = OTHERTAB.[name]),0)
WHERE [name] in (
SELECT I.[name]
FROM Inserted I
);
END;
Using inserted and a set-based approach (no need for a loop):
CREATE TRIGGER trg
ON updates
AFTER INSERT
AS
BEGIN
INSERT INTO tab2(name)
SELECT name
FROM inserted;
END

SQL - Unique key across 2 columns of same table?

I use SQL Server 2016. I have a database table called "Member".
In that table, I have these 3 columns (for the purpose of my question):
idMember [INT - Identity - Primary Key]
memEmail
memEmailPartner
I want to prevent a row to use an email that already exists in the table.
Both email columns are not mandatory, so they can be left blank (NULL).
If I create a new Member:
If not blank, the values entered for "memEmail" and "memEmailPartner" (independently) should not be found in any other rows in columns memEmail nor memEmailPartner.
So if I want to create a row with email (dominic@email.com) I must not find any occurrences of that value in memEmail or memEmailPartner.
If I update an existing Member:
I must not find any occurrences of that value in memEmail or memEmailPartner, with the exception that I am updating the row (idMember) which already has the value in memEmail or memEmailPartner.
--
From what I read on Google, it should be possible to do something with a Function-Based Check Constraint but I can't make that work.
Anyone have a solution to my problem?
Thank you.
I may have misunderstood exactly what you were asking but it looks like you want a simple upsert query with IF EXISTS conditions.
DECLARE @emailAddress VARCHAR(255)= 'dominic@email.com', --dummy value
@id INT= 2; --dummy value
IF NOT EXISTS
(
SELECT 1
FROM #Member
WHERE memEmail = @emailAddress
OR memEmailPartner = @emailAddress
)
BEGIN
SELECT 'insert';
END;
ELSE IF EXISTS
(
SELECT 1
FROM #Member
WHERE idMember = @id
)
BEGIN
SELECT 'update';
END;
A trigger is the traditional way of doing what you're asking for. Here's a simple demo:
--if object_id('member') is not null drop table member
go
create table member (
idMember INT Identity Primary Key,
memEmail varchar(100),
memEmailPartner varchar(100)
)
go
create trigger trg_member on member after insert, update as
begin
set nocount on
if exists (select 1 from member m join inserted i on i.memEmail = m.memEmail and i.idMember <> m.idMember) or
exists (select 1 from member m join inserted i on i.memEmail = m.memEmailPartner and i.idMember <> m.idMember) or
exists (select 1 from member m join inserted i on i.memEmailPartner = m.memEmail and i.idMember <> m.idMember) or
exists (select 1 from member m join inserted i on i.memEmailPartner = m.memEmailPartner and i.idMember <> m.idMember)
begin
raiserror('Email addresses must be unique.', 16, 1)
rollback
end
end
go
insert member(memEmail, memEmailPartner) values('a@a.com', null), ('b@b.com', null), (null, 'c@c.com'), (null, 'd@d.com')
go
select * from member
insert member(memEmail, memEmailPartner) values('a@a.com', null) -- should fail
go
insert member(memEmail, memEmailPartner) values(null, 'a@a.com') -- should fail
go
insert member(memEmail, memEmailPartner) values('c@c.com', null) -- should fail
go
insert member(memEmail, memEmailPartner) values(null, 'c@c.com') -- should fail
go
insert member(memEmail, memEmailPartner) values('e@e.com', null) -- should work
go
insert member(memEmail, memEmailPartner) values(null, 'f@f.com') -- should work
go
select * from member
-- Make sure updates still work!
update member set memEmail = memEmail, memEmailPartner = memEmailPartner
I've not tested this extensively but it should be enough to get you started if you want to try this approach.
StuartLC notes the potential for the UDF check constraint to fail in set-based updates and/or various other conditions; triggers don't have this problem.
Stuart also suggests reconsidering whether this should really be a database constraint or managed through business logic elsewhere. I'm inclined to agree - my gut feel here is that sooner or later you will come across a situation that requires email addresses to be reused, or in some other way not strictly unique.
TL;DR
The wisdom of applying this kind of business rule logic in the database needs to be reconsidered - this check is likely a better candidate for your application, or for a stored procedure which acts as an insert gatekeeper instead of allowing direct new-row inserts into the table.
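As a rough sketch of such a gatekeeper procedure (the procedure name and error handling are assumptions, not from the question):
CREATE PROCEDURE dbo.InsertMember
    @memEmail VARCHAR(100),
    @memEmailPartner VARCHAR(100)
AS
BEGIN
    SET NOCOUNT ON;
    IF EXISTS (SELECT 1 FROM dbo.Member
               WHERE memEmail IN (@memEmail, @memEmailPartner)
                  OR memEmailPartner IN (@memEmail, @memEmailPartner))
    BEGIN
        RAISERROR('Email addresses must be unique.', 16, 1);
        RETURN;
    END;
    INSERT dbo.Member (memEmail, memEmailPartner)
    VALUES (@memEmail, @memEmailPartner);
END;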
Ignoring the Warnings
That said, I do believe that what you want is possible in a constraint UDF, albeit with potentially atrocious performance consequences*1, and likely prone to race conditions in set-based updates.
Here's a user defined function which applies the unique email logic across both columns. Note that by the time the constraint is checked, the row is already IN the table, hence the new row itself needs to be excluded from the duplicate checks.
My code is also dependent on ANSI NULL behaviour, i.e. that the predicates NULL = NULL and X IN (NULL) both return NULL, and hence are excluded from the failure check (in order to meet your requirement that NULLs do not fail the rule).
We also need to check for the insert of BOTH new columns being non-null, but duplicated.
So here's a UDF doing the checking:
CREATE FUNCTION dbo.CheckUniqueEmails(@id int, @memEmail varchar(50),
@memEmailPartner varchar(50))
RETURNS bit
AS
BEGIN
DECLARE @retval bit;
IF @memEmail = @memEmailPartner
OR EXISTS (SELECT 1 FROM MyTable WHERE memEmail IS NOT NULL
AND memEmail IN(@memEmail, @memEmailPartner) AND idMember <> @id)
OR EXISTS (SELECT 1 FROM MyTable WHERE memEmailPartner IS NOT NULL
AND memEmailPartner IN(@memEmail, @memEmailPartner) AND idMember <> @id)
SET @retval = 0
ELSE
SET @retval = 1;
RETURN @retval;
END;
GO
Which is then enforced in a CHECK constraint:
ALTER TABLE MyTable ADD CHECK (dbo.CheckUniqueEmails(
idMember, memEmail, memEmailPartner) = 1);
I've put a SQLFiddle up here
Uncomment the 'failed' test cases to ensure that the above check constraint is working.
I haven't tested this with updates, and as per Martin's advice on the link, this will likely break on an insert with multiple rows.
*1 - we'll need indexes on BOTH email address columns.

Generating the Next Id when Id is non-AutoNumber

I have a table called Employee. The EmpId column serves as the primary key. In my scenario, I cannot make it AutoNumber.
What would be the best way of generating the next EmpId for the new row that I want to insert in the table?
I am using SQL Server 2008 with C#.
Here is the code that I currently have, but it is only used to generate IDs for key-value pair tables or link tables (m:n relations):
Create PROCEDURE [dbo].[mSP_GetNEXTID]
@NEXTID int out,
@TABLENAME varchar(100),
@UPDATE CHAR(1) = NULL
AS
BEGIN
DECLARE @QUERY VARCHAR(500)
BEGIN
IF EXISTS (SELECT LASTID FROM LASTIDS WHERE TABLENAME = @TABLENAME and active=1)
BEGIN
SELECT @NEXTID = LASTID FROM LASTIDS WHERE TABLENAME = @TABLENAME and active=1
IF(@UPDATE IS NULL OR @UPDATE = '')
BEGIN
UPDATE LASTIDS
SET LASTID = LASTID + 1
WHERE TABLENAME = @TABLENAME
and active=1
END
END
ELSE
BEGIN
SET @NEXTID = 1
INSERT INTO LASTIDS(LASTID,TABLENAME, ACTIVE)
VALUES(@NEXTID+1,@TABLENAME, 1)
END
END
END
Using MAX(id) + 1 is a bad idea, both performance- and concurrency-wise.
Instead you should use sequences, which were designed specifically for this kind of problem.
CREATE SEQUENCE EmpIdSeq AS bigint
START WITH 1
INCREMENT BY 1;
And to generate the next id use:
SELECT NEXT VALUE FOR EmpIdSeq;
You can use the generated value in a insert statement:
INSERT Emp (EmpId, X, Y)
VALUES (NEXT VALUE FOR EmpIdSeq, 'x', 'y');
And even use it as default for your column:
CREATE TABLE Emp
(
EmpId bigint PRIMARY KEY CLUSTERED
DEFAULT (NEXT VALUE FOR EmpIdSeq),
X nvarchar(255) NULL,
Y nvarchar(255) NULL
);
Update: The above solution is only applicable to SQL Server 2012+. For older versions you can simulate the sequence behavior using dummy tables with identity fields:
CREATE TABLE EmpIdSeq (
SeqID bigint IDENTITY PRIMARY KEY CLUSTERED
);
And procedures that emulates NEXT VALUE:
CREATE PROCEDURE GetNewSeqVal_Emp
@NewSeqVal bigint OUTPUT
AS
BEGIN
SET NOCOUNT ON
INSERT EmpIdSeq DEFAULT VALUES
SET @NewSeqVal = scope_identity()
DELETE FROM EmpIdSeq WITH (READPAST)
END;
Usage example:
DECLARE @NewSeqVal bigint
EXEC GetNewSeqVal_Emp @NewSeqVal OUTPUT
The performance overhead of deleting the last inserted element will be minimal; still, as pointed out by the original author, you can optionally remove the delete statement and schedule a maintenance job to delete the table contents off-hour (trading space for performance).
Adapted from SQL Server Customer Advisory Team Blog.
Working SQL Fiddle
The above
select max(empid) + 1 from employee
is the way to get the next number, but if there are multiple users inserting into the database, then context switching might cause two users to get the same value for empid, each add 1 to it, and end up with duplicate ids. If you do have multiple users, you may have to lock the table while inserting. This is not best practice, and that is why auto increment exists for database tables.
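If you do stay with the MAX()+1 approach, a hedged sketch of how the read and the insert can be serialised with lock hints inside one transaction (the column names beyond EmpId are placeholders):
BEGIN TRANSACTION;
DECLARE @NextId INT;
SELECT @NextId = ISNULL(MAX(EmpId), 0) + 1
FROM Employee WITH (UPDLOCK, HOLDLOCK);  -- blocks other sessions doing the same read until commit
INSERT INTO Employee (EmpId /*, other columns */)
VALUES (@NextId /*, other values */);
COMMIT TRANSACTION;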
I hope this works for you. Considering that your ID field is an integer:
INSERT INTO Table WITH (TABLOCK)
SELECT (SELECT CASE WHEN MAX(ID) IS NULL
THEN 1 ELSE MAX(ID)+1 END FROM Table), VALUE_1, VALUE_2....
Try the following query:
INSERT INTO Table
SELECT isnull(MAX(ID),0)+1, VALUE_1, VALUE_2.... FROM Table
You have to check ISNULL on the MAX value, otherwise it will return NULL in the final result when the table contains no rows.

Checking sql unique value with constraint

I have a situation where a table has three columns: ID, Value and Status. For a distinct ID there should be only one status with value 1, and it should be allowed for an ID to have more than one status with value 0. A unique key would prevent an ID from having more than one status (0 or 1).
Is there a way to solve this, maybe using constraints?
Thanks
You can create an indexed view that will uphold your constraint of keeping ID unique for [Status] = 1.
create view dbo.v_YourTable with schemabinding as
select ID
from dbo.YourTable
where [Status] = 1
go
create unique clustered index UX_v_UniTest_ID on v_YourTable(ID)
In SQL Server 2008 you could use a unique filtered index instead.
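For example (assuming the same table and column names as the indexed view above):
CREATE UNIQUE NONCLUSTERED INDEX UX_YourTable_ID_ActiveStatus
ON dbo.YourTable (ID)
WHERE [Status] = 1;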
If the table can have duplicate ID values, then a check constraint wouldn't work for your situation. I think the only way would be to use a trigger. If you are looking for an example then I can post one. But in summary, use a trigger to test if the inserted/updated ID has a status of 1 that is duplicated across the same ID.
EDIT: You could always use a unique constraint on ID and Value. I'm thinking that will give you what you are looking for.
You could put this into an insert/ update trigger to check to make sure only one combination exists with the 1 value; if your condition is not met, you could throw a trappable error and force the operation to roll back.
If you can use NULL instead of 0 for a zero-status, then you can use a UNIQUE constraint on the pair and it should work. Since NULL is not an actual value (NULL != NULL), then rows with multiple nulls should not conflict.
IMHO, this basically is a normalisation problem. The column named "id" does not uniquely address a row, so it can never be a PK. At least a new (surrogate) key(element) is needed. The constraint itself cannot be expressed as an expression "within the row", so it has to be expressed in terms of a FK.
So it breaks down into two tables:
One with PK=id, and a FK REFERENCING two.sid
Two with PK= surrogate key, and FK id REFERENCING one.id
The original payload "value" also lives here.
The "one bit variable" disappears, because it can be expressed in terms of EXISTS. (effectively table one points to the row that holds the token)
[I expect the Postgres rule system could be used to use the above two-tables-model to emulate the intended behaviour of the OP. But that would be an ugly hack...]
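A rough sketch of that two-table layout in Postgres DDL (an illustration of the idea only; the deferrable constraints are my addition to make the circular references insertable):
CREATE TABLE two (
  sid INTEGER PRIMARY KEY,   -- surrogate key
  id INTEGER NOT NULL,       -- the logical id this row belongs to
  value TEXT                 -- original payload lives here
);
CREATE TABLE one (
  id INTEGER PRIMARY KEY,
  sid INTEGER NOT NULL REFERENCES two(sid) DEFERRABLE INITIALLY DEFERRED  -- points at the row holding the "1" token
);
ALTER TABLE two ADD FOREIGN KEY (id) REFERENCES one(id) DEFERRABLE INITIALLY DEFERRED;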
EDIT/UPDATE:
Postgres supports partial/conditional indices. (don't know about ms-sql)
DROP TABLE tmp.one;
CREATE TABLE tmp.one
( sid INTEGER NOT NULL PRIMARY KEY -- surrogate key
, id INTEGER NOT NULL
, status INTEGER NOT NULL DEFAULT '0'
/* ... payload */
);
INSERT INTO tmp.one(sid,id,status) VALUES
(1,1,0) , (2,1,1) , (3,1,0)
, (4,2,0) , (5,2,0) , (6,2,1)
, (7,3,0) , (8,3,0) , (9,3,1)
;
CREATE UNIQUE INDEX only_one_non_zero ON tmp.one (id)
WHERE status > 0 -- "partial index"
;
\echo this should succeed
BEGIN ;
UPDATE tmp.one SET status = 0 WHERE sid=2;
UPDATE tmp.one SET status = 1 WHERE sid=1;
COMMIT;
\echo this should fail
BEGIN ;
UPDATE tmp.one SET status = 1 WHERE sid=4;
UPDATE tmp.one SET status = 0 WHERE sid=9;
COMMIT;
SELECT * FROM tmp.one ORDER BY sid;
I came up with a solution.
First, create a function:
CREATE FUNCTION [dbo].[Check_Status] (@ID int)
RETURNS INT
AS
BEGIN
DECLARE @r INT;
SET @r =
(SELECT SUM(status) FROM dbo.[table] where ID = @ID);
RETURN @r;
END
Second, create a constraint on the table:
([dbo].[Check_Status]([ID])<(2))
In this way one ID can have a single status 1 row and any number of status 0 rows.
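Spelled out as a complete statement (using the same placeholder table name the function queries), the constraint would look something like:
ALTER TABLE dbo.[table]
ADD CONSTRAINT CK_OneActiveStatusPerId CHECK ([dbo].[Check_Status]([ID]) < 2);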
create function dbo.IsValueUnique
(
@proposedValue varchar(50)
,@currentId int
)
RETURNS bit
AS
/*
--EXAMPLE
print dbo.IsValueUnique() -- fail
print dbo.IsValueUnique(null) -- fail
print dbo.IsValueUnique(null,1) -- pass
print dbo.IsValueUnique('Friendly',1) -- pass
*/
BEGIN
DECLARE @count bit
set @count =
(
select count(1)
from dbo.MyTable
where @proposedValue is not null
and dbo.MyTable.MyPkColumn != @currentId
and dbo.MyTable.MyColumn = @proposedValue
)
RETURN case when @count = 0 then 1 else 0 end
END
GO
ALTER TABLE MyTable
WITH CHECK
add constraint CK_ColumnValueIsNullOrUnique
CHECK ( 1 = dbo.IsValueUnique([MyColumn],[MyPkColumn]) )
GO