I have a table which does not have an auto-increment column. I have to insert data into this table, each time incrementing the ID by 1000.
So I have:
DECLARE @maxId INT;
SELECT @maxId = MAX(ID) FROM TABLE1;

INSERT INTO TABLE1
(ID, DATA)
VALUES
(@maxId + 1000, DATA),
(@maxId + 2000, DATA),
(@maxId + 3000, DATA)
Instead of explicitly incrementing the ID for each insert, is there a way to have it 'auto-increment'? I can't use LAST_INSERT_ID() or anything like that, simply because the IDs are created in a weird way.
You can declare the field this way:
MyID INT IDENTITY (0,1000);
This will auto-increment the ID by 1000 for each record.
For example:
CREATE TABLE MyTable
(
MyID INT IDENTITY(0,1000),
SField VARCHAR(128)
);
INSERT INTO MyTable (SField) VALUES ('TEST');
INSERT INTO MyTable (SField) VALUES ('TEST1');
INSERT INTO MyTable (SField) VALUES ('TEST2');
SELECT * FROM MyTable
Will yield the following result:
| MyID | SField |
-----------------
| 0 | TEST |
| 1000 | TEST1 |
| 2000 | TEST2 |
You can also do this using ROW_NUMBER():
with v(data, seqnum) as (
select v.data, row_number() over (order by (select null)) as seqnum
from (values (data), (data), (data)) v(data)
)
insert into table1 (id, data)
select @maxid + seqnum * 1000, data
from v;
There is nothing stopping you from doing the following and getting the data inserted correctly.
insert into table1 (ID, DATA)
select (select max(id) from table1) + 1000, DATA
union all
select (select max(id) from table1) + 2000, DATA;
Or is it something else that you meant?
You can get a race condition using max(id) if two users try to insert at the same time - they could both end up with the same ID value. You could try using GUIDs instead of integer IDs (the uniqueidentifier type). Use the NEWID() function, which always returns a new unique GUID. It's a bit of a pain to convert from integer keys to GUID keys, but it's worth it. There is a slight performance hit, however, and GUIDs are much harder to read. One nice advantage is that you can import fresh data from production into your test database without having to worry about duplicate keys.
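If you do switch to GUIDs, a minimal sketch could look like this (the table and column names here are illustrative, not the actual schema from the question):

-- illustrative table using GUID keys instead of integer IDs
CREATE TABLE Table1_Guid
(
    ID   uniqueidentifier NOT NULL DEFAULT NEWID() PRIMARY KEY,
    DATA varchar(100) NULL
);

-- no need to compute the next key; the default supplies it
INSERT INTO Table1_Guid (DATA) VALUES ('row 1'), ('row 2');

If index fragmentation becomes a concern, NEWSEQUENTIALID() (valid only as a column default) generates sequential GUIDs instead of fully random ones.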
You could always just create a new sequence on the fly and drop it each time after you use it:
CREATE SEQUENCE CountBy1000
START WITH 1000
INCREMENT BY 1000;

DECLARE @maxId INT = (SELECT MAX(id) FROM Table1);

INSERT INTO Table1 (ID, DATA)
VALUES (@maxId + NEXT VALUE FOR CountBy1000, DATA),
       (@maxId + NEXT VALUE FOR CountBy1000, DATA);

DROP SEQUENCE CountBy1000;
Related
I have a table like:
ID RANDOM_ID
1 123
10 456
25 789
1 1112
55 1314
10 1516
I want the result to be like:
ID RANDOM_ID
1 123
10 456
25 789
1 123
55 1314
10 456
The same ID should always get the same RANDOM_ID. I'm using an update statement to generate the RANDOM_IDs after creating the table.
CREATE TABLE [RANDOMID_TABLE]([ID] [int] NULL, [RANDOM_ID] [int] NULL)
GO
INSERT INTO [RANDOMID_TABLE] ([ID])
select distinct ABC_ID from RANDOMID_ABC
GO
-- This is the update statement for the RANDOM_ID column in the [RANDOMID_TABLE] table
UPDATE [RANDOMID_TABLE]
SET RANDOM_ID = abs(checksum(NewId()) % 1000000)
Is there something else that I need to add to the update statement?
Please advise.
Why would you use update for this? Just generate the values when you insert them:
insert into [RANDOMID_TABLE] (ID, RANDOM_ID)
select ABC_ID, abs(checksum(NewId()) % 1000000)
from RANDOMID_ABC
group by ABC_ID;
EDIT:
If your problem is collisions, then fix how you do the assignment. Just assign a number . . . randomly:
insert into [RANDOMID_TABLE] (ID, RANDOM_ID)
select ABC_ID, row_number() over (order by newid())
from RANDOMID_ABC
group by ABC_ID;
This is guaranteed to not return duplicates.
At a total guess, are you simply wanting to UPDATE the table so that all rows with a specific ID have the same value for Random_ID? Like this?
CREATE TABLE YourTable (ID int, Random_ID int);
INSERT INTO YourTable
VALUES(1 ,123),
(10,456),
(25,789),
(1 ,1112),
(55,1314),
(10,1516);
GO
WITH CTE AS(
SELECT ID,
Random_ID,
MIN(Random_ID) OVER (PARTITION BY ID) AS Min_Random_ID
FROM YourTable)
UPDATE CTE
SET Random_ID = Min_Random_ID;
GO
SELECT *
FROM YourTable;
GO
DROP TABLE YourTable;
Here is the script you need, using a temporary table (you need it to persist the random result for each unique ID):
DECLARE @Tbl TABLE (ID INT, RANDOM_ID INT)
INSERT @Tbl (Id) VALUES(1), (10), (25), (1), (55), (10)
SELECT Id, abs(checksum(NewId()) % 1000000) AS Random_Id INTO #distinctData FROM @Tbl GROUP BY Id
SELECT D.* FROM @Tbl T JOIN #distinctData D ON D.ID = T.ID
DROP TABLE #distinctData
Obviously, you don't need the first two statements, where I create and populate the sample data table.
Result:
Id Random_Id
1 354317
1 354317
10 532304
10 532304
25 874209
55 718643
You want one random value per ID. So one should think that the following would work:
with ids as
(
select distinct id
from randomid_table
)
, ids_with_rnd as
(
select id, abs(checksum(NewId()) % 1000000) as rnd
from ids
)
update randomid_table
set random_id =
(
select rnd
from ids_with_rnd
where ids_with_rnd.id = randomid_table.id
);
It doesn't, however. SQL Server inlines the CTE and re-evaluates NEWID() for every row the update touches, so it still creates different numbers for the same ID.
So, your best bet may be: do your update that does create different values (your original update statement). Then correct the data as follows:
update randomid_table
set random_id =
(
select min(random_id)
from randomid_table rt2
where rt2.id = randomid_table.id
);
Demo: https://dbfiddle.uk/?rdbms=sqlserver_2017&fiddle=504236db66fba0f12dc7e407a51451f8
I want to copy rows from a table back into the same table. But before inserting I need to modify a varchar column by appending the value of the identity column to it.
My table structure is:
secID docID secName secType secBor
1 5 sec-1 G 9
2 5 sec-2 H 12
3 5 sec-3 G 12
4 7 sec-4 G 12
5 7 sec-5 H 9
If I want to copy the data of, say, docID 5, currently this runs through a loop one row at a time.
I can write my query as
insert into tableA (docID, secName, secType, secBor)
select 8, secName, secType, secBor from tableA where docID = 5
But how can I set the value of secName beforehand so that it becomes sec-<value of secID column>?
Don't try to guess the value of the identity column. In your case you could simply create a computed column secName AS CONCAT('sec-', secID). There is no further need to update that column.
DB Fiddle
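In rough form, that computed-column approach could look like the sketch below (this is just an illustration; the column types are guesses and may not match the real table):

CREATE TABLE tableA
(
    secID   int IDENTITY(1,1) PRIMARY KEY,
    docID   int,
    secName AS CONCAT('sec-', secID),  -- derived from the identity value
    secType char(1),
    secBor  int
);

-- copying docID 5 into a new docID; secName fills itself in
INSERT INTO tableA (docID, secType, secBor)
SELECT 8, secType, secBor
FROM tableA
WHERE docID = 5;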
It is also possible to create an AFTER INSERT trigger to update the column.
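A rough sketch of such a trigger might be (the trigger name is made up; it assumes secID is the identity column of tableA):

CREATE TRIGGER trg_tableA_SetSecName
ON tableA
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    -- fill secName from the identity value that was just generated
    UPDATE a
    SET a.secName = 'sec-' + CAST(a.secID AS varchar(10))
    FROM tableA AS a
    INNER JOIN inserted AS i ON i.secID = a.secID;
END;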
Since SQL Server does not have GENERATED ALWAYS AS ('Sec - ' + id), the only simple option I see is to use a trigger.
Adding to my comment something like:
insert into tableA (docID, secName, secType, secBor)
select
ROW_NUMBER() OVER (ORDER BY DocID),
'Sec -' + CAST(ROW_NUMBER() OVER (ORDER BY DocID) AS varchar(10)),
secType, secBor
from tableA
where docID = 5
In SQL Server 2012 and later, you can achieve this by using the new sequence object.
CREATE SEQUENCE TableAIdentitySeqeunce
START WITH 1
INCREMENT BY 1 ;
GO
create table TableA
(
secId int default (NEXT VALUE FOR TableAIdentitySeqeunce) not null primary key,
varcharCol nvarchar(50)
)
declare @nextId int;
select @nextId = NEXT VALUE FOR TableAIdentitySeqeunce
insert TableA (secId, varcharCol)
values (@nextId, N'Data #' + cast(@nextId as nvarchar(50)))
In a simplified scenario I have a table T that looks something like:
Key Value
1 NULL
1 NULL
1 NULL
2 NULL
2 NULL
3 NULL
3 NULL
I also have a very time-consuming function Foo(Key), which must be considered a black box (I must use it, I can't change it).
I want to update table T but in a more efficient way than
UPDATE T SET Value = dbo.Foo(Key)
Basically I would execute Foo only one time for each Key.
I tried something like
WITH Tmp1 AS
(
SELECT DISTINCT Key FROM T
)
, Tmp2 AS
(
SELECT Key, dbo.Foo(Key) Value FROM Tmp1
)
UPDATE T
SET T.Value = Tmp2.Value
FROM T JOIN Tmp2 ON T.Key = Tmp2.Key
but unexpectedly the computing time doesn't change at all, because SQL Server seems to run Foo again on every row.
Any idea to solve this without other temporary tables?
One method is to use a temporary table. You don't have much control over how SQL Server decides to optimize its queries.
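For reference, a minimal sketch of that temporary-table approach, assuming the table T and function dbo.Foo from the question (Key is bracketed because KEY is a reserved word):

-- one row per distinct key
SELECT DISTINCT [Key]
INTO #Keys
FROM T;

-- evaluate the expensive function exactly once per key
SELECT [Key], dbo.Foo([Key]) AS Value
INTO #KeyValues
FROM #Keys;

-- join the cached results back to the target table
UPDATE t
SET t.Value = kv.Value
FROM T AS t
INNER JOIN #KeyValues AS kv ON kv.[Key] = t.[Key];

DROP TABLE #KeyValues;
DROP TABLE #Keys;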
If you don't want a temporary table, you could do two updates:
with toupdate as (
select t.*, row_number() over (partition by [key] order by [key]) as seqnum
from t
)
update toupdate
set value = dbo.foo([key])
where seqnum = 1;
Then you can run a similar update again:
with toupdate as (
select t.*, max(value) over (partition by [key]) as keyvalue
from t
)
update toupdate
set value = keyvalue
where value is null;
You might try it like this:
CREATE FUNCTION dbo.Foo(@TheKey INT)
RETURNS INT
AS
BEGIN
RETURN (SELECT @TheKey*2);
END
GO
CREATE TABLE #tbl(MyKey INT,MyValue INT);
INSERT INTO #tbl(MyKey) VALUES(1),(1),(1),(2),(2),(3),(3),(3);
SELECT * FROM #tbl;
DECLARE @tbl2 TABLE(MyKey INT,TheFooValue INT);
WITH DistinctKeys AS
(
SELECT DISTINCT MyKey FROM #tbl
)
INSERT INTO @tbl2
SELECT MyKey,dbo.Foo(MyKey) TheFooValue
FROM DistinctKeys;
UPDATE #tbl SET MyValue=TheFooValue
FROM #tbl
INNER JOIN @tbl2 AS tbl2 ON #tbl.MyKey=tbl2.MyKey;
SELECT * FROM @tbl2;
SELECT * FROM #tbl;
GO
DROP TABLE #tbl;
DROP FUNCTION dbo.Foo;
I want to update a column's value in this way:
new value = old value + row_number() * 1000
Also, for row_number I want to order by the old value, but I didn't find any solution.
Sample data:
column
1
3
5
After the update query it should be:
column
1001
2003
3005
CREATE VOLATILE TABLE test, NO FALLBACK
(MyCol SMALLINT NOT NULL)
PRIMARY INDEX (MyCol)
ON COMMIT PRESERVE ROWS;
INSERT INTO test VALUES (1);
INSERT INTO test VALUES (3);
INSERT INTO test VALUES (5);
SELECT MyCol FROM test;
UPDATE test
FROM (SELECT MyCol
, ROW_NUMBER() OVER (ORDER BY MyCol) AS RowNum_
FROM test) DT1
SET MyCol = test.MyCol + (DT1.RowNum_ * 1000)
WHERE test.MyCol = DT1.MyCol;
SELECT MyCol FROM TEST;
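The syntax above is Teradata's (volatile table, UPDATE ... FROM a derived table). If you are on SQL Server, as in the rest of this thread, a rough equivalent uses an updatable CTE (assuming the table and column are named test and MyCol as above):

WITH numbered AS
(
    SELECT MyCol,
           ROW_NUMBER() OVER (ORDER BY MyCol) AS RowNum
    FROM test
)
UPDATE numbered
SET MyCol = MyCol + RowNum * 1000;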
I'm using SQL Server 2012, and I have a primary key column of type bigint.
Sometimes, on new insertions, the new primary key takes a huge jump of 1000 or 10000.
For example:
ID
--
1
2
3
4
5
6
7
8
9
10001
10002
Why does this happen? Is this a bug?
This is the default behaviour of SQL Server 2012: identity and sequence values are allocated from an in-memory cache, and the unissued values in that cache are discarded when the service restarts unexpectedly, which produces the gaps.
To avoid this, make sure you add the NO CACHE option when creating the sequence (or in its properties), like this:
create sequence Sequence1
as int
start with 1
increment by 1
no cache
go
create table Table1
(
id int primary key,
col1 varchar(50)
)
go
create trigger Trigger1
on Table1
instead of insert
as
insert Table1
(ID, col1)
select next value for Sequence1
, col1
from inserted
go
insert Table1 (col1) values ('row1');
insert Table1 (col1) values ('row2');
insert Table1 (col1) values ('row3');
select *
from Table1
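For what it's worth, if you are on SQL Server 2017 or later (not the 2012 instance in the question), you can also switch identity caching off at the database level instead of replacing the identity column with a sequence and trigger:

-- SQL Server 2017+ only; disables the identity cache that causes the jumps
ALTER DATABASE SCOPED CONFIGURATION SET IDENTITY_CACHE = OFF;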
Hope this helps.