I am working in SQL Server 2017, and I have a table of the form:
tbl_current
COL1 COL2
-----------
A 1
B 3
C 56
which I want to periodically insert into the table tbl_release.
This table would have an extra ID column, which I'd like to auto-increment with each "batch insertion". For example, let's say I perform the first ingestion of tbl_current into tbl_release; it would look like this:
tbl_release
ID COL1 COL2
----------------
1 A 1
1 B 3
1 C 56
Now, let's say I perform another ingestion with the same data; it'd look like:
tbl_release
ID COL1 COL2
----------------
1 A 1
1 B 3
1 C 56
2 A 1
2 B 3
2 C 56
What is the best way to achieve this? Is there some SQL Server feature that would allow me to achieve this, or do I need to run some subqueries?
I'd personally use a sequence for this. Assuming the insert into your ephemeral table is already done, it'd look something like this:
declare @ID int = next value for dbo.mySequence;

insert into tbl_release
    (ID, col1, col2)
select @ID, col1, col2
from tbl_current;
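The sequence itself has to exist first; dbo.mySequence is just a name assumed here. A minimal definition could look like this:
create sequence dbo.mySequence
    as int
    start with 1
    increment by 1;
Each batch then pulls exactly one new value from the sequence, and every row ingested in that batch shares it.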
You can try this using the MAX() function as shown below.
Declare @maxId int
set @maxId = (Select isnull(max(id), 0) from tbl_release)
set @maxId = @maxId + 1

--Now to insert
insert into tbl_release values (@maxId, <Column1Value>, <Column2Value>)
For a multi-row insert you can try this
INSERT INTO tbl_release
SELECT @maxId, col1, col2 FROM tbl_Current
If your table's Id column is an identity column, then you can also use SCOPE_IDENTITY() to get the last Id value generated in the current scope.
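For illustration, here is a minimal sketch of that approach, assuming a hypothetical variant of the table (tbl_release_ident) where Id is declared as an identity column:
-- tbl_release_ident is assumed to have Id defined as int identity(1,1)
insert into tbl_release_ident (col1, col2) values ('A', 1)
select SCOPE_IDENTITY()  -- returns the Id generated by the insert above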
Your id is really an object. I would strongly suggest that you give it a full table:
create table batches (
batch_id int identity(1, 1) primary key,
batch_start_date datetime,
. . .
);
Then, your existing table should be structured as:
create table releases (
release_id int identity(1, 1) primary key,
batch_id int not null references batches(batch_id),
col1 char(1),
col2 int
);
That way, your database has referential integrity.
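With that schema in place, a batch ingestion could look something like the following sketch (batch_start_date is the column from the batches definition above; the rest is assumed wiring):
declare @batch_id int;

-- register the new batch and capture its generated id
insert into batches (batch_start_date) values (getdate());
set @batch_id = SCOPE_IDENTITY();

-- stamp every ingested row with that batch id
insert into releases (batch_id, col1, col2)
select @batch_id, col1, col2
from tbl_current;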
Related
I have a table where I would like to do the following:
The data comes in batches. Each batch is combined with an id.
This ID only gets sent once, when the new batch comes in. After that, the ID only changes when there is a new batch. In the meantime the value stays null.
What I need to do is: if new data comes in and it has the same id as the previous batch, I have to continue inserting null in the id field instead of pushing a new row with the same id value.
Below is a simplified view of the table:
ID     Values
-------------
1      10
null   20
null   20
null   20
null   20
2      20
null   20
null   20
null   20
null   20
1      20
null   20
If you could point me in a direction, that would help me a lot.
To clarify: the id value is one of a set of tags. There are some defined tags (100 or more), and when a new batch comes in, the batch gets a tag with it. If that tag is the same as the previous one, the null has to continue instead of inserting the same tag.
You'll need to add an identity field (or a timestamp) in order to be able to query the latest ID.
ALTER TABLE MyTable ADD MyIdent INT IDENTITY(1, 1) NOT NULL
Then on your insert (if your Id value is NULL) you can call
INSERT INTO MyTable (Id, [Values])
SELECT TOP 1 Id, @ValuesVariable
FROM MyTable
WHERE Id IS NOT NULL
ORDER BY MyIdent DESC
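Putting it together with the rule from the question, a sketch of the insert logic could look like this (@IncomingId and @ValuesVariable are assumed inputs, shown here with example values):
DECLARE @IncomingId INT = 2        -- tag arriving with the new batch
DECLARE @ValuesVariable INT = 20   -- payload value
DECLARE @lastId INT

-- fetch the tag of the most recent batch
SELECT TOP 1 @lastId = Id
FROM MyTable
WHERE Id IS NOT NULL
ORDER BY MyIdent DESC

-- same tag as the previous batch: store NULL instead of repeating it
IF @IncomingId = @lastId
    INSERT INTO MyTable (Id, [Values]) VALUES (NULL, @ValuesVariable)
ELSE
    INSERT INTO MyTable (Id, [Values]) VALUES (@IncomingId, @ValuesVariable)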
The stored procedure below may help to insert the data; try this:
IF OBJECT_ID('tempdb..#Temp') IS NOT NULL
DROP TABLE #Temp
CREATE TABLE #Temp (ID INT, [Values] INT)

CREATE PROCEDURE usp_Insert
(
    @Id INT,
    @Values INT
)
AS
BEGIN
    IF NOT EXISTS (SELECT 1 FROM #Temp WHERE ID = @Id)
    BEGIN
        INSERT INTO #Temp (ID, [Values])
        SELECT @Id, @Values
    END
    ELSE
        INSERT INTO #Temp (ID, [Values])
        SELECT NULL, @Values
END
EXEC usp_Insert 2,12
SELECT * FROM #Temp
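Calling the procedure a second time with the same id (and an arbitrary new value) exercises the ELSE branch, so the repeated tag is stored as NULL:
EXEC usp_Insert 2,15   -- same id as the previous batch, so ID is inserted as NULL
SELECT * FROM #Temp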
I want to copy rows from the table into the table itself. But before inserting I need to modify a varchar column, appending the value of the identity column to it.
My table structure is:
secID docID secName secType secBor
1 5 sec-1 G 9
2 5 sec-2 H 12
3 5 sec-3 G 12
4 7 sec-4 G 12
5 7 sec-5 H 9
If I want to copy the data of, say, docID 5, currently this runs through a loop one row at a time.
I can write my query as
insert into tableA (docID, secName, secType, secBor)
select 8, secName, secType, secBor from tableA where docID = 5
But how can I set the value of secName beforehand so that it becomes sec-<value of secID column>?
Don't try to guess the value of the identity column. In your case you could simply create a computed column: secName AS CONCAT('sec-', secID). There is no further need to update that column.
DB Fiddle
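As a sketch, assuming the tableA structure from the question and that the stored secName values can be discarded, the switch to a computed column could look like:
-- replace the stored secName column with a computed definition
alter table tableA drop column secName;
alter table tableA add secName as CONCAT('sec-', secID);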
It is also possible to create an AFTER INSERT trigger to update the column.
Since SQL Server does not have GENERATED ALWAYS AS ('Sec - ' + id), the only simple option I see is to use a trigger.
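A minimal sketch of such a trigger, assuming the tableA structure from the question, might look like this:
CREATE TRIGGER trg_tableA_secName
ON tableA
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- rewrite secName from the identity value of each newly inserted row
    UPDATE a
    SET secName = 'sec-' + CAST(a.secID AS varchar(10))
    FROM tableA AS a
    INNER JOIN inserted AS i ON a.secID = i.secID;
END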
Adding to my comment something like:
insert into tableA (docID, secName, secType, secBor)
select
    ROW_NUMBER() OVER (ORDER BY DocID),
    'Sec -' + CAST(ROW_NUMBER() OVER (ORDER BY DocID) AS varchar(10)),
    secType, secBor
from tableA
where docID = 5
In SQL Server 2012 and later, you can achieve this by using the new sequence object.
CREATE SEQUENCE TableAIdentitySequence
    START WITH 1
    INCREMENT BY 1 ;
GO

create table TableA
(
    secId int default (NEXT VALUE FOR TableAIdentitySequence) not null primary key,
    varcharCol nvarchar(50)
)

declare @nextId int;
select @nextId = NEXT VALUE FOR TableAIdentitySequence
insert TableA (secId, varcharCol)
values (@nextId, N'Data #' + cast(@nextId as nvarchar(50)))
I created the table 'test':
create table test
(
    column1 varchar(10),
    column2 varchar(10)
)
and added the values
insert into test values('value1','value2')
insert into test values('value1','value2')
But now I need to create a column that will be the primary key, and I cannot use the IDENTITY property because the values will be controlled by the application.
alter table test add ID int
How do I populate the ID values that are currently NULL so that they end up in sequence?
result from 'select * from test':
column1 column2 ID
value1 value2 NULL
value1 value2 NULL
Try this
;WITH cte
AS
(
SELECT *, ROW_NUMBER() OVER(ORDER BY column1, column2) AS RowNum FROM test
)
UPDATE cte
SET ID = RowNum
Now check your table records
SELECT * FROM test
You can do this:
- add a nullable column Id
- update Id with a value
- set Id as not null
- make Id the primary key
create table test (
column1 varchar(10)
, column2 varchar(10)
);
insert into test values
('value1','value2')
,('value1','value2');
alter table test add Id int null;
update t set Id = rn
from (
select *
, rn = row_number() over (order by column1, column2)
from test
) as t;
alter table test alter column Id int not null;
alter table test
add constraint pk_test primary key clustered (Id);
select * from test;
test setup: http://rextester.com/DCB57058
results:
+---------+---------+----+
| column1 | column2 | Id |
+---------+---------+----+
| value1 | value2 | 1 |
| value1 | value2 | 2 |
+---------+---------+----+
Add a temporary identity column, copy its values into ID, then drop the temp column:
CREATE TABLE #test
(
column1 varchar(10),
column2 varchar(10),
)
INSERT INTO #test
VALUES
('aa','aa'),
('bb','bb')
ALTER TABLE #test
ADD ID INT
ALTER TABLE #test
ADD TempID INT IDENTITY(1,1)
UPDATE t
SET
t.ID = t.TempID
FROM #test t
ALTER TABLE #test
DROP COLUMN TempID
SELECT *
FROM #test t
DROP TABLE #test
In the first place, you can't create a PK on a column containing NULL values (see the MSDN documentation).
If you want your application to create your PK values, you have to supply them at INSERT time, or set some "default" value(s) before your application edits them. The second option is dangerous for various reasons (can you trust your application to ensure uniqueness? how will you handle a lot of INSERTs in a short time? etc.).
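As a minimal sketch of the first option, using the test table from this question, the application simply sends the key along with the row:
-- the application tracks the next key itself and supplies it explicitly
insert into test (ID, column1, column2) values (3, 'value1', 'value2');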
I'm using SQL Server 2012, and I have a primary key column of type bigint.
Sometimes on new insertions the new primary key takes a huge jump of 1,000 or 10,000.
For example:
ID
--
1
2
3
4
5
6
7
8
9
10001
10002
Why does this happen? Is this a bug?
This is the documented behaviour of SQL Server 2012: identity values are pre-allocated in a cache (1,000 at a time for int, 10,000 for bigint), and any unused cached values are lost when the instance restarts unexpectedly, which produces exactly these gaps.
To avoid the jumps, you can switch to a sequence and make sure you add the NO CACHE option when creating it or in its properties, like this:
create sequence Sequence1
as int
start with 1
increment by 1
no cache
go
create table Table1
(
id int primary key,
col1 varchar(50)
)
go
create trigger Trigger1
on Table1
instead of insert
as
insert Table1
(ID, col1)
select next value for Sequence1
, col1
from inserted
go
insert Table1 (col1) values ('row1');
insert Table1 (col1) values ('row2');
insert Table1 (col1) values ('row3');
select *
from Table1
Hope this helps.
Here is my sample table; the primary key is a composite key of Akey+Bkey:
Akey Bkey ItemSequence
---- ---- ------------
1 1 1
1 5 2
1 7 3
2 7 1
3 2 1
3 3 2
Akey is generated from a SQL 2012 Sequence object ASequence. In most cases I insert one row at a time and when necessary I call NEXT VALUE FOR ASequence. However I need to do an insert from a statement like:
SELECT DENSE_RANK() OVER ( ORDER BY Something) as AKey,
Bkey, Sequence
FROM TABLEB
The OVER clause of NEXT VALUE FOR does not work this way, as I need to be able to insert the records as a set but only increment the sequence once per DENSE_RANK group.
So we have the ALTER SEQUENCE command, and with this I am able to restart the sequence where I want. The caveat is that the restart value must be a constant; it will not accept a variable. My workaround was:
DECLARE @startingID INT
DECLARE @sql VARCHAR(MAX)
DECLARE @newSeed INT

SET @startingID = NEXT VALUE FOR ASequence

INSERT TABLEA
SELECT DENSE_RANK() OVER ( ORDER BY Something) + @startingID as AKey,
    Bkey, Sequence
FROM TABLEB

SELECT @newSeed = MAX(Akey) FROM TABLEA

SET @sql = 'ALTER SEQUENCE ASequence RESTART WITH ' + cast(@newSeed + 1 as varchar(10))
EXEC(@sql)
It seems terrible to have DDL statements in dynamic SQL like this. Is there a better way to do this?
This should do it:
INSERT TABLEA
SELECT NEXT VALUE FOR ASequence OVER(ORDER BY Something) as AKey,
Bkey, Sequence
FROM TABLEB
Or, how about this:
CREATE TABLE TABLEA
(
GroupID INT,
AKey INT,
BKey INT,
ItemSequence INT,
CONSTRAINT PK_TABLEA PRIMARY KEY CLUSTERED
(
GroupID,
AKey,
BKey
)
)
DECLARE @GroupID INT
SET @GroupID = NEXT VALUE FOR ASequence

INSERT TABLEA
SELECT @GroupID, DENSE_RANK() OVER ( ORDER BY Something) as AKey,
    Bkey, Sequence
FROM TABLEB
And if you need the value of AKey exactly as it is in your example, you can compute GroupID+AKey here.
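For instance, something like this sketch (purely illustrative) would read the flat AKey back out of the GroupID variant of the table:
-- derive the question's AKey from the group id plus the per-group rank
SELECT GroupID + AKey AS AKey, BKey, ItemSequence
FROM TABLEA;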