I need help with the insert statements for a plethora of tables in our DB.
New to SQL - just basic understanding
Summary:

Table1
Col1  Col2    Col3
1     value1  value1
2     value2  value2
3     value3  value3

Table2
Col1  Col2    Col3
4     value1  value1
5     value2  value2
6     value3  value3
Multiple tables use the same sequence of auto-generated primary keys when a user creates a static data record from the GUI.
What I'm looking for, however, is a script to upload static data from one environment to another.
Example from one of the tables:
Insert into RULE (PK_RULE,NAME,RULEID,DESCRIPTION)
values
(4484319,'TESTRULE',14,'TEST RULE DESCRIPTION')
How do I design my insert statement so that it reads the last value from the PK column (4484319 here) and auto inserts 4484320 without explicitly mentioning the same?
Note: Our DB has hundreds of thousands of records.
I think there's something similar to (SELECT MAX(ID) + 1 FROM MyTable) which could potentially solve my problem but I don't know how to use it.
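If I understand correctly, that idea could be written as an INSERT ... SELECT, but I'm not sure it's safe when two sessions insert at the same time (both could read the same MAX):
insert into RULE (PK_RULE, NAME, RULEID, DESCRIPTION)
select max(PK_RULE) + 1, 'TESTRULE', 14, 'TEST RULE DESCRIPTION'
from RULE;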
Multiple tables use the same sequence of auto-generated primary keys when a user creates a static data record from the GUI.
Generally, multiple tables sharing a single sequence of primary keys is a poor design choice. Primary keys only need to be unique per table. If they need to be unique globally there are better options such as UUID primary keys.
Instead, give each table its own independent sequence of primary keys. In MySQL it's id bigint auto_increment primary key. In Postgres you'd use bigserial. In Oracle 12c it's number generated as identity.
create table users (
    id number generated by default on null as identity primary key,  -- "by default on null" so an explicit NULL also draws from the sequence
    name varchar2(100) not null
);

create table things (
    id number generated by default on null as identity primary key,
    description varchar2(400) not null
);
Then you insert into each, leaving off the id, or setting it null. The database will fill it in from each sequence.
insert into users (name) values ('Yarrow Hock'); -- id 1
insert into users (id, name) values (null, 'Reaneu Keeves'); -- id 2
insert into things (description) values ('Some thing'); -- id 1
insert into things (id, description) values (null, 'Shiny stuff'); -- id 2
If your schema is not set up with auto-incrementing, sequenced primary keys, you can alter it to use them. Just be sure to start each sequence at the current maximum ID + 1. This is by far the sanest option in the long run.
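A minimal sketch for one table, assuming Oracle 12c or later; the sequence name is made up, and 4484320 stands in for the current MAX(PK_RULE) + 1 from the example above:
create sequence rule_pk_seq start with 4484320;

-- new rows pick up the sequence automatically when PK_RULE is omitted
alter table rule modify (pk_rule default rule_pk_seq.nextval);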
If you really must draw from a single source for all primary keys, create a sequence and use that.
create sequence master_seq
start with ...
Then get the next key with nextval.
insert into rule (pk_rule, name, ruleid, description)
values (master_seq.nextval, 'TESTRULE', 14, 'TEST RULE DESCRIPTION')
Such a sequence can, by default, go up to 10^28 - 1 (a 28-digit number), which should be plenty.
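Since several tables would share it, a safe START WITH value is one more than the largest key in any of them; something along these lines, where the second table name is only illustrative:
select greatest(
         nvl((select max(pk_rule) from rule), 0),
         nvl((select max(pk_other) from other_static_table), 0)
       ) + 1 as suggested_start
from dual;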
The INSERT and UPDATE statements in Oracle have a ...RETURNING...INTO... clause on them which can be used to return just-inserted values. When combined with a trigger-and-sequence generated primary key (Oracle 11 and earlier) or an identity column (Oracle 12 and up) this lets you get back the most-recently-inserted/updated value.
For example, let's say that you have a table TABLE1 defined as
CREATE TABLE TABLE1 (ID1 NUMBER
GENERATED ALWAYS AS IDENTITY
PRIMARY KEY,
COL2 NUMBER,
COL3 VARCHAR2(20));
You then define a function which inserts data into TABLE1 and returns the new ID value:
CREATE OR REPLACE FUNCTION INSERT_TABLE1(pCOL2 NUMBER, vCOL3 VARCHAR2)
  RETURN NUMBER
AS
  nID NUMBER;
BEGIN
  INSERT INTO TABLE1(COL2, COL3) VALUES (pCOL2, vCOL3)
    RETURNING ID1 INTO nID;
  RETURN nID;
END INSERT_TABLE1;
which gives you an easy way to insert data into TABLE1 and get the new ID value back.
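For example, you could call it from an anonymous PL/SQL block; the argument values below are just placeholders:
DECLARE
  nNewID NUMBER;
BEGIN
  nNewID := INSERT_TABLE1(42, 'some text');  -- placeholder column values
  DBMS_OUTPUT.PUT_LINE('New ID1 = ' || nNewID);
END;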
I am using PostgreSQL and transferred some data from another database into my new one. That table has records starting at PK 200. The table's primary key (bigint, auto-incrementing) currently starts at 0. If I continue to insert records, it will eventually reach 200. My question is, will these records create an issue when trying to insert record 200? Or will PostgreSQL know about the conflict and find the next available auto-increment index (say 234)?
Thanks! If it will cause a conflict, how can I set the current index of my table to the last index of the data (like 234)?
My question is, will these records create an issue when trying to insert record 200?
Assuming that you have a serial column or the like: yes, this will create an issue. The serial has no knowledge that some of its values are already taken, so this will result in a duplicate key error. Meanwhile the sequence increments even on such failed attempts, so the next call will return the next number, and so on.
This is easily reproducible:
create table t (id serial primary key, val text);
insert into t (id, val) values (2, 'blocker');
-- 1 rows affected
insert into t (val) values ('foo');
-- 1 rows affected
insert into t (val) values ('bar');
-- ERROR: duplicate key value violates unique constraint "t_pkey"
-- DETAIL: Key (id)=(2) already exists.
insert into t (val) values ('baz');
-- 1 rows affected
select * from t order by id;
 id |   val
----+---------
  1 | foo
  2 | blocker
  3 | baz
One solution is to reset the sequence: the only safe starting point is the high watermark of the table:
select setval(
't_id_seq',
coalesce((select max(id) + 1 from t), 1),
false
);
Demo on DB Fiddle: you can uncomment the setval() statement to see how it avoids the error.
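If you would rather not hard-code the sequence name, pg_get_serial_sequence() can look it up from the table and column; same logic as above:
select setval(
  pg_get_serial_sequence('t', 'id'),
  coalesce((select max(id) + 1 from t), 1),
  false
);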
I'm expecting data like the below in a SQL Server table:
A resource with id1 will have entries for different versions and can also have different names for different versions.
But a Name cannot be shared among resources. Once id1 uses NameX, no other resource should be able to use the same name.
Please suggest SQL table constraints I can define to achieve this:
Id    Name    Version
---------------------
id1   Name1   1
id1   Name1   2
id1   NameA   3
id1   NameX   4
id2   Name2   1
id2   NameX   2   -- invalid record, NameX is already used for id1
You can use an indexed view with a couple of unique indexes to ensure that each name only appears once per id value in the view and then to make the complete set of names unique:
create table dbo.Ix (ID varchar(20) not null, Name varchar(20) not null,
Version int not null)
go
create view dbo.DRI_Ix_Unique_Names
with schemabinding
as
select
Id,Name,COUNT_BIG(*) as Cnt
from
dbo.Ix
group by
ID,Name
go
create unique clustered index IX_DRI_IX_Unique_Names on dbo.DRI_Ix_Unique_Names (Id,Name)
go
create unique nonclustered index IX_DRI_IX_Unique_Names_Only on
dbo.DRI_Ix_Unique_Names(Name)
go
insert into dbo.Ix(ID,Name,Version) values
('id1','Name1',1)
go
insert into dbo.Ix(ID,Name,Version) values
('id1','Name1',2)
go
insert into dbo.Ix(ID,Name,Version) values
('id1','NameA',3)
go
insert into dbo.Ix(ID,Name,Version) values
('id1','NameX',4)
go
insert into dbo.Ix(ID,Name,Version) values
('id2','Name2',1)
go
insert into dbo.Ix(ID,Name,Version) values
('id2','NameX',2)
This results in five successful inserts followed by an error because the final insert violates the nonclustered unique index.
I'm not sure how the version column factors into your requirements and am not using it in any of the constraints.
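If each version number should also appear only once per resource, a plain key on the base table would cover that; a sketch, with the constraint name chosen freely:
alter table dbo.Ix
    add constraint PK_Ix primary key (ID, Version);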
Create a trigger that checks whether the name already exists for a different Id before accepting a new record, and raises an error if it does. SQL Server has no BEFORE triggers, so use AFTER INSERT and roll back, like this:
CREATE TRIGGER ti_CheckRecord
ON YourTable
AFTER INSERT
AS
BEGIN
    IF EXISTS (SELECT 1
               FROM inserted i
               JOIN YourTable t ON t.name = i.name AND t.id <> i.id)
    BEGIN
        -- reject the whole statement if the name is already used by another id
        ROLLBACK TRANSACTION;
        RAISERROR ('Name is already used by another Id.', 16, 1);
    END
END
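A quick smoke test, assuming YourTable has the id and name columns the trigger refers to (using the sample data from the question):
insert into YourTable (id, name) values ('id1', 'NameX');
insert into YourTable (id, name) values ('id2', 'NameX');  -- rolled back by the trigger with an error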
I'm just starting to wade into backend development after my first few months on the job as a front-end dev. I'm working with PostgreSQL and can't seem to wrap my head around the nextval() function. I read this, but it's not clear to me.
http://www.postgresql.org/docs/current/interactive/functions-sequence.html
What are the benefits/use cases for nextval()?
NEXTVAL is a function that gets the next value from a sequence.
A sequence is an object which returns ever-increasing numbers, different for each call, regardless of transactions etc.
Each time you call NEXTVAL, you get a different number.
This is mainly used to generate surrogate primary keys for your tables.
You can create a table like this:
CREATE SEQUENCE mysequence;
CREATE TABLE mytable (id BIGINT NOT NULL PRIMARY KEY, value INT);
and insert values like this:
INSERT
INTO mytable (id, value)
VALUES
(NEXTVAL('mysequence'), 1),
(NEXTVAL('mysequence'), 2);
and see what you get:
SELECT * FROM mytable;
id | value
----+-------
1 | 1
2 | 2
PostgreSQL offers nice syntactic sugar for this:
CREATE TABLE mytable (id BIGSERIAL PRIMARY KEY, value INT);
which is equivalent to
CREATE SEQUENCE mytable_id_seq; -- the implicit sequence is named table_column_seq
CREATE TABLE mytable (id BIGINT NOT NULL PRIMARY KEY DEFAULT NEXTVAL('mytable_id_seq'), value INT); -- NOT NULL and the DEFAULT are added for you automatically
and can be used like this:
INSERT
INTO mytable (value)
VALUES (1),
(2); -- you can omit id, it will get filled for you.
Note that even if you roll back your insert statement or run concurrent statements from two different sessions, the returned sequence values will never be the same and never get reused (though read the fine print in the docs about CYCLE).
So you can be sure all the primary key values generated will be unique within the table.
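A small sketch of that gap behaviour, continuing the BIGSERIAL example above:
BEGIN;
INSERT INTO mytable (value) VALUES (3);  -- consumes id 3 from the sequence
ROLLBACK;                                -- the row is gone...

INSERT INTO mytable (value) VALUES (4);  -- ...but id 3 is not reused; this row gets id 4

SELECT id, value FROM mytable ORDER BY id;  -- ids 1, 2, 4: a gap where the rollback happened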
I have a table that auto-increments its primary key. How can I return what this value currently is using SQL in HSQLDB?
I found this answer, but it doesn't give a full explanation of how to get it from a specific table.
If the primary key column is declared as IDENTITY, then I don't see a way to get the current value, except for calling the IDENTITY() as described in the other answer, which doesn't give the answer for the specific table.
An alternative is to create the primary key column to use a specific sequence generator instead of IDENTITY. You can then select the current value of the sequence from the INFORMATION_SCHEMA.SEQUENCES table.
The sample below shows how this would work.
create sequence test_seq;
create table test (
id integer generated by default as sequence test_seq,
value varchar(10));
insert into test (value) values ('foo');
insert into test (value) values ('bar');
insert into test (value) values ('bash');
select * from test;
id value
0 'foo'
1 'bar'
2 'bash'
select next_value from information_schema.sequences where sequence_name = 'TEST_SEQ'
3
I've had this come up a couple times in my career, and none of my local peers seems to be able to answer it. Say I have a table that has a "Description" field which is a candidate key, except that sometimes a user will stop halfway through the process. So for maybe 25% of the records this value is null, but for all that are not NULL, it must be unique.
Another example might be a table which must maintain multiple "versions" of a record, and a bit value indicates which one is the "active" one. So the "candidate key" is always populated, but there may be three versions that are identical (with 0 in the active bit) and only one that is active (1 in the active bit).
I have alternate methods to solve these problems (in the first case, enforce the rule in code, either in a stored procedure or in the business layer, and in the second, populate an archive table with a trigger and UNION the tables when I need the history). I don't want alternatives (unless there are demonstrably better solutions); I'm just wondering if any flavor of SQL can express "conditional uniqueness" in this way. I'm using MS SQL, so if there's a way to do it in that, great. I'm mostly just academically interested in the problem.
If you are using SQL Server 2008, a filtered index may be your solution:
http://msdn.microsoft.com/en-us/library/ms188783.aspx
This is how I enforce a unique index while still allowing multiple NULL values:
CREATE UNIQUE INDEX [IDX_Blah] ON [tblBlah] ([MyCol]) WHERE [MyCol] IS NOT NULL
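The same filtered-index idea covers the active-version case from the question; a sketch with made-up table and column names:
CREATE UNIQUE INDEX IDX_OneActivePerRecord
    ON tblVersions (RecordKey)
    WHERE Active = 1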
In the case of descriptions which are not yet completed, I wouldn't have those in the same table as the finalized descriptions. The final table would then have a unique index or primary key on the description.
In the case of the active/inactive, again I might have separate tables as you did with an "archive" or "history" table, but another possible way to do it in MS SQL Server at least is through the use of an indexed view:
CREATE TABLE Test_Conditionally_Unique
(
my_id INT NOT NULL,
active BIT NOT NULL DEFAULT 0
)
GO
CREATE VIEW dbo.Test_Conditionally_Unique_View
WITH SCHEMABINDING
AS
SELECT
my_id
FROM
dbo.Test_Conditionally_Unique
WHERE
active = 1
GO
CREATE UNIQUE CLUSTERED INDEX IDX1 ON Test_Conditionally_Unique_View (my_id)
GO
INSERT INTO dbo.Test_Conditionally_Unique (my_id, active)
VALUES (1, 0)
INSERT INTO dbo.Test_Conditionally_Unique (my_id, active)
VALUES (1, 0)
INSERT INTO dbo.Test_Conditionally_Unique (my_id, active)
VALUES (1, 0)
INSERT INTO dbo.Test_Conditionally_Unique (my_id, active)
VALUES (1, 1)
INSERT INTO dbo.Test_Conditionally_Unique (my_id, active)
VALUES (2, 0)
INSERT INTO dbo.Test_Conditionally_Unique (my_id, active)
VALUES (2, 1)
INSERT INTO dbo.Test_Conditionally_Unique (my_id, active)
VALUES (2, 1) -- This insert will fail
You could use this same method for the NULL/Valued descriptions as well.
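For the descriptions, the view would simply filter out the NULLs before the unique index is applied; a sketch with hypothetical names:
CREATE VIEW dbo.Unique_Descriptions
WITH SCHEMABINDING
AS
SELECT Description
FROM dbo.MyRecords              -- hypothetical table holding the Description column
WHERE Description IS NOT NULL
GO
CREATE UNIQUE CLUSTERED INDEX IX_Unique_Descriptions
    ON dbo.Unique_Descriptions (Description)
GO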
Thanks for the comments, the initial version of this answer was wrong.
Here's a trick using a computed column that effectively allows a nullable unique constraint in SQL Server:
create table NullAndUnique
(
id int identity,
name varchar(50),
uniqueName as case
when name is null then cast(id as varchar(51))
else name + '_' end,
unique(uniqueName)
)
insert into NullAndUnique default values
insert into NullAndUnique default values -- Works
insert into NullAndUnique default values -- not accidentally :)
insert into NullAndUnique (name) values ('Joel')
insert into NullAndUnique (name) values ('Joel') -- Boom!
It basically uses the id when the name is null. The + '_' is to avoid cases where name might be numeric, like 1, which could collide with the id.
I'm not entirely aware of your intended use or your tables, but you could try using a one-to-one relationship. Split this "sometimes unique" column out into a new table, create the UNIQUE index on that column in the new table, and FK back to the original table using the original table's PK. Only have a row in the new table when the "unique" data is supposed to exist; a DDL sketch follows the outline below.
OLD tables:

TableA
    ID      pk
    Col1    sometimes unique
    Col...

NEW tables:

TableA
    ID
    Col...

TableB
    ID      PK, FK to TableA.ID
    Col1    unique index
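A DDL sketch of that split, with illustrative names and types:
CREATE TABLE TableA (
    ID int IDENTITY PRIMARY KEY
    -- Col... (the other columns stay here)
);

CREATE TABLE TableB (
    ID int NOT NULL PRIMARY KEY
        REFERENCES TableA (ID),       -- one-to-one back to TableA
    Col1 varchar(100) NOT NULL UNIQUE
);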
Oracle does. A fully null key is not indexed by a B-tree index in Oracle, and Oracle uses B-tree indexes to enforce unique constraints.
Assuming one wished to allow only a single active version of each ID_COLUMN, based on ACTIVE_FLAG being set to 1:
CREATE UNIQUE INDEX idx_versioning_id ON mytable
(CASE active_flag WHEN 0 THEN NULL ELSE active_flag END,
CASE active_flag WHEN 0 THEN NULL ELSE id_column END);
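Assuming mytable has id_column and active_flag columns and the index above is in place, a quick check might look like this:
insert into mytable (id_column, active_flag) values (1, 0);
insert into mytable (id_column, active_flag) values (1, 0);  -- fine: inactive versions may repeat
insert into mytable (id_column, active_flag) values (1, 1);
insert into mytable (id_column, active_flag) values (1, 1);  -- fails with ORA-00001: only one active row per id_column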