Let's say I have two tables that implement a very simple invoice system (note: the schema can't be changed):
create table invoices(
id serial primary key,
parent_invoice_id int null references invoices(id),
name text not null
);
create table line_items(
id serial primary key,
invoice_id int not null references invoices(id),
amount int not null
);
The user has the ability to "clone" an invoice, with the clone referring back to the original "parent" invoice. The system needs the new invoice immediately after the clone (the line_items, however, are not needed), so after cloning the invoice, the new invoice row must be returned. Here's the SQL I'm using to clone an invoice:
with new_invoice_row as (
insert into invoices (parent_invoice_id, name)
values (12345/*invoice_to_clone_id*/, 'Hello World')
returning *
),
new_line_item_rows as (
insert into line_items (invoice_id, amount)
select
new_invoice_row.id, line_items.amount
from line_items
cross join new_invoice_row
where
line_items.invoice_id = 12345/*invoice_to_clone_id*/
returning id
)
select * from new_invoice_row;
Questions:
Is the cross join going to perform well? I considered just removing the cross join to avoid the extra join, but then the query wouldn't run (error: missing FROM-clause entry for table "new_invoice_row"):
...
insert into line_items (invoice_id, amount)
select
new_invoice_row.id, line_items.amount
from line_items
where
line_items.invoice_id = 12345
returning id
...
Is there any way the returning id part of the new_line_item_rows statement can be removed? The new line items aren't needed, so I'd like to avoid the extra overhead if it improves performance.
Should I stop using a query and move all of this into a function? The system was originally using an MS SQL database, so I'm more familiar with using declare and having multiple statements use the variable.
The first query needs to return only id and parent_invoice_id.
Use the second returned value (parent_invoice_id) in the second query, to avoid re-writing the argument (a protection against typos).
The cross join is necessary and correct.
You can skip returning * in the second query.
A function is not necessary, although it may be convenient to use (see the sketch after the query).
with new_invoice_row as (
insert into invoices (parent_invoice_id, name)
values (12345, 'Hello World')
returning id, parent_invoice_id
),
new_line_item_rows as (
insert into line_items (invoice_id, amount)
select
new_invoice_row.id, line_items.amount
from line_items
cross join new_invoice_row
where
line_items.invoice_id = new_invoice_row.parent_invoice_id
)
select * from new_invoice_row;
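If you do find a function convenient, a minimal sketch of a wrapper is below (the function name and parameter names are illustrative, not part of the original answer):
create or replace function clone_invoice(p_invoice_id int, p_name text)
returns invoices
language sql as
$$
with new_invoice_row as (
    insert into invoices (parent_invoice_id, name)
    values (p_invoice_id, p_name)
    returning *
),
new_line_item_rows as (
    insert into line_items (invoice_id, amount)
    select new_invoice_row.id, line_items.amount
    from line_items
    cross join new_invoice_row
    where line_items.invoice_id = p_invoice_id
)
select * from new_invoice_row;
$$;
-- usage: returns the freshly cloned invoice row
select * from clone_invoice(12345, 'Hello World');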
create table invoices(
id serial primary key,
parent_invoice_id int null references invoices(id),
name text not null
);
INSERT INTO invoices(parent_invoice_id, name) VALUES
( NULL, 'One')
,( 1, 'two')
,( NULL, 'three')
;
create table line_items(
id serial primary key,
invoice_id int not null references invoices(id),
amount int not null
);
INSERT INTO line_items (invoice_id, amount) VALUES
(1, 10)
,(1, 11)
,(2, 21)
,(2, 22)
,(3, 33)
;
-- for demonstration purposes: the clone+insert as a prepared statement
-- (this is *not* necessary, only convenient)
PREPARE clone_the_invoice (INTEGER, text, INTEGER) AS
WITH new_invoice_row as (
INSERT into invoices (parent_invoice_id, name)
VALUES ( $1 /*invoice_to_clone_id*/, $2 /*name */ )
RETURNING id)
, new_line_item_rows as (
INSERT into line_items (invoice_id, amount)
SELECT new_invoice_row.id, $3 /* amount */
FROM new_invoice_row
RETURNING id
)
SELECT * FROM new_line_item_rows
;
-- call the prepared statement.
-- This will clone invoice#2,
-- and insert one row in items, referring to the cloned row
-- it returns the new item's id, which is sufficient to
-- find the invoice.id too, when needed.
-- -----------------------------------------------------------------
EXECUTE clone_the_invoice (2, 'four', 123);
-- Check the result
SELECT
iv.id
, iv.parent_invoice_id
, iv.name
, li.id AS lineid
, li.amount
FROM invoices iv
JOIN line_items li ON li.invoice_id = iv.id
;
Result:
CREATE TABLE
INSERT 0 3
CREATE TABLE
INSERT 0 5
PREPARE
id
----
6
(1 row)
id | parent_invoice_id | name | lineid | amount
----+-------------------+-------+--------+--------
1 | | One | 1 | 10
1 | | One | 2 | 11
2 | 1 | two | 3 | 21
2 | 1 | two | 4 | 22
3 | | three | 5 | 33
4 | 2 | four | 6 | 123
(6 rows)
And for non-trivial data volumes, the FKs will need supporting indexes (these are not added automatically, so you should create them manually):
CREATE INDEX ON invoices (parent_invoice_id);
CREATE INDEX ON line_items (invoice_id);
Update: if you insist on returning the new invoice, here you go:
PREPARE clone_the_invoice2 (INTEGER, text, integer) AS
WITH new_invoice_row as (
INSERT into invoices (parent_invoice_id, name)
VALUES ( $1 /*invoice_to_clone_id*/, $2 )
RETURNING *
)
, new_line_item_rows as (
INSERT into line_items (invoice_id, amount)
SELECT new_invoice_row.id, $3
FROM new_invoice_row
RETURNING *
)
SELECT iv.*
FROM new_invoice_row iv
JOIN new_line_item_rows new ON new.invoice_id = iv.id
;
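It is called like the first prepared statement; for example (argument values are illustrative):
EXECUTE clone_the_invoice2 (2, 'five', 456);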
UPDATE 2 (it appears the OP wants the detail lines to be cloned, too):
-- Clone an invoice
-- INCLUDING all associated line_items
-- --------------------------------------
PREPARE clone_the_invoice3 (INTEGER, text) AS
WITH new_invoice_row as (
INSERT into invoices (parent_invoice_id, name)
VALUES ( $1 /*invoice_to_clone_id*/
, $2 /* name */
)
RETURNING *
)
, new_line_item_rows as (
INSERT into line_items (invoice_id, amount)
SELECT cl.id -- the cloned invoice
, it.amount
FROM line_items it
CROSS JOIN new_invoice_row cl
WHERE it.invoice_id = $1 -- The original invoice
RETURNING *
)
SELECT iv.*
FROM new_invoice_row iv
JOIN new_line_item_rows new ON new.invoice_id = iv.id
;
EXECUTE clone_the_invoice3 (2, 'four');
Related
Assume that there are two tables:
CREATE TABLE products (id SERIAL, name TEXT);
CREATE TABLE comments (id SERIAL, product_id INT, txt TEXT);
I would like to insert multiple comments for the same product. But I don't know the product_id yet, only the product name.
So I could do:
INSERT INTO comments (txt, product_id) VALUES
( 'cool', (SELECT id from products WHERE name='My product name') ),
( 'great', (SELECT id from products WHERE name='My product name') ),
...
( 'many comments later', (SELECT id from products WHERE name='My product name') );
I'd like to reduce the repetition. How can I do this?
I tried the following, but it inserts no rows:
INSERT INTO
comments (txt, product_id)
SELECT
x.txt,
p.id
FROM
(
VALUES
('Great product'),
('I love it'),
...
('another comment')
) x (txt)
JOIN products p ON p.name = 'My product name';
Your query works just fine. The only way it inserts zero rows is if the products table contains no row for the given string ('My product name' in your query). However, @a_horse_with_no_name's suggestion to use a CROSS JOIN can simplify your query a bit: combine it with a CTE that collects all comments, then CROSS JOIN that with the record you filtered from products.
CREATE TABLE products (id SERIAL, name TEXT);
CREATE TABLE comments (id SERIAL, product_id INT, txt TEXT);
INSERT INTO products VALUES (1, 'My product name'),(2,'Another product name');
WITH j (txt) AS (
VALUES ('Great product'),('I love it'),('another comment')
)
INSERT INTO comments (product_id,txt)
SELECT id,j.txt FROM products
CROSS JOIN j WHERE name = 'My product name';
SELECT * FROM comments;
id | product_id | txt
----+------------+-----------------
1 | 1 | Great product
2 | 1 | I love it
3 | 1 | another comment
My query inserts a value and returns the newly inserted row:
INSERT INTO
event_comments(date_posted, e_id, created_by, parent_id, body, num_likes, thread_id)
VALUES(1575770277, 1, '9e028aaa-d265-4e27-9528-30858ed8c13d', 9, 'December 7th', 0, 'zRfs2I')
RETURNING comment_id, date_posted, e_id, created_by, parent_id, body, num_likes, thread_id
I want to join the created_by value with the user_id from my users table:
SELECT * from users WHERE user_id = created_by
Is it possible to join that new returning row with another table row?
Consider using a WITH structure to pass the data from the insert to a query that can then be joined.
Example:
-- Setup some initial tables
create table colors (
id SERIAL primary key,
color VARCHAR UNIQUE
);
create table animals (
id SERIAL primary key,
a_id INTEGER references colors(id),
animal VARCHAR UNIQUE
);
-- provide some initial data in colors
insert into colors (color) values ('red'), ('green'), ('blue');
-- Store returned data in inserted_animal for use in next query
with inserted_animal as (
-- Insert a new record into animals
insert into animals (a_id, animal) values (3, 'fish') returning *
) select * from inserted_animal
left join colors on inserted_animal.a_id = colors.id;
-- Output
-- id | a_id | animal | id | color
-- 1 | 3 | fish | 3 | blue
Explanation:
A WITH query stores the rows returned from an initial statement, including data produced by a RETURNING clause, in a temporary result set. The expression that follows can then continue to work with those rows, including through a JOIN.
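Applied to the tables from the question (reusing the INSERT and the join condition given there), the same pattern would look roughly like this:
with inserted_comment as (
    insert into event_comments(date_posted, e_id, created_by, parent_id, body, num_likes, thread_id)
    values (1575770277, 1, '9e028aaa-d265-4e27-9528-30858ed8c13d', 9, 'December 7th', 0, 'zRfs2I')
    returning *
)
select ic.*, u.*
from inserted_comment ic
join users u on u.user_id = ic.created_by;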
You were right, I misunderstood
This should do it:
Note that DECLARE and RETURNING ... INTO are PL/pgSQL, not plain SQL, so this has to run inside a function or a DO block:
DO $$
DECLARE
    mycreated_by event_comments.created_by%TYPE;
    matched_user users%ROWTYPE;
BEGIN
    INSERT INTO
      event_comments(date_posted, e_id, created_by, parent_id, body, num_likes, thread_id)
    VALUES(1575770277, 1, '9e028aaa-d265-4e27-9528-30858ed8c13d', 9, 'December 7th', 0, 'zRfs2I')
    RETURNING created_by INTO mycreated_by;

    SELECT * INTO matched_user FROM users WHERE user_id = mycreated_by;
END $$;
I have a one-to-many relationship which I've converted to a many-to-many relationship.
Example:
Main Table (
Id int,
Code varchar(2)
)
Secondary Table (
Id int,
Name varchar(250),
MainId int
)
I have the following entries in the Main table:
Id Code
1 A
2 B
3 C
Secondary table:
Id Name MainId
1 Foo 1
2 Bar 1
3 Foo 2
4 Bar 2
5 Bar 3
Since the values in the 'Name' column of the 'Secondary' table are repeated quite often and the db size has grown considerably, I've decided to convert to a many-to-many relationship and reference only unique 'Name' entries.
As a first step I've created the following join table:
MainSecondary Table (
MainId int,
SecondaryId int
)
For the final step I need to update the existing references and delete duplicate records based on the 'Name' column, which is where I'm stuck (over a million records).
The intended outcome should be:
Main table:
Id Code
1 A
2 B
3 C
Secondary table:
Id Name
1 Foo
2 Bar
MainSecondary table:
MainId SecondaryId
1 (A) 1 (Foo)
1 (A) 2 (Bar)
2 (B) 1 (Foo)
2 (B) 2 (Bar)
3 (C) 1 (Foo)
Set-up
create table main
(
id int,
code varchar(2)
);
create table secondary
(
id int,
name varchar(250),
main_id int
);
insert into main (id, code) values (1, 'A');
insert into main (id, code) values (2, 'B');
insert into main (id, code) values (3, 'C');
insert into secondary (id, name, main_id) values (1, 'Foo', 1);
insert into secondary (id, name, main_id) values (2, 'Bar', 1);
insert into secondary (id, name, main_id) values (3, 'Foo', 2);
insert into secondary (id, name, main_id) values (4, 'Bar', 2);
insert into secondary (id, name, main_id) values (5, 'Bar', 3);
Create new_secondary table
create table new_secondary
(
id int,
name varchar(250)
);
Create new relationship table: main_secondary
create table main_secondary
(
main_id int,
secondary_id int
);
Populate new_secondary table, removing duplicates
insert into new_secondary
(
id,
name
)
select
min(id),
name
from
secondary
group by
name;
Populate main_secondary relationship table
insert into main_secondary
(
main_id,
secondary_id
)
select distinct
a.main_id,
b.id as secondary_id
from
secondary a
join
new_secondary b
on a.name = b.name;
Check the results
select
a.id as main_id,
a.code,
c.id as secondary_id,
c.name
from
main a
join
main_secondary b
on a.id = b.main_id
join
secondary c
on c.id = b.secondary_id;
Results
main_id code secondary_id name
----------- ---- ------------ -------
1 A 1 Foo
2 B 1 Foo
1 A 2 Bar
2 B 2 Bar
3 C 2 Bar
(5 rows affected)
3 (C) 2 (Bar) is different from your example, but I think it's correct.
You would need to drop the old secondary table and rename the new_secondary table (when you are sure everything is OK) to keep things tidy.
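A sketch of that cleanup, once you're sure everything is OK (ALTER TABLE ... RENAME is PostgreSQL/MySQL syntax; on SQL Server use sp_rename instead):
drop table secondary;
alter table new_secondary rename to secondary;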
Suppose I have two tables in my Postgres database:
create table transactions
(
id bigint primary key,
doc_id bigint not null,
-- lots of other columns...
amount numeric not null
);
-- same columns
create temporary table updated_transactions
(
id bigint primary key,
doc_id bigint not null,
-- lots of other columns...
amount numeric not null
);
Both tables have just a primary key, and no unique indexes.
I need to upsert rows from updated_transactions into transactions using the following rules:
id column values in transactions and updated_transactions don't match
other columns like doc_id, etc (except of the amount) should match
when a matching row is found, update both amount and id columns
when a matching row is not found, insert it
id values in updated_transactions are taken from a sequence.
A business object just populates updated_transactions and then merges the
new or updated rows from it into transactions using an upsert query.
So my old unchanged transactions keep their ids intact, and the updated ones
are assigned new ids.
In MSSQL and Oracle, it would be a merge statement similar to this:
merge into transactions t
using updated_transactions ut on t.doc_id = ut.doc_id, ...
when matched then
update set t.id = ut.id, t.amount = ut.amount
when not matched then
insert (t.id, t.doc_id, ..., t.amount)
values (ut.id, ut.doc_id, ..., ut.amount);
In PostgreSQL, I suppose it should be something like this:
insert into transactions(id, doc_id, ..., amount)
select coalesce(t.id, ut.id), ut.doc_id, ... ut.amount
from updated_transactions ut
left join transactions t on t.doc_id = ut.doc_id, ....
on conflict
on constraint transactions_pkey
do update
set amount = excluded.amount, id = excluded.id
The problem is with the do update clause: excluded.id is an old value from the transactions table, while I need the new value from updated_transactions.
ut.id value is inaccessible for the do update clause, and the only thing I can
use is the excluded row. But the excluded row has only coalesce(t.id, ut.id)
expression which returns old id values for the existing rows.
Is it possible to update both id and amount columns using the upsert query?
Create a unique index (or unique constraint) on the columns you use as the key and pass its name in your upsert expression, so that it is used instead of the pkey.
Then the statement will insert the row if no match is found, using the ID from updated_transactions. If it finds a match, you can use excluded.id to get the ID from updated_transactions.
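For example, a sketch of that key (note that the on conflict on constraint form needs a named constraint, which a unique constraint provides, rather than a bare unique index; the '...' columns elided in the question are left as a comment):
alter table transactions
    add constraint transactions_multi_column_unique_index
    unique (doc_id /* , ... the other matching columns */);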
I think that left join transactions is redundant.
So it would look kinda like this:
insert into transactions(id, doc_id, ..., amount)
select ut.id, ut.doc_id, ... ut.amount
from updated_transactions ut
on conflict
on constraint transactions_multi_column_unique_index
do update
set amount = excluded.amount, id = excluded.id
Looks like the task can be accomplished using writable CTEs instead of the plain upsert.
First, I'll post the simpler version of the query, which answers the original question as it was asked. This solution assumes that the doc_id, unit_id columns form a candidate key, but it doesn't require a unique index on these columns.
Test data:
create temp table transactions
(
id bigint primary key,
doc_id bigint,
unit_id bigint,
amount numeric
);
create temp table updated_transactions
(
id bigint primary key,
doc_id bigint,
unit_id bigint,
amount numeric
);
insert into transactions(id, doc_id, unit_id, amount)
values (1, 1, 1, 10), (2, 1, 2, 15), (3, 1, 3, 10);
insert into updated_transactions(id, doc_id, unit_id, amount)
values (6, 1, 1, 11), (7, 1, 2, 15), (8, 1, 4, 20);
The query to merge updated_transactions into transactions:
with new_values as
(
select ut.id new_id, t.id old_id, ut.doc_id, ut.unit_id, ut.amount
from updated_transactions ut
left join transactions t
on t.doc_id = ut.doc_id and t.unit_id = ut.unit_id
),
updated as
(
update transactions tr
set id = nv.new_id, amount = nv.amount
from new_values nv
where id = nv.old_id
returning tr.*
)
insert into transactions(id, doc_id, unit_id, amount)
select ut.new_id, ut.doc_id, ut.unit_id, ut.amount
from new_values ut
where ut.new_id not in (select id from updated);
The results:
select * from transactions
-- id | doc_id | unit_id | amount
------+--------+---------+-------
-- 3 | 1 | 3 | 10 -- not changed
-- 6 | 1 | 1 | 11 -- updated
-- 7 | 1 | 2 | 15 -- updated
-- 8 | 1 | 4 | 20 -- inserted
In my real application doc_id, unit_id aren't always unique, so they don't form a candidate key. To match the rows, I take the row number into account, calculated for the rows sorted by their ids. So here's my second solution.
Test data:
-- the tables are the same as above
insert into transactions(id, doc_id, unit_id, amount)
values (1, 1, 1, 10), (2, 1, 1, 15), (3, 1, 3, 10);
insert into updated_transactions(id, doc_id, unit_id, amount)
values (6, 1, 1, 11), (7, 1, 1, 15), (8, 1, 4, 20);
The merge query:
with trans as
(
select id, doc_id, unit_id, amount,
row_number() over(partition by doc_id, unit_id order by id) row_num
from transactions
),
updated_trans as
(
select id, doc_id, unit_id, amount,
row_number() over(partition by doc_id, unit_id order by id) row_num
from updated_transactions
),
new_values as
(
select ut.id new_id, t.id old_id, ut.doc_id, ut.unit_id, ut.amount
from updated_trans ut
left join trans t
on t.doc_id = ut.doc_id and t.unit_id = ut.unit_id and t.row_num = ut.row_num
),
updated as
(
update transactions tr
set id = nv.new_id, amount = nv.amount
from new_values nv
where id = nv.old_id
returning tr.*
)
insert into transactions(id, doc_id, unit_id, amount)
select ut.new_id, ut.doc_id, ut.unit_id, ut.amount
from new_values ut
where ut.new_id not in (select id from updated);
The results:
select * from transactions;
-- id | doc_id | unit_id | amount
------+--------+---------+-------
-- 3 | 1 | 3 | 10 -- not changed
-- 6 | 1 | 1 | 11 -- updated
-- 7 | 1 | 1 | 15 -- updated
-- 8 | 1 | 4 | 20 -- inserted
I have to write a query that allocates an ID (unique key) for a particular record, one which is not being used / has not been generated / does not exist in the database.
In short, I need to generate an id for a particular record and show it on screen.
E. g.:
ID Name
1 abc
2 def
5 ghi
So it should return ID=3 as the next immediate value that has not been generated yet; after the id is generated, I will store this data back in the database table.
It's not homework: I am doing a project, and I have a requirement where I need to write this query, so I need some help to achieve this.
Please guide me on how to write this query, or how to achieve this.
Thanks.
I am not able to add comments, so I am writing mine here.
I am using MySQL as the database.
My steps would be like this:
1) Retrieve an id from the database table which is not currently in use.
2) As there are a number of users (it's a web-based project), I want no concurrency problems: if an ID is given to one user, the database should be locked until that user receives the id and stores the record for it. After that, another user can retrieve an ID which doesn't exist yet. (Major requirement.)
How can I achieve all of this in MySQL? Also, I suppose Quassnoi's answer is worth trying, but it's not working in MySQL, so please explain the query a bit, as it's new to me, and tell me whether it will work in MySQL.
I named your table unused.
SELECT id
FROM (
SELECT 1 AS id
) q1
WHERE NOT EXISTS
(
SELECT 1
FROM unused
WHERE id = 1
)
UNION ALL
SELECT *
FROM (
SELECT id + 1
FROM unused t
WHERE NOT EXISTS
(
SELECT 1
FROM unused ti
WHERE ti.id = t.id + 1
)
ORDER BY
id
LIMIT 1
) q2
ORDER BY
id
LIMIT 1
This query consists of two parts.
The first part:
SELECT *
FROM (
SELECT 1 AS id
) q
WHERE NOT EXISTS
(
SELECT 1
FROM unused
WHERE id = 1
)
selects 1 if there is no entry in the table with id 1.
The second part:
SELECT *
FROM (
SELECT id + 1
FROM unused t
WHERE NOT EXISTS
(
SELECT 1
FROM unused ti
WHERE ti.id = t.id + 1
)
ORDER BY
id
LIMIT 1
) q2
selects the first id in the table for which there is no next id (id + 1).
The resulting query selects the least of these two values.
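For instance, with the sample data from the question (ids 1, 2, 5) loaded into the answer's table:
CREATE TABLE unused (id INT PRIMARY KEY);
INSERT INTO unused VALUES (1), (2), (5);
-- part one returns no row (id 1 exists);
-- part two returns 2 + 1 = 3 (id 2 is the first id with no successor);
-- so the full query returns 3.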
Depends on what you mean by "next id" and how it's generated.
If you're using a sequence or identity in the database to generate the id, it's possible that the "next id" is not 3 or 4 but 6 in the case you've presented. You have no way of knowing whether or not there were values with id of 3 or 4 that were subsequently deleted. Sequences and identities don't necessarily try to reclaim gaps; once they're gone you don't reuse them.
So the right thing to do is to create a sequence or identity column in your database that's automatically incremented when you do an INSERT, then SELECT the generated value.
The correct way is to use an identity column for the primary key. Don't try to look at the rows already inserted, and pick an unused value. The Id column should hold a number large enough that your application will never run out of valid new (higher) values.
In your description, if you are skipping values that you intend to use later, then you are probably giving some meaning to the values. Please reconsider. You should probably use this field only as a lookup (reference) value from another table.
Let the database engine assign the next higher value for your ID. If you have more than one process running concurrently, you will need to use LAST_INSERT_ID() function to determine the ID that the database generated for your row. You can use LAST_INSERT_ID() function within the same transaction before you commit.
Second best (but not good!) is to use the max value of the index field plus one. You would have to do a table lock to manage the concurrency issues.
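A minimal MySQL sketch of the auto-increment approach (table and column names are illustrative):
CREATE TABLE records (
    id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(100) NOT NULL
);
INSERT INTO records (name) VALUES ('abc');
-- the id MySQL just generated for this connection:
SELECT LAST_INSERT_ID();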
/*
This is a query script I wrote to illustrate my method. It was created to solve a real-world problem where we have multiple machines at multiple stores creating transfer transactions in their own databases, which are then synced to the other databases in the store (this happens often, so getting the Nth free entry for the Nth machine should work), where the transferid is the PK. Those are synced daily to a mainframe where the maximum size of the key (the TransactionID and StoreID) is limited.
*/
--- table variable declarations
/* list of used transaction ids (this is just for testing; when implemented it will be the view or table you read the transaction ids from) */
DECLARE @SampleTransferIDSourceTable TABLE (TransferID INT)
/* here we insert the used transaction numbers */
DECLARE @WorkTable TABLE (WorkTableID INT IDENTITY (1,1), TransferID INT)
/* the same table as above, with an extra column to help us identify the blocks of unused row numbers (modifying a table variable is not a good idea) */
DECLARE @WorkTable2 TABLE (WorkTableID INT, TransferID INT, diff INT)
--- machine ID declared
DECLARE @MachineID INT
-- MachineID set
SET @MachineID = 5
-- put in some rows with different-sized blocks of missing rows.
-- comment out all inserts after the first two to see how it handles no gaps, or make
-- @MachineID very large to do the same.
-- comment out early rows to test how it handles starting gaps.
INSERT @SampleTransferIDSourceTable ( TransferID ) VALUES ( 1 )
INSERT @SampleTransferIDSourceTable ( TransferID ) VALUES ( 2 )
INSERT @SampleTransferIDSourceTable ( TransferID ) VALUES ( 4 )
INSERT @SampleTransferIDSourceTable ( TransferID ) VALUES ( 5 )
INSERT @SampleTransferIDSourceTable ( TransferID ) VALUES ( 6 )
INSERT @SampleTransferIDSourceTable ( TransferID ) VALUES ( 9 )
INSERT @SampleTransferIDSourceTable ( TransferID ) VALUES ( 10 )
INSERT @SampleTransferIDSourceTable ( TransferID ) VALUES ( 20 )
INSERT @SampleTransferIDSourceTable ( TransferID ) VALUES ( 21 )
INSERT @SampleTransferIDSourceTable ( TransferID ) VALUES ( 24 )
INSERT @SampleTransferIDSourceTable ( TransferID ) VALUES ( 25 )
INSERT @SampleTransferIDSourceTable ( TransferID ) VALUES ( 30 )
INSERT @SampleTransferIDSourceTable ( TransferID ) VALUES ( 31 )
INSERT @SampleTransferIDSourceTable ( TransferID ) VALUES ( 33 )
INSERT @SampleTransferIDSourceTable ( TransferID ) VALUES ( 39 )
INSERT @SampleTransferIDSourceTable ( TransferID ) VALUES ( 40 )
INSERT @SampleTransferIDSourceTable ( TransferID ) VALUES ( 50 )
-- copy the transaction ids into a table with an identity column.
-- When implemented, add a where clause before the order by to limit to the local StoreID.
-- A zero row is added so that gaps before the lowest used row are found.
INSERT @WorkTable (TransferID)
SELECT 0
INSERT @WorkTable (TransferID)
SELECT TransferID FROM @SampleTransferIDSourceTable ORDER BY TransferID
-- copy that table to the new table with the diff column
INSERT @WorkTable2
SELECT WorkTableID, TransferID, TransferID - WorkTableID
FROM @WorkTable
--- gives us the (MachineID)th unused ID, or the (MachineID)th id beyond the highest id used.
IF EXISTS (
    SELECT TOP 1
        GapStart.TransferID + @MachineID - (GapStart.diff + 1)
    FROM @WorkTable2 GapStart
    INNER JOIN @WorkTable2 GapEnd
        ON GapStart.WorkTableID = GapEnd.WorkTableID - 1
        AND GapStart.diff < GapEnd.diff
        AND GapEnd.diff >= (@MachineID - 1)
    ORDER BY GapStart.TransferID
)
SELECT TOP 1
    GapStart.TransferID + @MachineID - (GapStart.diff + 1)
FROM @WorkTable2 GapStart
INNER JOIN @WorkTable2 GapEnd
    ON GapStart.WorkTableID = GapEnd.WorkTableID - 1
    AND GapStart.diff < GapEnd.diff
    AND GapEnd.diff >= (@MachineID - 1)
ORDER BY GapStart.TransferID
ELSE
SELECT MAX(TransferID) + @MachineID FROM @SampleTransferIDSourceTable
Should work under MySql.
SELECT TOP 100
    T1.ID + 1 AS FREE_ID
FROM TABLE1 T1
LEFT JOIN TABLE1 T2 ON T2.ID = T1.ID + 1 -- self join: find ids whose successor is missing
WHERE T2.ID IS NULL
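Note that TOP is SQL Server syntax; since the question targets MySQL, the equivalent there uses LIMIT (same hypothetical TABLE1 as above):
SELECT T1.ID + 1 AS FREE_ID
FROM TABLE1 T1
LEFT JOIN TABLE1 T2 ON T2.ID = T1.ID + 1
WHERE T2.ID IS NULL
ORDER BY T1.ID
LIMIT 100;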
Are you allowed to have a utility table? If so, I would create a table like so:
CREATE TABLE number_helper (
n INT NOT NULL
,PRIMARY KEY(n)
);
Fill it with all positive 32-bit integers (assuming the id you need to generate is a positive 32-bit integer).
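In practice, filling all 2^31 values is rarely feasible; a bounded range is a more realistic sketch (MySQL 8+ recursive CTE; the 1,000,000 upper bound is an assumption):
-- allow deeper recursion than the default 1000
SET SESSION cte_max_recursion_depth = 1000000;
INSERT INTO number_helper (n)
WITH RECURSIVE seq (n) AS (
    SELECT 1
    UNION ALL
    SELECT n + 1 FROM seq WHERE n < 1000000
)
SELECT n FROM seq;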
Then you can select like so:
SELECT MIN(h.n) AS nextID
FROM number_helper h
LEFT JOIN my_table t ON t.ID = h.n
WHERE t.ID IS NULL
Haven't actually tested this but it should work.