Is it possible to recover data from a temporal table? - sql

Is it possible to recover data from a temporal table?
I defined 2 tables like this:
create table lib.x(
"ID" INTEGER GENERATED ALWAYS AS IDENTITY (
START WITH 1 INCREMENT BY 1
NO MINVALUE NO MAXVALUE
NO CYCLE NO ORDER
CACHE 20
),
char char(1),
row_start TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW BEGIN IMPLICITLY hidden,
row_end TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW END IMPLICITLY hidden,
row_id TIMESTAMP(12) GENERATED ALWAYS AS TRANSACTION START ID IMPLICITLY hidden,
PERIOD SYSTEM_TIME(row_start, row_end)
);
create table lib.x_history like lib.x;
alter TABLE lib.x
ADD VERSIONING USE HISTORY TABLE lib.x_history;
then I did this:
insert into lib.x(char) values('a'), ('b'), ('c');
delete from lib.x where id = 2;
Is it possible to restore the char 'b' with the ID 2?

Yes and no. It is not a system restore, but of course you can query the old state and use it to insert into the regular table. See the section "Querying system-period temporal data" in the Db2 docs.
You would first construct a query that searches AS OF a point in time, then use it as the input to an INSERT statement. You restore the value, but it is treated as if the old row stays deleted and the restored row is inserted as new.
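For the tables above, a minimal sketch could look like this (the "5 minutes ago" timestamp is only an illustration; you could equally query lib.x_history directly):
-- find the deleted row in the old state of lib.x
select id, char
from lib.x for system_time as of (current timestamp - 5 minutes)
where id = 2;
-- re-insert the recovered value; because ID is GENERATED ALWAYS, the
-- restored row normally gets a new identity value unless your Db2 platform
-- lets you override it (e.g. OVERRIDING SYSTEM VALUE where supported)
insert into lib.x(char)
select char
from lib.x for system_time as of (current timestamp - 5 minutes)
where id = 2;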

Related

How to create an insert trigger based on primary key?

I am stuck on a problem creating a trigger in SQL Server. I have MyTable with the following columns (simplified):
Sn - Identity, not nullable;
SomeId - int, not nullable;
miscellaneous other columns
The Sn column is nothing special, values 1,2...n.
SomeId needs to be 1000 + Sn, i.e. this is what I want the trigger to do on insert.
The problem I am having is that a standard AFTER INSERT trigger fails if I don't supply something for SomeId (the error is that null is not allowed). Maybe I am meant to use an INSTEAD OF INSERT trigger, but I am having trouble getting that to work correctly or finding details about it.
The other factor here - I am not even sure if it is possible for what I am trying to do to work. I.e. when a new row is being created and SQL Server generates an Sn (identity column), can a trigger be part of that process and also compute the SomeId value (which needs the Sn value) before inserting?
If not, as a fallback I could either make the SomeId column nullable (not desirable), or always insert 0 into it (and let the trigger fire afterwards to update it), but that would be a bit grim also.
No need for a trigger. Just use a SEQUENCE to generate the ID values instead of an IDENTITY column, e.g.:
create sequence seq_MyTable
start with 1
increment by 1
go
CREATE TABLE MyTable
(
Sn int not null primary key default (next value for seq_MyTable),
SomeID int not null unique default (next value for seq_MyTable) + 1000,
Name VARCHAR(50) NOT NULL
)
insert into MyTable(Name)
values ('A'),('B'),('C')
select * from Mytable
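The intent is that both defaults draw the same sequence value for a given row, so SomeID comes out as Sn + 1000. A sketch of the expected output for a fresh sequence (not verified here):
Sn  SomeID  Name
1   1001    A
2   1002    B
3   1003    C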

Increment by 1 in sequence numbering and dynamically partitioned tables

I am using dynamically created table partitions to store event information in a Postgresql 13 database. The master table from which the child tables inherit their structure contains an id field with an auto-incrementing sequence. The sequence, master table and trigger for inserts look as follows:
CREATE SEQUENCE event_id_seq
INCREMENT 1
START 1
MINVALUE 1
MAXVALUE 9223372036854775807
CACHE 1;
CREATE TABLE event_master
(
id bigint NOT NULL DEFAULT nextval('event_id_seq'::regclass),
event jsonb,
insert_time timestamp
);
CREATE TRIGGER insert_event_trigger
BEFORE INSERT
ON event_master
FOR EACH ROW
EXECUTE PROCEDURE event_insert_function();
Additionally, the event_insert_function() uses the following code to insert new rows posted to the master table:
EXECUTE format('INSERT INTO %I (event, insert_time) VALUES($1,$2)', partition_name) USING NEW.event, NEW.insert_time;
When looking at the sequence numbers in the id field, I only get every other number, i.e. 1,3,5,7, ...
Based on some related information I found, I assume this has something to do with Postgresql counting the initial insert into the master table and the triggered insert into the child table as two occurrences. So my first question is whether this is correct, and if so, what is the rationale behind it and why not "pass through" the insert from master to child?
More importantly though, what do I need to do to set up a properly incrementing sequence (i.e. returning 1,2,3,4 ...)?
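For reference, one common way to avoid the double draw is to reuse the id that the master table's DEFAULT has already assigned when inserting into the partition, and to suppress the insert into the master itself. Roughly like this (a sketch of the relevant part of event_insert_function(), not the poster's actual code):
EXECUTE format('INSERT INTO %I (id, event, insert_time) VALUES ($1, $2, $3)', partition_name)
USING NEW.id, NEW.event, NEW.insert_time;
-- the row has gone to the partition, so skip the insert into event_master itself
RETURN NULL;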

SQL Server Unique Composite Key of Two Field With Second Field Auto-Increment

I have the following problem: I want to have a composite primary key like:
PRIMARY KEY (`base`, `id`);
so that when I insert a base, the id is auto-incremented based on the previous id for the same base.
Example:
base id
A 1
A 2
B 1
C 1
Is there a way when I say:
INSERT INTO table(base) VALUES ('A')
to insert a new record with id 3 because that is the next id for base 'A'?
The resulting table should be:
base id
A 1
A 2
B 1
C 1
A 3
Is it possible to do this on the DB side, since doing it programmatically could cause race conditions?
EDIT
The base currently represents a company, the id represents invoice number. There should be auto-incrementing invoice numbers for each company but there could be cases where two companies have invoices with the same number. Users logged with a company should be able to sort, filter and search by those invoice numbers.
Ever since someone posted a similar question, I've been pondering this. The first problem is that DBs don't provide "partitionable" sequences (that would restart/remember based on different keys). The second is that the SEQUENCE objects that are provided are geared around fast access and can't be rolled back (i.e., you will get gaps). This essentially rules out using a built-in utility... meaning we have to roll our own.
The first thing we're going to need is a table to store our sequence numbers. This can be fairly simple:
CREATE TABLE Invoice_Sequence (base CHAR(1) PRIMARY KEY CLUSTERED,
invoiceNumber INTEGER);
In reality the base column should be a foreign-key reference to whatever table/id defines the business(es)/entities you're issuing invoices for. In this table, you want entries to be unique per issued-entity.
Next, you want a stored proc that will take a key (base) and spit out the next number in the sequence (invoiceNumber). The set of keys necessary will vary (ie, some invoice numbers must contain the year or full date of issue), but the base form for this situation is as follows:
CREATE PROCEDURE Next_Invoice_Number @baseKey CHAR(1),
                                     @invoiceNumber INTEGER OUTPUT
AS
DECLARE @issued TABLE (invoiceNumber INTEGER);
MERGE INTO Invoice_Sequence Stored
USING (VALUES (@baseKey)) Incoming(base)
ON Incoming.base = Stored.base
WHEN MATCHED THEN UPDATE SET Stored.invoiceNumber = Stored.invoiceNumber + 1
WHEN NOT MATCHED BY TARGET THEN INSERT (base, invoiceNumber) VALUES (@baseKey, 1)
OUTPUT INSERTED.invoiceNumber INTO @issued;
-- pass the freshly issued number back through the OUTPUT parameter
SELECT @invoiceNumber = invoiceNumber FROM @issued;
Note that:
You must run this in a serialized transaction
The transaction must be the same one that's inserting into the destination (invoice) table.
That's right, you'll still get blocking per-business when issuing invoice numbers. You can't avoid this if invoice numbers must be sequential, with no gaps - until the row is actually committed, it might be rolled back, meaning that the invoice number wouldn't have been issued.
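Called by hand, inside the same transaction as the invoice insert, it looks roughly like this (a sketch; the Invoice table and its columns are assumed from the question):
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
DECLARE @invoiceNumber INTEGER;
EXEC Next_Invoice_Number 'A', @invoiceNumber OUTPUT;
INSERT INTO Invoice (base, invoiceNumber) VALUES ('A', @invoiceNumber);
COMMIT;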
Now, since you don't want to have to remember to call the procedure for the entry, wrap it up in a trigger:
CREATE TRIGGER Populate_Invoice_Number ON Invoice INSTEAD OF INSERT
AS
BEGIN
-- sketch: this handles a single-row INSERT; a multi-row insert would need a set-based rewrite
DECLARE @base CHAR(1), @invoiceNumber INTEGER
SELECT @base = base FROM inserted
EXEC Next_Invoice_Number @base, @invoiceNumber OUTPUT
INSERT INTO Invoice (base, invoiceNumber)
VALUES (@base, @invoiceNumber)
END
(obviously, you have more columns, including others that should be auto-populated - you'll need to fill them in)
...which you can then use by simply saying:
INSERT INTO Invoice (base) VALUES('A');
So what have we done? Mostly, all this work was about shrinking the number of rows locked by a transaction. Until this INSERT is committed, there are only two rows locked:
The row in Invoice_Sequence maintaining the sequence number
The row in Invoice for the new invoice.
All other rows for a particular base are free - they can be updated or queried at will (deleting information out of this kind of system tends to make accountants nervous). You probably need to decide what should happen when queries would normally include the pending invoice...
You can use an INSTEAD OF INSERT trigger and assign the next value by taking MAX(id) filtered on the incoming base (which is "A" in this case).
That gives you MAX(id) = 2; increment it to MAX(id) + 1 and write the new value into the id column as the row is inserted.
I think this may help you
MSSQL Triggers: http://msdn.microsoft.com/en-in/library/ms189799.aspx
Test Table
CREATE TABLE MyTable
( base CHAR(1),
id INT
)
GO
Trigger Definition
CREATE TRIGGER dbo.tr_Populate_ID
ON dbo.MyTable
INSTEAD OF INSERT
AS
BEGIN
SET NOCOUNT ON;
INSERT INTO MyTable (base,id)
SELECT i.base, ISNULL(MAX(mt.id),0) +1 AS NextValue
FROM inserted i left join MyTable mt
on i.base = mt.base
GROUP BY i.base
END
Test
Execute the following statement multiple times and you will see that the next value available in each group is assigned to id.
INSERT INTO MyTable (base) VALUES
('A'),
('B'),
('C')
GO
SELECT * FROM MyTable
GO
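After running the insert twice, the result should look something like this (expected output; row order is not guaranteed):
base id
A 1
B 1
C 1
A 2
B 2
C 2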

DB2 locking when no record yet exists

I have a table, something like:
create table state (foo int not null, bar int not null, baz varchar(32));
create unique index state_ux on state (foo, bar);
I'd like to lock for a unique record in this table. However, if there's no existing record I'd like to prevent anyone else from inserting a record, but without inserting myself.
I'd use "FOR UPDATE WITH RS USE AND KEEP EXCLUSIVE LOCKS" but that only seems to work if the record exists.
A) You can let DB2 create every ID number. Let's say you have defined your Customer table
CREATE TABLE Customers
( CustomerID Int NOT NULL
GENERATED ALWAYS AS IDENTITY
PRIMARY KEY
, Name Varchar(50)
, Billing_Type Char(1)
, Balance Dec(9,2) NOT NULL DEFAULT
);
Insert rows without specifying the CustomerID, since DB2 will always produce the value for you.
INSERT INTO Customers
(Name, Billing_Type)
VALUES
(:cname, :billtype);
If you need to know what the last value assigned in your session was, you can then use the IDENTITY_VAL_LOCAL() function.
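For example (run in the same session, right after the INSERT above):
SELECT IDENTITY_VAL_LOCAL() AS last_customer_id
FROM SYSIBM.SYSDUMMY1;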
B) In my environment, I generally specify GENERATED BY DEFAULT. This is in part due to the nature of our principal programming language, ILE RPG-IV, where developers have traditionally allowed the compiler to use the entire record definition. Because of that, I can tell everyone to use a sequence to generate ID values for a given table or set of tables.
You can restrict grants so that only your own user has access, but if there are others with SECADM or other elevated privileges, they could still insert.
You can do something with a trigger: for example, check the current session user, and only allow the insert if it is your user.
if (SESSION_USER <> 'Alex') then
rollback -- or generate an exception
end if;
It seems that you also want to keep just one row; you can control that in a trigger as well:
select count(0) into value from state;
if (value > 1) then
rollback -- or generate an exception
end if;
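Put together as a trigger, the check might look something like this (a sketch; the trigger name, SQLSTATE and message text are made up, and SESSION_USER may return the authorization ID in upper case):
CREATE TRIGGER state_insert_guard
NO CASCADE BEFORE INSERT ON state
FOR EACH ROW
BEGIN ATOMIC
IF (SESSION_USER <> 'ALEX') THEN
SIGNAL SQLSTATE '75001' SET MESSAGE_TEXT = 'Inserts into STATE are restricted';
END IF;
END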

Doing UPSERT when row is referenced by a FK

Let's say that I have a table of items, and for each item, there can be additional information stored for it, which goes into a second table. The additional information is referenced by a FK in the first table, which can be NULL (if the item doesn't have additional info).
TABLE item (
...
item_addtl_info_id INTEGER
)
CONSTRAINT fk_item_addtl_info FOREIGN KEY (item_addtl_info_id)
REFERENCES addtl_info (addtl_info_id)
TABLE addtl_info (
addtl_info_id INTEGER NOT NULL
GENERATED BY DEFAULT
AS IDENTITY (
INCREMENT BY 1
NO CACHE
),
addtl_info_text VARCHAR(100)
...
CONSTRAINT pk_addtl_info PRIMARY KEY (addtl_info_id)
)
What is the "best practice" to update an item's additional info (in IBM DB2 SQL, preferably)?
It should be an UPSERT operation, meaning that if additional info does not yet exist then a new record is created in the second table, but if it does, then it is only updated, and the FK in the first table does not change.
So imperatively, this is the logic:
UPSERT(item, item_info):
CASE WHEN item.item_addtl_info_id IS NULL THEN
INSERT INTO addtl_info (item_info)
UPDATE item.item_addtl_info_id (addtl_info.addtl_info_id)
^^^^^^^^^^^^^
ELSE
UPDATE addtl_info (item_info)
END
My main problem is how to get the newly inserted addtl_info row's id (underlined above). In a stored proc I can request the id from a sequence and store it in a variable, but maybe there is a more straightforward way. Isn't it something that comes up all the time when programming databases?
I mean, I'm really not interested in what the id of the addtl_info record is as long as it remains unique and is referenced properly. So using sequences seems a bit of an overkill to me in this case.
As a matter of fact, this UPSERT operation should be part of the SQL language as a standard operation (maybe it is, and I just don't know about it?)...
The syntax I was looking for is:
SELECT * FROM NEW TABLE ( INSERT INTO phone_book VALUES ( 'Peter Doe','555-2323' ) )
from Wikipedia (http://en.wikipedia.org/wiki/Insert_%28SQL%29)
This is how to refer to the record that was just inserted in the table.
My colleague called this construct an "in-place trigger", which is what it really is...
Here is the first version that I put together as a compound SQL statement:
begin atomic
  declare addtl_id integer;
  set addtl_id = (select item_addtl_info_id from item where item.item_id = XXX);
  if addtl_id is null
  then
    set addtl_id = (select addtl_info_id from new table
                     (insert into addtl_info
                        (addtl_info_text)
                      values ('My brand new additional info')
                     )
                   );
    update item set item.item_addtl_info_id = addtl_id
      where item.item_id = XXX;
  else
    update addtl_info set addtl_info_text = 'My updated additional info'
      where addtl_info.addtl_info_id = addtl_id;
  end if;
end
XXX being equal to the item id to be updated - this code can now be easily inserted into a sproc, and XXX can be converted to an input parameter.
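Wrapped in a procedure, it might look roughly like this (a sketch; the procedure and parameter names are made up, and the info text is passed in instead of being hard-coded):
CREATE PROCEDURE upsert_addtl_info (IN p_item_id INTEGER, IN p_text VARCHAR(100))
LANGUAGE SQL
BEGIN
  DECLARE addtl_id INTEGER;
  SET addtl_id = (SELECT item_addtl_info_id FROM item WHERE item.item_id = p_item_id);
  IF addtl_id IS NULL THEN
    SET addtl_id = (SELECT addtl_info_id FROM NEW TABLE
                     (INSERT INTO addtl_info (addtl_info_text) VALUES (p_text)));
    UPDATE item SET item.item_addtl_info_id = addtl_id
      WHERE item.item_id = p_item_id;
  ELSE
    UPDATE addtl_info SET addtl_info_text = p_text
      WHERE addtl_info.addtl_info_id = addtl_id;
  END IF;
END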
I also tried using MERGE INTO, but I couldn't figure out a syntax for updating a table different from what was specified as the target.