DB2: storing results from the FINAL TABLE clause

The FINAL TABLE clause is great for getting values back from DML in DB2, for example:
SELECT id
FROM FINAL TABLE
(
INSERT INTO mySchema.myTable (val)
VALUES ('data')
)
However, there doesn't seem to be a way to store the results of this query into another table, persisting the contents somewhere. For example, both of the following fail with the error "Data change table reference not allowed where specified." (I am running DB2 for i v7.1):
CREATE TABLE mySchema.otherTable AS (
SELECT id
FROM FINAL TABLE
(
INSERT INTO mySchema.myTable (val)
VALUES ('data')
)
) WITH DATA
After creating mySchema.otherTable in a separate CREATE TABLE statement, this also fails:
INSERT INTO mySchema.otherTable (ID)
SELECT id
FROM FINAL TABLE
(
INSERT INTO mySchema.myTable (val)
VALUES ('data')
)

Not sure if this works on i Series, but DB2 for LUW allows you to do this:
with i1 (id) as (
SELECT id
FROM FINAL TABLE
(
INSERT INTO mySchema.myTable (val)
VALUES ('data')
)
)
select * from new table (
INSERT INTO mySchema.otherTable (ID)
select id from i1
)

I tried the FINAL TABLE technique today on an IBM i at OS V7R1, and it wouldn't work as described for DB2 for LUW when attempting to feed the identity column value to a second insert. I anticipate we'll get this ability eventually.
As an alternative, I was able to route the assigned identity column value to an SQL global variable using a SET statement, and then use that global variable to assign the same identity value in two subsequent inserts into two related association tables. For non-compiled SQL scripting, that is a good server-side technique until we get the same ability as DB2 for LUW. A temp table would work as well.
create variable MY_SCHEMA.MY_TABLE_ID bigint -- a data type is required; bigint assumed to match the identity column
;
set MY_SCHEMA.MY_TABLE_ID =
( select ID
from final table ( insert into MY_SCHEMA.MY_TABLE (VAL) values ('data') ) )
;
insert into MY_SCHEMA.MY_OTHER_TABLE ( ID, DATA )
values( MY_SCHEMA.MY_TABLE_ID, 'more data' )
;
From the V7R1 SQL Reference manual:
Global variables have a session scope. This means that although they are
available to all sessions that are active on the database, their value is private for
each session.
For compiled SQL stored procedures, a variable with a SELECT INTO works fine too.
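For example, a minimal sketch of that approach inside a compiled procedure (the procedure name, parameter names, and parameter types are assumptions, not from the original answer):
create procedure MY_SCHEMA.ADD_MY_TABLE_ROW (in NEW_VAL varchar(50), out NEW_ID bigint)
language sql
begin
  -- capture the assigned identity value directly into a variable
  select ID into NEW_ID
    from final table ( insert into MY_SCHEMA.MY_TABLE (VAL) values (NEW_VAL) );
  -- the captured value can then feed inserts into the related association tables
  insert into MY_SCHEMA.MY_OTHER_TABLE ( ID, DATA )
    values ( NEW_ID, 'more data' );
end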

Related

How to Insert new Record into Table if the Record is not Present in the Table in Teradata

I want to insert a new record if it is not already present in the table. For that I am using the query below in Teradata:
INSERT INTO sample(id, name) VALUES('12','rao')
WHERE NOT EXISTS (SELECT id FROM sample WHERE id = '12');
When I execute the above query I get the error below:
WHERE NOT EXISTS
Failure 3706 Syntax error: expected something between ')' and the 'WHERE' keyword.
Can anyone help with this issue? It would be very helpful.
You can use INSERT INTO ... SELECT ... as follows:
INSERT INTO sample(id,name)
select '12','rao'
WHERE NOT EXISTS (SELECT id FROM sample WHERE id = '12');
You can also create a primary/unique key on the id column to avoid inserting duplicate values into it.
I would advise writing the query as:
INSERT INTO sample (id, name)
SELECT id, name
FROM (SELECT 12 as id, 'rao' as name) x
WHERE NOT EXISTS (SELECT 1 FROM sample s WHERE s.id = x.id);
This means that you do not need to repeat the constant value -- such repetition can be a cause of errors in queries. Note that I removed the single quotes. id looks like a number so treat it as a number.
The uniqueness of ids is usually handled using a unique constraint or index:
alter table sample add constraint unq_sample_id unique (id);
This makes sure that the database enforces uniqueness. Your approach can fail if two inserts are run at the same time with the same id. An attempt to insert a duplicate returns an error (which the exists check then avoids).
In practice, id columns are usually generated automatically by the database. So the create table statement would look more like:
id integer generated by default as identity
And the insert would look like:
insert into sample (name)
values ('rao');
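Putting those pieces together, a rough sketch of what the full Teradata definition might look like (the column types and the UNIQUE PRIMARY INDEX choice are assumptions, not taken from the question):
CREATE TABLE sample (
    id   INTEGER GENERATED BY DEFAULT AS IDENTITY,
    name VARCHAR(50)
) UNIQUE PRIMARY INDEX (id);

INSERT INTO sample (name) VALUES ('rao');   -- id is generated automatically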
If id is the Primary Index of the table you can use MERGE:
merge into sample as tgt
using VALUES('12','rao') as src (id, name)
on src.id = tgt.id
when not matched
then insert (src.id,src.name)

Tricky problem with primary key in INSERT statement

I have a table in a SQL Server database with an autoincrementing primary key [ID]. Is there some way to include the [ID] column in the INSERT statement so that the database would ignore it? Some trick with the table configuration?
I am not working on a PC (this is an Omron NJ PLC), so I can't write the statements myself; instead they are mapped from Structs. If possible, I want to use the same Structs for both INSERT and SELECT (where I need [ID] for a later UPDATE). I also have no desire to generate the index myself, although that would be the lesser evil.
In SQL Server, you need to provide a value for columns that are inserted. There is a special value called DEFAULT that inserts the default value. However, it cannot be used with IDENTITY columns.
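To see the limitation, this is the kind of statement that gets rejected (a sketch using a hypothetical table, not one from the question):
create table t (id int identity(1,1), col1 int);

insert into t (id, col1)
values (DEFAULT, 1);   -- rejected: DEFAULT (or NULL) is not allowed as an explicit identity value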
The normal insert method is to simply leave out the column:
insert into t (<all columns but id>)
values (<all values for other columns>);
Even a trigger on the table doesn't get around this limitation, but there is a trick you can use:
Create a view on the table selecting all columns.
Create an instead of insert trigger on the view.
Insert into the view instead of the table.
This looks like:
create view v_t as
select * from t;
create trigger trig_v on v_t instead of insert as
begin
insert into t ( . . . ) -- all columns except id
select . . . -- all columns except id
from inserted;
end;
insert into v_t -- I recommend listing the columns but not required
values (NULL, . . . );
If you want to insert an explicit value for the ID on an identity column, you can use the SET IDENTITY_INSERT <table> ON statement before your insert. Afterwards, re-enable the identity by running SET IDENTITY_INSERT <table> OFF.
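A minimal sketch of that pattern, reusing the hypothetical t table from above (an explicit column list naming the identity column is required while the setting is on):
SET IDENTITY_INSERT t ON;

INSERT INTO t (id, col1)
VALUES (42, 7);

SET IDENTITY_INSERT t OFF;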
Hope this helps.

How do I add an auto incrementing column to an existing vertica table?

I have a table that currently has the following structure
id, row1
(null), 232
(null), 4455
(null), 16
I'd like for id to be an auto incrementing primary key, as follows:
id, row1
1, 232
2, 4455
3, 16
I've read the documentation and it looks like the function that I need is AUTO_INCREMENT and that I can edit the table using an ALTER TABLE statement. However, I can't seem to get the syntax quite right. How do I go about doing this? Is it even possible with a pre-existing table?
What you need to do is the following:
create a new sequence:
CREATE SEQUENCE sequence_auto_increment START 1;
create a new table:
create table tab2 as select * from tab1 limit 0;
insert the data:
insert /*+ direct */ into tab2
select NEXTVAL('sequence_auto_increment'),row1 from tab1;
As #Kermit mentioned, the best way to do this in Vertica is to recreate the table once rather than altering it repeatedly, and to use the direct hint so you skip WOS storage (much faster).
As for the column constraint that #Nazmul created, I wouldn't use it: Vertica doesn't really enforce constraints, so you need to insert the values you want explicitly; a default constraint is not the way to do it.
You need to update your existing data with something like the query below:
UPDATE t1
SET id = table2.id
FROM
(
SELECT row1, RANK() OVER (ORDER BY row1) as id
FROM t1
) as table2
WHERE t1.row1 = table2.row1
Then you alter your table using the syntax below:
-- get the value to start sequence at
SELECT MAX(id) FROM t2;
-- create the sequence
CREATE SEQUENCE seq1 START 5;
-- syntax as of 6.1
-- modify the column to add next value for future rows
ALTER TABLE t2 ALTER COLUMN id SET DEFAULT NEXTVAL('seq1');
If you want to use the AUTO_INCREMENT feature:
1) Copy the data to a temp table.
2) Recreate the base table with the column using AUTO_INCREMENT.
3) Copy the data back for the other columns.
If you just want the numbers filled in, refer to the other answer by Nazmul; a sketch of the three steps above is shown below.
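A rough sketch of those steps (reusing tab1 from the first answer; the backup table name and the INT type for row1 are assumptions):
-- 1) copy the existing data out
CREATE TABLE tab1_backup AS SELECT row1 FROM tab1;

-- 2) recreate the base table with an auto-incrementing primary key
DROP TABLE tab1;
CREATE TABLE tab1 (
    id   AUTO_INCREMENT PRIMARY KEY,
    row1 INT
);

-- 3) copy the data back; id is populated automatically
INSERT /*+ direct */ INTO tab1 (row1) SELECT row1 FROM tab1_backup;
DROP TABLE tab1_backup;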

SELECT * FROM NEW TABLE equivalent in Postgres

In DB2 I can do a command that looks like this to retrieve information from the inserted row:
SELECT *
FROM NEW TABLE (
INSERT INTO phone_book
VALUES ( 'Peter Doe','555-2323' )
) AS t
How do I do that in Postgres?
There are ways to retrieve a sequence value, but I need to retrieve arbitrary columns.
My desire to merge a select with the insert is for performance reasons. This way I only need to execute one statement to insert values and select values from the insert. The values that are inserted come from a subselect rather than a values clause. I only need to insert 1 row.
That sample code was lifted from the Wikipedia Insert article.
A plain INSERT ... RETURNING ... does the job and delivers best performance.
A CTE is not necessary.
INSERT INTO phone_book (name, number)
VALUES ( 'Peter Doe','555-2323' )
RETURNING * -- or just phonebook_id, if that's all you need
Aside: In most cases it's advisable to add a target list.
The Wikipedia page you quoted already has the same advice:
Using an INSERT statement with RETURNING clause for PostgreSQL (since
8.2). The returned list is identical to the result of a SELECT.
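Since the question says the inserted values come from a subselect rather than a VALUES clause: RETURNING works the same way with INSERT ... SELECT. A small sketch, assuming a hypothetical staging table:
INSERT INTO phone_book (name, number)
SELECT s.name, s.number
FROM phone_book_staging s
WHERE s.name = 'Peter Doe'
RETURNING *;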
PostgreSQL supports this kind of behavior through a returning clause in a common table expression. You generally shouldn't assume that something like this will improve performance simply because you're executing one statement instead of two. Use EXPLAIN to measure performance.
create table test (
test_id serial primary key,
col1 integer
);
with inserted_rows as (
insert into test (col1) values (3)
returning *
)
select * from inserted_rows;
 test_id | col1
---------+------
       1 |    3

Can I keep old keys linked to new keys when making a copy in SQL?

I am trying to copy a record in a table and change a few values with a stored procedure in SQL Server 2005. This is simple, but I also need to copy relationships in other tables with the new primary keys. As this proc is being used to batch copy records, I've found it difficult to store some relationship between old keys and new keys.
Right now, I am grabbing new keys from the batch insert using OUTPUT INTO.
ex:
INSERT INTO table
(column1, column2,...)
OUTPUT INSERTED.PrimaryKey INTO #TableVariable
SELECT column1, column2,...
Is there a way like this to easily get the old keys inserted at the same time I am inserting new keys (to ensure I have paired up the proper corresponding keys)?
I know cursors are an option, but I have never used them and have only heard them referenced in a horror story fashion. I'd much prefer to use OUTPUT INTO, or something like it.
If you need to track both old and new keys in your temp table, you need to cheat and use MERGE:
Data setup:
create table T (
ID int IDENTITY(5,7) not null,
Col1 varchar(10) not null
);
go
insert into T (Col1) values ('abc'),('def');
And the replacement for your INSERT statement:
declare @TV table (
Old_ID int not null,
New_ID int not null
);
merge into T t1
using (select ID,Col1 from T) t2
on 1 = 0
when not matched then insert (Col1) values (t2.Col1)
output t2.ID, inserted.ID into @TV;
And (actually needs to be in the same batch so that you can access the table variable):
select * from T;
select * from @TV;
Produces:
ID  Col1
--  ----
 5  abc
12  def
19  abc
26  def

Old_ID  New_ID
------  ------
     5      19
    12      26
The reason you have to do this is because of an irritating limitation on the OUTPUT clause when used with INSERT - you can only access the inserted table, not any of the tables that might be part of a SELECT.
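For illustration, this is the kind of INSERT that the limitation rules out (a sketch using the same T and @TV as above); the reference to t2.ID in the OUTPUT clause cannot be bound:
insert into T (Col1)
output t2.ID, inserted.ID into @TV   -- fails: only inserted.* is visible to OUTPUT on an INSERT
select Col1
from T t2;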
INSERT statements loading data into tables with an IDENTITY column are guaranteed to generate the values in the same order as the ORDER BY clause in the SELECT.
If you want the IDENTITY values to be assigned in a sequential fashion
that follows the ordering in the ORDER BY clause, create a table that
contains a column with the IDENTITY property and then run an INSERT ..
SELECT … ORDER BY query to populate this table.
From: The behavior of the IDENTITY function when used with SELECT INTO or INSERT .. SELECT queries that contain an ORDER BY clause
You can use this fact to match your old identity values with your new ones. First, collect into a temporary table the list of primary keys that you intend to copy. You can also include your modified column values if needed:
select
PrimaryKey,
Col1
--Col2... etc
into #NewRecords
from Table
--where whatever...
Then do your INSERT with the OUTPUT clause to capture your new ids into the table variable:
declare #TableVariable table (
New_ID int not null
);
INSERT INTO #table
(Col1 /*,Col2... ect.*/)
OUTPUT INSERTED.PrimaryKey INTO #NewIds
SELECT Col1 /*,Col2... ect.*/
from #NewRecords
order by PrimaryKey
Because of the ORDER BY PrimaryKey clause, you are guaranteed that your New_ID numbers will be generated in the same order as the PrimaryKey values of the copied records. Now you can match them up by row numbers ordered by the ID values. The following query would give you the pairings:
select PrimaryKey, New_ID
from
(select PrimaryKey,
ROW_NUMBER() over (order by PrimaryKey) OldRow
from #NewRecords
) PrimaryKeys
join
(
select New_ID,
ROW_NUMBER() over (order by New_ID) NewRow
from @NewIds
) New_IDs
on OldRow = NewRow
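From there, the old/new pairing can drive the copy of the relationship rows. A hedged sketch, assuming the pairing query above has been saved into a #KeyMap temp table and that a hypothetical child table ChildTable references the copied rows through a ParentKey column:
insert into ChildTable (ParentKey, SomeColumn)
select m.New_ID, c.SomeColumn
from ChildTable c
join #KeyMap m
  on m.PrimaryKey = c.ParentKey;   -- each copied child row now points at its new parent key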