How to update a table with a sequential number when the table has no primary key? - sql

In DB2 on Linux v11.1 I have a table:
COL1 COL2  COLn (50 more columns)
A    A
A    A
B    A
B    B
...  (3 million rows in total)
There can be multiple rows with the same values, like the first two rows in my sample (so obviously there is no primary key on the table).
Now I have to add a new column ID and set a unique sequential number for every row.
The result should be:
COL1 COL2  COLn (50 more columns)  ID
A    A                             1
A    A                             2
B    A                             3
B    B                             4
...  (3 million rows in total)
How do I write an update statement to populate the ID column?
Regards

Here is one way to do it, using an identity column; it assumes that there is no existing primary key or identity column.
alter table myschema.mytab add column id integer not null default 0 ;
alter table myschema.mytab alter column id drop default ;
alter table myschema.mytab alter column id set generated always as identity ;
update myschema.mytab set id = default ;
-- optional, if you want the new ID column to be a surrogate primary key
alter table myschema.mytab add constraint pkey primary key(id) ;
reorg table myschema.mytab ;
runstats on table myschema.mytab with distribution and detailed indexes all;
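A quick sanity check afterwards (just a suggestion, reusing the table name from above) is to confirm that every row received a distinct value:
select count(*) as total_rows, count(distinct id) as distinct_ids
from myschema.mytab ;
Both counts should come back equal to the row count (about 3 million here).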

Try this:
alter table myschema.mytab add column id integer not null default 0 ;
UPDATE (SELECT ID, ROWNUMBER() OVER() RN FROM myschema.mytab) SET ID = RN;
-- Or even simpler:
-- UPDATE myschema.mytab SET ID = ROWNUMBER() OVER();

Related

Missing Keyword Error in Oracle SQL Database [duplicate]

I was wondering how I can add an identity column to an existing Oracle table? I am using Oracle 11g. Suppose I have a table named DEGREE and I am going to add an identity column to it.
FYI, the table is not empty.
You cannot do it in one step. Instead:
Alter the table and add the column (without primary key constraint)
ALTER TABLE DEGREE ADD (Ident NUMBER(10));
Fill the new column with data that will satisfy the primary key constraint (unique/not null), e.g.:
UPDATE DEGREE SET Ident=ROWNUM;
Alter the table and add the constraint to the column
ALTER TABLE DEGREE MODIFY (Ident PRIMARY KEY);
After that is done, you can set up a SEQUENCE and a BEFORE INSERT trigger to automatically set the id value for new records.
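A minimal sketch of that follow-up step, with made-up names for the sequence and trigger (the START WITH value should be one more than the highest Ident already assigned; 1001 is only a placeholder):
CREATE SEQUENCE degree_ident_seq START WITH 1001;
CREATE OR REPLACE TRIGGER degree_ident_trg
BEFORE INSERT ON DEGREE
FOR EACH ROW
WHEN (NEW.Ident IS NULL)
BEGIN
  :NEW.Ident := degree_ident_seq.NEXTVAL;
END;
/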
From Oracle 12c you would use an identity column.
For example, say your table is called demo and has 3 columns and 100 rows:
create table demo (col1, col2, col3)
as
select dbms_random.value(1,10), dbms_random.value(1,10), dbms_random.value(1,10)
from dual connect by rownum <= 100;
You could add an identity column using:
alter table demo add demo_id integer generated by default on null as identity;
update demo set demo_id = rownum;
Then reset the internal sequence to match the data and prevent manual inserts:
alter table demo modify demo_id generated always as identity start with limit value;
and define it as the primary key:
alter table demo add constraint demo_pk primary key (demo_id);
This leaves the new column at the end of the column list, which shouldn’t normally matter (except for tables with a large number of columns and row chaining issues), but it looks odd when you describe the table. However, we can at least tidy up the dictionary order using the invisible/visible hack:
SQL> desc demo
 Name                             Null?    Type
 -------------------------------- -------- ----------------------
 COL1                                      NUMBER
 COL2                                      NUMBER
 COL3                                      NUMBER
 DEMO_ID                          NOT NULL NUMBER(38)
begin
  for r in (
    select column_name from user_tab_columns c
    where c.table_name = 'DEMO'
    and c.column_name <> 'DEMO_ID'
    order by c.column_id
  )
  loop
    execute immediate 'alter table demo modify '||r.column_name||' invisible';
    execute immediate 'alter table demo modify '||r.column_name||' visible';
  end loop;
end;
/
SQL> desc demo
 Name                             Null?    Type
 -------------------------------- -------- ----------------------
 DEMO_ID                          NOT NULL NUMBER(38)
 COL1                                      NUMBER
 COL2                                      NUMBER
 COL3                                      NUMBER
One thing you can't do (as of Oracle 18.0) is alter an existing column to make it an identity column. You either have to go through a process like the one above, copying the existing values across and finally dropping the old column, or else define a new table explicitly with the identity column in place and copy the data over in a separate step. Otherwise you'll get:
-- DEMO_ID column exists but is currently not an identity column:
alter table demo modify demo_id generated by default on null as identity start with limit value;
-- Fails with:
ORA-30673: column to be modified is not an identity column
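For completeness, the copy-and-drop workaround could look roughly like this; it is only a sketch and assumes nothing else (constraints, foreign keys, code) references DEMO_ID yet:
alter table demo add demo_id_new integer generated by default on null as identity;
update demo set demo_id_new = demo_id;
alter table demo drop column demo_id;
alter table demo rename column demo_id_new to demo_id;
-- then reset the sequence with START WITH LIMIT VALUE and add the primary key, as shown earlier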
add the column
alter table table_name add (id INTEGER);
create a sequence table_name_id_seq with a START WITH clause, using the number of rows in the table + 1 or another safe value (we don't want duplicate ids); see the sketch at the end of this answer
lock the table (no inserts)
lock table table_name in exclusive mode;
fill the column
update table_name set id = rownum; --or another logic
add a trigger to automatically set the id on insert using the sequence (you can find examples on the internet, for example this answer)
When you run the CREATE TRIGGER statement the lock will be released (DDL automatically commits).
Also, you may add a unique constraint on the id column; it is best to do so.
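As a rough sketch of the sequence from step 2 and the closing unique constraint (the START WITH value is only a placeholder; use the current row count + 1 or another safe number):
create sequence table_name_id_seq start with 100001 increment by 1;
alter table table_name add constraint table_name_id_uq unique (id);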
For Oracle:
CREATE TABLE new_table AS (SELECT ROWNUM AS id, ta.* FROM old_table ta)
Remember that this id column is not auto-incremented.
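If you take this route, the remaining cleanup could look like the sketch below (verify the copy before dropping anything; the constraint name is made up):
DROP TABLE old_table;
RENAME new_table TO old_table;
ALTER TABLE old_table ADD CONSTRAINT old_table_pk PRIMARY KEY (id);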

Adding a NOT NULL column to a Redshift table

I'd like to add a NOT NULL column to a Redshift table that has records, an IDENTITY field, and that other tables have foreign keys to.
In PostgreSQL, you can add the column as NULL, fill it in, then ALTER it to be NOT NULL.
In Redshift, the best I've found so far is:
ALTER TABLE my_table ADD COLUMN new_column INTEGER;
-- Fill that column
CREATE TABLE my_table2 (
id INTEGER IDENTITY NOT NULL SORTKEY,
(... all the fields ... )
new_column INTEGER NOT NULL,
PRIMARY KEY(id)
) DISTSTYLE all;
UNLOAD ('select * from my_table')
to 's3://blah' credentials '<aws-auth-args>' ;
COPY my_table2
from 's3://blah' credentials '<aws-auth-args>'
EXPLICIT_IDS;
DROP table my_table;
ALTER TABLE my_table2 RENAME TO my_table;
-- For each table that had a foreign key to my_table:
ALTER TABLE another_table ADD FOREIGN KEY(my_table_id) REFERENCES my_table(id)
Is this the best way of achieving this?
You can achieve this without having to go through S3.
modify the existing table to create the desired column w/ a default value
update that column in some way (in my case it was copying from another column)
create a new table with the column w/o a default value
insert into the new table (you must list out the columns rather than using *, since the column order may not be the same, e.g. if you want the new column in position 2)
drop the old table
rename the table
alter table to give correct owner (if appropriate)
ex:
-- first add the column w/ a default value
alter table my_table_xyz
add visit_id bigint NOT NULL default 0; -- not null but default value
-- now populate the new column with whatever is appropriate (the key in my case)
update my_table_xyz
set visit_id = key;
-- now create the new table with the proper constraints
create table my_table_xyz_new
(
key bigint not null,
visit_id bigint NOT NULL, -- here it is not null and no default value
adt_id bigint not null
);
-- select all from old into new
insert into my_table_xyz_new
select key, visit_id, adt_id
from my_table_xyz;
-- remove the orig table
DROP table my_table_xyz;
-- rename the newly created table to the desired table
alter table my_table_xyz_new rename to my_table_xyz;
-- adjust any views, foreign keys or permissions as required
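The last step in the list above (giving the table the correct owner) is not shown in the example; if needed, it is a one-liner, with the owner name as a placeholder:
alter table my_table_xyz owner to desired_owner;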

SQL Server Database unique number generation on any record insertion

I have 11 columns in my database table and I am inserting data into 10 of them. I want to have a unique number like "1101 and so on" in the 11th column.
Any idea what I should do? Thanks in advance.
In SQL Server 2012 and above you can create a SEQUENCE:
Create SEQUENCE RandomSeq
start with 1001
increment by 1
Go
Insert into YourTable(Id,col1...)
Select NEXT VALUE FOR RandomSeq,col1....
Or else you can use IDENTITY:
IDENTITY(seed, increment)
You can start the seed at 1101 and increment by 1:
Create table YourTable
(
id INT IDENTITY(1101,1),
Col varchar(10)
)
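For instance, once the table exists the id column is populated automatically; the generated values below assume the table starts empty:
INSERT INTO YourTable (Col) VALUES ('first')  -- gets id 1101
INSERT INTO YourTable (Col) VALUES ('second') -- gets id 1102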
If you want that unique number in a field other than the primary key, you can maintain that field alongside the primary key and insert the value yourself.
If you want it in the primary key value, open the table in design mode, go to 'Identity Specification', and set 'Identity Increment' and 'Identity Seed' as you want.
Alternatively you can use table script like,
CREATE TABLE Persons
(
ID int IDENTITY(12,1) PRIMARY KEY,
FName varchar(255) NOT NULL,
)
Here the primary key will start seeding from 12 and the increment will be 1.
If you have your table definition already in place, you can drop the column and add a computed column marked as PERSISTED:
ALTER TABLE tablename drop column column11;
ALTER TABLE tablename add column11 as '11'
+right('000000'+cast(ID as varchar(10)), 2) PERSISTED ;
--You can change the right operator value from 2 to any as per the requirements.
--Also replace ID with the identity column in your table.
create table inc
(
id int identity(1100,1),
somec char
)

In Postgresql, force unique on combination of two columns

I would like to set up a table in PostgreSQL such that two columns together must be unique. There can be repeated values in either column, so long as no two rows share both.
For instance:
CREATE TABLE someTable (
id int PRIMARY KEY AUTOINCREMENT,
col1 int NOT NULL,
col2 int NOT NULL
)
So col1 and col2 can repeat individually, but not as a combination. This would be allowed (not including the id):
1 1
1 2
2 1
2 2
but not this:
1 1
1 2
1 1 -- would reject this insert for violating constraints
CREATE TABLE someTable (
id serial PRIMARY KEY,
col1 int NOT NULL,
col2 int NOT NULL,
UNIQUE (col1, col2)
)
AUTOINCREMENT is not PostgreSQL. You want an integer primary key generated always as identity (or serial if you use PG 9.x or lower; serial was soft-deprecated once identity columns arrived in PG 10).
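As a sketch, the questioner's table could be declared with the identity syntax (available from PG 10) like this:
CREATE TABLE someTable (
  id int GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  col1 int NOT NULL,
  col2 int NOT NULL,
  UNIQUE (col1, col2)
)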
If col1 and col2 together are unique and can't be null, then they make a good primary key:
CREATE TABLE someTable (
col1 int NOT NULL,
col2 int NOT NULL,
PRIMARY KEY (col1, col2)
)
Create a unique constraint so that the two numbers together cannot be repeated:
ALTER TABLE someTable
ADD UNIQUE (col1, col2)
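With that constraint in place, the third insert from the question would be rejected. For example (the constraint name shown assumes PostgreSQL's default naming):
INSERT INTO someTable (col1, col2) VALUES (1, 1);
INSERT INTO someTable (col1, col2) VALUES (1, 2);
INSERT INTO someTable (col1, col2) VALUES (1, 1);
-- ERROR:  duplicate key value violates unique constraint "sometable_col1_col2_key"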
If, like me, you landed here with:
a pre-existing table,
to which you need to add a new column, and
also need to add a new unique constraint on the new column as well as an old one, AND
be able to undo it all (i.e. write a down migration)
Here is what worked for me, utilizing one of the above answers and expanding it:
-- up
ALTER TABLE myoldtable ADD COLUMN newcolumn TEXT;
ALTER TABLE myoldtable ADD CONSTRAINT myoldtable_oldcolumn_newcolumn_key UNIQUE (oldcolumn, newcolumn);
-- down
ALTER TABLE myoldtable DROP CONSTRAINT myoldtable_oldcolumn_newcolumn_key;
ALTER TABLE myoldtable DROP COLUMN newcolumn;
Seems like a regular UNIQUE CONSTRAINT :)
CREATE TABLE example (
a integer,
b integer,
c integer,
UNIQUE (a, c));

Updating foreign keys while inserting into new table

I have table A(id).
I need to
create table B(id)
add a foreign key to table A that references B.id
for every row in A, insert a row in B and update A.b_id with the newly inserted row in B
Is it possible to do it without adding a temporary column in B that refers to A? The below does work, but I'd rather not have to make a temporary column.
alter table B add column ref_id integer references A(id);
insert into B (ref_id) select id from A;
update A set b_id = B.id from B where B.ref_id = A.id;
alter table B drop column ref_id;
Assuming that:
1) you're using postgresql 9.1
2) B.id is a serial (so actually an int with a default value of nextval('b_id_seq'))
3) when inserting into B, you actually add other fields from A, otherwise the insert is useless
...I think something like this would work:
with n as (select nextval('b_id_seq') as newbid,a.id as a_id from a),
l as (insert into b(id) select newbid from n returning id as b_id)
update a set b_id=l.b_id from l,n where a.id=n.a_id and l.b_id=n.newbid;
Add the future foreign key column, but without the constraint itself:
ALTER TABLE A ADD b_id integer;
Fill the new column with values:
WITH cte AS (
SELECT
id,
ROW_NUMBER() OVER (ORDER BY id) AS b_ref
FROM A
)
UPDATE A
SET b_id = cte.b_ref
FROM cte
WHERE A.id = cte.id;
Create the other table:
CREATE TABLE B (
id integer CONSTRAINT PK_B PRIMARY KEY
);
Add rows to the new table using the referencing column of the existing one:
INSERT INTO B (id)
SELECT b_id
FROM A;
Add the FOREIGN KEY constraint:
ALTER TABLE A
ADD CONSTRAINT FK_A_B FOREIGN KEY (b_id) REFERENCES B (id);
PostgreSQL dialect.
You might use an anonymous code block like this
do $$
declare
  category_cursor cursor for select category_id from schema1.categories;
  r_category bigint;
  setting_id bigint;
begin
  open category_cursor;
  loop
    fetch category_cursor into r_category;
    exit when not found;
    insert into schema2.setting(field)
    values ('field_value') returning id into setting_id;
    update schema1.categories set category_setting_id = setting_id
    where category_id = r_category;
  end loop;
  close category_cursor;
end; $$
Let's assume we have two tables: the first is categories, the second is settings, which must be applied to those categories.
First step - declare the cursor (collecting ids from categories) and the variables where we store temporary data.
Loop over the cursor, inserting the value 'field_value' into settings.
Store the generated id in the variable setting_id.
Update the categories table with setting_id.