How to update a column's value using row number in Teradata - sql

I want to update a column's value in this way:
new value = old value + row_number() * 1000
where row_number() is ordered by the old value, but I haven't found a solution.
Sample data:
column
1
3
5
After the update query it should be:
column
1001
2003
3005

CREATE VOLATILE TABLE test, NO FALLBACK
(MyCol SMALLINT NOT NULL)
PRIMARY INDEX (MyCol)
ON COMMIT PRESERVE ROWS;
INSERT INTO test VALUES (1);
INSERT INTO test VALUES (3);
INSERT INTO test VALUES (5);
SELECT MyCol FROM test;
UPDATE test
FROM (SELECT MyCol
      , ROW_NUMBER() OVER (ORDER BY MyCol) AS RowNum
      FROM test) DT1
SET MyCol = test.MyCol + (DT1.RowNum * 1000)
WHERE test.MyCol = DT1.MyCol;
SELECT MyCol FROM test;  -- returns 1001, 2003, 3005

Related

Adding Random Id for each unique value in table

I have a table like:
ID RANDOM_ID
1 123
10 456
25 789
1 1112
55 1314
10 1516
I want the result to be like:
ID RANDOM_ID
1 123
10 456
25 789
1 123
55 1314
10 456
The same ID should have the same RANDOM_ID. I'm using an update statement to generate the RANDOM_IDs after creating the table.
CREATE TABLE [RANDOMID_TABLE]([ID] [int] NULL, [RANDOM_ID] [int] NULL)
GO
INSERT INTO [RANDOMID_TABLE] ([ID])
select distinct ABC_ID from RANDOMID_ABC
GO
-- This is the update statement for the RANDOM_ID column in [RANDOMID_TABLE]
UPDATE [RANDOMID_TABLE]
SET RANDOM_ID = abs(checksum(NewId()) % 1000000)
Is there something else that I need to add to the update statement?
Please advise.
Why would you use update for this? Just generate the values when you insert them:
insert into [RANDOMID_TABLE] (ID, RANDOM_ID)
select ABC_ID, abs(checksum(NewId()) % 1000000)
from RANDOMID_ABC
group by ABC_ID;
EDIT:
If your problem is collisions, then fix how you do the assignment. Just assign a number . . . randomly:
insert into [RANDOMID_TABLE] (ID, RANDOM_ID)
select ABC_ID, row_number() over (order by newid())
from RANDOMID_ABC
group by ABC_ID;
This is guaranteed to not return duplicates.
At a total guess, are you simply wanting to UPDATE the table so that all the rows for a specific ID have the same value for Random_ID? Like this?
CREATE TABLE YourTable (ID int, Random_ID int);
INSERT INTO YourTable
VALUES(1 ,123),
(10,456),
(25,789),
(1 ,1112),
(55,1314),
(10,1516);
GO
WITH CTE AS(
SELECT ID,
Random_ID,
MIN(Random_ID) OVER (PARTITION BY ID) AS Min_Random_ID
FROM YourTable)
UPDATE CTE
SET Random_ID = Min_Random_ID;
GO
SELECT *
FROM YourTable;
GO
DROP TABLE YourTable;
Here is the script you need, using a temporary table (you need it to persist the random result for each unique ID):
DECLARE @Tbl TABLE (ID INT, RANDOM_ID INT)
INSERT @Tbl (Id) VALUES(1), (10), (25), (1), (55), (10)
SELECT Id, abs(checksum(NewId()) % 1000000) AS Random_Id INTO #distinctData FROM @Tbl GROUP BY Id
SELECT D.* FROM @Tbl T JOIN #distinctData D ON D.ID = T.ID
DROP TABLE #distinctData
Obviously, you don't need the first two statements, where I create and populate the sample table variable.
Result:
Id Random_Id
1 354317
1 354317
10 532304
10 532304
25 874209
55 718643
You want one random value per ID. So one would think that the following would work:
with ids as
(
select distinct id
from randomid_table
)
, ids_with_rnd as
(
select id, abs(checksum(NewId()) % 1000000) as rnd
from ids
)
update randomid_table
set random_id =
(
select rnd
from ids_with_rnd
where ids_with_rnd.id = randomid_table.id
);
It doesn't, however. SQL Server is somewhat buggy here and still creates different numbers for the same ID.
So, your best bet may be: do your update that does create different values (your original update statement). Then correct the data as follows:
update randomid_table
set random_id =
(
select min(random_id)
from randomid_table rt2
where rt2.id = randomid_table.id
);
Demo: https://dbfiddle.uk/?rdbms=sqlserver_2017&fiddle=504236db66fba0f12dc7e407a51451f8
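Alternatively, you can sidestep the re-evaluation problem by materializing one random value per ID into a temp table first, then updating from it. A minimal sketch, reusing the randomid_table/id/random_id names from above (the #id_rnd temp table name is mine):
-- One random value per distinct id, persisted so it cannot be re-evaluated per row
select id, abs(checksum(NewId()) % 1000000) as rnd
into #id_rnd
from randomid_table
group by id;
-- Copy the persisted value back to every row of that id
update rt
set random_id = ir.rnd
from randomid_table rt
join #id_rnd ir on ir.id = rt.id;
drop table #id_rnd;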

Increment Variable in SQL

I have a table which does not have an auto-increment column. I have to insert data into this table, each time incrementing the ID by 1000.
So I have
DECLARE @maxId INT
SELECT @maxId = MAX(ID) FROM TABLE1
INSERT INTO TABLE1
(ID, DATA)
VALUES
(@maxId + 1000, DATA),
(@maxId + 2000, DATA),
(@maxId + 3000, DATA)
Instead of explicitly incrementing the ID for each inserted row, is there a way to have it 'auto-increment'? I can't use LAST_INSERT_ID() or anything like that, simply because the IDs are created in a weird way.
You can declare the field this way:
MyID INT IDENTITY (0,1000);
This will auto-increment each record's ID by 1000.
For example:
CREATE TABLE MyTable
(
MyID INT IDENTITY(0,1000),
SField VARCHAR(128)
);
INSERT INTO MyTable (SField) VALUES ('TEST');
INSERT INTO MyTable (SField) VALUES ('TEST1');
INSERT INTO MyTable (SField) VALUES ('TEST2');
SELECT * FROM MyTable
Will yield the following result:
| MyID | SField |
-----------------
| 0 | TEST |
| 1000 | TEST1 |
| 2000 | TEST2 |
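If you add such an IDENTITY column to a table that already holds data, you can reseed it so new values continue from the existing maximum (a sketch; 5000 stands in for whatever your current MAX(MyID) is):
DBCC CHECKIDENT ('MyTable', RESEED, 5000);
The next inserted row would then get MyID = 6000, since the increment is 1000.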
You can also do this using ROW_NUMBER():
with v(data, seqnum) as (
    select v.data, row_number() over (order by (select null))
    from (values (data), (data), (data)) v(data)
)
insert into table1 (id, data)
select @maxid + seqnum * 1000, data
from v;
There is nothing stopping you from doing the following and getting the data inserted correctly.
insert into table1 (ID, DATA)
VALUES ((select max(id) from table1 as T) + 1000, DATA),
       ((select max(id) from table1 as T) + 2000, DATA);
Or is it something else that you meant?
You can get a race condition using MAX(ID) if two users try to insert at the same time: they could both end up with the same ID value. You could instead use GUIDs (the uniqueidentifier type) rather than integer IDs. The NEWID() function always returns a new unique GUID. It's a bit of a pain to convert from integer keys to GUID keys, but it's worth it. There is a slight performance hit, however, and GUIDs are much harder to read. One nice advantage is that you can import fresh data from production into your test database without having to worry about duplicate keys.
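A minimal sketch of the GUID approach (the table and column names here are hypothetical, not from the question):
CREATE TABLE Table1Guid
(
    ID uniqueidentifier NOT NULL DEFAULT NEWID() PRIMARY KEY,
    DATA varchar(128)
);
-- Each row gets a fresh GUID; no race on MAX(ID):
INSERT INTO Table1Guid (DATA) VALUES ('row one'), ('row two');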
You could always just create a new sequence on the fly and drop it each time after you use it:
CREATE SEQUENCE CountBy1000
START WITH 1000
INCREMENT BY 1000 ;
INSERT INTO Table1
VALUES ((select max(id) from table1 as T) + NEXT VALUE FOR CountBy1000, DATA),
((select max(id) from table1 as T) + NEXT VALUE FOR CountBy1000, DATA);
DROP SEQUENCE CountBy1000;

Find missing numbers in a column

I have this column in T-SQL:
1
2
3
7
10
Does SQL have a function to detect the missing numbers in the sequence, i.e. 4, 5, 6 and 8, 9?
I have tried something like
if (a - b > 1) then we have a missing number
with COALESCE, but I don't understand how to put it together.
Thanks for any guidance.
You can try this:
DECLARE @a INT
SET @a = (SELECT MIN(number) FROM table)
WHILE (SELECT MAX(number) FROM table) > @a
BEGIN
    IF @a NOT IN (SELECT number FROM table)
        PRINT @a
    SET @a = @a + 1
END
The following query will identify where each run of missing numbers starts and how many numbers are missing:
select t.col + 1 as MissingStart, (nextval - col - 1) as MissingSequenceLength
from (select t.col,
             (select min(t2.col) from t t2 where t2.col > t.col) as nextval
      from t
     ) t
where nextval - col > 1
This is using a correlated subquery to get the next value in the table.
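On SQL Server 2012 and later, LEAD() expresses the same gap check without a correlated subquery. A sketch, assuming the same table t and column col:
select col + 1 as MissingStart, (nextval - col - 1) as MissingSequenceLength
from (select col, lead(col) over (order by col) as nextval
      from t
     ) t
where nextval - col > 1;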
I know this is a late answer, but here is a query that uses a recursive common table expression to get the missing values between the minimum and maximum values in a table:
WITH CTE AS
(
--This is called once to get the minimum and maximum values
SELECT MIN(t.ID) AS nMin, MAX(t.ID) AS nMax
FROM Test t
UNION ALL
--This is called multiple times until the condition is met
SELECT nMin + 1, nMax
FROM CTE
WHERE nMin < nMax
)
--Retrieves all the missing values in the table.
SELECT c.nMin
FROM CTE c
WHERE NOT EXISTS
(
SELECT ID
FROM Test
WHERE c.nMin = ID
)
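Note that SQL Server caps recursive CTEs at 100 recursion levels by default, so this works as-is only when the MIN-to-MAX range spans 100 values or fewer. For wider ranges, append a query hint to the final SELECT:
OPTION (MAXRECURSION 0)  -- 0 removes the recursion cap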
This was tested with the following schema:
CREATE TABLE Test
(
ID int NOT NULL
)
INSERT INTO Test (ID)
VALUES (1), (2), (3), (7), (10)
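Running the recursive CTE query against this data returns exactly the gaps the question asks about:
nMin
----
4
5
6
8
9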

Oracle - updating a sorted table

I found an old table without a primary key, and in order to add one, I have to add a new column and fill it with sequence values. Another column contains the time each record was created, so I want to assign the sequence values ordered by that time column.
I'm not sure how to do it. I tried using PL/SQL: I created a cursor for a query that returns the rows with an ORDER BY, and then updated each record the cursor returned, but it didn't work.
Is there a smart working way to do this?
Thanks in advance.
Another option is just to use a correlated subquery, with the wrinkle of a nested subquery to generate the row number. Setting up some sample data:
create table t42 (datefield date);
insert into t42 (datefield) values (sysdate - 7);
insert into t42 (datefield) values (sysdate + 6);
insert into t42 (datefield) values (sysdate - 5);
insert into t42 (datefield) values (sysdate + 4);
insert into t42 (datefield) values (sysdate - 3);
insert into t42 (datefield) values (sysdate + 2);
select * from t42;
DATEFIELD
---------
12-JUL-12
25-JUL-12
14-JUL-12
23-JUL-12
16-JUL-12
21-JUL-12
Then adding and populating the new column:
alter table t42 add (id number);
update t42 t1 set t1.id = (
select rn from (
select rowid, row_number() over (order by datefield) as rn
from t42
) t2
where t2.rowid = t1.rowid
);
select * from t42 order by id;
DATEFIELD ID
--------- ----------
12-JUL-12 1
14-JUL-12 2
16-JUL-12 3
21-JUL-12 4
23-JUL-12 5
25-JUL-12 6
Since this is a synthetic key, making it match the order of another column seems a bit pointless, but I guess it doesn't do any harm.
To complete the task:
alter table t42 modify id not null;
alter table t42 add constraint t42_pk primary key (id);
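To confirm the new key is in place, you can check the data dictionary (a quick optional query, not part of the original steps):
select constraint_name, constraint_type
from user_constraints
where table_name = 'T42';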
First of all, create the new field, allowing null values.
Then populate the field from another table or query; the best approach is to use a MERGE statement.
Here is a sample from the documentation:
MERGE INTO bonuses D
USING (SELECT employee_id, salary, department_id FROM employees
WHERE department_id = 80) S
ON (D.employee_id = S.employee_id)
WHEN MATCHED THEN UPDATE SET D.bonus = D.bonus + S.salary*.01
DELETE WHERE (S.salary > 8000)
WHEN NOT MATCHED THEN INSERT (D.employee_id, D.bonus)
VALUES (S.employee_id, S.salary*.01)
WHERE (S.salary <= 8000);
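Applied to the task at hand, a MERGE might look like the sketch below. It reuses the t42/datefield/id names from the earlier answer and matches on ROWID, since the table has no key yet; treat it as an illustration rather than a tested statement:
MERGE INTO t42 d
USING (SELECT rowid AS rid,
              row_number() OVER (ORDER BY datefield) AS rn
       FROM t42) s
ON (d.rowid = s.rid)
WHEN MATCHED THEN UPDATE SET d.id = s.rn;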
Finally, set this new field to NOT NULL and promote it to primary key.
Here are sample statements:
ALTER TABLE customer MODIFY (your_new_field varchar2(100) NOT NULL);
ALTER TABLE customer ADD CONSTRAINT customer_pk PRIMARY KEY (your_new_field);
One simple way is to create a new table, with the new column and all the other columns:
create table newt (
newtID int primary key not null,
. . .
)
Then insert all the old data into it:
insert into newt
select row_number() over (order by <CreatedAt>), t.*
from t
(You can list all the columns explicitly instead of using "*"; naming the columns is the better practice. This is shorter, though, and I don't know the column names.)
If you alter the table to add the column, then the column will appear at the end. I find that quite awkward for the primary key. If you do that, though, you can update it as:
with toupdate as (select row_number() over (order by <CreatedAt>) as seqnum, t.*
                  from t
                 )
update toupdate
set newtID = seqnum;
(Note that this updatable-CTE form is SQL Server syntax; in Oracle, use the correlated update or MERGE shown above instead.)

Oracle SQL precision, scale, insert, calculate and drop

table = mytable
temp col = tempcol
col = mycol
The column currently contains 5000 rows with various values from 99999.99999 down to 0.00001.
I need to keep the data, so the script should: create a temp column, round the values to (7,3), set mycol to null, modify the column from (10,5) to (7,3), copy the data back to mycol, and drop the temp column. Job done.
So far I have:
SELECT mycol
INTO tempcol
FROM mytable
update mytable set mycol = null
alter table mytable modify mycol number (7,3)
SELECT tempcol
INTO mycol
FROM mytable
drop tempcol
Can you please fill in the missing gaps or direct me to a solution?
Well, first of all, a NUMBER(10,5) can store values from -99999 to 99999, while the NUMBER(7,3) interval is only [-9999, 9999], so you will potentially encounter conversion errors. You probably want to change the column to a NUMBER(8,3).
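You can see the problem with a quick check (my example, not from the question; the value at the top of the old range does not fit the smaller type):
SQL> SELECT CAST(99999.99999 AS NUMBER(7,3)) FROM dual;
ORA-01438: value larger than specified precision allowed for this column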
Now your plan seems sound: you cannot reduce the precision or the scale of a column while there is data in that column, so you will stage the data in a temporary column. I would do it like this:
SQL> CREATE TABLE mytable (mycol NUMBER(10,5));
Table created
SQL> /* populate table */
2 INSERT INTO mytable
3 (SELECT dbms_random.value(0, 1e10)/1e5
4 FROM dual CONNECT BY LEVEL <= 1e3);
1000 rows inserted
SQL> /* new temp column */
2 ALTER TABLE mytable ADD (tempcol NUMBER(8,3));
Table altered
SQL> /* copy data to temp */
2 UPDATE mytable
3 SET tempcol = mycol,
4 mycol = NULL;
1000 rows updated
SQL> ALTER TABLE mytable MODIFY (mycol NUMBER(8,3));
Table altered
SQL> UPDATE mytable
2 SET mycol = tempcol;
1000 rows updated
SQL> /* cleaning */
2 ALTER TABLE mytable DROP COLUMN tempcol;
Table altered
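As a final sanity check (my suggestion, not part of the transcript above), confirm that no values were lost and the new scale took effect:
SELECT COUNT(*) AS null_rows FROM mytable WHERE mycol IS NULL;
SELECT MIN(mycol) AS min_val, MAX(mycol) AS max_val FROM mytable;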