Update SQL column with sequential numbers - sql

I have a table SL_PROD which has the following columns: NUMBER, DEPTCODE, DISP_SEQ and SL_PROD_ID.
SL_PROD_ID is an identity column which increments with each row.
I need to write a query which updates the DISP_SEQ column with sequential numbers (1-X) for the rows which have a DEPTCODE of '725'. I've tried several things with no luck; any ideas?

Try this:
A common table expression can be used in updates. This is extremely useful if you want to use the values of window functions (with OVER) as update values.
Attention: look carefully at what you are ordering by. I used NUMBER, but you might need some other sort column (maybe your IDENTITY column).
CREATE TABLE #SL_PROD(NUMBER INT,DEPT_CODE INT,DISP_SEQ INT,SL_PROD_ID INT IDENTITY);
INSERT INTO #SL_PROD(NUMBER,DEPT_CODE,DISP_SEQ) VALUES
(1,123,0)
,(2,725,0)
,(3,725,0)
,(4,123,0)
,(5,725,0);
WITH UpdateableCTE AS
(
SELECT ROW_NUMBER() OVER(ORDER BY NUMBER) AS NewDispSeq
,DISP_SEQ
FROM #SL_PROD
WHERE DEPT_CODE=725
)
UPDATE UpdateableCTE SET DISP_SEQ=NewDispSeq;
SELECT * FROM #SL_PROD;
GO
--Clean up
--DROP TABLE #SL_PROD;
The result (look at the rows with 725):

NUMBER  DEPT_CODE  DISP_SEQ  SL_PROD_ID
1       123        0         1
2       725        1         2
3       725        2         3
4       123        0         4
5       725        3         5

union table, change serial primary key, postgresql

PostgreSQL:
I have two tables, 'abc' and 'xyz', in PostgreSQL. Both tables have an 'id' column of type 'serial primary key'.
The abc id column holds the values 1,2,3 and the xyz id column holds the values 1,2,3,4.
I want to combine both tables with 'union all', but I want to shift the xyz id values so that they continue from the last abc id value, giving 1,2,3,4,5,6,7.
select id from abc
union all
select id from xyz
|id|
1
2
3
1
2
3
4
The result I want is:
|id|
1
2
3
4
5
6
7
BETTER - thanks to @CaiusJard
This should do it for you
SELECT id FROM abc
UNION ALL
SELECT x.id + a.maxid
FROM xyz x
CROSS JOIN (SELECT MAX(id) AS maxid FROM abc) a
ORDER BY id
(If abc could be empty, wrap the subquery's aggregate in COALESCE: SELECT COALESCE(MAX(id), 0) AS maxid FROM abc.)
For anyone who's doing something like this:
I had a similar problem: I had table A and table B, which used two different serials. My solution was to create a new table C that was identical to table B except that it had an extra "oldid" column, and its id column was set to use the same sequence as table A. I then inserted all the data from table B into table C (putting the old id in the oldid field). Once I had fixed the references to point from the oldid to the (new) id, I was able to drop the oldid column.
In my case I needed to fix the old relations and needed the ids to remain unique in the future (but I don't require that the ids from table A all come before those from table C). Depending on what you're trying to accomplish, this approach may be useful.
If anyone is going to use this approach then, strictly speaking, there should be a trigger to prevent someone from manually setting an id in one table to match one in the other. You should also alter the sequence to be owned by NONE so it isn't dropped along with table A, if table A is ever dropped.
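A minimal sketch of that approach, with hypothetical names (a_id_seq stands for the sequence behind table A's serial id, and datas stands in for table B's real columns):

-- Table C: same shape as B plus oldid; its id draws from A's sequence
CREATE TABLE c (
    id    integer NOT NULL DEFAULT nextval('a_id_seq') PRIMARY KEY,
    oldid integer NOT NULL,
    datas text
);

-- Copy B, preserving the old ids so references can be remapped
INSERT INTO c (oldid, datas)
SELECT id, datas FROM b;

-- ...update foreign keys that pointed at b.id, mapping oldid -> id...

ALTER TABLE c DROP COLUMN oldid;

-- Keep the sequence alive even if table A is ever dropped
ALTER SEQUENCE a_id_seq OWNED BY NONE;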

Oracle Sequence wastes/reserves values (in INSERT SELECT)

I've been struggling with sequences for a few days. I have an origin data table called "datos" with the following columns:
CENTRO
CODV
TEXT
INCIDENCY
And a destination data table called "anda" with the following:
TIPO = 31 (for all rows)
DESCRI = 'Site' (for all rows)
SECU = sequence number generated with Myseq.NEXTVAL
CENTRO
CODV
TEXT
The last three columns must be filled in with data from the "datos" table.
When I execute my query it all works fine: the table is filled and the sequence generates its values. But the INSERT INTO ... SELECT has the following conditions:
Every row in the origin "datos" must not already exist in the destination "anda" (so it won't be duplicated), and every row in "datos" must have its INCIDENCY flag set to 'N' or NULL.
If a row matches the conditions, it should be inserted.
The query works fine and I have tried it with many different values. Here comes the problem:
When a row has its INCIDENCY value set to 'Y' (so it must not be copied into the destination table), it doesn't appear, but the sequence DOES consume one value, and when I check Myseq.NEXTVAL its value is higher.
How can I prevent the sequence from advancing when a row doesn't match the conditions? I've read that Oracle first reserves all the possible values returned by the SELECT query, but I can't find out how to prevent it.
Here's the SQL:
INSERT INTO anda (TIPO, DESCRI, SECU, CENTRO, CODV, TEXT)
SELECT 31 TIPO,
'Site' DESCRI,
Myseq.NEXTVAL,
datos.CENTRO,
datos.CODV,
datos.TEXT
FROM datos
WHERE (CENTRO, CODV) NOT IN
(SELECT CENTRO, CODV
FROM anda)
AND (datos.INCIDENCY = 'N' OR datos.INCIDENCY IS NULL)
Thanks in advance!!
Definition of MySeq
CREATE SEQUENCE "BBDD"."MySeq" MINVALUE 800000000000
MAXVALUE 899999999999 INCREMENT BY 1 START WITH 800000000000 CACHE 20 ORDER NOCYCLE;
You might be able to trick Oracle into doing this with a CTE:
INSERT INTO anda (TIPO, DESCRI, SECU, CENTRO, CODV, TEXT)
WITH toinsert as (
SELECT d.*
FROM datos d
WHERE (CENTRO, CODV) NOT IN (SELECT CENTRO, CODV FROM anda) AND
(d.INCIDENCY = 'N' OR d.INCIDENCY IS NULL)
)
SELECT 31 as TIPO, 'Site' as DESCRI, Myseq.NEXTVAL,
d.CENTRO, d.CODV, d.TEXT
FROM toinsert d;
I'm not quite sure if that will work. A more guaranteed approach is to use a before insert trigger (or identity column if you are using 12c+). You would increment the value in the trigger.
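For reference, a minimal sketch of that trigger variant, assuming the anda table and Myseq sequence from the question (the trigger name is made up):

CREATE OR REPLACE TRIGGER anda_secu_trg
BEFORE INSERT ON anda
FOR EACH ROW
BEGIN
  -- 11g+ allows direct assignment; on older versions use:
  -- SELECT Myseq.NEXTVAL INTO :NEW.SECU FROM dual;
  :NEW.SECU := Myseq.NEXTVAL;
END;
/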
However, I do agree with Hugh Jones. You should be confident that the sequence adds a unique, increasing value to each row. Gaps can appear for other reasons, such as deletes. Also, I know that SQL Server can create gaps when doing parallel inserts; I'm not sure whether the same happens in Oracle.
I don't believe you have a real problem (the gaps are not really an issue), but you can put a before insert (row-level) trigger on the anda table and set SECU there with your sequence-generated value.
But keep in mind that this only keeps the SECU numbers consecutive within a single statement. You'll get gaps anyway for other reasons (for example, cached sequence values are lost when the instance restarts).
UPDATE: as Alex Poole has commented, the insert itself does not generate gaps.
See a test below:
> drop sequence tst_fgg_seq;
sequence TST_FGG_SEQ dropped.
> drop table tst_fgg;
table TST_FGG dropped.
> drop table tst_insert_fgg;
table TST_INSERT_FGG dropped.
> create sequence tst_fgg_seq start with 1 nocycle;
sequence TST_FGG_SEQ created.
> create table tst_fgg as select level l from dual connect by level < 11;
table TST_FGG created.
> create table tst_insert_fgg as
select tst_fgg_seq.nextval
from tst_fgg
where l between 3 and 5;
table TST_INSERT_FGG created.
> select * from tst_insert_fgg;
NEXTVAL
----------
1
2
3
> insert into tst_insert_fgg
select tst_fgg_seq.nextval
from tst_fgg
where l between 3 and 5;
3 rows inserted.
> select * from tst_insert_fgg;
NEXTVAL
----------
1
2
3
4
5
6
6 rows selected

Get a random row from a table

I need your help with a little problem.
I have a Java service that should access a table and get a random row from it.
My table is simple: it contains only two columns:
"Id" INT IDENTITY(1,1) NOT NULL Primary Key
"Datas" Varchar(64) NOT NULL
Id is a progressive number, so you might think it would be enough to generate a random number and get the row where id = random_number.
But there are lots of gaps in the table. So, for example, a sample of the table could look like this:
ID Datas
1 Row1
2 Row2
3 Row3
8 Row4
10 Row5
25 Row6
639 Row7
Is there a stylish way to get one row at random? There must be no conditions... only random!
I'm using SQL Server 2000.
I would avoid doing...
select *
...and then cycling through the entire ResultSet with a random number, because the table can contain a very large number of rows.
You should be able to do something along the lines of:
SELECT TOP 1 * FROM mytable ORDER BY newid()
Bear in mind that this sorts the whole table to pick one row, so it can get expensive on very large tables, but the pick is uniformly random.
Note: this is a duplicate of #52964, which is in turn a duplicate of #19412.
I would suggest getting the last id in the table like so:
SELECT TOP 1 Id FROM table_name ORDER BY Id DESC
Then, assuming this is stored in the maxId variable, you could generate a random number index between 1 and maxId and do:
SELECT TOP 1 * FROM table_name WHERE Id >= index ORDER BY Id
(The >= together with ORDER BY Id lands on the first existing row at or after the random point, so gaps don't matter.)
And that's it.
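Put together, a sketch of that two-step approach (table and variable names are illustrative; RAND() and TOP both exist on SQL Server 2000):

DECLARE @maxId INT, @randomId INT

-- upper bound for the random draw
SELECT @maxId = MAX(Id) FROM table_name

-- random integer between 1 and @maxId
SET @randomId = 1 + FLOOR(RAND() * @maxId)

-- first existing row at or after the random point, so gaps are skipped
SELECT TOP 1 * FROM table_name WHERE Id >= @randomId ORDER BY Id

Note that rows sitting just after a large gap are more likely to be picked than the rest, so this is fast but not perfectly uniform; the ORDER BY newid() approach above is uniform but sorts the whole table.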

Delete duplicate id and value rows using SQL Server 2008 R2

In SQL Server 2008 R2 I added two duplicate IDs and records to my table. When I try to delete one of the last two records, I receive the following error.
The row values updated or deleted either do not make the row unique or they alter multiple rows.
The data is:
7 ABC 6
7 ABC 6
7 ABC 6
8 XYZ 1
8 XYZ 1
8 XYZ 4
7 ABC 6
7 ABC 6
I need to delete the last two records:
7 ABC 6
7 ABC 6
I have been trying to delete the last 2 records using the "Edit Top 200 Rows" feature, but I get the error above.
Any help is appreciated. Thanks in advance :)
Since you have given no clue whatsoever that there are other columns in the table, assuming your data is in the 3 columns A, B and C, you can delete 2 rows using:
;with t as (
select top(2) *
from tbl
where A = 7 and B = 'ABC' and C = 6
)
DELETE t;
This will arbitrarily match two rows based on the conditions, and delete them.
This is an outline of code I use to delete dups in tables that may have many dups.
/* I always put the rollback and commit up here in comments until I am sure I have
done what I wanted. */
BEGIN tran Jim1 -- rollback tran Jim1 -- Commit tran Jim1; DROP TABLE jt1.dbo.What_Jim_Deleted
/* This creates a table to put the deleted rows in just in case I'm really screwed up */
SELECT top 1 *, NULL dupflag
INTO jt1.dbo.What_Jim_Deleted --DROP TABLE jt1.dbo.What_Jim_Deleted
FROM jt1.dbo.tab1;
/* This removes the row without removing the table */
TRUNCATE TABLE jt1.dbo.What_Jim_Deleted;
/* The cte assigns a row number to each unique security for each day; dups will have a
rownumber > 1. The fields in the partition by come from the composite key for the
table (if one exists). This is the query I ran to show them as dups:
SELECT compkey1, compkey2, compkey3, compkey4, COUNT(*)
FROM jt1.dbo.tab1
GROUP BY compkey1, compkey2, compkey3, compkey4
HAVING COUNT(*) > 1
ORDER BY 1 DESC
*/
with getthedups as
(SELECT *,
ROW_NUMBER() OVER
(partition by compkey1,compkey2, compkey3, compkey4
ORDER BY Timestamp desc) dupflag /*This can be anything that gives some order to the rows (even if order doesn't matter) */
FROM jt1.dbo.tab1)
/* This delete is deleting from the cte, which cascades to the underlying table.
The WHERE is part of the DELETE (even though it comes after the OUTPUT). The
OUTPUT takes all of the DELETED rows and inserts them into the "oh shit" table,
just in case. */
DELETE
FROM getthedups
OUTPUT DELETED.* INTO jt1.dbo.What_Jim_Deleted
WHERE dupflag > 1
--Check the resulting tables here to ensure that you did what you think you did
/* If all has gone well then commit the tran and drop the "oh shit" table, or let it
hang around for a while. */

Select statement, table sample, equal distribution

Let's assume there is a SQL Server 2008 table like the one below that holds 10 million rows.
One of the fields is Id; since it's an identity column, it runs from 1 to 10 million.
CREATE TABLE dbo.Stats
(
id INT IDENTITY(1,1) PRIMARY KEY,
field1 INT,
field2 INT,
...
)
Is there an efficient way, using one select statement, to get a subset of this data that satisfies the following requirements:
contains a limited number of rows in the result set, i.e. 100, 200, etc.
provides an equal distribution over a certain column (not random), i.e. the id column
So, in our example, if we return 100 rows, the result set would look like this:
Row 1 - 100 000
Row 2 - 200 000
Row 3 - 300 000
...
Row 100 - 10 000 000
I want to avoid using a cursor and storing this in a separate table.
Not sure how efficient it's going to be, but the following query will return every 100000th row (relative to the ordering established by id):
SELECT *
FROM (
SELECT *, ROW_NUMBER() OVER (ORDER BY id) RN
FROM Stats
) T
WHERE RN % 100000 = 0
ORDER BY id
Since it does not rely on actual id values, this will work even if you have "holes" in the sequence of id values.
Something like this?
SELECT id FROM dbo.Stats WHERE id % 100000 = 0
It should work, since you are saying that id runs from 1 to 10 000 000 with no gaps. If the total number of rows is not known but the number of resulting rows is (say, 100), just calculate that 100000 divisor from the row count:
SELECT id FROM Stats WHERE id % ((SELECT COUNT(id) FROM Stats) / 100) = 0
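If the id sequence has gaps (as the first answer notes), the plain id % N test can miss rows or return an uneven spread. A variant (just a sketch, not from the answers above) that always returns exactly 100 rows regardless of gaps splits the ordered ids into 100 buckets with NTILE, which SQL Server 2008 supports, and takes the last id of each bucket:

SELECT MAX(id) AS id
FROM (
    SELECT id, NTILE(100) OVER (ORDER BY id) AS bucket
    FROM dbo.Stats
) t
GROUP BY bucket
ORDER BY id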