I need to copy some data from one table to another in Oracle, while generating incremental values for a numeric column in the new table. This is a once-only exercise with a trivial number of rows (100).
I have an adequate solution to this problem but I'm curious to know if there is a more elegant way.
I'm doing it with a temporary sequence, like so:
CREATE SEQUENCE temp_seq
  START WITH 1;

INSERT INTO new_table (new_col, copied_col1, copied_col2)
SELECT temp_seq.NEXTVAL, o.*
FROM (SELECT old_col1, old_col2
      FROM old_table
      ORDER BY old_col1) o;

DROP SEQUENCE temp_seq;
Is there a way to do this without creating the sequence or any other temporary object? Specifically, can this be done with a self-contained INSERT ... SELECT statement?
Please consider a trigger to be a non-option.
MORE INFO: I would like to control the order that the new rows are inserted, and it won't be the same order they were created in the old table (note I've added the ORDER BY clause above). But I still want my new sequential column to start from 1.
There are similar questions, but I believe the specifics of my question are original to SO.
You can use ROWNUM. This pseudo-column numbers the rows in your result:
Insert Into new_table (new_col, copied_col1, copied_col2)
Select Rownum, old_col1, old_col2
From old_table;
If you want your records to be sorted, you need to use a sub-query:
Insert Into new_table (new_col, copied_col1, copied_col2)
Select Rownum, old_col1, old_col2
From (
Select old_col1, old_col2
From old_table
Order By old_col1
);
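If you would rather keep the numbering and the ordering in a single expression, the ROW_NUMBER() analytic function is another self-contained option (a sketch using the question's column names):

Insert Into new_table (new_col, copied_col1, copied_col2)
Select Row_Number() Over (Order By old_col1), old_col1, old_col2
From old_table;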
Why don't you define the column new_col as a primary key or unique and mark it as auto-increment?
This way each insert will get the next higher "count".
I'm not very familiar with Oracle, but I would bet there is a built-in auto-increment feature.
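For reference, Oracle 12c and later do have identity columns; a rough sketch of what that could look like here (column types are placeholders, and note that identity values follow insert order, which is not guaranteed to match the ORDER BY the question asks for):

CREATE TABLE new_table (
    new_col     NUMBER GENERATED ALWAYS AS IDENTITY,  -- Oracle 12c+
    copied_col1 VARCHAR2(100),                        -- placeholder type
    copied_col2 VARCHAR2(100)                         -- placeholder type
);

-- the identity column is filled in automatically, so it is omitted here
INSERT INTO new_table (copied_col1, copied_col2)
SELECT old_col1, old_col2
FROM old_table;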
Related
I have a table with a PK and another column for another id. In some cases I need to insert a record with equal values in both columns. For the primary key values I'm using a sequence, which gives a Field<Long> from Sequences.MY_SEQ.nextval().
How can I extract the value from the Field<Long> so that I'm guaranteed to insert the same id into both columns? Using the Field<Long> in the insert clause generates two different ids in the two columns.
Here is the solution:
Long id = dsl.select(Sequences.MY_SEQ.nextval()).fetchOne().value1();
Your own solution works, of course, but it will generate two round trips to the database. One for fetching the sequence value and another one for the insert. If that's not a problem, perfect. Otherwise, you can still do it in one single query using INSERT .. SELECT:
In SQL:
(using Oracle syntax. Your SQL syntax may vary...)
INSERT INTO my_table (col1, col2, val)
SELECT t.id, t.id, 'abc'
FROM (
SELECT my_seq.nextval AS id
FROM dual
) t
With jOOQ
Table<?> t = table(select(MY_SEQ.nextval().as("id"))).as("t");
dsl.insertInto(MY_TABLE)
.columns(MY_TABLE.COL1, MY_TABLE.COL2, MY_TABLE.VAL)
.select(
select(t.field("id"), t.field("id"), val("abc"))
.from(t))
.execute();
In DB2 I can do a command that looks like this to retrieve information from the inserted row:
SELECT *
FROM NEW TABLE (
INSERT INTO phone_book
VALUES ( 'Peter Doe','555-2323' )
) AS t
How do I do that in Postgres?
There are ways to retrieve a sequence value, but I need to retrieve arbitrary columns.
My desire to merge a select with the insert is for performance reasons. This way I only need to execute one statement to insert values and select values from the insert. The values that are inserted come from a subselect rather than a values clause. I only need to insert 1 row.
That sample code was lifted from the Wikipedia article on INSERT.
A plain INSERT ... RETURNING ... does the job and delivers best performance.
A CTE is not necessary.
INSERT INTO phone_book (name, number)
VALUES ( 'Peter Doe','555-2323' )
RETURNING * -- or just phonebook_id, if that's all you need
Aside: In most cases it's advisable to add a target list.
The Wikipedia page you quoted already has the same advice:
Using an INSERT statement with RETURNING clause for PostgreSQL (since 8.2). The returned list is identical to the result of a SELECT.
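Since the question says the inserted values come from a subselect rather than a VALUES list, RETURNING combines with INSERT ... SELECT just as well; a minimal sketch (the source table and filter are made up):

INSERT INTO phone_book (name, number)
SELECT c.name, c.number
FROM   contacts c             -- hypothetical source table
WHERE  c.name = 'Peter Doe'
RETURNING *;                  -- or a specific target list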
PostgreSQL supports this kind of behavior through a returning clause in a common table expression. You generally shouldn't assume that something like this will improve performance simply because you're executing one statement instead of two. Use EXPLAIN to measure performance.
create table test (
    test_id serial primary key,
    col1    integer
);

with inserted_rows as (
    insert into test (col1) values (3)
    returning *
)
select * from inserted_rows;

 test_id | col1
---------+------
       1 |    3
Docs
I've got some duplicate records in a table because, as it turns out, Netezza does not enforce constraint checks on primary keys. That being said, I have some records where the information is exactly the same, and I want to delete just ONE of them. I've tried doing
delete from table_name where test_id=2025 limit 1
and also
delete from table_name where test_id=2025 rowsetlimit 1
However neither option works. I get an error saying
found 'limit'. Expecting a keyword
Is there any way to limit the records deleted by this query? I know I could just delete the record and reinsert it but that is a little tedious since I will have to do this multiple times.
Please note that this is not SQL Server or MySQL. This is for Netezza.
If it doesn't support either "DELETE TOP 1" or the "LIMIT" keyword, you may end up having to do one of the following:
1) Add some sort of auto-incrementing column (like an ID), making each row unique. I don't know whether you can do that in Netezza after the table has been created, though.
2) Programmatically read the entire table, eliminate the duplicates, then delete all the rows and insert them again. This might not be possible if they are referenced by other tables, in which case you might have to temporarily remove the constraint.
I hope that helps. Please let us know.
And for future reference; this is why I personally always create an auto-incrementing ID field, even if I don't think I'll ever use it. :)
The query below works for deleting duplicates from a table:
DELETE FROM YOURTABLE
WHERE COLNAME1 = 'XYZ'
  AND (COLNAME1, ROWID) NOT IN
      (SELECT COLNAME1, MAX(ROWID)
       FROM YOURTABLE
       WHERE COLNAME1 = 'XYZ'
       GROUP BY COLNAME1);
If the records are identical then you could do something like this:
CREATE TABLE dupes AS
SELECT col1, col2, col3, ..., coln
FROM source_table
WHERE test_id = 2025
GROUP BY 1, 2, 3, ..., n;

DELETE FROM source_table WHERE test_id = 2025;

INSERT INTO source_table SELECT * FROM dupes;

DROP TABLE dupes;
You could even create a sub-query to select all the test_ids HAVING COUNT(*) > 1 to automatically find the dupes in steps 1 and 3
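For example, a sketch of that sub-query against the names used above:

-- every test_id that occurs more than once
SELECT test_id
FROM source_table
GROUP BY test_id
HAVING COUNT(*) > 1;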
-- remove duplicates from the <<TableName>> table
delete from <<TableName>>
where rowid not in
(
    select min(rowid)
    from <<TableName>>
    group by col1, col2, col3
);
The GROUP BY 1, 2, 3, ..., n will eliminate the dupes on the insert into the temp table.
Is the use of rowid allowed in Netezza? As far as I know, I don't think this query will execute in Netezza...
Can I do this in SQL 2005?
SELECT 'C'+inserted.exhid AS ExhId,inserted.exhname AS ExhName,inserted.exhid AS RefID INTO mytable FROM inserted
WHERE inserted.altname IS NOT NULL
It won't work if the table exists, but will create the table if it is non-existent. How do I get it to insert into an existing table?
like this
INSERT INTO mytable
SELECT 'C'+inserted.exhid AS ExhId,inserted.exhname AS ExhName,
inserted.exhid AS RefID FROM inserted
WHERE inserted.altname IS NOT NULL
you also don't need the aliases in this case
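For example, the same statement without them:

INSERT INTO mytable
SELECT 'C'+inserted.exhid,
       inserted.exhname,
       inserted.exhid
FROM inserted
WHERE inserted.altname IS NOT NULL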
SQLMenace's answer is correct. But to add to it, I would suggest it is good practice to explicitly list the columns you are inserting into, in case the table structure or column order ever changes; that way your proc stands a better chance of working consistently.
INSERT INTO mytable (
ExhId,
ExhName,
RefID)
SELECT 'C'+inserted.exhid,
inserted.exhname,
inserted.exhid
FROM inserted
WHERE inserted.altname IS NOT NULL
To insert into an existing table, use INSERT INTO ... SELECT instead of SELECT INTO.
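In other words, roughly (generic table and column names):

-- SELECT ... INTO creates a brand-new table from the query result
SELECT col1, col2
INTO   new_table
FROM   source_table

-- INSERT ... SELECT appends the query result to a table that already exists
INSERT INTO existing_table (col1, col2)
SELECT col1, col2
FROM   source_table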
Sybase DB tables do not have a concept of self-updating row numbers. However, for one of the modules, I require a row number corresponding to each row in the database, such that max(Column) would always tell me the number of rows in the table.
I thought I'd introduce an int column and keep updating it to keep track of the row number. However, I'm having problems updating this column in the case of deletes. What SQL should I use in the delete trigger to update this column?
You can easily assign a unique number to each row by using an identity column. The identity can be a numeric or an integer (in ASE12+).
This will almost do what you require. There are certain circumstances in which you will get a gap in the identity sequence (these are called "identity gaps"; the best discussion of them is here). Deletes will also cause gaps in the sequence, as you've identified.
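A minimal sketch of such an identity column in ASE (table and column names are placeholders):

CREATE TABLE myTable
(
    row_id numeric(10, 0) IDENTITY,  -- assigned automatically on each insert
    col1   varchar(50)
)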
Why do you need to use max(col) to get the number of rows in the table, when you could just use count(*)? If you're trying to get the last row from the table, then you can do
select * from table where column = (select max(column) from table).
Regarding the delete trigger to update a manually managed column, I think this would be a potential source of deadlocks, and many performance issues. Imagine you have 1 million rows in your table, and you delete row 1, that's 999999 rows you now have to update to subtract 1 from the id.
Delete trigger
CREATE TRIGGER tigger ON myTable FOR DELETE
AS
update myTable
set id = id - (select count(*) from deleted d where d.id < myTable.id)
To avoid locking problems
You could add an extra table (which joins to your primary table) like this:
CREATE TABLE rowCounter
(
    id     int,  -- foreign key to main table
    rownum int
)
... and use the rownum field from this table.
If you put the delete trigger on this table then you would hugely reduce the potential for locking problems.
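A rough, untested sketch of such a trigger on the side table, mirroring the renumbering logic above:

CREATE TRIGGER rowCounter_del ON rowCounter FOR DELETE
AS
UPDATE rowCounter
SET rownum = rownum - (SELECT COUNT(*) FROM deleted d WHERE d.rownum < rowCounter.rownum)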
Approximate solution?
Does the table need to keep its rownumbers up to date all the time?
If not, you could have a job which runs every minute or so, which checks for gaps in the rownum, and does an update.
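A very rough sketch of what one pass of such a job might run (assumes the id and rownum columns discussed above; untested):

-- renumber only the rows whose rownum no longer matches their position
UPDATE myTable
SET rownum = (SELECT COUNT(*) FROM myTable t2 WHERE t2.id <= myTable.id)
WHERE rownum <> (SELECT COUNT(*) FROM myTable t2 WHERE t2.id <= myTable.id)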
Question: do the rownumbers have to reflect the order in which rows were inserted?
If not, you could do far fewer updates, but only updating the most recent rows, "moving" them into gaps.
Leave a comment if you would like me to post any SQL for these ideas.
I'm not sure why you would want to do this. You could experiment with using temporary tables and "select into" with an Identity column like below.
create table test
(
    col1 int,
    col2 varchar(3)
)
insert into test values (100, "abc")
insert into test values (111, "def")
insert into test values (222, "ghi")
insert into test values (300, "jkl")
insert into test values (400, "mno")
select rank = identity(10), col1 into #t1 from Test
select * from #t1
delete from test where col2="ghi"
select rank = identity(10), col1 into #t2 from Test
select * from #t2
drop table test
drop table #t1
drop table #t2
This would give you a dynamic id (of sorts)