How to create a sequence in oracle from top to bottom - sql

So I have two tables in Oracle. Table A is the master table and Table B holds data retrieved from a contractor. They both have the same general structure. In the end, I would like to run INSERT INTO TABLE A (SELECT * FROM TABLE B). However, Table B has no primary key column. How do you suggest creating a primary key that also generates a sequence starting from 4328 for every row in Table B?
I proceeded to do the following:
create sequence myseq
increment by 1
start with 4328
MAXVALUE 99999
NOCACHE
NOCYCLE;
Then I created a PK column and implemented the following:
INSERT INTO TABLE_B (PK) VALUES (MYSEQ.nextVal);
But this yielded no results except inserting one row at the very end. I want every row to be populated, starting at 4328 and ending 291 rows later.

Sorry, but I don't know if I understand your problem.
Do you want to insert one row into Table A and Table B with the same PK value?
You can do that in a procedure by putting the sequence value in a variable before inserting the rows, for example:
DECLARE
  v_SEQUENCE NUMBER;
BEGIN
  SELECT MYSEQ.nextval INTO v_SEQUENCE FROM dual;
  INSERT INTO table_A VALUES (v_SEQUENCE, 1, 2, 3);
  INSERT INTO table_B VALUES (v_SEQUENCE, 1, 2, 3);
END;
If you want to take all rows from TABLE_B and insert them into TABLE_A with a generated PK, you can do, for example:
INSERT INTO TABLE_A SELECT MYSEQ.nextval, B.* FROM TABLE_B B;
Is that what you need?
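Outside Oracle, the same bulk-copy-with-generated-keys idea can be sketched in SQLite (which has no sequences) using a window function. The `table_a`/`table_b` names and columns are stand-ins for the question's tables, and SQLite 3.25+ is assumed for `ROW_NUMBER`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Hypothetical tables mirroring the question: table_a has a PK, table_b does not.
cur.execute("CREATE TABLE table_a (pk INTEGER PRIMARY KEY, val TEXT)")
cur.execute("CREATE TABLE table_b (val TEXT)")
cur.executemany("INSERT INTO table_b (val) VALUES (?)", [("x",), ("y",), ("z",)])

# Assign consecutive keys starting at 4328, like MYSEQ.NEXTVAL would in Oracle.
cur.execute("""
    INSERT INTO table_a (pk, val)
    SELECT 4327 + ROW_NUMBER() OVER (ORDER BY rowid), val FROM table_b
""")
conn.commit()
rows = cur.execute("SELECT pk, val FROM table_a ORDER BY pk").fetchall()
print(rows)  # [(4328, 'x'), (4329, 'y'), (4330, 'z')]
```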

Your approach only calls the sequence once. What you want to do is perform a loop in PL/SQL to call the sequence as many times as needed:
BEGIN
  FOR x IN 1 .. 291 LOOP
    INSERT INTO TABLE_B (PK) VALUES (MYSEQ.nextVal);
  END LOOP;
END;
Make sure you drop and recreate your sequence to ensure it starts at the right value.

Related

Use cursor value to make combination dynamically and use it while inserting in a row

I am pulling values into a cursor from a table called my_table; depending on the condition it may return a, b, c, d, etc. From those values I need to prepare combinations such as a, ab, ac, ad, b, bc, bd, c, cd and so on. I want to use these values while inserting rows into two other tables: in one table I will store the combined values together in a column, and in the other I will store the original values used in the combination separately. The first table's id will be the FK of the second table while inserting data. How can I achieve this?
You can do something like the following; change the code as per your requirements.
CREATE OR REPLACE TRIGGER my_table_trg
   AFTER INSERT
   ON my_table
   FOR EACH ROW
DECLARE
   CURSOR comb_values
   IS
      SELECT .......;
BEGIN
   FOR i IN comb_values
   LOOP
      INSERT INTO table1
           VALUES (i.combination_values);

      INSERT INTO table2
           VALUES (i.non_combination_values);
   END LOOP;
END;
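The combination list itself (a, ab, ac, ad, b, bc, bd, ...) is also easy to generate outside SQL if that helps with testing the cursor logic. A minimal Python sketch, assuming only single values and pairs are needed (extend the range for longer combinations):

```python
from itertools import combinations

# values the cursor might return, per the question
values = ["a", "b", "c", "d"]

# all 1- and 2-element combinations, sorted so singles group with their pairs
combos = sorted("".join(c) for r in (1, 2) for c in combinations(values, r))
print(combos)
# ['a', 'ab', 'ac', 'ad', 'b', 'bc', 'bd', 'c', 'cd', 'd']
```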

How to update multiple records in a table?

I need to update column B in a table that has column A as the primary key, with a different value for each value in column A. There are about 50,000 rows to update, which makes it impossible to do manually. Is there any other way to update them?
Of all the records in the table, I want to update just 50,000. For each of these records, the new value is different. How can I update the table without having to write 50,000 UPDATE queries?
Column A | Column B
One      | 1
Two      | 2
Three    | 3
I want to update one=4, two=5 and so on for about 50,000 rows.
Thanks in advance guys!
I don't know whether I got your requirement properly, but I have written the working snippet below to replicate the scenario. Let me know if this helps.
--Drop any existing table if present with same name
DROP TABLE SIMPLE_UPDATE;
--Create new table
CREATE TABLE SIMPLE_UPDATE
(
COL1 NUMBER,
COL2 VARCHAR2(2000 CHAR)
);
-- Inserting random test data
INSERT INTO SIMPLE_UPDATE
SELECT LEVEL,TO_CHAR(TO_DATE(LEVEL,'J'),'JSP') FROM DUAL
CONNECT BY LEVEL < 500;
-- Update col2, assuming that the increment is adding 3 to each number
UPDATE SIMPLE_UPDATE
SET COL2 = COL1 + 3
WHERE <COL_NAME> = <CONDITION>;
COMMIT;
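If the 50,000 new values live outside the database (in a file, say), another common approach is a parameterized batch update from client code rather than 50,000 hand-written statements. A minimal sketch using Python's sqlite3; the table and column names here are made up for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE simple_update (col_a TEXT PRIMARY KEY, col_b INTEGER)")
cur.executemany("INSERT INTO simple_update VALUES (?, ?)",
                [("One", 1), ("Two", 2), ("Three", 3)])

# new_values would normally be loaded from a file or staging table
new_values = {"One": 4, "Two": 5, "Three": 6}

# one prepared statement, executed once per row in a single batch
cur.executemany("UPDATE simple_update SET col_b = ? WHERE col_a = ?",
                [(v, k) for k, v in new_values.items()])
conn.commit()
```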

Informix trigger for deleting one record

When I perform a task, two rows get inserted in my table, i.e. a duplicate. I need to remove the duplicate using an AFTER INSERT trigger, deleting one of the two records. I need something like this:
CREATE TRIGGER del_rec
INSERT ON table1
AFTER(EXECUTE PROCEDURE del_proc());
CREATE PROCEDURE del_proc()
//check field a,b,c of this table already exists for this id. if yes delete the second one
END PROCEDURE;
For example:
table 1:
a b c d e
1 1 1 2 2
1 1 1 2 2
it should delete the second row.
Your table is misdesigned if duplicates can be inserted into it. You should have a unique constraint ensuring that it does not happen.
Assuming that you can't fix the table for some reason, then:
CREATE TRIGGER ins_table1
INSERT ON table1 REFERENCING NEW AS new
FOR EACH ROW (EXECUTE PROCEDURE ins_table1(new.a, new.b, new.c));
This assumes that columns a, b and c are sufficient to uniquely identify the row. I've renamed the trigger and procedure to more accurately reflect what/when they are relevant; del is not all that appropriate as a prefix for something called on INSERT.
CREATE PROCEDURE ins_table1(new_a INTEGER, new_b INTEGER, new_c INTEGER)
DEFINE l_a LIKE table1.a;
FOREACH SELECT a INTO l_a
FROM table1
WHERE a = new_a AND b = new_b AND c = new_c
RAISE EXCEPTION -271, -100;
END FOREACH;
END PROCEDURE;
This is called for each row that's inserted. If the SELECT statement returns a row, it will enter the body of the FOREACH loop, so the exception will be raised and the INSERT will be aborted with a more or less appropriate error (-271 Could not insert new row into the table; -100 ISAM error: duplicate value for a record with unique key).
If you try to do this validation with an AFTER trigger, you have to scan the entire table to see whether there are any duplicates, rather than just targeting the single key combination that was inserted. Note that in general, an INSERT can have multiple rows (think INSERT INTO Table SELECT * FROM SomeWhereElse). The performance difference will be dramatic! (Your query for an AFTER trigger would have to be something like SELECT a, b, c FROM table1 GROUP BY a, b, c HAVING COUNT(*) > 1.)
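For comparison, the same reject-duplicates-at-insert idea can be expressed in SQLite, where a BEFORE INSERT trigger can call RAISE(ABORT, ...). The table layout follows the question's example; this is a sketch of the technique, not Informix syntax:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (a INT, b INT, c INT, d INT, e INT);
-- Abort the insert when (a, b, c) already exists, mirroring the
-- Informix RAISE EXCEPTION approach above.
CREATE TRIGGER ins_table1 BEFORE INSERT ON table1
BEGIN
    SELECT RAISE(ABORT, 'duplicate (a, b, c)')
    WHERE EXISTS (SELECT 1 FROM table1
                  WHERE a = NEW.a AND b = NEW.b AND c = NEW.c);
END;
""")
conn.execute("INSERT INTO table1 VALUES (1, 1, 1, 2, 2)")

rejected = False
try:
    conn.execute("INSERT INTO table1 VALUES (1, 1, 1, 2, 2)")  # duplicate
except sqlite3.DatabaseError:
    rejected = True  # the trigger aborted the second insert
```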
Why not just use SELECT UNIQUE to avoid inserting duplicate values, or to remove duplicate values which already exist in the table?

Rolling rows in SQL table

I'd like to create an SQL table that holds no more than n rows of data. When a new row is inserted, the oldest row should be removed to make space for the new one.
Is there a typical way of handling this within SQLite? Or should I manage it with some outside (third-party) code?
Expanding on Alex's answer, and assuming you have an incrementing, non-repeating serial column on table t named serial, which can be used to determine the relative age of rows:
CREATE TRIGGER ten_rows_only AFTER INSERT ON t
BEGIN
DELETE FROM t WHERE serial <= (SELECT serial FROM t ORDER BY serial DESC LIMIT 10, 1);
END;
This will do nothing when you have fewer than ten rows, and will DELETE the lowest serial when an INSERT would push you to eleven rows.
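This trigger is easy to verify from Python's sqlite3 module. The sketch below assumes an AUTOINCREMENT serial column and inserts 15 rows to show that only the ten newest survive:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (serial INTEGER PRIMARY KEY AUTOINCREMENT, payload TEXT);
CREATE TRIGGER ten_rows_only AFTER INSERT ON t
BEGIN
    -- LIMIT 10, 1 means: skip the 10 newest serials, take the 11th
    DELETE FROM t WHERE serial <= (SELECT serial FROM t
                                   ORDER BY serial DESC LIMIT 10, 1);
END;
""")
for i in range(15):
    conn.execute("INSERT INTO t (payload) VALUES (?)", (f"row {i}",))

serials = [r[0] for r in conn.execute("SELECT serial FROM t ORDER BY serial")]
print(serials)  # [6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
```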
UPDATE
Here's a slightly more complicated case, where your table records "age" of row in a column which may contain duplicates, as for example a TIMESTAMP column tracking the insert times.
sqlite> .schema t
CREATE TABLE t (id VARCHAR(1) NOT NULL PRIMARY KEY, ts TIMESTAMP NOT NULL);
CREATE TRIGGER ten_rows_only AFTER INSERT ON t
BEGIN
DELETE FROM t WHERE id IN (SELECT id FROM t ORDER BY ts DESC LIMIT 10, -1);
END;
Here we take for granted that we cannot use id to determine relative age, so we delete everything after the first 10 rows ordered by timestamp. (SQLite imposes an arbitrary order on rows sharing the same ts).
It seems SQLite's support for triggers can suffice: http://www.sqlite.org/lang_createtrigger.html
This article on fixed-size queues in SQL (http://www.xaprb.com/blog/2007/01/11/how-to-implement-a-queue-in-sql) uses the same technique, and you should be able to adapt it to implement "rolling rows".
This is roughly how you would do it. It assumes that my_id_column is auto-incrementing and is the ordering column for the table.
-- handle rolls forward
-- deletes the oldest row
create trigger rollfwd after insert on my_table when (select count() from my_table) > max_table_size
begin
delete from my_table where my_id_column = (select min(my_id_column) from my_table);
end;
-- handle rolls back
-- inserts an empty row at the position before the oldest entry
-- assumes all columns are optional or defaulted
create trigger rollbk after delete on my_table when (select count() from my_table) < max_table_size
begin
insert into my_table (my_id_column) values ((select min(my_id_column) from my_table) - 1);
end;
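Since SQLite triggers cannot take parameters, max_table_size has to be hard-coded when trying this out. A runnable sketch of the roll-forward trigger with the limit fixed at 5:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE my_table (my_id_column INTEGER PRIMARY KEY AUTOINCREMENT, v TEXT);
-- max_table_size hard-coded to 5; SQLite triggers cannot be parameterized
CREATE TRIGGER rollfwd AFTER INSERT ON my_table
WHEN (SELECT COUNT(*) FROM my_table) > 5
BEGIN
    DELETE FROM my_table
    WHERE my_id_column = (SELECT MIN(my_id_column) FROM my_table);
END;
""")
for i in range(8):
    conn.execute("INSERT INTO my_table (v) VALUES (?)", (str(i),))

ids = [r[0] for r in conn.execute(
    "SELECT my_id_column FROM my_table ORDER BY my_id_column")]
print(ids)  # only the 5 newest rows remain: [4, 5, 6, 7, 8]
```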

Row number in Sybase tables

Sybase DB tables do not have a concept of self-updating row numbers. However, for one of the modules, I require a row number corresponding to each row in the table such that max(column) would always tell me the number of rows in the table.
I thought I'd introduce an int column and keep updating it to track the row number. However, I'm having problems updating this column on deletes. What SQL should I use in the delete trigger to maintain this column?
You can easily assign a unique number to each row by using an identity column. The identity can be a numeric or an integer (in ASE12+).
This will almost do what you require. There are certain circumstances in which you will get a gap in the identity sequence. (These are called "identity gaps", the best discussion on them is here). Also deletes will cause gaps in the sequence as you've identified.
Why do you need to use max(col) to get the number of rows in the table, when you could just use count(*)? If you're trying to get the last row from the table, then you can do
select * from table where column = (select max(column) from table).
Regarding the delete trigger to update a manually managed column, I think this would be a potential source of deadlocks, and many performance issues. Imagine you have 1 million rows in your table, and you delete row 1, that's 999999 rows you now have to update to subtract 1 from the id.
Delete trigger
CREATE TRIGGER tigger ON myTable FOR DELETE
AS
update myTable
set id = id - (select count(*) from deleted d where d.id < t.id)
from myTable t
To avoid locking problems
You could add an extra table (which joins to your primary table) like this:
CREATE TABLE rowCounter
(id int, -- foreign key to main table
rownum int)
... and use the rownum field from this table.
If you put the delete trigger on this table then you would hugely reduce the potential for locking problems.
Approximate solution?
Does the table need to keep its rownumbers up to date all the time?
If not, you could have a job which runs every minute or so, which checks for gaps in the rownum, and does an update.
Question: do the rownumbers have to reflect the order in which rows were inserted?
If not, you could do far fewer updates, but only updating the most recent rows, "moving" them into gaps.
Leave a comment if you would like me to post any SQL for these ideas.
I'm not sure why you would want to do this. You could experiment with temporary tables and SELECT INTO with an identity column, like below.
create table test
(
col1 int,
col2 varchar(3)
)
insert into test values (100, "abc")
insert into test values (111, "def")
insert into test values (222, "ghi")
insert into test values (300, "jkl")
insert into test values (400, "mno")
select rank = identity(10), col1 into #t1 from Test
select * from #t1
delete from test where col2="ghi"
select rank = identity(10), col1 into #t2 from Test
select * from #t2
drop table test
drop table #t1
drop table #t2
This would give you a dynamic id (of sorts).
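In databases with window functions, the same "dynamic id" can be regenerated on demand with ROW_NUMBER instead of a SELECT INTO temp table. A sketch of the idea using SQLite from Python, mirroring the test data above (the SQL dialect here is SQLite's, not Sybase's):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE test (col1 INT, col2 TEXT)")
cur.executemany("INSERT INTO test VALUES (?, ?)",
                [(100, "abc"), (111, "def"), (222, "ghi"),
                 (300, "jkl"), (400, "mno")])
cur.execute("DELETE FROM test WHERE col2 = 'ghi'")

# Regenerate a dense rank after the delete, like re-running SELECT INTO #t2
rows = cur.execute("""
    SELECT ROW_NUMBER() OVER (ORDER BY col1) AS rn, col1 FROM test
""").fetchall()
print(rows)  # [(1, 100), (2, 111), (3, 300), (4, 400)]
```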