I need to update column B in a table that has column A as its primary key, setting a different value of B for each value of A. About 50,000 rows need updating, which makes doing it manually impossible. Is there any other way to do it?
Of all the records in the table, I want to update just 50,000, and the new value is different for each record. How can I update the table without writing 50,000 UPDATE statements?
Column A    Column B
One         1
Two         2
Three       3
I want to update One=4, Two=5, and so on for about 50,000 rows.
Thanks in advance guys!
I'm not sure I've understood your requirement exactly, but the working snippet below replicates the scenario. Let me know if this helps.
--Drop any existing table if present with same name
DROP TABLE SIMPLE_UPDATE;
--Create new table
CREATE TABLE SIMPLE_UPDATE
(
COL1 NUMBER,
COL2 VARCHAR2(2000 CHAR)
);
-- Inserting random test data
INSERT INTO SIMPLE_UPDATE
SELECT LEVEL,TO_CHAR(TO_DATE(LEVEL,'J'),'JSP') FROM DUAL
CONNECT BY LEVEL < 500;
-- Update COL2, assuming here that the new value is COL1 plus 3
UPDATE SIMPLE_UPDATE
SET COL2 = COL1 + 3
WHERE <COL_NAME> = <CONDITION>;
COMMIT;
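Since the 50,000 new values presumably live in a file or another source, the set-based pattern is to load them into a staging table and issue a single UPDATE keyed on the primary key. A runnable sketch using Python's sqlite3 (the table and column names are illustrative; in Oracle the same idea would be a MERGE or a correlated UPDATE):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE simple_update (col1 TEXT PRIMARY KEY, col2 INTEGER)")
cur.executemany("INSERT INTO simple_update VALUES (?, ?)",
                [("One", 1), ("Two", 2), ("Three", 3)])

# Load the (key, new_value) pairs into a staging table in bulk ...
cur.execute("CREATE TABLE staging (col1 TEXT PRIMARY KEY, new_val INTEGER)")
cur.executemany("INSERT INTO staging VALUES (?, ?)",
                [("One", 4), ("Two", 5), ("Three", 6)])

# ... then one correlated UPDATE replaces 50,000 individual statements.
cur.execute("""
    UPDATE simple_update
    SET col2 = (SELECT new_val FROM staging
                WHERE staging.col1 = simple_update.col1)
    WHERE col1 IN (SELECT col1 FROM staging)
""")
conn.commit()
print(dict(cur.execute("SELECT col1, col2 FROM simple_update")))
```

The WHERE clause on the outer UPDATE matters: without it, rows missing from the staging table would be set to NULL.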
Related
I'm currently developing a project where I need to create a record in one table, leave the last column NULL, and update it later with the PK of a record in another table (to establish a link).
Table 1 is a table for courses and table 2 is a table for the feedback form for each course.
The user first creates the course, which is inserted into table 1; THEN they create the feedback form, which is inserted into table 2.
I wanted to use a PK+FK relation here, but I can't set the FK in table 1 because the record hasn't yet been created in table 2.
For example, table 1 has the columns:
id(int)(PK), column1(int), column2(int), linkColumn(int)
Table 2 has the columns:
id(int)(PK), column1(int),...
I need to be able to make a record in table 1 and set linkColumn to NULL initially.
Then I need to create a record in table 2 and update linkColumn in table 1 with the Primary key of the newly created record in table 2.
How would I go about this?
Thanks!
Edit: I'm using PHP as the SQL handler
Use an AFTER INSERT ... FOR EACH ROW trigger on Table2.
Which database are you using?
EDIT:
CREATE TRIGGER T_TABLE2_AI
AFTER INSERT ON TABLE2
FOR EACH ROW
BEGIN
  UPDATE Table1 SET linkColumn = :new.ID WHERE column1 = :new.column1;
END;
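Since the asker's database isn't known, here is the same idea as a runnable sketch in Python's sqlite3 (table and trigger names are illustrative; SQLite writes NEW.id where Oracle writes :new.ID):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE courses (id INTEGER PRIMARY KEY, name TEXT, linkColumn INTEGER)")
cur.execute("CREATE TABLE feedback (id INTEGER PRIMARY KEY, course_id INTEGER)")

# AFTER INSERT trigger: when a feedback row arrives, back-fill linkColumn
# on the matching course row.
cur.execute("""
    CREATE TRIGGER t_feedback_ai AFTER INSERT ON feedback
    BEGIN
        UPDATE courses SET linkColumn = NEW.id WHERE id = NEW.course_id;
    END
""")

cur.execute("INSERT INTO courses (name, linkColumn) VALUES ('SQL 101', NULL)")
course_id = cur.lastrowid
cur.execute("INSERT INTO feedback (course_id) VALUES (?)", (course_id,))
print(cur.execute("SELECT linkColumn FROM courses WHERE id = ?", (course_id,)).fetchone())
```

Since PHP is driving the SQL here, the alternative without a trigger is to read the new feedback id (e.g. the insert's last-insert-id) in PHP and run the UPDATE yourself in the same transaction.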
The question is: how to create a new column in Postgres based on existing columns.
A workaround was to create a unique row identifier, build a parallel table keyed on that identifier, compute the desired values there, and then replace row_3 based on the identifier. This is manual and not very efficient.
Assume the table structure is:
create table tab (
row_1 integer
, row_2 integer
, row_3 integer);
Assume the table has 1000 entries and row_1 and row_2 have legitimate values.
The question is: How can row_3 be updated to reflect the sum of row_1 and row_2 for the entire table. This should work for an arbitrary table.
If you want the "new" column to be up-to-date, then I would recommend using a view:
create view v_tab as
select row_1, row_2, (row_1 + row_2) as row_3
from tab;
(I experience cognitive dissonance when columns are referred to as "row". ;)
This will do the calculation when the table is queried, so the results are always consistent.
If you just want a one-time change to the values, then use UPDATE (add the column first only if it doesn't already exist):
ALTER TABLE tab ADD COLUMN row_3 INTEGER;  -- skip if row_3 already exists
UPDATE tab SET row_3 = row_1 + row_2;
If you want a NOT NULL constraint on row_3, add it after the UPDATE.
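Both options can be sketched in a runnable form, with Python's sqlite3 standing in for Postgres (the SQL is the same in this case):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE tab (row_1 INTEGER, row_2 INTEGER, row_3 INTEGER)")
cur.executemany("INSERT INTO tab (row_1, row_2) VALUES (?, ?)", [(1, 2), (10, 20)])

# Option 1: a view computes the sum on every read, so it can never go stale.
cur.execute("""
    CREATE VIEW v_tab AS
    SELECT row_1, row_2, (row_1 + row_2) AS row_3 FROM tab
""")

# Option 2: a one-time UPDATE materialises the sum into the real column.
cur.execute("UPDATE tab SET row_3 = row_1 + row_2")

print(cur.execute("SELECT row_3 FROM v_tab ORDER BY row_1").fetchall())
print(cur.execute("SELECT row_3 FROM tab ORDER BY row_1").fetchall())
```

In Postgres 12+ there is a third option between these two: a stored generated column (`GENERATED ALWAYS AS (row_1 + row_2) STORED`), which the database keeps up to date itself.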
So I have two tables in Oracle. Table A is the master table and Table B holds data retrieved from a contractor. They both have the same general structure. In the end, I would like to INSERT INTO TABLE_A (SELECT * FROM TABLE_B). However, Table B has no primary key column. How do you suggest creating a primary key that at the same time generates a sequence, starting from 4328, for every row in Table B?
I proceeded to do the following:
create sequence myseq
increment by 1
start with 4328
MAXVALUE 99999
NOCACHE
NOCYCLE;
Then I created a PK column and ran the following:
INSERT INTO TABLE_B (PK) VALUES (MYSEQ.nextval);
But that only populated a single row. I want every row populated, starting at 4328 and ending 291 rows later.
Sorry, but I'm not sure I understand your problem.
Do you want to insert one row into Table A and Table B with the same PK value?
You can do that in a procedure by putting the sequence value in a variable before inserting the rows, for example:
DECLARE
  v_SEQUENCE NUMBER;
BEGIN
  SELECT MYSEQ.nextval INTO v_SEQUENCE FROM dual;
  INSERT INTO table_A VALUES (v_SEQUENCE, 1, 2, 3);
  INSERT INTO table_B VALUES (v_SEQUENCE, 1, 2, 3);
END;
If you want to copy all rows from Table_B into Table_A with a generated PK, you can do, for example:
INSERT INTO TABLE_A (SELECT MYSEQ.nextval, B.* FROM TABLE_B B);
Is that what you need?
Your approach only calls the sequence once. What you want to do is loop in PL/SQL and call the sequence as many times as needed:
BEGIN
  FOR x IN 1 .. 291 LOOP
    INSERT INTO TABLE_B (PK) VALUES (MYSEQ.nextval);
  END LOOP;
END;
Make sure you drop and recreate your sequence to ensure it starts at the right value.
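A single set-based INSERT ... SELECT avoids the loop entirely. A runnable sketch with Python's sqlite3, using ROW_NUMBER() to play the role of the sequence starting at 4328 (table names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE table_b (val TEXT)")
cur.executemany("INSERT INTO table_b VALUES (?)", [("a",), ("b",), ("c",)])
cur.execute("CREATE TABLE table_a (pk INTEGER PRIMARY KEY, val TEXT)")

# One set-based INSERT ... SELECT numbers every source row, starting at 4328,
# instead of calling a sequence once per row in a loop.
cur.execute("""
    INSERT INTO table_a (pk, val)
    SELECT 4327 + ROW_NUMBER() OVER (ORDER BY rowid), val
    FROM table_b
""")
print(cur.execute("SELECT pk FROM table_a ORDER BY pk").fetchall())
```

In Oracle the equivalent is the first answer's INSERT INTO TABLE_A (SELECT MYSEQ.nextval, B.* FROM TABLE_B B), with the sequence started at 4328.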
I have this table:
Table1:
id    text
1     lala
I want to take the first row and copy it, but change the id from 1 to 2.
Can you help me with this problem?
A SQL table has no concept of "first" row. You can however select a row based on its characteristics. So, the following would work:
insert into Table1(id, text)
select 2, text
from Table1
where id = 1;
As another note, when creating the table, you can have the id column be auto-incremented. The syntax varies from database to database. If id were auto-incremented, then you could just do:
insert into Table1(text)
select text
from Table1
where id = 1;
And you would be confident that the new row would have a unique id.
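The auto-increment variant can be sketched in runnable form with Python's sqlite3, where INTEGER PRIMARY KEY plays the role of AUTO_INCREMENT/IDENTITY:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# INTEGER PRIMARY KEY auto-assigns ids in SQLite
# (AUTO_INCREMENT in MySQL, IDENTITY in SQL Server).
cur.execute("CREATE TABLE Table1 (id INTEGER PRIMARY KEY, text TEXT)")
cur.execute("INSERT INTO Table1 (text) VALUES ('lala')")

# Copy row 1; omitting id lets the database pick the next unique value.
cur.execute("INSERT INTO Table1 (text) SELECT text FROM Table1 WHERE id = 1")
print(cur.execute("SELECT id, text FROM Table1 ORDER BY id").fetchall())
```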
Kate - Gordon's answer is technically correct. However, I would like to know more about why you want to do this.
If your intent is to have the field increment with the insertion of each new row, manually setting the id column value isn't a great idea; it becomes very easy for two inserts to conflict by attempting to use the same id at the same time.
I would recommend using an IDENTITY field for this (MS SQL Server -- use an AUTO_INCREMENT field in MySQL). You could then do the insert as follows:
INSERT INTO Table1 (text)
SELECT text
FROM Table1
WHERE id = 1
SQL Server would automatically assign a new, unique value to the id field.
Sybase DB tables have no concept of self-updating row numbers. However, for one of my modules I need a row number for each row in the table, such that max(column) always gives the number of rows in the table.
I thought I'd introduce an int column and keep it updated to track the row number. However, I'm having trouble maintaining this column on deletes. What SQL should I use in a delete trigger to update it?
You can easily assign a unique number to each row by using an identity column. The identity can be a numeric or an integer (in ASE12+).
This will almost do what you require. There are certain circumstances in which you will get a gap in the identity sequence. (These are called "identity gaps", the best discussion on them is here). Also deletes will cause gaps in the sequence as you've identified.
Why do you need to use max(col) to get the number of rows in the table, when you could just use count(*)? If you're trying to get the last row from the table, then you can do
select * from table where column = (select max(column) from table).
Regarding the delete trigger to update a manually managed column, I think this would be a potential source of deadlocks, and many performance issues. Imagine you have 1 million rows in your table, and you delete row 1, that's 999999 rows you now have to update to subtract 1 from the id.
Delete trigger
CREATE TRIGGER myTable_del ON myTable FOR DELETE
AS
UPDATE myTable
SET id = id - (SELECT count(*) FROM deleted d WHERE d.id < t.id)
FROM myTable t
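SQLite fires delete triggers once per deleted row, so the same gap-closing idea can be sketched with a simple shift, runnable via Python's sqlite3 (names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE myTable (id INTEGER, payload TEXT)")
cur.executemany("INSERT INTO myTable VALUES (?, ?)",
                [(1, "a"), (2, "b"), (3, "c"), (4, "d")])

# After each single-row delete, close the gap by shifting higher ids down one.
cur.execute("""
    CREATE TRIGGER mytable_ad AFTER DELETE ON myTable
    BEGIN
        UPDATE myTable SET id = id - 1 WHERE id > OLD.id;
    END
""")
cur.execute("DELETE FROM myTable WHERE payload = 'b'")
print(cur.execute("SELECT id, payload FROM myTable ORDER BY id").fetchall())
```

Note the locking caveat above applies equally here: every delete still rewrites all higher-numbered rows.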
To avoid locking problems
You could add an extra table (which joins to your primary table) like this:
CREATE TABLE rowCounter
(
  id     int,  -- foreign key to main table
  rownum int
)
... and use the rownum field from this table.
If you put the delete trigger on this table then you would hugely reduce the potential for locking problems.
Approximate solution?
Does the table need to keep its rownumbers up to date all the time?
If not, you could have a job which runs every minute or so, which checks for gaps in the rownum, and does an update.
Question: do the rownumbers have to reflect the order in which rows were inserted?
If not, you could do far fewer updates, but only updating the most recent rows, "moving" them into gaps.
Leave a comment if you would like me to post any SQL for these ideas.
I'm not sure why you would want to do this. You could experiment with temporary tables and SELECT INTO with an identity column, as below.
create table test
(
col1 int,
col2 varchar(3)
)
insert into test values (100, "abc")
insert into test values (111, "def")
insert into test values (222, "ghi")
insert into test values (300, "jkl")
insert into test values (400, "mno")
select rank = identity(10), col1 into #t1 from Test
select * from #t1
delete from test where col2="ghi"
select rank = identity(10), col1 into #t2 from Test
select * from #t2
drop table test
drop table #t1
drop table #t2
This would give you a dynamic id (of sorts)
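On databases with window functions, ROW_NUMBER() gives the same dynamic numbering without rebuilding any temp tables. A runnable sketch with Python's sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE test (col1 INTEGER, col2 TEXT)")
cur.executemany("INSERT INTO test VALUES (?, ?)",
                [(100, "abc"), (111, "def"), (222, "ghi"),
                 (300, "jkl"), (400, "mno")])

# ROW_NUMBER() computes the rank at query time, so the numbering is
# gap-free even right after a delete -- no stored column to maintain.
cur.execute("DELETE FROM test WHERE col2 = 'ghi'")
rows = cur.execute("""
    SELECT ROW_NUMBER() OVER (ORDER BY col1) AS rn, col1 FROM test
""").fetchall()
print(rows)
```

Whether Sybase ASE supports window functions depends on the version, so the temp-table approach above remains the portable fallback there.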