I need to move some data from Environment A to Environment B. So far, so easy. But some of the columns have FK constraints, and unfortunately the lookup data already exists on Environment B with different PKs. Luckily, there are other unique columns I could map on. So I wonder whether SQL Developer has an export feature that allows me to replace certain column values with subqueries. To be clear, I'm looking for a SQL Developer feature, query, or similar that generates INSERT statements that look like this:
INSERT INTO table_name(col1, col2, fkcol)
VALUES('value', 'value', (SELECT id FROM lookup_table WHERE unique_value = 'unique'))
My best approach was to try to generate them by hand, like this:
SELECT
   'INSERT INTO table_name(col1, col2, fkcol) '
|| 'VALUES(''' || t.col1 || ''', ''' || t.col2 || ''', '
|| '(SELECT id FROM lookup_table WHERE unique_value = '''
|| l.unique_value || '''));'
FROM table_name t
JOIN lookup_table l ON t.fkcol = l.id;
Which is absolutely a pain. Do you know a better way to achieve this without touching the other DB?
Simply write a query that produces the required data (with the mapped key) using a join of both tables.
For example (see the sample data below), such a query maps the unique_value to the id:
select
tab.col1, tab.col2, lookup_table.id fkcol
from tab
join lookup_table
on tab.fkcol = lookup_table.unique_value
COL1 COL2 FKCOL
---------- ------ ----------
1 value1 11
2 value2 12
Now you can use SQL Developer's normal export feature in the INSERT format, which yields the following script if you want to transfer it to the other DB, or insert it directly with INSERT ... SELECT.
Insert into TABLE_NAME (COL1,COL2,FKCOL) values ('1','value1','11');
Insert into TABLE_NAME (COL1,COL2,FKCOL) values ('2','value2','12');
Sample Data
select * from tab;
COL1 COL2 FKCOL
---------- ------ -------
1 value1 unique
2 value2 unique2
select * from lookup_table
ID UNIQUE_
---------- -------
11 unique
12 unique2
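For a quick sanity check of the mapping join and the generated script, here is a sketch using SQLite through Python's sqlite3 module (the table and column names are taken from the sample data above; the string-building step at the end is my own addition, standing in for SQL Developer's INSERT export):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE lookup_table (id INTEGER, unique_value TEXT);
    INSERT INTO lookup_table VALUES (11, 'unique'), (12, 'unique2');
    CREATE TABLE tab (col1 TEXT, col2 TEXT, fkcol TEXT);
    INSERT INTO tab VALUES ('1', 'value1', 'unique'), ('2', 'value2', 'unique2');
""")

# Join the source rows to the lookup table to swap in the target key.
rows = con.execute("""
    SELECT t.col1, t.col2, l.id
    FROM tab t JOIN lookup_table l ON t.fkcol = l.unique_value
    ORDER BY t.col1
""").fetchall()

# Render each remapped row as an INSERT statement for the other environment.
inserts = [
    f"Insert into TABLE_NAME (COL1,COL2,FKCOL) values ('{c1}','{c2}','{fk}');"
    for c1, c2, fk in rows
]
for stmt in inserts:
    print(stmt)
```

With the sample data this prints the same two-line script shown above.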
Related
I have created a new table named NEW_TABLE like this:
Create table NEW_TABLE
(
Col_1 VARCHAR(50),
Col_2_ VARCHAR(50),
Col_3_ VARCHAR(50)
)
I am inserting values from the old tables like this:
INSERT INTO NEW_TABLE (Col_1)
SELECT Col_1
FROM OLD_TABLE_A
WHERE Col_1 IS NOT NULL;
INSERT INTO NEW_TABLE (Col_2)
SELECT Col_1
FROM OLD_TABLE_B
WHERE Col_1 IS NOT NULL;
When I look at NEW_TABLE, it shows the data like this:
Col_1 Col_2
----- -----
AA
BB
CC
XX
MM
ZZ
PP
CC
I am getting NULL values at the start of Col_2.
I want this:
Col_1 Col_2
----- -----
AA XX
BB MM
CC ZZ
PP
CC
I have to insert different columns at different times, separately. While inserting one column I do not want to touch the others.
INSERT creates new rows. If you want to fill Col_2 values where Col_1 is already filled, you need to use UPDATE or MERGE. But as mentioned in the comments, you need to know how to match Col_2 with Col_1. You haven't provided any join condition for the data, so people are guessing what you need. Please post some sample data from table A and table B, and how it should look in NEW_TABLE.
I think you need something like:
step1:
INSERT INTO NEW_TABLE (Col_1)
SELECT Col_1
FROM OLD_TABLE_A
WHERE Col_1 IS NOT NULL;
step2:
merge into NEW_TABLE n
using OLD_TABLE_B b
on (/*HERE PUT JOIN CONDITION*/)
when matched then update set n.col_2_ = b.col_1;
step3:
merge into NEW_TABLE n
using OLD_TABLE_C c
on (/*HERE PUT JOIN CONDITION*/)
when matched then update set n.col_3_ = c.col_1;
Since you stated in a comment that there is no relation between the columns, and that old_table_a and old_table_b have the same number of rows, this will work. I broke it into steps to make it easier to follow.
First establish the original table with a WITH clause. Then with another WITH clause, add an ID column which is the row number. Finally SELECT, joining on the ID (uncomment the INSERT line at the top when you are satisfied with the results).
Note the "ID" is meaningless as a true ID and serves only to match rows one for one in each table. If these tables have different numbers of rows you will get unexpected results but it meets your requirements.
SQL> --insert into new_table(col_1, col_2)
SQL> -- Set up the original old table A
SQL> with old_table_a(col_1) as (
select 'AA' from dual union
select 'BB' from dual union
select 'CC' from dual
),
-- Add the id, which is the row_number
ota_rn(id, col_1) as (
select row_number() over (order by col_1) as id, col_1
from old_table_a
),
-- Set up the original old table B
old_table_b(col_1) as (
select 'XX' from dual union
select 'YY' from dual union
select 'ZZ' from dual
),
-- Add the id, which is the row_number
otb_rn(id, col_1) as (
select row_number() over (order by col_1) as id, col_1
from old_table_b
)
-- Now join on the ID (which is really meaningless)
select a.col_1, b.col_1
from ota_rn a
join otb_rn b
on (a.id = b.id);
COL_1 COL_1
---------- ----------
AA XX
BB YY
CC ZZ
SQL>
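The pairing above can be cross-checked outside Oracle. Here is a minimal sketch using SQLite (3.25+ for window functions) through Python's sqlite3 module, with ROW_NUMBER() playing the same role as in the WITH clauses; the table and column names mirror the example:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE old_table_a (col_1 TEXT);
    INSERT INTO old_table_a VALUES ('AA'), ('BB'), ('CC');
    CREATE TABLE old_table_b (col_1 TEXT);
    INSERT INTO old_table_b VALUES ('XX'), ('YY'), ('ZZ');
""")

# Number the rows of each table, then join on that synthetic id.
pairs = con.execute("""
    WITH ota_rn AS (
        SELECT ROW_NUMBER() OVER (ORDER BY col_1) AS id, col_1 FROM old_table_a
    ),
    otb_rn AS (
        SELECT ROW_NUMBER() OVER (ORDER BY col_1) AS id, col_1 FROM old_table_b
    )
    SELECT a.col_1, b.col_1
    FROM ota_rn a JOIN otb_rn b ON a.id = b.id
    ORDER BY a.id
""").fetchall()
print(pairs)
```

As in the SQL*Plus run, the rows are matched purely by position, not by any real relationship.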
Update, before I even post the answer: I see from subsequent comments that you want to allow for adding additional columns, perhaps with differing numbers of rows. That calls for UPDATE, not INSERT, and unless you use the fake row_number ID method above, it really makes no sense in a true relational table. In that case this answer will not meet your needs, but I will leave it here in case you want to adapt it.
I suggest you reconsider your approach to your original problem, as this path will take you down a dark hole. You will have unrelated attributes in a table, which violates basic database design and makes selecting this data in the future problematic at best (how will you query the results? I'm curious how you will use this table). Maybe you should take a step back and start with some properly normalized tables. What's the real issue you are trying to solve? I bet there is a better way.
The second INSERT should be UPDATE, something like:
UPDATE NEW_TABLE
SET Col_2 = (SELECT Col_2
FROM OLD_TABLE
WHERE Col_1 = <selection value>
)
WHERE Col_1 = <selection value> ;
The basic answer is that you should
insert into NEW_TABLE (Col_1, Col_2)
select OLD_TABLE_A.Col_1, OLD_TABLE_B.Col_2
from OLD_TABLE_A, OLD_TABLE_B
where OLD_TABLE_A.Col_1 is not null
and OLD_TABLE_B.Col_2 is not null;
the problem is that you will then get
Col_1 Col_2
----- -----
AA XX
AA YY
AA ZZ
BB XX
BB YY
BB ZZ
CC XX
CC YY
CC ZZ
now the question you need to answer (that's what Dimitry asked in his comment) is how do you decide that you do not want the AA,YY, AA,ZZ, BB,XX, BB,ZZ, CC,XX and CC,YY ? Once you have an answer to this you can augment the where condition to remove them.
select min (case tab when 'A' then Col_1 end) as Col_1
,min (case tab when 'B' then Col_1 end) as Col_2
from ( SELECT 'A' as tab ,rownum as rn ,Col_1 FROM OLD_TABLE_A
union all SELECT 'B' ,rownum ,Col_1 FROM OLD_TABLE_B
)
group by rn
order by rn
;
OR
select min (Col_1) as Col_1
,min (Col_2) as Col_2
from ( SELECT 'A' as tab,rownum as rn,Col_1 ,null as Col_2 FROM OLD_TABLE_A
union all SELECT 'B' ,rownum ,null ,Col_1 FROM OLD_TABLE_B
)
group by rn
order by rn
;
OR
select a.Col_1
,b.Col_1 as Col_2
from (SELECT rownum as rn,Col_1 FROM OLD_TABLE_A) a
full join (SELECT rownum as rn,Col_1 FROM OLD_TABLE_B) b
on b.rn = a.rn
order by coalesce (a.rn,b.rn)
;
Results
+-------+-------+
| COL_1 | COL_2 |
+-------+-------+
| AA | XX |
+-------+-------+
| BB | MM |
+-------+-------+
| CC | ZZ |
+-------+-------+
| | PP |
+-------+-------+
| | CC |
+-------+-------+
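The first variant can be sketched on SQLite via Python. SQLite has no rownum, so ROW_NUMBER() OVER (ORDER BY rowid) stands in for it; the data matches the Results table above:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE old_table_a (col_1 TEXT);
    INSERT INTO old_table_a VALUES ('AA'), ('BB'), ('CC');
    CREATE TABLE old_table_b (col_1 TEXT);
    INSERT INTO old_table_b VALUES ('XX'), ('MM'), ('ZZ'), ('PP'), ('CC');
""")

# Tag and number the rows of both tables, union them, then pivot each
# row number back into one output row with MIN(CASE ...).
rows = con.execute("""
    SELECT MIN(CASE tab WHEN 'A' THEN col_1 END) AS col_1,
           MIN(CASE tab WHEN 'B' THEN col_1 END) AS col_2
    FROM (
        SELECT 'A' AS tab, ROW_NUMBER() OVER (ORDER BY rowid) AS rn, col_1
        FROM old_table_a
        UNION ALL
        SELECT 'B', ROW_NUMBER() OVER (ORDER BY rowid), col_1
        FROM old_table_b
    )
    GROUP BY rn
    ORDER BY rn
""").fetchall()
print(rows)
```

The trailing rows have no partner in old_table_a, so col_1 comes back NULL there, exactly as in the Results table.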
The problem as I see it is:
Fill any holes in Col_2 with one of each of the values from
OLD_TABLE_B, when you've run out of holes then add new rows.
Exactly the same technique should work to fill Col_3 from OLD_TABLE_C, and so on. Ideally the initial Col_1 from OLD_TABLE_A should also be able to use the technique, although it's a simple insert.
If you end up with an OLD_TABLE_B_PART_2 this should be able to be run against Col_2 later with the same technique.
The solution needs the following parts:
A MERGE statement, as you need to update where a row exists and otherwise insert.
To use a single MERGE per pass to update multiple rows, each with different values, you need a unique way of identifying the row for the ON clause. With no unique column(s) / primary key you need to use the ROWID pseudo-column. This is very efficient at targeting the row when we get to the UPDATE clause, as ROWID encodes the physical location of the row.
You need all the rows from OLD_TABLE and as many matching rows with holes from NEW_TABLE as you can find, so it's a LEFT OUTER JOIN. You could do some sort of UNION then aggregate the rows, but that would need an often-expensive GROUP BY, and you may need to discard an unknown number of surplus rows from NEW_TABLE.
To match a (potentially non-unique) row in OLD_TABLE with a unique hole in NEW_TABLE, both need a temporary matching ID. The ROWNUM pseudo-column does this cheaply.
Given the above, the following statement should work:
MERGE INTO NEW_TABLE
USING
( SELECT Col_1, ntid
FROM
( SELECT ROWNUM num, Col_1
FROM OLD_TABLE_B
WHERE Col_1 IS NOT NULL
) ot
LEFT OUTER JOIN
( SELECT ROWNUM num, ROWID ntid
FROM NEW_TABLE
WHERE Col_2 IS NULL
) nt ON nt.num=ot.num
) sel
ON (NEW_TABLE.ROWID=ntid)
WHEN MATCHED THEN
UPDATE SET Col_2=sel.Col_1
WHEN NOT MATCHED THEN
INSERT (Col_2) VALUES (sel.Col_1);
Check the execution plan before using this on big tables. I've seen the optimiser (in Oracle 12c) use a MERGE or HASH join against the target (NEW_TABLE) for the update rather than a plain USER ROWID access. In that case the workaround I used was to force NESTED LOOPS joins, i.e. add an optimisation hint to the MERGE at the start of the query: MERGE /*+ use_nl(NEW_TABLE) */. You may also need to check how it does the LEFT JOIN, depending on your data.
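SQLite has no MERGE statement, so as a rough cross-check of the logic above, the same pairing can be emulated with an UPDATE for the matched holes and an INSERT for the leftovers. This is a sketch via Python's sqlite3; the table names follow the question, and the temporary pairing table is my own device standing in for the MERGE's USING subquery:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE new_table (col_1 TEXT, col_2 TEXT);
    INSERT INTO new_table VALUES ('AA', NULL), ('BB', NULL);
    CREATE TABLE old_table_b (col_1 TEXT);
    INSERT INTO old_table_b VALUES ('XX'), ('MM'), ('ZZ');
""")

con.executescript("""
    -- Number the source rows and the target's holes, and pair them up.
    CREATE TEMP TABLE pairing AS
    SELECT ot.num, ot.col_1, nt.ntid
    FROM (SELECT ROW_NUMBER() OVER (ORDER BY rowid) AS num, col_1
          FROM old_table_b WHERE col_1 IS NOT NULL) ot
    LEFT JOIN (SELECT ROW_NUMBER() OVER (ORDER BY rowid) AS num, rowid AS ntid
               FROM new_table WHERE col_2 IS NULL) nt ON nt.num = ot.num;

    -- WHEN MATCHED: fill the holes.
    UPDATE new_table
    SET col_2 = (SELECT col_1 FROM pairing WHERE ntid = new_table.rowid)
    WHERE rowid IN (SELECT ntid FROM pairing WHERE ntid IS NOT NULL);

    -- WHEN NOT MATCHED: append the surplus source rows.
    INSERT INTO new_table (col_2)
    SELECT col_1 FROM pairing WHERE ntid IS NULL;
""")
rows = con.execute("SELECT col_1, col_2 FROM new_table ORDER BY rowid").fetchall()
print(rows)
```

Two holes get filled and the third source value becomes a new row, mirroring the MERGE's two branches.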
Create table NEW_TABLE
(
Col_1 VARCHAR(5),
Col_2_ VARCHAR(5),
Col_3_ VARCHAR(5)
);
Create table OLD_TABLE
(
Col_1 VARCHAR(5),
Col_2_ VARCHAR(5),
Col_3_ VARCHAR(5)
);
insert into old_table values ('AA','XX', null);
insert into old_table values ('BB','MM', null);
insert into old_table values ('CC','ZZ', null);
insert into old_table values (null,'PP', 'YYY');
insert into old_table values (null,'CC', 'XXX');
select * from old_table;
COL_1 COL_2 COL_3
----- ----- -----
AA XX
BB MM
CC ZZ
PP YYY
CC XXX
alter table new_table add (position number);
MERGE INTO new_table D
USING (select rownum position, old_table.* from old_table where col_1 is not null) S
ON (d.position = s.position)
WHEN MATCHED THEN UPDATE SET D.Col_1 = S.Col_1
WHEN NOT MATCHED THEN INSERT (d.position, D.Col_1)
VALUES (s.position, S.Col_1);
MERGE INTO new_table D
USING (select rownum position, old_table.* from old_table where col_2_ is not null) S
ON (d.position = s.position)
WHEN MATCHED THEN UPDATE SET D.Col_2_ = S.Col_2_
WHEN NOT MATCHED THEN INSERT (d.position, D.Col_2_)
VALUES (s.position,S.Col_2_);
MERGE INTO new_table D
USING (select rownum position, old_table.* from old_table where col_3_ is not null) S
ON (d.position = s.position)
WHEN MATCHED THEN UPDATE SET D.Col_3_ = S.Col_3_
WHEN NOT MATCHED THEN INSERT (d.position, D.Col_3_)
VALUES (s.position, S.Col_3_);
select * from new_table order by position;
COL_1 COL_2 COL_3 POSITION
----- ----- ----- ----------
AA XX YYY 1
BB MM XXX 2
CC ZZ 3
PP 4
CC 5
You can drop POSITION column from new_table after the operation if you wish.
Run the query below:
INSERT INTO NEW_TABLE (Col_1, Col_2)
( SELECT Col_1, Col_2
FROM OLD_TABLE_A
WHERE not (Col_1 IS NULL and Col_2 IS NULL))
You can't do it that way.
Try this:
INSERT INTO NEW_TABLE (Col_1, COL_2)
SELECT A.Col_1, B.Col_1
FROM (SELECT ROWNUM rn, Col_1 FROM OLD_TABLE_A WHERE Col_1 IS NOT NULL) A
FULL OUTER JOIN
     (SELECT ROWNUM rn, Col_1 FROM OLD_TABLE_B WHERE Col_1 IS NOT NULL) B
ON A.rn = B.rn;
SELECT POM.TABLE_NAME, POM.COLUMN_NAME
FROM ALL_TAB_COLUMNS POM
WHERE POM.COLUMN_NAME LIKE '%STATUS%'
I want to see all possible values of those columns (in one row if possible). How can I modify this select to do it?
I want something like this:
TABLE_NAME | COLUMN_NAME |VALUES
-----------| ----------- | -------
CAR | COLOR | RED,GREEN
You can use the query below for your requirement. It fetches the distinct column values for a table.
It can be used only for tables with a limited number of distinct values, as I have used the LISTAGG function.
SELECT POM.TABLE_NAME, POM.COLUMN_NAME,
XMLTYPE(DBMS_XMLGEN.GETXML('SELECT LISTAGG(COLUMN_NAME,'','') WITHIN GROUP (ORDER BY COLUMN_NAME) VAL
FROM (SELECT DISTINCT '|| POM.COLUMN_NAME ||' COLUMN_NAME
FROM '||POM.OWNER||'.'||POM.TABLE_NAME||')')
).EXTRACT('/ROWSET/ROW/VAL/text()').GETSTRINGVAL() VAL
FROM ALL_TAB_COLUMNS POM
WHERE POM.COLUMN_NAME LIKE '%STATUS%';
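Outside Oracle the inner aggregation is usually a one-liner; for instance, SQLite's group_concat(DISTINCT ...) plays the role of LISTAGG over a DISTINCT subquery. Here is a sketch via Python with a hypothetical CAR/COLOR table matching the desired output above:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE car (color TEXT);
    INSERT INTO car VALUES ('RED'), ('GREEN'), ('RED');
""")

# group_concat(DISTINCT ...) collapses the distinct values of the
# column into one comma-separated string, like the LISTAGG above.
val = con.execute("SELECT group_concat(DISTINCT color) FROM car").fetchone()[0]
print(val)
```

Note that group_concat makes no ordering guarantee, so treat the result as a set of values rather than a sorted list.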
I am using MERGE (Oracle) to update records that match criteria specified in the ON clause, joined to a virtual table created by a subquery. The statement has this form:
MERGE INTO table1 t1 USING (SELECT t2.f21, MAX(t2.f22), t3.f31, t3.f32
from
table2 t2, table3 t3
where
{... various join/filter criteria ...}
group by t2.f21, t3.f31, t3.f32) MATCHDATA
ON (t1.f11 = MATCHDATA.f21)
where t1.f12 = 'something';
Now the rub: MATCHDATA will return multiple rows, because the criteria by nature return multiple groups of matching records. So the GROUP BY along with MAX() is not buying me any guarantees. On the contrary, if I added:
where rownum = 1
after MATCHDATA (wrapping the MATCHDATA result in another SELECT that simply repeats the returned field names), I would limit myself to updating only the one record, in the one group that needs updating, that has the highest value as determined by MAX(). Instead, I need the records in table1 that match on the join field in each group's MAX() record to be updated. I started down the PARTITION BY path on Friday and am new to it, so I didn't make much headway, but it looks promising and maybe tomorrow will yield better results. As it is, when I try to use it without limiting the returned recordset in MATCHDATA to one record via "rownum = 1", I get that familiar "could not return a stable set of records" error that MERGE proponents (like myself) must smile sheepishly at when colleagues who were sold on this "better-than-correlated-subqueries" command come to them facing the same error.
As you can see, I am treating MERGE as the more successful brother of the correlated subquery. But is this a case where I should be looking back to the lesser of two weevils (i.e., use a correlated subquery instead) to get the job done? Or is the fix to be found in PARTITION BY or another modification to the above?
Thanks to all who take the time to offer their advice, I appreciate it.
I get that familiar "could not return a stable set of records" error
Because the join key you have used in the ON clause is not enough to make the row unique to perform the WHEN MATCHED THEN UPDATE statement.
You must include more keys in the ON clause until the matched rows are unique and thus returning a stable set of records.
Let's see a test case:
Set up
SQL> CREATE TABLE source_table (
2 col1 NUMBER,
3 col2 VARCHAR2(10),
4 col3 VARCHAR2(10)
5 );
Table created.
SQL>
SQL> INSERT INTO source_table (col1, col2, col3) VALUES (1, 'a', 'p');
1 row created.
SQL> INSERT INTO source_table (col1, col2, col3) VALUES (1, 'b', 'q');
1 row created.
SQL> INSERT INTO source_table (col1, col2, col3) VALUES (2, 'c', 'r');
1 row created.
SQL> INSERT INTO source_table (col1, col2, col3) VALUES (3, 'c', 's');
1 row created.
SQL>
SQL> COMMIT;
Commit complete.
SQL>
SQL> CREATE TABLE target_table (
2 col1 NUMBER,
3 col2 VARCHAR2(10),
4 col3 VARCHAR2(10)
5 );
Table created.
SQL>
SQL> INSERT INTO target_table (col1, col2, col3) VALUES (1, 'b', 'p');
1 row created.
SQL> INSERT INTO target_table (col1, col2, col3) VALUES (3, 'd', 'q');
1 row created.
SQL>
SQL> COMMIT;
Commit complete.
SQL>
SQL> SELECT * FROM source_table;
COL1 COL2 COL3
---------- ---------- ----------
1 a p
1 b q
2 c r
3 c s
SQL> SELECT * FROM target_table;
COL1 COL2 COL3
---------- ---------- ----------
1 b p
3 d q
SQL>
Error reproduce
SQL> MERGE INTO target_table trg
2 USING source_table src
3 ON (trg.col1 = src.col1) -- Not Unique
4 WHEN MATCHED THEN UPDATE SET
5 trg.col2 = src.col2,
6 trg.col3 = src.col3
7 WHEN NOT MATCHED THEN INSERT
8 (
9 col1,
10 col2,
11 col3
12 )
13 VALUES
14 (
15 src.col1,
16 src.col2,
17 src.col3
18 );
USING source_table src
*
ERROR at line 2:
ORA-30926: unable to get a stable set of rows in the source tables
SQL>
So, as expected we get the error ORA-30926: unable to get a stable set of rows in the source tables
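Before fixing the statement, it can help to probe the source for duplicate ON-clause keys with a GROUP BY ... HAVING check. Here is a sketch against the same sample rows, run on SQLite via Python (the use of SQLite is my own shortcut, not part of the original test case):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE source_table (col1 INTEGER, col2 TEXT, col3 TEXT);
    INSERT INTO source_table VALUES (1,'a','p'), (1,'b','q'), (2,'c','r'), (3,'c','s');
""")

# col1 alone is not unique, so merging ON col1 would be unstable.
dups_col1 = con.execute("""
    SELECT col1, COUNT(*) FROM source_table
    GROUP BY col1 HAVING COUNT(*) > 1
""").fetchall()

# (col1, col2) is unique, so it is a safe ON clause.
dups_pair = con.execute("""
    SELECT col1, col2, COUNT(*) FROM source_table
    GROUP BY col1, col2 HAVING COUNT(*) > 1
""").fetchall()
print(dups_col1, dups_pair)
```

An empty result from the second probe is exactly the "stable set of rows" condition the MERGE needs.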
Let's make the ON clause unique.
SQL> MERGE INTO target_table trg
2 USING source_table src
3 ON (trg.col1 = src.col1
4 AND
5 trg.col2 = src.col2) -- Unique
6 WHEN MATCHED THEN UPDATE SET
7 trg.col3 = src.col3
8 WHEN NOT MATCHED THEN INSERT
9 (
10 col1,
11 col2,
12 col3
13 )
14 VALUES
15 (
16 src.col1,
17 src.col2,
18 src.col3
19 );
4 rows merged.
SQL> SELECT * FROM target_table;
COL1 COL2 COL3
---------- ---------- ----------
1 b q
3 d q
2 c r
3 c s
1 a p
SQL>
Problem solved!
Remember, you cannot update the columns which are referenced in the ON clause.
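For comparison, engines without MERGE express the same fix through an upsert keyed on a unique constraint. In this sketch (SQLite 3.24+ via Python, same sample rows) the UNIQUE (col1, col2) constraint plays the role of the unique ON clause:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE target_table (col1 INTEGER, col2 TEXT, col3 TEXT,
                               UNIQUE (col1, col2));
    INSERT INTO target_table VALUES (1,'b','p'), (3,'d','q');
    CREATE TABLE source_table (col1 INTEGER, col2 TEXT, col3 TEXT);
    INSERT INTO source_table VALUES (1,'a','p'), (1,'b','q'), (2,'c','r'), (3,'c','s');
""")

# The unique (col1, col2) pair decides matched vs not matched,
# just like the corrected ON clause in the MERGE above.
con.execute("""
    INSERT INTO target_table (col1, col2, col3)
    SELECT col1, col2, col3 FROM source_table WHERE true
    ON CONFLICT (col1, col2) DO UPDATE SET col3 = excluded.col3
""")
rows = sorted(con.execute("SELECT * FROM target_table").fetchall())
print(rows)
```

The final contents match the post-MERGE target_table shown above: one row updated, three inserted.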
Let's say we have this table T2:
C1 C2 AMOUNT UF
-- -- ---------- ----------
A X 12 101
A Y 3 102
A Y 12 103
B X 7 104
B Y 9 105
I need to have the records in table1 that match on the join field in
each MAX() record for their group of records updated. I started on
Fri. down the PARTITION BY path and am new to that one, so didn't make
much headway.
This is good path and you can do this using function rank():
select * from (
select t2.*, rank() over (partition by c1 order by amount desc) rn from t2 )
where rn=1
C1 C2 AMOUNT UF RN
-- -- ---------- ---------- --
A X 12 101 1
A Y 12 103 1
B Y 9 105 1
But if your joining field for merge is only 'C1' then this set of records is not stable, because for C1='A'
we have two rows and Oracle looks sheepishly, it does not know which one interests you.
To resolve this you can use row_number() instead of rank(), if either row will do. But if it matters, you need something more in the order clause, for instance:
select * from (
select t2.*, rank() over (partition by c1 order by amount desc, c2) rn from t2 )
where rn = 1
C1 C2 AMOUNT UF RN
-- -- ---------- ---------- --
A X 12 101 1
B Y 9 105 1
This set of rows is stable, because for C1 there are no duplicates and you can use it in your merge.
merge into t1
using (
select * from (
select t2.*, rank() over (partition by c1 order by amount desc, c2) rn from t2 )
where rn=1) md
on (md.c1 = t1.c1)
when matched then update set t1.uf = md.uf
when not matched then insert (t1.c1, t1.uf)
values (md.c1, md.uf)
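The deduplication step can be checked in isolation. Here is a sketch of the rank()-with-tie-break query against the sample T2 data, run on SQLite (3.25+ for window functions) via Python:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t2 (c1 TEXT, c2 TEXT, amount INTEGER, uf INTEGER);
    INSERT INTO t2 VALUES ('A','X',12,101), ('A','Y',3,102), ('A','Y',12,103),
                          ('B','X',7,104), ('B','Y',9,105);
""")

# Adding c2 to the ORDER BY breaks the tie between the two amount=12
# rows in partition 'A', so each c1 keeps exactly one row.
stable = con.execute("""
    SELECT c1, c2, amount, uf FROM (
        SELECT t2.*, RANK() OVER (PARTITION BY c1
                                  ORDER BY amount DESC, c2) AS rn
        FROM t2
    ) WHERE rn = 1
    ORDER BY c1
""").fetchall()
print(stable)
```

One row per c1 survives, so a merge keyed on c1 against this result is stable.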
If I have to search for some data I can use wildcards and use a simple query -
SELECT * FROM TABLE WHERE COL1 LIKE '%test_string%'
And, if I have to look through many values I can use -
SELECT * FROM TABLE WHERE COL1 IN (Select col from AnotherTable)
But is it possible to use both together? That is, a query that doesn't just perform a WHERE IN but also performs something like WHERE LIKE: a query that doesn't just look through a set of values, but searches using wildcards through a set of values.
If this isn't clear I can give an example. Let me know. Thanks.
Example -
let's consider -
AnotherTable -
id | Col
------|------
1 | one
2 | two
3 | three
Table -
Col | Col1
------|------
aa | one
bb | two
cc | three
dd | four
ee | one_two
bb | three_two
Now, if I can use
SELECT * FROM TABLE WHERE COL1 IN (Select col from AnotherTable)
This gives me -
Col | Col1
------|------
aa | one
bb | two
cc | three
But what if I need -
Col | Col1
------|------
aa | one
bb | two
cc | three
ee | one_two
bb | three_two
I guess this should help you understand what I mean by using WHERE IN and LIKE together
SELECT *
FROM TABLE A
INNER JOIN AnotherTable B on
A.COL1 = B.col
WHERE COL1 LIKE '%test_string%'
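The join-on-a-pattern idea can be sketched with SQLite via Python (where || is the string concatenation operator); with the example data it returns exactly the rows requested above:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE another_table (id INTEGER, col TEXT);
    INSERT INTO another_table VALUES (1,'one'), (2,'two'), (3,'three');
    CREATE TABLE t (col TEXT, col1 TEXT);
    INSERT INTO t VALUES ('aa','one'), ('bb','two'), ('cc','three'),
                         ('dd','four'), ('ee','one_two'), ('bb','three_two');
""")

# Join on a LIKE pattern built from each lookup value instead of equality;
# DISTINCT collapses rows that match more than one pattern.
rows = con.execute("""
    SELECT DISTINCT t.col, t.col1
    FROM t JOIN another_table a ON t.col1 LIKE '%' || a.col || '%'
""").fetchall()
print(rows)
```

Only 'dd'/'four' drops out, since no lookup value appears inside it.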
Based on the example code provided, give this a try. The final select statement presents the data as you have requested.
create table #AnotherTable
(
ID int IDENTITY(1,1) not null primary key,
Col varchar(100)
);
INSERT INTO #AnotherTable(col) values('one')
INSERT INTO #AnotherTable(col) values('two')
INSERT INTO #AnotherTable(col) values('three')
create table #Table
(
Col varchar(100),
Col1 varchar(100)
);
INSERT INTO #Table(Col,Col1) values('aa','one')
INSERT INTO #Table(Col,Col1) values('bb','two')
INSERT INTO #Table(Col,Col1) values('cc','three')
INSERT INTO #Table(Col,Col1) values('dd','four')
INSERT INTO #Table(Col,Col1) values('ee','one_two')
INSERT INTO #Table(Col,Col1) values('ff','three_two')
SELECT * FROM #AnotherTable
SELECT * FROM #Table
SELECT * FROM #Table WHERE COL1 IN(Select col from #AnotherTable)
SELECT distinct A.*
FROM #Table A
INNER JOIN #AnotherTable B on
A.col1 LIKE '%'+B.Col+'%'
DROP TABLE #Table
DROP TABLE #AnotherTable
Yes. Use the keyword AND:
SELECT * FROM TABLE WHERE COL1 IN (Select col from AnotherTable) AND COL1 LIKE '%test_string%'
But in this case, you are probably better off using JOIN syntax:
SELECT TABLE.* FROM TABLE JOIN AnotherTable on TABLE.COL1 = AnotherTable.col WHERE TABLE.COL1 LIKE '%test_string%'
No, because each element in the LIKE clause needs the wildcard, and there's no way to do that with the IN clause.
The pattern matching operators are:
IN, against a list of values,
LIKE, against a pattern,
REGEXP/RLIKE against a regular expression (which includes both wildcards and alternatives, and is thus closest to "using wildcards through a set of values", e.g. (ab)+a|(ba)+b will match all strings aba...ba or bab...ab),
FIND_IN_SET to get the index of a string in a set (which is represented as a comma separated string),
SOUNDS LIKE to compare strings based on how they're pronounced and
MATCH ... AGAINST for full-text matching.
That's about it for string matching, though there are other string functions.
For the example, you could try joining on Table.Col1 LIKE CONCAT(AnotherTable.Col, '%'), though performance will probably be dreadful (assuming it works).
Try a cross join, so that you can compare every row in AnotherTable to every row in Table:
SELECT DISTINCT t.Col, t.Col1
FROM AnotherTable at
CROSS JOIN Table t
WHERE t.col1 LIKE ('%' + at.col + '%')
To make it safe, you'll need to escape wildcards in at.col. Try this answer for that.
If I understand the question correctly, you want the rows from "Table" where "Table.Col1" is IN "AnotherTable.Col", and you also want the rows where Col1 is LIKE '%some_string%'.
If so you want something like:
SELECT
t.*
FROM
[Table] t
LEFT JOIN
[AnotherTable] at ON t.Col1 = at.Col
WHERE (at.Col IS NOT NULL
OR t.Col1 LIKE '%some_string%')
Something like this?
SELECT * FROM TABLE
WHERE
COL1 IN (Select col from AnotherTable)
AND COL1 LIKE '%test_string%'
Are you thinking about something like EXISTS?
SELECT * FROM TABLE t WHERE EXISTS (SELECT col FROM AnotherTable t2 WHERE t2.col = t.col AND t2.col LIKE '%test_string%')
I have a table with some duplicate rows. I want to modify only the duplicate rows as follows.
Before:
id col1
------------
1 vvvv
2 vvvv
3 vvvv
After:
id col1
------------
1 vvvv
2 vvvv-2
3 vvvv-3
Col1 is appended with a hyphen and the value of id column.
This SQL will update only the duplicates, not the one with the lowest id:
update tbl
set col1 = col1 + '-' + convert(varchar, id)
where exists(select * from tbl t where t.col1 = tbl.col1 and t.id < tbl.id)
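The same statement carries over to other engines once the string concatenation is adjusted (T-SQL's + becomes || elsewhere). A sketch on SQLite via Python, using the sample rows from the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE tbl (id INTEGER, col1 TEXT);
    INSERT INTO tbl VALUES (1,'vvvv'), (2,'vvvv'), (3,'vvvv');
""")

# Append '-<id>' to every row that has an earlier duplicate of col1;
# the row with the lowest id never matches the EXISTS and stays untouched.
con.execute("""
    UPDATE tbl
    SET col1 = col1 || '-' || id
    WHERE EXISTS (SELECT 1 FROM tbl t
                  WHERE t.col1 = tbl.col1 AND t.id < tbl.id)
""")
rows = con.execute("SELECT id, col1 FROM tbl ORDER BY id").fetchall()
print(rows)
```

This produces the exact Before/After transformation shown in the question.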
Check it out in Oracle syntax (the query is not tested):
update table1 set
col1 = col1 || '-' || id
where
id not in (
select min(id) from table1
group by col1
)
You might be able to do that with a sproc and cursors. I don't think it's possible in any reasonable select query.
You can accomplish this with a 2 step process, although an SQL wizard could probably modify this to give you a solution in one step.
First you need to get all the duplicate values. Here is an SQL query that will do that:
SELECT COUNT(*) AS NumberOfDuplicates, col1
FROM Table1
GROUP BY col1
HAVING (COUNT(*) > 1)
This will give you a resultset listing the number of duplicates and the duplicate value.
In step 2 you would loop through this resultset, fetch the col1 value, return all the records containing that value and (possibly using a loop counter variable) alter the value as per your example.
Note: you don't really need to return the number of duplicates to achieve your goal, but it will help you to test the query and be satisfied that it works.