Suppose I have a table containing increasing and irregularly incremented numbers (3, 4, 7, 11, 16,...) in the first column col1.
I want to update col1 such that the numbers will be 1000 plus the row number. So something like:
UPDATE tab1 SET col1 = 1000 + row_number()
I am using Oracle SQL and would appreciate any help! Many thanks in advance.
In Oracle, the simplest method might be to create a sequence and use that:
create sequence temp_seq_rownum;
update tab1
set col1 = 1000 + temp_seq_rownum.nextval;
drop sequence temp_seq_rownum;
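If you prefer, the 1000 offset can be baked into the sequence itself; a minimal variant of the same idea:
create sequence temp_seq_rownum start with 1001;
update tab1
set col1 = temp_seq_rownum.nextval;
drop sequence temp_seq_rownum;
Note that in either version the order in which nextval is handed out to the rows is not guaranteed to follow the existing col1 order; if that order matters, see the merge approach in the next answer.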
At the time I am posting this, a different answer is already marked as "correct". That answer assigns values 1001, 1002 etc. with no regard to the pre-existing values in col1.
To me that makes no sense. The problem is more interesting if the OP actually meant what he wrote - the new values should follow the pre-existing values in col1, so that while the numbers 1001, 1002, 1003 etc. are not the same as the pre-existing values, they still preserve the pre-existing order.
In Oracle, row_number() is not allowed in a straight update statement (attempting it raises ORA-30483: window functions are not allowed here). Perhaps the simplest way to achieve this task is with a merge statement, as demonstrated below.
For testing I created a table with two columns holding equal values before the update, simply so that we can verify that the new values in col1 preserve the pre-existing order after the update. I start with the test table, then the merge statement, and finally a select statement to see what merge did.
create table t (col1, col2) as
select 3, 3 from dual union all
select 4, 4 from dual union all
select 16, 16 from dual union all
select 7, 7 from dual union all
select 1, 1 from dual
;
Table T created.
merge into t
using ( select rowid as rid, 1000 + row_number() over (order by col1) as val
from t) s
on (t.rowid = s.rid)
when matched then update set col1 = s.val;
5 rows merged.
select *
from t
order by col1
;
COL1 COL2
------- -------
1001 1
1002 3
1003 4
1004 7
1005 16
Is it possible for an IN statement to be dynamic, i.e. with a dynamic comma-separated list?
for example:
DATA=1
select * from dual
where
account_id in (*DATA);
DATA=2
select * from dual
where
account_id in (*DATA1,*DATA2);
FOR DATA=n
how will I make the IN statement dynamic/flexible (with commas) for an unknown quantity?
select * from dual
where
account_id in (*DATAn,*DATAn+1,etc);
A hierarchical query might help.
the acc CTE represents the sample data
lines #9 - 11 are what you might be looking for; the 'data' literal is concatenated with the level pseudocolumn's value returned by a hierarchical query
Here you go:
SQL> with acc (account_id) as
2 (select 'data1' from dual union all
3 select 'data2' from dual union all
4 select 'data3' from dual union all
5 select 'data4' from dual
6 )
7 select *
8 from acc
9 where account_id in (select 'data' || level
10 from dual
11 connect by level <= &n
12 );
Enter value for n: 1
ACCOU
-----
data1
SQL> /
Enter value for n: 3
ACCOU
-----
data1
data2
data3
SQL>
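The core trick here is the connect by row generator on dual; isolated (with n fixed at 3 for illustration), it is just:
select 'data' || level as account_id
from dual
connect by level <= 3;
which returns data1, data2 and data3, one per row.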
As far as I can see, you are using numbers in the WHERE clause; substitution variables will be enough to solve your problem.
See the example below:
CREATE table t(col1 number);
insert into t values(1);
insert into t values(2);
insert into t values(3);
-- Substitution variable initialization
define data1=1;
define data2='1,2';
--
-- With data1
select * from t where col1 in (&data1);
Output:
COL1
-------
1
-- With data2
select * from t where col1 in (&data2);
Output:
COL1
-------
1
2
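If you would rather be prompted for the list at run time than DEFINE it up front, a plain substitution variable works too, since the substitution is purely textual:
-- entering 1,2 at the "Enter value for data" prompt returns both rows
select * from t where col1 in (&data);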
Hope this will be helpful to you.
Cheers!
The basic problem is not the listagg function but a major misconception: just because the elements of a string list are comma separated does not mean that any string containing commas is a list. Not so. Consider a table with the following rows.
KEY
-----------
Data1
Data2
Data1,Data2
And the query: select * from table_name where key = 'wanted_key'; Now, if commas always separated independent elements, what value of 'wanted_key' would be needed to return only the 3rd row above? Even with the IN predicate, 'Data1,Data2' is still just 1 value, not 2. For 2 values it would have to be ('Data1','Data2').
The problem you're having with listagg is not the comma but that listagg is not the appropriate function. Listagg takes values from multiple rows and combines them into a single comma-separated string, which is not the same as a comma-separated list. Example:
with elements as
( select 'A' code, 'Data1' item from dual union all
select 'A', 'Data2' from dual union all
select 'A', 'Data3' from dual
)
select listagg( item, ',') within group (order by item)
from elements group by code;
Run as-is, this returns a single row containing the string Data1,Data2,Data3: one value, not three. (You might also want to try 'Data1,Data2' as a single element in the data. Watch out.)
What you require is a query that breaks out each element separately. This can be done with
with element_list as
(select 'Data1,Data2,Data3' items from dual) -- get parameter string
, parsed as
(select regexp_substr(items,'[^,]+',1,level) item
from element_list connect by regexp_substr(items,'[^,]+',1,level) is not null -- parse string to elements
)
The CTE "parsed" can now be used as table/view in your query.
This will not perform as well as querying directly with a parameter, but performance degradation is the cost of dynamic/flexible queries.
Also, as set up, this will NOT handle parameters which contain commas within an individual element. That would require much more code, as you would have to determine/design how to keep the comma in those elements.
I am creating a volatile table and trying to insert rows to the table. I can upload one row like below...
create volatile table Example
(
ProductID VARCHAR(15),
Price DECIMAL (15,2)
)
on commit preserve rows;
et;
INSERT INTO Example
Values
('Steve',4);
However, when I try to upload multiple I get the error:
"Syntax error: expected something between ')' and ','."
INSERT INTO Example
Values
('Steve',4),
('James',8);
As Gordon said, Teradata doesn't support VALUES with multiple rows (and a plain UNION ALL of SELECTs will fail because of the missing FROM).
You can utilize a Multi Statement Request (MSR) instead:
INSERT INTO Example Values('Steve',4)
;INSERT INTO Example Values('James',8)
;
If it's a BTEQ job, the INSERTs are submitted as one block after the final semicolon (when a new command starts on the same line as the previous semicolon, it is part of the MSR). In SQL Assistant or Studio you must submit it using F9 instead of F5.
I don't think Teradata supports the multiple row values syntax. Just use select:
INSERT INTO Example(ProductId, Price)
WITH dual as (SELECT 1 as x)
SELECT 'Steve' as ProductId, 4 as Price FROM dual UNION ALL
SELECT 'James' as ProductId, 8 as Price FROM dual;
CTE syntax (working):
insert into target_table1 (col1, col2)
with cte as (select 1 col1)
select 'value1', 'value2' from cte
union all
select 'value1a', 'value2a' from cte
;
CTE Syntax not working in Teradata
(error: expected something between ")" and the "insert" keyword)
with cte as (select 1 col1)
insert into target_table1 (col1, col2)
select 'value1', 'value2' from cte
union all
select 'value1a', 'value2a' from cte
;
I found a solution for this via RECURSIVE. It goes like this:
INSERT INTO table (col1, col2)
with recursive table (col1, col2) as
(select 'val1','val2' from table) -- 1
select 'val1','val2' from table -- 2
union all select 'val3','val4' from table
union all select 'val5','val6' from table;
The data on line 1 does not get inserted (but you need this line). Starting from line 2, the values you enter for val1, val2 etc. get inserted into the respective columns. Use as many UNION ALLs as there are rows you want to insert. Hope this helps :)
At least in our version of Teradata, we are not able to use an insert statement with a CTE. Instead, find a real table (preferably small in size) and do a top 1.
Insert Into OtherRealTable(x, y)
Select top 1
'x' as x,
'y' as y
FROM RealTable
create table dummy as (select '1' col1) with data;
INSERT INTO Student
(Name, Maths, Science, English)
SELECT 'Tilak', 90, 40, 60 from dummy union
SELECT 'Raj', 30, 20, 10 from dummy
;
Yes, you can try this:
INSERT INTO Student
SELECT Name, Maths, Science, English FROM JSON_Table
(ON (SELECT 1 id,cast('{"DataSet" : [
{"s":"m", "Name":"Tilak", "Maths":"90","Science":"40", "English":"60" },
{"s":"m", "Name":"Raj", "Maths":"30","Science":"20", "English":"10" }
]
}' AS json ) jsonCol)
USING rowexpr('$.DataSet[*]')
colexpr('[{"jsonpath":"$.s","type":"CHAR(1)"},{"jsonpath":"$.Name","type":"VARCHAR(30)"}, {"jsonpath":"$.Maths","type":"INTEGER"}, {"jsonpath":"$.Science","type":"INTEGER"}, {"jsonpath":"$.English","type":"INTEGER"}]')
) AS JT(id,State,Name, Maths, Science, English)
Context:
I have two tables: markettypewagerlimitgroups (mtwlg) and stakedistributionindicators (sdi). When a mtwlg is created, 2 rows are created in the sdi table which are linked to the mtwlg - each row with the same values bar 2, the id and another field (let's call it column X) which must contain a 0 for one row and 1 for the other.
There was a bug present in our codebase which prevented this happening automatically, so any mtwlg's created during the time that bug was present do not have the related sdi's, causing NPE's in various places.
To fix this, a patch needs to be written to loop through the mtwlg table and, for each ID, search the sdi table for the 2 related rows. If both rows are present, do nothing; if there is only 1 row, check whether its column X is a 0 or a 1, and insert a row with the other value; if neither row is present, insert them both. This needs to be done for every mtwlg, and a unique ID needs to be inserted too.
Pseudocode:
For each market type wager limit group ID
Check if there are 2 rows with that id in the stake distributions table, 1 where column X = 0 and one where column X = 1
if none
create 2 rows in the stake distributions table with unique id's; 1 for each X value
if one
create the missing row in the stake distributions table with a unique id
if 2
do nothing
If it helps at all - the patch will be applied using liquibase.
Anyone with any advice or thoughts as to if and how this will be possible to write in SQL/a liquibase patch?
Thanks in advance, let me know of any other information you need.
EDIT:
I've actually just been advised to do this using PL/SQL, do you have any thoughts/suggestions in regards to this?
Thanks again.
Oooooh, an excellent job for MERGE.
Here's your pseudo code again:
For each market type wager limit group ID
Check if there are 2 rows with that id in the stake distributions table,
1 where column X = 0 and one where column X = 1
if none
create 2 rows in the stake distributions table with unique id's;
1 for each X value
if one
create the missing row in the stake distributions table with a unique id
if 2
do nothing
Here's the MERGE variant (still pseudo-code'ish as I don't know how your data really looks):
MERGE INTO stake_distributions d
USING (
SELECT limit_group_id, 0 AS x
FROM market_type_wagers
UNION ALL
SELECT limit_group_id, 1 AS x
FROM market_type_wagers
) t
ON (
d.limit_group_id = t.limit_group_id AND d.x = t.x
)
WHEN NOT MATCHED THEN INSERT (d.limit_group_id, d.x)
VALUES (t.limit_group_id, t.x);
No loops, no PL/SQL, no conditional statements, just plain beautiful SQL.
Nice alternative suggested by Boneist in the comments uses a CROSS JOIN rather than UNION ALL in the USING clause, which is likely to perform better (unverified):
MERGE INTO stake_distributions d
USING (
SELECT w.limit_group_id, x.x
FROM market_type_wagers w
CROSS JOIN (
SELECT 0 AS x FROM DUAL
UNION ALL
SELECT 1 AS x FROM DUAL
) x
) t
ON (
d.limit_group_id = t.limit_group_id AND d.x = t.x
)
WHEN NOT MATCHED THEN INSERT (d.limit_group_id, d.x)
VALUES (t.limit_group_id, t.x);
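The question also asks for a unique ID on each inserted row; if that ID comes from a sequence, it can go straight into the insert clause. A sketch, assuming a sequence named stake_dist_seq and an id column (both hypothetical names):
MERGE INTO stake_distributions d
USING (
  SELECT w.limit_group_id, x.x
  FROM market_type_wagers w
  CROSS JOIN (
    SELECT 0 AS x FROM DUAL
    UNION ALL
    SELECT 1 AS x FROM DUAL
  ) x
) t
ON (
  d.limit_group_id = t.limit_group_id AND d.x = t.x
)
WHEN NOT MATCHED THEN INSERT (d.id, d.limit_group_id, d.x)
VALUES (stake_dist_seq.nextval, t.limit_group_id, t.x);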
Answer: you don't. There is absolutely no need to loop through anything - you can do it in a single insert. All you need to do is identify the rows that are missing, and then you just need to add them in.
Here is an example:
drop table t1;
drop table t2;
drop sequence t2_seq;
create table t1 (cola number,
colb number,
colc number);
create table t2 (id number,
cola number,
colb number,
colc number,
colx number);
create sequence t2_seq
START WITH 1
INCREMENT BY 1
MAXVALUE 99999999
MINVALUE 1
NOCYCLE
CACHE 20
NOORDER;
insert into t1 values (1, 10, 100);
insert into t2 values (t2_seq.nextval, 1, 10, 100, 0);
insert into t2 values (t2_seq.nextval, 1, 10, 100, 1);
insert into t1 values (2, 20, 200);
insert into t2 values (t2_seq.nextval, 2, 20, 200, 0);
insert into t1 values (3, 30, 300);
insert into t2 values (t2_seq.nextval, 3, 30, 300, 1);
insert into t1 values (4, 40, 400);
commit;
insert into t2 (id, cola, colb, colc, colx)
with dummy as (select 1 id from dual union all
select 0 id from dual)
select t2_seq.nextval,
t1.cola,
t1.colb,
t1.colc,
d.id
from t1
cross join dummy d
left outer join t2 on (t2.cola = t1.cola and d.id = t2.colx)
where t2.id is null;
commit;
select * from t2
order by t2.cola;
ID COLA COLB COLC COLX
---------- ---------- ---------- ---------- ----------
1 1 10 100 0
2 1 10 100 1
3 2 20 200 0
5 2 20 200 1
7 3 30 300 0
4 3 30 300 1
6 4 40 400 0
8 4 40 400 1
If the processing logic is too gnarly to be encapsulated in a single SQL statement, you may need to resort to cursor for loops and row types, which basically allow you to do things like the following:
DECLARE
r_mtwlg markettypewagerlimitgroups%ROWTYPE;
BEGIN
FOR r_mtwlg IN (
SELECT mtwlg.*
FROM markettypewagerlimitgroups mtwlg
)
LOOP
-- do stuff here
-- refer to elements of the current row like this
DBMS_OUTPUT.PUT_LINE(r_mtwlg.id);
END LOOP;
END;
/
You can obviously nest another loop inside this one that hits the stakedistributionindicators table, but I'll leave that as an exercise for you. You could also left join to stakedistributionindicators a couple of times in this first cursor, so that you only return rows that don't already have both an x=1 and an x=0; a sketch of that query follows.
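Something along these lines (a sketch; the stakedistributionindicators column names mtwlg_id, x and id are guesses based on the question):
SELECT mtwlg.*
FROM markettypewagerlimitgroups mtwlg
LEFT JOIN stakedistributionindicators sdi0
  ON sdi0.mtwlg_id = mtwlg.id AND sdi0.x = 0
LEFT JOIN stakedistributionindicators sdi1
  ON sdi1.mtwlg_id = mtwlg.id AND sdi1.x = 1
WHERE sdi0.id IS NULL OR sdi1.id IS NULL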
If you would rather write your logic in Java vs. PL/SQL, Liquibase allows you to create custom changes. The custom change points to a Java class you write that can do whatever logic you need. A simple example can be found here
I have a table in MS Access with rows which have a column called "repeat"
I want to SELECT all the rows, duplicated by their "repeat" column value.
For example, if repeat is 4, then I should return 4 rows of the same values. If repeat is 1, then I should return only one row.
This is very similar to this answer:
https://stackoverflow.com/a/6608143
Except I need a solution for MS Access.
First create a "Numbers" table and fill it with numbers from 1 to 1000 (or up to whatever value the "Repeat" column can have):
CREATE TABLE Numbers
( i INT NOT NULL PRIMARY KEY
) ;
INSERT INTO Numbers
(i)
VALUES
(1), (2), ..., (1000) ;
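Note that Access SQL does not actually accept a multi-row VALUES list like the one above, so the fill has to happen one row at a time or via a query; one common trick is a cartesian product over a small Digits table holding the values 0 through 9 (Digits and its column d are assumed names):
INSERT INTO Numbers (i)
SELECT h.d * 100 + t.d * 10 + u.d + 1
FROM Digits AS h, Digits AS t, Digits AS u;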
then you can use this:
SELECT t.*
FROM TableX AS t
INNER JOIN
     Numbers AS n
  ON n.i <= t.repeat ;
If repeat has only small values you can try:
select id, col1 from table where repeat > 0
union all
select id, col1 from table where repeat > 1
union all
select id, col1 from table where repeat > 2
union all
select id, col1 from table where repeat > 3
union all ....
What you can do is retrieve the one 'unique' row and then, in a for loop, copy that row/column value into a string as many times as you need.
I'd like to append rows stemming from another SQL query (on a different table) to a query result, e.g.:
SELECT mycol from mytable
# returns
mycol
1
4
6
SELECT anothercol from anothertable
#returns
anothercol
3
8
9
What I would like to obtain is:
myresult
1
4
6
3
8
9
Currently I do this kind of operation with statistical software packages, but I wonder if that is possible in MySQL somehow. It's often needed when merging time series from different sources. Is there a SQL way to do it?
Use the UNION statement.
It's something like:
(SELECT * FROM table WHERE column = 34)
UNION
(SELECT * FROM table WHERE column2 = 45);
Then you can add an ORDER BY at the end like:
(SELECT * FROM table WHERE column = 34)
UNION
(SELECT * FROM table WHERE column2 = 45)
ORDER BY column;
SELECT mycol
FROM mytable
UNION ALL
SELECT othercol
FROM othertable
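Note that without an ORDER BY the order of the combined rows is not guaranteed. If, as in the example, the rows from mytable must come first, you can tag each source and sort on the tag (wrap it in a subquery if you don't want the extra src column in the output):
SELECT mycol AS myresult, 1 AS src
FROM mytable
UNION ALL
SELECT anothercol, 2
FROM anothertable
ORDER BY src;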
This question might be helpful: Merge 2 tables for a SELECT query?