Oracle transpose a simple table - sql

I have a simple table from a select query that looks like this:
CATEGORY | EQUAL | LESS | GREATER
VALUE | 60 | 100 | 20
I want to be able to transpose it so it looks like this:
CATEGORY | VALUE
EQUAL | 60
LESS | 100
GREATER | 20
I tried using the PIVOT function in Oracle, but I can't seem to get it to work.
I've looked all over online but can't find anything that helps.
Any help is much appreciated, thank you!

Using UNPIVOT:
CREATE TABLE Table1 (
    CATEGORY varchar2(5),
    EQUAL int,
    LESS int,
    GREATER int
);

INSERT ALL
    INTO Table1 (CATEGORY, EQUAL, LESS, GREATER)
    VALUES ('VALUE', 60, 100, 20)
SELECT * FROM dual;

Query:
select COL as CATEGORY, VALUE
from table1
unpivot (VALUE for COL in (EQUAL, LESS, GREATER));

Result:
CATEGORY  VALUE
--------  -----
EQUAL        60
LESS        100
GREATER      20

You can use union all:
select 'EQUAL' as category, equal as value from t union all
select 'LESS' as category, less from t union all
select 'GREATER' as category, greater from t;
If you had a large table, you might want to try some other method (such as a lateral join in Oracle 12c). But for a small table, this is fine.
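For reference, here is a sketch of what that might look like in 12c, using a LATERAL inline view over the same hypothetical table t (one way to read "lateral join"; unverified against the asker's schema):

-- Sketch (Oracle 12c+): the LATERAL view generates the three category
-- rows per base row, correlated to t's columns.
select v.category, v.value
from t
cross join lateral (
    select 'EQUAL' as category, t.equal as value from dual
    union all
    select 'LESS', t.less from dual
    union all
    select 'GREATER', t.greater from dual
) v;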

You may unpivot your values with the help of DECODE and a CONNECT BY row generator:
select decode(myId, 1, 'EQUAL',
                    2, 'LESS',
                    3, 'GREATER') as category,
       decode(myId, 1, EQUAL,
                    2, LESS,
                    3, GREATER) as value
from mytable
cross join (select level as myId from dual connect by level <= 3);
CATEGORY VALUE
-------- -----
EQUAL 60
LESS 100
GREATER 20


Update numbers with a counter in SQL

Suppose I have a table whose first column, col1, contains increasing but irregularly incremented numbers (3, 4, 7, 11, 16, ...).
I want to update col1 so that the numbers become 1000 plus the row number. So something like:
UPDATE tab1 SET col1 = 1000 + row_number()
I am using Oracle SQL and would appreciate any help! Many thanks in advance.
In Oracle, the simplest method might be to create a sequence and use that:
create sequence temp_seq_rownum;
update tab1
set col1 = 1000 + temp_seq_rownum.nextval;
drop sequence temp_seq_rownum;
At the time I am posting this, a different answer is already marked as "correct". That answer assigns values 1001, 1002 etc. with no regard to the pre-existing values in col1.
To me that makes no sense. The problem is more interesting if the OP actually meant what he wrote - the new values should follow the pre-existing values in col1, so that while the numbers 1001, 1002, 1003 etc. are not the same as the pre-existing values, they still preserve the pre-existing order.
In Oracle, row_number() is not allowed in a straight update statement. Perhaps the simplest way to achieve this task is with a merge statement, as demonstrated below.
For testing I created a table with two columns, with equal values before the update, simply so that we can verify that the new values in col1 preserve the pre-existing order after the update. I start with the test table, then the merge statement and a select statement to see what merge did.
create table t (col1, col2) as
select 3, 3 from dual union all
select 4, 4 from dual union all
select 16, 16 from dual union all
select 7, 7 from dual union all
select 1, 1 from dual
;
Table T created.
merge into t
using ( select rowid as rid, 1000 + row_number() over (order by col1) as val
from t) s
on (t.rowid = s.rid)
when matched then update set col1 = val;
5 rows merged.
select *
from t
order by col1
;
COL1 COL2
------- -------
1001 1
1002 3
1003 4
1004 7
1005 16

Pivot with column name in Postgres

I have the following table tbl:
column1 | column2  | column3
--------+----------+---------
1       | 'value1' | 3
2       | 'value2' | 4
How do I "pivot" with column names to produce output like this:
column1 | 1        | 2
column2 | 'value1' | 'value2'
column3 | 3        | 4
As has been commented, the issue of data types is undefined in the question.
If you are OK with all result columns being type text (every data type can be converted to text), you can use one of these:
Plain SQL
WITH cte AS (
   SELECT nu.*
   FROM   tbl t
        , LATERAL (
      VALUES
         (1, t.column1::text)
       , (2, t.column2)
       , (3, t.column3::text)
      ) nu(rn, c)
   )
SELECT *
FROM  (TABLE cte OFFSET 0 LIMIT 3) c1
JOIN  (TABLE cte OFFSET 3 LIMIT 3) c2 USING (rn);
The same with useful column names:
WITH cte AS (
   SELECT nu.*
   FROM   tbl t
        , LATERAL (
      VALUES
         ('column1', t.column1::text)
       , ('column2', t.column2)
       , ('column3', t.column3::text)
      ) nu(rn, c)
   )
SELECT * FROM (
   SELECT *
   FROM  (TABLE cte OFFSET 0 LIMIT 3) c1
   JOIN  (TABLE cte OFFSET 3 LIMIT 3) c2 USING (rn)
   ) t (key, row1, row2);
Works in any modern version of Postgres.
The SQL string has to be adapted to the number of rows and columns. See fiddles below!
Using a document type as a stepping stone
Makes for shorter code.
With many rows and many columns, performance of the SQL solution may scale better because the intermediate derived table is smaller.
(The approach is limited, as you can't have more than ~ 1600 table columns in Postgres.)
Since everything is converted to text anyway, hstore seems most efficient. See:
Key value pair in PostgreSQL
SELECT key
     , arr[1] AS row1
     , arr[2] AS row2
FROM  (
   SELECT x.key, array_agg(x.value) AS arr
   FROM   tbl t, each(hstore(t)) x
   GROUP  BY 1
   ) sub
ORDER  BY 1;
Strictly speaking, we would have to enforce the right sort order in array_agg(), but that should work without an explicit ORDER BY. To be absolutely sure, you can add one: array_agg(x.value ORDER BY t.ctid), using ctid for lack of other ordering information.
You can do the same with JSON functions (Postgres 9.3+). Just replace each(hstore(t)) with json_each_text(row_to_json(t)). The rest is identical.
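Spelled out, here is a sketch of the query with that substitution applied (Postgres 9.3+; otherwise identical to the hstore version above):

SELECT key
     , arr[1] AS row1
     , arr[2] AS row2
FROM  (
   -- json_each_text() expands each row into (key, value) text pairs
   SELECT x.key, array_agg(x.value) AS arr
   FROM   tbl t, json_each_text(row_to_json(t)) x
   GROUP  BY 1
   ) sub
ORDER  BY 1;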
These fiddles demonstrate how to scale each query:
Original example with 2 rows of 3 columns:
db<>fiddle here
Scaled up to 3 rows of 4 columns:
db<>fiddle here

Oracle SQL - How can I write an insert statement that is conditional and looped?

Context:
I have two tables: markettypewagerlimitgroups (mtwlg) and stakedistributionindicators (sdi). When a mtwlg is created, 2 rows are created in the sdi table which are linked to the mtwlg - each row with the same values except two: the id, and another field (let's call it column X) which must contain a 0 for one row and a 1 for the other.
There was a bug in our codebase which prevented this happening automatically, so any mtwlgs created while that bug was present do not have the related sdi rows, causing NPEs in various places.
To fix this, a patch needs to be written to loop through the mtwlg table and, for each ID, search the sdi table for the 2 related rows. If both rows are present, do nothing; if only 1 row is present, check whether its column X is a 0 or a 1, and insert a row with the other value; if neither row is present, insert them both. This needs to be done for every mtwlg, and a unique ID needs to be inserted too.
Pseudocode:
For each market type wager limit group ID
Check if there are 2 rows with that id in the stake distributions table, 1 where column X = 0 and one where column X = 1
if none
create 2 rows in the stake distributions table with unique id's; 1 for each X value
if one
create the missing row in the stake distributions table with a unique id
if 2
do nothing
If it helps at all: the patch will be applied using Liquibase.
Does anyone have any advice or thoughts on whether and how this can be written in SQL / as a Liquibase patch?
Thanks in advance; let me know of any other information you need.
EDIT:
I've actually just been advised to do this using PL/SQL. Do you have any thoughts/suggestions regarding this?
Thanks again.
Oooooh, an excellent job for MERGE.
Here's your pseudo code again:
For each market type wager limit group ID
Check if there are 2 rows with that id in the stake distributions table,
1 where column X = 0 and one where column X = 1
if none
create 2 rows in the stake distributions table with unique id's;
1 for each X value
if one
create the missing row in the stake distributions table with a unique id
if 2
do nothing
Here's the MERGE variant (still pseudo-code'ish as I don't know how your data really looks):
MERGE INTO stake_distributions d
USING (
  SELECT limit_group_id, 0 AS x
  FROM   market_type_wagers
  UNION ALL
  SELECT limit_group_id, 1 AS x
  FROM   market_type_wagers
) t
ON (
  d.limit_group_id = t.limit_group_id AND d.x = t.x
)
WHEN NOT MATCHED THEN INSERT (d.limit_group_id, d.x)
VALUES (t.limit_group_id, t.x);
No loops, no PL/SQL, no conditional statements, just plain beautiful SQL.
A nice alternative, suggested by Boneist in the comments, uses a CROSS JOIN rather than UNION ALL in the USING clause, which is likely to perform better (unverified):
MERGE INTO stake_distributions d
USING (
  SELECT w.limit_group_id, x.x
  FROM   market_type_wagers w
  CROSS JOIN (
    SELECT 0 AS x FROM DUAL
    UNION ALL
    SELECT 1 AS x FROM DUAL
  ) x
) t
ON (
  d.limit_group_id = t.limit_group_id AND d.x = t.x
)
WHEN NOT MATCHED THEN INSERT (d.limit_group_id, d.x)
VALUES (t.limit_group_id, t.x);
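Since the question mentions the patch will be applied using Liquibase, one option is to ship the MERGE as a Liquibase formatted-SQL changeset. A sketch (the author and changeset id are placeholders):

--liquibase formatted sql
--changeset yourname:backfill-missing-stake-distribution-rows
MERGE INTO stake_distributions d
USING (
  SELECT limit_group_id, 0 AS x FROM market_type_wagers
  UNION ALL
  SELECT limit_group_id, 1 AS x FROM market_type_wagers
) t
ON (d.limit_group_id = t.limit_group_id AND d.x = t.x)
WHEN NOT MATCHED THEN INSERT (d.limit_group_id, d.x)
VALUES (t.limit_group_id, t.x);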
Answer: you don't. There is absolutely no need to loop through anything - you can do it in a single insert. All you need to do is identify the rows that are missing, and then you just need to add them in.
Here is an example:
drop table t1;
drop table t2;
drop sequence t2_seq;
create table t1 (cola number,
colb number,
colc number);
create table t2 (id number,
cola number,
colb number,
colc number,
colx number);
create sequence t2_seq
START WITH 1
INCREMENT BY 1
MAXVALUE 99999999
MINVALUE 1
NOCYCLE
CACHE 20
NOORDER;
insert into t1 values (1, 10, 100);
insert into t2 values (t2_seq.nextval, 1, 10, 100, 0);
insert into t2 values (t2_seq.nextval, 1, 10, 100, 1);
insert into t1 values (2, 20, 200);
insert into t2 values (t2_seq.nextval, 2, 20, 200, 0);
insert into t1 values (3, 30, 300);
insert into t2 values (t2_seq.nextval, 3, 30, 300, 1);
insert into t1 values (4, 40, 400);
commit;
insert into t2 (id, cola, colb, colc, colx)
with dummy as (select 1 id from dual union all
select 0 id from dual)
select t2_seq.nextval,
t1.cola,
t1.colb,
t1.colc,
d.id
from t1
cross join dummy d
left outer join t2 on (t2.cola = t1.cola and d.id = t2.colx)
where t2.id is null;
commit;
select * from t2
order by t2.cola;
ID COLA COLB COLC COLX
---------- ---------- ---------- ---------- ----------
1 1 10 100 0
2 1 10 100 1
3 2 20 200 0
5 2 20 200 1
7 3 30 300 0
4 3 30 300 1
6 4 40 400 0
8 4 40 400 1
If the processing logic is too gnarly to be encapsulated in a single SQL statement, you may need to resort to cursor FOR loops and row types, which basically allow you to do things like the following:
DECLARE
r_mtwlg markettypewagerlimitgroups%ROWTYPE;
BEGIN
FOR r_mtwlg IN (
SELECT mtwlg.*
FROM markettypewagerlimitgroups mtwlg
)
LOOP
-- do stuff here
-- refer to elements of the current row like this
DBMS_OUTPUT.PUT_LINE(r_mtwlg.id);
END LOOP;
END;
/
You can obviously nest another loop inside this one that hits the stakedistributionindicators table, but I'll leave that as an exercise for you. You could also left join to stakedistributionindicators a couple of times in this first cursor, so that you only return rows that don't already have both an x=1 and an x=0 row; a sketch of that query follows.
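For illustration, a sketch of that cursor query with the two left joins; the linking column names (mtwlg_id, x) are assumptions, since the real schema isn't shown:

SELECT mtwlg.*,
       sdi0.id AS sdi0_id,  -- NULL when the x = 0 row is missing
       sdi1.id AS sdi1_id   -- NULL when the x = 1 row is missing
FROM markettypewagerlimitgroups mtwlg
LEFT JOIN stakedistributionindicators sdi0
       ON sdi0.mtwlg_id = mtwlg.id AND sdi0.x = 0  -- hypothetical column names
LEFT JOIN stakedistributionindicators sdi1
       ON sdi1.mtwlg_id = mtwlg.id AND sdi1.x = 1
WHERE sdi0.id IS NULL
   OR sdi1.id IS NULL;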
If you would rather write your logic in Java instead of PL/SQL, Liquibase allows you to create custom changes. The custom change points to a Java class you write that can do whatever logic you need. A simple example can be found in the Liquibase documentation on custom changes.

Creating a view from a set of results

I have several tables all holding small amounts of data on a batch of product. For example, I have a table (called 'Tests') that holds a test number, a test name and the description. This is referenced by my batch table, which holds a test number, test result (as a real) and the batch number itself.
Some batches may have 50 tests, some may have 30, some may have as little as 1.
I was hoping to create a view that converts something like these tables:
BatchNumber TestNum TestResult | TestNumber TestName TestDesc
----------- -------- ----------- | ----------- --------- ---------
1000 1 1.20 | 1 Thickness How thick the product is
1001 1 1.30 | 2 Colour What colour the product is
1001 2 45.1 | 3 Weight How heavy the product is
...
to the following:
BatchNumber Thickness Colour Weight
------------ --------- ------ -------
1000 1.20 NULL NULL
1001 1.30 45.1 NULL
...
The NULLs could just be blank (that would probably be better, in fact); I only used them to better show my requirement.
I've found many articles online on the benefits of PIVOTing, UNPIVOTing and UNIONing, but none show the direct benefit or provide a clear and succinct way of using the data without copying it into a new table, which isn't really useful for my need. I was hoping a view would be possible so that end-user applications can just call that instead of doing the joins locally.
I hope that makes sense, and thank you!
You need a crosstab query for this.
http://www.mssqltips.com/sqlservertip/1019/crosstab-queries-using-pivot-in-sql-server/
DECLARE @Tests TABLE (TestNumber int, TestName varchar(100), TestDesc varchar(100))
INSERT INTO @Tests
SELECT 1, 'Thickness', '' UNION ALL
SELECT 2, 'Color', '' UNION ALL
SELECT 3, 'Weight', ''

DECLARE @BTests TABLE (BatchNum int, TestNumber int, TestResult float)
INSERT INTO @BTests
SELECT 1000, 1, 1.20 UNION ALL
SELECT 1001, 1, 1.30 UNION ALL
SELECT 1001, 2, 45.1

;WITH cte AS (
    SELECT b.*, t.TestName
    FROM @BTests b
    INNER JOIN @Tests t ON b.TestNumber = t.TestNumber
)
SELECT BatchNum, [Thickness] AS Thickness, [Color] AS Color, [Weight] AS [Weight]
FROM (
    SELECT BatchNum, TestName, TestResult
    FROM cte
) ps
PIVOT (
    SUM(TestResult)
    FOR TestName IN ([Thickness], [Color], [Weight])
) AS pvt
CREATE VIEW?
e.g.
CREATE VIEW myview AS
SELECT somecolumns from sometable
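Putting the two answers together: a sketch of the crosstab wrapped in a view. It assumes the data lives in permanent tables (hypothetically named Tests and BatchTests here), since a view cannot reference table variables:

-- Hypothetical view name and table names; adjust to the real schema.
CREATE VIEW BatchResults AS
SELECT BatchNum, [Thickness], [Colour], [Weight]
FROM (
    SELECT b.BatchNum, t.TestName, b.TestResult
    FROM BatchTests b
    INNER JOIN Tests t ON b.TestNumber = t.TestNumber
) ps
PIVOT (
    SUM(TestResult) FOR TestName IN ([Thickness], [Colour], [Weight])
) AS pvt;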

create a table of duplicated rows of another table using the select statement

I have a table with one column containing different integers.
For each integer in the table, I would like to duplicate it as many times as it has digits.
For example:
12345 (5 digits):
1. 12345
2. 12345
3. 12345
4. 12345
5. 12345
I thought of doing it using with recursive t (...) as (...), but I didn't manage, since I don't really understand how it works and what happens "behind the scenes".
I don't want to use insert because I want it to be scalable and automatic for as many integers as needed in a table.
Any thoughts and an explanation would be great.
The easiest way is to join to a table with numbers from 1 to n in it.
SELECT n, x
FROM yourtable
JOIN (
    SELECT day_of_calendar AS n
    FROM sys_calendar.CALENDAR
    WHERE n BETWEEN 1 AND 12 -- maximum number of digits
) AS dt
    ON n <= CHAR_LENGTH(TRIM(ABS(x)))
In my example I abused Teradata's built-in calendar, but that's not a good choice: the optimizer doesn't know how many rows will be returned, and since the plan must be a product join, it might decide to do something stupid. So better to use a numbers table...
Create a numbers table that will contain the integers from 1 to the maximum number of digits that the numbers in your table will have (I went with 6):
create table numbers(num int)
insert numbers
select 1 union select 2 union select 3 union select 4 union select 5 union select 6
You already have your table (but here's what I was using to test):
create table your_table(num int)
insert your_table
select 12345 union select 678
Here's the query to get your results:
select ROW_NUMBER() over (partition by b.num order by b.num) as row_num,
       b.num,
       LEN(cast(b.num as char)) as num_digits
into #temp
from your_table b
cross join numbers n

select t.num
from #temp t
where t.row_num <= t.num_digits
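As an aside, the #temp table isn't strictly required; a sketch of the same idea as a single query, using the same assumed tables as above:

-- One-step variant: the join condition filters directly on digit count.
select b.num
from your_table b
join numbers n
  on n.num <= LEN(cast(b.num as char));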
I found a nice way to perform this action. Here goes:
with recursive t (num, num_as_char, char_n) as
(
    -- anchor row: the full number as a string, plus its first digit
    select num
         , cast(num as varchar(100)) as num_as_char
         , substr(num_as_char, 1, 1)   -- Teradata allows reusing an alias in the same select list
    from numbers
    union all
    -- recursive step: strip the first character, yielding one row per digit
    select num
         , substr(t.num_as_char, 2) as num_as_char2
         , substr(num_as_char2, 1, 1)
    from t
    where char_length(num_as_char2) > 0   -- stop once the string is consumed
)
select *
from t
order by num, char_length(num_as_char) desc;