I need a trial balance from the following tables:
Table: Journal

| ID | TransactionDate | AccountCodeLevel1VarChar | AccountCodeLevel2VarChar | AccountCodeLevel3VarChar | AccountCodeLevel4VarChar | AccountCodeLevel5VarChar | AccountCodeLevel6VarChar | DebitAmountDecimal | CreditAmountDecimal | DescriptionVarchar |
|----|-----------------|--------------------------|--------------------------|--------------------------|--------------------------|--------------------------|--------------------------|--------------------|---------------------|--------------------|
| 1  | 2022-1-1 | 1 | 01 | 01 | 01 | 001 | 001 | 1      | (null) | Received cash from issuing common stocks |
| 2  | 2022-1-1 | 3 | 01 | 01 | 01 | 001 | 001 | (null) | 1      | Received cash from issuing common stocks |
| 3  | 2022-1-1 | 1 | 01 | 01 | 01 | 001 | 001 | 2      | (null) | Received cash borrowed from bank |
| 4  | 2022-1-1 | 2 | 01 | 01 | 01 | 001 | 001 | (null) | 2      | Received cash borrowed from bank |
| 5  | 2022-1-1 | 1 | 01 | 01 | 01 | 001 | 001 | 3      | (null) | Received cash borrowed from bank |
| 6  | 2022-1-1 | 2 | 01 | 01 | 01 | 001 | 001 | (null) | 3      | Received cash borrowed from bank |
| 7  | 2022-1-1 | 1 | 01 | 01 | 01 | 001 | 001 | 4      | (null) | Received cash from selling products |
| 8  | 2022-1-1 | 4 | 01 | 01 | 01 | 001 | 001 | (null) | 4      | Received cash from selling products |
| 9  | 2022-1-1 | 1 | 01 | 01 | 01 | 001 | 001 | 5      | (null) | Collected cash from services rendered |
| 10 | 2022-1-1 | 4 | 02 | 01 | 01 | 001 | 001 | (null) | 5      | Collected cash from services rendered |
| 11 | 2022-1-1 | 5 | 01 | 01 | 01 | 001 | 001 | 6      | (null) | Paid employee salaries |
| 12 | 2022-1-1 | 1 | 01 | 01 | 01 | 001 | 001 | (null) | 6      | Paid employee salaries |
Table: AccountCode

| AccountCodeLevel1VarChar | AccountCodeLevel2VarChar | AccountCodeLevel3VarChar | AccountCodeLevel4VarChar | AccountCodeLevel5VarChar | AccountCodeLevel6VarChar | AccountNameVarChar |
|--------------------------|--------------------------|--------------------------|--------------------------|--------------------------|--------------------------|--------------------|
| 1 | 01 | 01 | 01 | 001 | 001 | Cash |
| 2 | 01 | 01 | 01 | 001 | 001 | Loan from banks |
| 3 | 01 | 01 | 01 | 001 | 001 | Paid-up common shares |
| 4 | 01 | 01 | 01 | 001 | 001 | Service revenue |
| 4 | 02 | 01 | 01 | 001 | 001 | Sales revenue |
| 5 | 01 | 01 | 01 | 001 | 001 | Salary |
How can I write the SQL to render the summary result shown hereunder, where TrialBalanceAmount is DebitAmountDecimal minus CreditAmountDecimal?

| AccountCodeLevel1VarChar | AccountCodeLevel2VarChar | AccountCodeLevel3VarChar | AccountCodeLevel4VarChar | AccountCodeLevel5VarChar | AccountCodeLevel6VarChar | AccountNameVarChar | TrialBalanceAmount |
|--------------------------|--------------------------|--------------------------|--------------------------|--------------------------|--------------------------|--------------------|--------------------|
| 1 | 01 | 01 | 01 | 001 | 001 | Cash | 9 |
| 2 | 01 | 01 | 01 | 001 | 001 | Loan from banks | -5 |
| 3 | 01 | 01 | 01 | 001 | 001 | Paid-up common shares | -1 |
| 4 | 01 | 01 | 01 | 001 | 001 | Service revenue | -4 |
| 4 | 02 | 01 | 01 | 001 | 001 | Sales revenue | -5 |
| 5 | 01 | 01 | 01 | 001 | 001 | Salary | 6 |
I tried this SQL statement, but the result is wrong:
SELECT
    "ACC"."AccountCodeLevel1VarChar",
    "ACC"."AccountCodeLevel2VarChar",
    "ACC"."AccountCodeLevel3VarChar",
    "ACC"."AccountCodeLevel4VarChar",
    "ACC"."AccountCodeLevel5VarChar",
    "ACC"."AccountCodeLevel6VarChar",
    "ACC"."AccountNameVarChar",
    SUM( "JNL"."DebitAmountDecimal" - "JNL"."CreditAmountDecimal" ) "TrialBalanceAmount"
FROM
    "AccountCode" "ACC", "Journal" "JNL"
WHERE
    "ACC"."AccountCodeLevel1VarChar" = "JNL"."AccountCodeLevel1VarChar"
    AND "ACC"."AccountCodeLevel2VarChar" = "JNL"."AccountCodeLevel2VarChar"
    AND "ACC"."AccountCodeLevel3VarChar" = "JNL"."AccountCodeLevel3VarChar"
    AND "ACC"."AccountCodeLevel4VarChar" = "JNL"."AccountCodeLevel4VarChar"
    AND "ACC"."AccountCodeLevel5VarChar" = "JNL"."AccountCodeLevel5VarChar"
    AND "ACC"."AccountCodeLevel6VarChar" = "JNL"."AccountCodeLevel6VarChar"
GROUP BY
    "ACC"."AccountCodeLevel1VarChar",
    "ACC"."AccountCodeLevel2VarChar",
    "ACC"."AccountCodeLevel3VarChar",
    "ACC"."AccountCodeLevel4VarChar",
    "ACC"."AccountCodeLevel5VarChar",
    "ACC"."AccountCodeLevel6VarChar",
    "ACC"."AccountNameVarChar"
ORDER BY
    "ACC"."AccountCodeLevel1VarChar" ASC,
    "ACC"."AccountCodeLevel2VarChar" ASC,
    "ACC"."AccountCodeLevel3VarChar" ASC,
    "ACC"."AccountCodeLevel4VarChar" ASC,
    "ACC"."AccountCodeLevel5VarChar" ASC,
    "ACC"."AccountCodeLevel6VarChar" ASC
Thank you so much @Mark Rotteveel for the comment pointing out that I was summing NULLs. I changed the aggregate to
SUM( COALESCE ( "JNL"."DebitAmountDecimal", 0 ) ) - SUM( COALESCE ( "JNL"."CreditAmountDecimal", 0 ) ) "TrialBalanceAmount"
and now the result is complete.
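For reference, here is the complete corrected statement; the SUM expression is the only change from the query above. COALESCE maps the missing side of each journal line to 0, so the aggregate no longer skips rows where only the debit or only the credit is populated:
SELECT
    "ACC"."AccountCodeLevel1VarChar",
    "ACC"."AccountCodeLevel2VarChar",
    "ACC"."AccountCodeLevel3VarChar",
    "ACC"."AccountCodeLevel4VarChar",
    "ACC"."AccountCodeLevel5VarChar",
    "ACC"."AccountCodeLevel6VarChar",
    "ACC"."AccountNameVarChar",
    -- 0 instead of NULL keeps every journal row in the aggregate
    SUM( COALESCE ( "JNL"."DebitAmountDecimal", 0 ) ) - SUM( COALESCE ( "JNL"."CreditAmountDecimal", 0 ) ) "TrialBalanceAmount"
FROM
    "AccountCode" "ACC", "Journal" "JNL"
WHERE
    "ACC"."AccountCodeLevel1VarChar" = "JNL"."AccountCodeLevel1VarChar"
    AND "ACC"."AccountCodeLevel2VarChar" = "JNL"."AccountCodeLevel2VarChar"
    AND "ACC"."AccountCodeLevel3VarChar" = "JNL"."AccountCodeLevel3VarChar"
    AND "ACC"."AccountCodeLevel4VarChar" = "JNL"."AccountCodeLevel4VarChar"
    AND "ACC"."AccountCodeLevel5VarChar" = "JNL"."AccountCodeLevel5VarChar"
    AND "ACC"."AccountCodeLevel6VarChar" = "JNL"."AccountCodeLevel6VarChar"
GROUP BY
    "ACC"."AccountCodeLevel1VarChar",
    "ACC"."AccountCodeLevel2VarChar",
    "ACC"."AccountCodeLevel3VarChar",
    "ACC"."AccountCodeLevel4VarChar",
    "ACC"."AccountCodeLevel5VarChar",
    "ACC"."AccountCodeLevel6VarChar",
    "ACC"."AccountNameVarChar"
ORDER BY
    "ACC"."AccountCodeLevel1VarChar" ASC,
    "ACC"."AccountCodeLevel2VarChar" ASC,
    "ACC"."AccountCodeLevel3VarChar" ASC,
    "ACC"."AccountCodeLevel4VarChar" ASC,
    "ACC"."AccountCodeLevel5VarChar" ASC,
    "ACC"."AccountCodeLevel6VarChar" ASC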
A client (an e-commerce store) doesn't have a very well-built database. For instance, there are many users with multiple shopping orders (i.e., different IDs) for exactly the same products on the same day. It is obvious that these seemingly separate orders are in many cases just one unique order. At least that's what we have decided to assume, to simplify the issue. (I am trying to do some basic data analytics.)
My table might look like this:
| Email | OrderID | Order_date | TotalAmount |
| ----------------- | --------- | ---------------- | ---------------- |
|customerA@gmail.com| 1 |Jan 01 2021 1:00PM| 2000 |
|customerA@gmail.com| 2 |Jan 01 2021 1:03PM| 2000 |
|customerA@gmail.com| 3 |Jan 01 2021 1:05PM| 2000 |
|customerA@gmail.com| 4 |Jan 01 2021 1:10PM| 2000 |
|customerA@gmail.com| 5 |Jan 01 2021 1:14PM| 2000 |
|customerA@gmail.com| 6 |Jan 03 2021 3:55PM| 3000 |
|customerA@gmail.com| 7 |Jan 03 2021 4:00PM| 3000 |
|customerA@gmail.com| 8 |Jan 03 2021 4:05PM| 3000 |
|customerB@gmail.com| 9 |Jan 04 2021 2:10PM| 1000 |
|customerB@gmail.com| 10 |Jan 04 2021 2:20PM| 1000 |
|customerB@gmail.com| 11 |Jan 04 2021 2:30PM| 1000 |
|customerB@gmail.com| 12 |Jan 06 2021 5:00PM| 5000 |
|customerC@gmail.com| 13 |Jan 09 2021 3:00PM| 4000 |
|customerC@gmail.com| 14 |Jan 09 2021 3:06PM| 4000 |
And my desired result would look like this:
| Email | OrderID | Order_date | TotalAmount |
| ----------------- | --------- | ---------------- | ---------------- |
|customerA@gmail.com| 5 |Jan 01 2021 1:14PM| 2000 |
|customerA@gmail.com| 8 |Jan 03 2021 4:05PM| 3000 |
|customerB@gmail.com| 11 |Jan 04 2021 2:30PM| 1000 |
|customerB@gmail.com| 12 |Jan 06 2021 5:00PM| 5000 |
|customerC@gmail.com| 14 |Jan 09 2021 3:06PM| 4000 |
I would guess this might be a common problem, but is there a simple solution to it?
Maybe there is, but I certainly don't seem able to come up with one any time soon. I'd be glad to see even a complex solution, btw :-)
Thank you for any kind of help you can provide!
Do you mean this?
WITH
indata(Email,OrderID,Order_ts,TotalAmount) AS (
SELECT 'customerA@gmail.com', 1,TO_TIMESTAMP( 'Jan 01 2021 01:00PM','Mon DD YYYY HH12:MIAM'),2000
UNION ALL SELECT 'customerA@gmail.com', 2,TO_TIMESTAMP( 'Jan 01 2021 01:03PM','Mon DD YYYY HH12:MIAM'),2000
UNION ALL SELECT 'customerA@gmail.com', 3,TO_TIMESTAMP( 'Jan 01 2021 01:05PM','Mon DD YYYY HH12:MIAM'),2000
UNION ALL SELECT 'customerA@gmail.com', 4,TO_TIMESTAMP( 'Jan 01 2021 01:10PM','Mon DD YYYY HH12:MIAM'),2000
UNION ALL SELECT 'customerA@gmail.com', 5,TO_TIMESTAMP( 'Jan 01 2021 01:14PM','Mon DD YYYY HH12:MIAM'),2000
UNION ALL SELECT 'customerA@gmail.com', 6,TO_TIMESTAMP( 'Jan 03 2021 03:55PM','Mon DD YYYY HH12:MIAM'),3000
UNION ALL SELECT 'customerA@gmail.com', 7,TO_TIMESTAMP( 'Jan 03 2021 04:00PM','Mon DD YYYY HH12:MIAM'),3000
UNION ALL SELECT 'customerA@gmail.com', 8,TO_TIMESTAMP( 'Jan 03 2021 04:05PM','Mon DD YYYY HH12:MIAM'),3000
UNION ALL SELECT 'customerB@gmail.com', 9,TO_TIMESTAMP( 'Jan 04 2021 02:10PM','Mon DD YYYY HH12:MIAM'),1000
UNION ALL SELECT 'customerB@gmail.com',10,TO_TIMESTAMP( 'Jan 04 2021 02:20PM','Mon DD YYYY HH12:MIAM'),1000
UNION ALL SELECT 'customerB@gmail.com',11,TO_TIMESTAMP( 'Jan 04 2021 02:30PM','Mon DD YYYY HH12:MIAM'),1000
UNION ALL SELECT 'customerB@gmail.com',12,TO_TIMESTAMP( 'Jan 06 2021 05:00PM','Mon DD YYYY HH12:MIAM'),5000
UNION ALL SELECT 'customerC@gmail.com',13,TO_TIMESTAMP( 'Jan 09 2021 03:00PM','Mon DD YYYY HH12:MIAM'),4000
UNION ALL SELECT 'customerC@gmail.com',14,TO_TIMESTAMP( 'Jan 09 2021 03:06PM','Mon DD YYYY HH12:MIAM'),4000
)
,
-- We need a ROW_NUMBER() to identify the last row within each day (ordering descending so that row gets 1).
-- We can't filter on an OLAP function directly, so it goes into a subselect, with the WHERE condition in the final SELECT.
with_rank AS (
SELECT
*
, ROW_NUMBER() OVER(PARTITION BY email, CAST(order_ts AS DATE) ORDER BY order_ts DESC) AS rank -- partition by the full date; DAY() alone would lump together the same day-of-month from different months
FROM INDATA
)
SELECT
*
FROM with_rank
WHERE rank = 1;
-- out Email | OrderID | Order_ts | TotalAmount | rank
-- out ---------------------+---------+---------------------+-------------+------
-- out customerA@gmail.com | 5 | 2021-01-01 13:14:00 | 2000 | 1
-- out customerA@gmail.com | 8 | 2021-01-03 16:05:00 | 3000 | 1
-- out customerB@gmail.com | 11 | 2021-01-04 14:30:00 | 1000 | 1
-- out customerB@gmail.com | 12 | 2021-01-06 17:00:00 | 5000 | 1
-- out customerC@gmail.com | 14 | 2021-01-09 15:06:00 | 4000 | 1
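Applied to your real table, the same pattern collapses to the sketch below; the table name orders is my assumption, so replace it with whatever your table is actually called:
SELECT Email, OrderID, Order_date, TotalAmount
FROM (
    SELECT o.*,
           -- the last order of the day per customer gets rn = 1
           ROW_NUMBER() OVER (PARTITION BY Email, CAST(Order_date AS DATE)
                              ORDER BY Order_date DESC) AS rn
    FROM orders o  -- "orders" is an assumed table name
) ranked
WHERE rn = 1;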
This is my initial table (the dates are in DD/MM/YY format):
ID DAY TYPE_ID TYPE NUM START_DATE END_DATE
---- --------- ------- ---- ---- --------- ---------
4241 15/09/15 2 1 66 01/01/00 31/12/99
4241 16/09/15 2 1 66 01/01/00 31/12/99
4241 17/09/15 9 1 59 17/09/15 18/09/15
4241 18/09/15 9 1 59 17/09/15 18/09/15
4241 19/09/15 2 1 66 01/01/00 31/12/99
4241 20/09/15 2 1 66 01/01/00 31/12/99
4241 15/09/15 3 2 63 01/01/00 31/12/99
4241 16/09/15 8 2 159 16/09/15 17/09/15
4241 17/09/15 8 2 159 16/09/15 17/09/15
4241 18/09/15 3 2 63 01/01/00 31/12/99
4241 19/09/15 3 2 63 01/01/00 31/12/99
4241 20/09/15 3 2 63 01/01/00 31/12/99
2134 15/09/15 2 1 66 01/01/00 31/12/99
2134 16/09/15 2 1 66 01/01/00 31/12/99
2134 17/09/15 9 1 59 17/09/15 18/09/15
2134 18/09/15 9 1 59 17/09/15 18/09/15
2134 19/09/15 2 1 66 01/01/00 31/12/99
2134 20/09/15 2 1 66 01/01/00 31/12/99
2134 15/09/15 3 2 63 01/01/00 31/12/99
2134 16/09/15 8 2 159 16/09/15 17/09/15
2134 17/09/15 8 2 159 16/09/15 17/09/15
2134 18/09/15 3 2 63 01/01/00 31/12/99
2134 19/09/15 3 2 63 01/01/00 31/12/99
2134 20/09/15 3 2 63 01/01/00 31/12/99
And I have to create groups with an initial DAY and an end DAY for the same ID and TYPE.
I don't want to group by day; I need to start a new group every time the TYPE_ID changes, based on the initial order (ID, TYPE, DAY ASC).
This is the result that I want to achieve:
ID DAY_INI DAY_END TYPE_ID TYPE NUM START_DATE END_DATE
---- --------- --------- ------- ---- ---- --------- ---------
4241 15/09/15 16/09/15 2 1 66 01/01/00 31/12/99
4241 17/09/15 18/09/15 9 1 59 17/09/15 18/09/15
4241 19/09/15 20/09/15 2 1 66 01/01/00 31/12/99
4241 15/09/15 15/09/15 3 2 63 01/01/00 31/12/99
4241 16/09/15 17/09/15 8 2 159 16/09/15 17/09/15
4241 18/09/15 20/09/15 3 2 63 01/01/00 31/12/99
2134 15/09/15 16/09/15 2 1 66 01/01/00 31/12/99
2134 17/09/15 18/09/15 9 1 59 17/09/15 18/09/15
2134 19/09/15 20/09/15 2 1 66 01/01/00 31/12/99
2134 15/09/15 15/09/15 3 2 63 01/01/00 31/12/99
2134 16/09/15 17/09/15 8 2 159 16/09/15 17/09/15
2134 18/09/15 20/09/15 3 2 63 01/01/00 31/12/99
Could you please give me any clue about how to do it? Thanks!
SQL Fiddle
Oracle 11g R2 Schema Setup:
CREATE TABLE TEST ( ID, DAY, TYPE_ID, TYPE, NUM, START_DATE, END_DATE ) AS
SELECT 4241, DATE '2015-09-15', 2, 1, 66, DATE '2000-01-01', DATE '1999-12-31' FROM DUAL
UNION ALL SELECT 4241, DATE '2015-09-16', 2, 1, 66, DATE '2000-01-01', DATE '1999-12-31' FROM DUAL
UNION ALL SELECT 4241, DATE '2015-09-17', 9, 1, 59, DATE '2015-09-17', DATE '2015-09-18' FROM DUAL
UNION ALL SELECT 4241, DATE '2015-09-18', 9, 1, 59, DATE '2015-09-17', DATE '2015-09-18' FROM DUAL
UNION ALL SELECT 4241, DATE '2015-09-19', 2, 1, 66, DATE '2000-01-01', DATE '1999-12-31' FROM DUAL
UNION ALL SELECT 4241, DATE '2015-09-20', 2, 1, 66, DATE '2000-01-01', DATE '1999-12-31' FROM DUAL
UNION ALL SELECT 4241, DATE '2015-09-15', 3, 2, 63, DATE '2000-01-01', DATE '1999-12-31' FROM DUAL
UNION ALL SELECT 4241, DATE '2015-09-16', 8, 2, 159, DATE '2015-09-16', DATE '2015-09-17' FROM DUAL
UNION ALL SELECT 4241, DATE '2015-09-17', 8, 2, 159, DATE '2015-09-16', DATE '2015-09-17' FROM DUAL
UNION ALL SELECT 4241, DATE '2015-09-18', 3, 2, 63, DATE '2000-01-01', DATE '1999-12-31' FROM DUAL
UNION ALL SELECT 4241, DATE '2015-09-19', 3, 2, 63, DATE '2000-01-01', DATE '1999-12-31' FROM DUAL
UNION ALL SELECT 4241, DATE '2015-09-20', 3, 2, 63, DATE '2000-01-01', DATE '1999-12-31' FROM DUAL
UNION ALL SELECT 2134, DATE '2015-09-15', 2, 1, 66, DATE '2000-01-01', DATE '1999-12-31' FROM DUAL
UNION ALL SELECT 2134, DATE '2015-09-16', 2, 1, 66, DATE '2000-01-01', DATE '1999-12-31' FROM DUAL
UNION ALL SELECT 2134, DATE '2015-09-17', 9, 1, 59, DATE '2015-09-17', DATE '2015-09-18' FROM DUAL
UNION ALL SELECT 2134, DATE '2015-09-18', 9, 1, 59, DATE '2015-09-17', DATE '2015-09-18' FROM DUAL
UNION ALL SELECT 2134, DATE '2015-09-19', 2, 1, 66, DATE '2000-01-01', DATE '1999-12-31' FROM DUAL
UNION ALL SELECT 2134, DATE '2015-09-20', 2, 1, 66, DATE '2000-01-01', DATE '1999-12-31' FROM DUAL
UNION ALL SELECT 2134, DATE '2015-09-15', 3, 2, 63, DATE '2000-01-01', DATE '1999-12-31' FROM DUAL
UNION ALL SELECT 2134, DATE '2015-09-16', 8, 2, 159, DATE '2015-09-16', DATE '2015-09-17' FROM DUAL
UNION ALL SELECT 2134, DATE '2015-09-17', 8, 2, 159, DATE '2015-09-16', DATE '2015-09-17' FROM DUAL
UNION ALL SELECT 2134, DATE '2015-09-18', 3, 2, 63, DATE '2000-01-01', DATE '1999-12-31' FROM DUAL
UNION ALL SELECT 2134, DATE '2015-09-19', 3, 2, 63, DATE '2000-01-01', DATE '1999-12-31' FROM DUAL
UNION ALL SELECT 2134, DATE '2015-09-20', 3, 2, 63, DATE '2000-01-01', DATE '1999-12-31' FROM DUAL
Query 1:
WITH group_changes AS (
  -- flag each row whose TYPE_ID differs from the previous row of the same ID/TYPE (1 = a new group starts)
  SELECT t.*,
         CASE TYPE_ID WHEN LAG( TYPE_ID ) OVER ( PARTITION BY ID, TYPE ORDER BY DAY ) THEN 0 ELSE 1 END AS HAS_CHANGED_GROUP
  FROM TEST t
),
groups AS (
  -- a running sum of those flags numbers each island of consecutive equal TYPE_ID values
  SELECT ID, DAY, TYPE_ID, TYPE, NUM, START_DATE, END_DATE,
         SUM( HAS_CHANGED_GROUP ) OVER ( PARTITION BY ID, TYPE ORDER BY DAY ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW ) AS GRP
  FROM group_changes
)
SELECT ID,
       MIN( DAY ) AS DAY_INI,
       MAX( DAY ) AS DAY_END,
       MIN( TYPE_ID ) AS TYPE_ID,
       TYPE,
       MIN( NUM ) AS NUM,
       MIN( START_DATE ) AS START_DATE,
       MIN( END_DATE ) AS END_DATE
FROM groups
GROUP BY ID, TYPE, GRP
Results:
| ID | DAY_INI | DAY_END | TYPE_ID | TYPE | NUM | START_DATE | END_DATE |
|------|-----------------------------|-----------------------------|---------|------|-----|-----------------------------|-----------------------------|
| 4241 | September, 17 2015 00:00:00 | September, 18 2015 00:00:00 | 9 | 1 | 59 | September, 17 2015 00:00:00 | September, 18 2015 00:00:00 |
| 2134 | September, 15 2015 00:00:00 | September, 15 2015 00:00:00 | 3 | 2 | 63 | January, 01 2000 00:00:00 | December, 31 1999 00:00:00 |
| 2134 | September, 18 2015 00:00:00 | September, 20 2015 00:00:00 | 3 | 2 | 63 | January, 01 2000 00:00:00 | December, 31 1999 00:00:00 |
| 4241 | September, 15 2015 00:00:00 | September, 16 2015 00:00:00 | 2 | 1 | 66 | January, 01 2000 00:00:00 | December, 31 1999 00:00:00 |
| 4241 | September, 19 2015 00:00:00 | September, 20 2015 00:00:00 | 2 | 1 | 66 | January, 01 2000 00:00:00 | December, 31 1999 00:00:00 |
| 4241 | September, 15 2015 00:00:00 | September, 15 2015 00:00:00 | 3 | 2 | 63 | January, 01 2000 00:00:00 | December, 31 1999 00:00:00 |
| 4241 | September, 16 2015 00:00:00 | September, 17 2015 00:00:00 | 8 | 2 | 159 | September, 16 2015 00:00:00 | September, 17 2015 00:00:00 |
| 2134 | September, 17 2015 00:00:00 | September, 18 2015 00:00:00 | 9 | 1 | 59 | September, 17 2015 00:00:00 | September, 18 2015 00:00:00 |
| 2134 | September, 15 2015 00:00:00 | September, 16 2015 00:00:00 | 2 | 1 | 66 | January, 01 2000 00:00:00 | December, 31 1999 00:00:00 |
| 2134 | September, 19 2015 00:00:00 | September, 20 2015 00:00:00 | 2 | 1 | 66 | January, 01 2000 00:00:00 | December, 31 1999 00:00:00 |
| 2134 | September, 16 2015 00:00:00 | September, 17 2015 00:00:00 | 8 | 2 | 159 | September, 16 2015 00:00:00 | September, 17 2015 00:00:00 |
| 4241 | September, 18 2015 00:00:00 | September, 20 2015 00:00:00 | 3 | 2 | 63 | January, 01 2000 00:00:00 | December, 31 1999 00:00:00 |
To restore the original order, add an enumeration to the original data set (using ROW_NUMBER or ROWNUM), take the MIN(enumeration) for each group, and then sort the groups by that enumeration.
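A minimal sketch of that idea layered onto Query 1 above; the RN column and the final ORDER BY are the only additions:
WITH group_changes AS (
  SELECT t.*,
         -- enumerate the rows in their initial order
         ROW_NUMBER() OVER ( ORDER BY ID, TYPE, DAY ) AS RN,
         CASE TYPE_ID WHEN LAG( TYPE_ID ) OVER ( PARTITION BY ID, TYPE ORDER BY DAY ) THEN 0 ELSE 1 END AS HAS_CHANGED_GROUP
  FROM TEST t
),
groups AS (
  SELECT g.*,
         SUM( HAS_CHANGED_GROUP ) OVER ( PARTITION BY ID, TYPE ORDER BY DAY ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW ) AS GRP
  FROM group_changes g
)
SELECT ID,
       MIN( DAY ) AS DAY_INI,
       MAX( DAY ) AS DAY_END,
       MIN( TYPE_ID ) AS TYPE_ID,
       TYPE,
       MIN( NUM ) AS NUM,
       MIN( START_DATE ) AS START_DATE,
       MIN( END_DATE ) AS END_DATE
FROM groups
GROUP BY ID, TYPE, GRP
ORDER BY MIN( RN )  -- each group sorts by its first row's position in the original order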
I would like to replicate key names x number of times and have a separate column to indicate the replication number. For example, let's say I have three key names as follows:
101
102
103
So, I would like each of the above numbers (names) replicated 3 times, with a separate four-character, zero-padded identifier number. It would therefore look like this:
101 0001
101 0002
101 0003
102 0001
102 0002
102 0003
103 0001
103 0002
103 0003
I guess this could be generated with a relatively straightforward awk script? *Edit: I would like not to have to specify the names to replicate in the script; it should simply replicate all names in a given text file, as there are a lot of them (~400), all with variable name types.
Thank you in advance!
In bash:
echo {101,102,103}" "{01,02,03}
101 01 101 02 101 03 102 01 102 02 102 03 103 01 103 02 103 03
Following Fedorqui's advice for newlines
printf "%s\n" {101,102,103}" "{01,02,03}
101 01
101 02
101 03
102 01
102 02
102 03
103 01
103 02
103 03
Using the awk -v flag to pass a variable number_repeats (the number of repeats for each line), and sprintf to format the identifier as four digits with zero padding:
awk -v number_repeats=3 '{
    for(i=0; i<number_repeats; i++) {
        print $0, sprintf("%04d", i+1)
    }
}'
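For example, with the question's key names in a file (the name names.txt is just a placeholder), you would run it as:
awk -v number_repeats=3 '{
    for(i=0; i<number_repeats; i++) {
        print $0, sprintf("%04d", i+1)
    }
}' names.txt
101 0001
101 0002
101 0003
102 0001
102 0002
102 0003
103 0001
103 0002
103 0003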
If you don't mind using GNU Parallel:
parallel -k 'printf "%02d %02d\n"' ::: {6..12} ::: 1 2 3
06 01
06 02
06 03
07 01
07 02
07 03
08 01
08 02
08 03
09 01
09 02
09 03
10 01
10 02
10 03
11 01
11 02
11 03
12 01
12 02
12 03
Or, if your keys are in a file called keys, like this:
32
45
78
you can read the file:
parallel -k 'printf "%02d %02d\n" {1} {2}' :::: keys ::: 1 2 3
32 01
32 02
32 03
45 01
45 02
45 03
78 01
78 02
78 03
How can I convert rows to columns in Oracle?
The data is as follows:
AREA_CODE PREFIX
21 48
21 66
21 80
21 86
21 58
21 59
21 51
21 81
21 35
21 56
21 78
21 34
21 49
21 79
21 36
21 99
21 82
21 38
21 32
21 65
22 26
22 20
22 27
22 34
22 33
22 21
22 38
22 36
232 22
232 26
232 27
233 88
233 86
233 85
233 87
233 89
233 82
235 56
235 53
235 87
235 86
The required output will be:
AREA_CODE P1 P2 P3 P4 P5 P6 P7 P8 P9 P10 P11 P12 P13
21        48 66 80 86 58 59 51 81 35 56  78  34  49
22        26 20 27 34 33 21 38 36
232       22 26 27
233       88 86 85 87 89 82
235       56 53 87 86
Assuming that the number of prefixes per area code is at most 10 and the table name is table_name, this query can be used in 10g:
with tab as (select AREA_CODE,
PREFIX,
row_NUMBER() over(partition by AREA_CODE order by null) rn
from table_name)
select AREA_CODE,
min(decode(rn, 1, PREFIX, null)) as PREFIX1,
min(decode(rn, 2, PREFIX, null)) as PREFIX2,
min(decode(rn, 3, PREFIX, null)) as PREFIX3,
min(decode(rn, 4, PREFIX, null)) as PREFIX4,
min(decode(rn, 5, PREFIX, null)) as PREFIX5,
min(decode(rn, 6, PREFIX, null)) as PREFIX6,
min(decode(rn, 7, PREFIX, null)) as PREFIX7,
min(decode(rn, 8, PREFIX, null)) as PREFIX8,
min(decode(rn, 9, PREFIX, null)) as PREFIX9,
min(decode(rn, 10, PREFIX, null)) as PREFIX10
from tab
group by AREA_CODE
And in 11g:
with tab as (select AREA_CODE,
PREFIX,
row_NUMBER() over(partition by AREA_CODE order by null) rn
from table_name)
select *
from tab
pivot (max(PREFIX) as PREFIX for RN in (1,2,3,4,5,6,7,8,9,10))
Output:
| AREA_CODE | 1_PREFIX | 2_PREFIX | 3_PREFIX | 4_PREFIX | 5_PREFIX | 6_PREFIX | 7_PREFIX | 8_PREFIX | 9_PREFIX | 10_PREFIX |
|-----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|-----------|
| 21 | 58 | 86 | 80 | 66 | 56 | 59 | 51 | 81 | 35 | 48 |
| 22 | 33 | 34 | 27 | 20 | 26 | 21 | 36 | 38 | (null) | (null) |
| 232 | 27 | 26 | 22 | (null) | (null) | (null) | (null) | (null) | (null) | (null) |
| 233 | 85 | 86 | 88 | 87 | 82 | 89 | (null) | (null) | (null) | (null) |
| 235 | 56 | 53 | 87 | 86 | (null) | (null) | (null) | (null) | (null) | (null) |
For more values, you can extend the list of min(decode(rn, N, PREFIX, null)) as PREFIXN expressions (and, in the 11g version, the RN list in the PIVOT clause).
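If you want the column headers to match the P1..P13 of the required output, the pivot values in the 11g version can also be aliased; a sketch along those lines (extend the IN list as far as needed):
with tab as (select AREA_CODE,
                    PREFIX,
                    row_number() over(partition by AREA_CODE order by null) rn
               from table_name)
select *
  from tab
 pivot (max(PREFIX) for RN in (1 as P1, 2 as P2, 3 as P3, 4 as P4, 5 as P5,
                               6 as P6, 7 as P7, 8 as P8, 9 as P9, 10 as P10,
                               11 as P11, 12 as P12, 13 as P13))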
My test data was:
select 21,48 from dual union all
select 21,66 from dual union all
select 21,80 from dual union all
select 21,86 from dual union all
select 21,58 from dual union all
select 21,59 from dual union all
select 21,51 from dual union all
select 21,81 from dual union all
select 21,35 from dual union all
select 21,56 from dual union all
select 22,26 from dual union all
select 22,20 from dual union all
select 22,27 from dual union all
select 22,34 from dual union all
select 22,33 from dual union all
select 22,21 from dual union all
select 22,38 from dual union all
select 22,36 from dual union all
select 232,22 from dual union all
select 232,26 from dual union all
select 232,27 from dual union all
select 233,88 from dual union all
select 233,86 from dual union all
select 233,85 from dual union all
select 233,87 from dual union all
select 233,89 from dual union all
select 233,82 from dual union all
select 235,56 from dual union all
select 235,53 from dual union all
select 235,87 from dual union all
select 235,86 from dual