Row to column in Oracle 10g - SQL

How can I convert rows to columns in Oracle?
The data is as follows:
AREA_CODE PREFIX
21 48
21 66
21 80
21 86
21 58
21 59
21 51
21 81
21 35
21 56
21 78
21 34
21 49
21 79
21 36
21 99
21 82
21 38
21 32
21 65
22 26
22 20
22 27
22 34
22 33
22 21
22 38
22 36
232 22
232 26
232 27
233 88
233 86
233 85
233 87
233 89
233 82
235 56
235 53
235 87
235 86
The required output would be:
AREA_CODE P1 P2 P3 P4 P5 P6 P7 P8 P9 P10 P11 P12 P13
21 48 66 80 86 58 59 51 81 35 56 78 34 49
22 26 20 27 34 33 21 38 36
232 22 26 27 88 86 85 87 89 82 56 53 87 86

Assuming that the number of prefixes per area code is at most 10 and that the table name is table_name, this query can be used in 10g:
with tab as (
    select AREA_CODE,
           PREFIX,
           row_number() over (partition by AREA_CODE order by null) rn
      from table_name
)
select AREA_CODE,
       min(decode(rn,  1, PREFIX, null)) as PREFIX1,
       min(decode(rn,  2, PREFIX, null)) as PREFIX2,
       min(decode(rn,  3, PREFIX, null)) as PREFIX3,
       min(decode(rn,  4, PREFIX, null)) as PREFIX4,
       min(decode(rn,  5, PREFIX, null)) as PREFIX5,
       min(decode(rn,  6, PREFIX, null)) as PREFIX6,
       min(decode(rn,  7, PREFIX, null)) as PREFIX7,
       min(decode(rn,  8, PREFIX, null)) as PREFIX8,
       min(decode(rn,  9, PREFIX, null)) as PREFIX9,
       min(decode(rn, 10, PREFIX, null)) as PREFIX10
  from tab
 group by AREA_CODE
And in 11g:
with tab as (
    select AREA_CODE,
           PREFIX,
           row_number() over (partition by AREA_CODE order by null) rn
      from table_name
)
select *
  from tab
 pivot (max(PREFIX) as PREFIX for rn in (1,2,3,4,5,6,7,8,9,10))
Output:
| AREA_CODE | 1_PREFIX | 2_PREFIX | 3_PREFIX | 4_PREFIX | 5_PREFIX | 6_PREFIX | 7_PREFIX | 8_PREFIX | 9_PREFIX | 10_PREFIX |
|-----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|-----------|
| 21 | 58 | 86 | 80 | 66 | 56 | 59 | 51 | 81 | 35 | 48 |
| 22 | 33 | 34 | 27 | 20 | 26 | 21 | 36 | 38 | (null) | (null) |
| 232 | 27 | 26 | 22 | (null) | (null) | (null) | (null) | (null) | (null) | (null) |
| 233 | 85 | 86 | 88 | 87 | 82 | 89 | (null) | (null) | (null) | (null) |
| 235 | 56 | 53 | 87 | 86 | (null) | (null) | (null) | (null) | (null) | (null) |
For more values, you can increase the list of min(decode(rn, n, PREFIX, null)) expressions.
My test data was:
select 21,48 from dual union all
select 21,66 from dual union all
select 21,80 from dual union all
select 21,86 from dual union all
select 21,58 from dual union all
select 21,59 from dual union all
select 21,51 from dual union all
select 21,81 from dual union all
select 21,35 from dual union all
select 21,56 from dual union all
select 22,26 from dual union all
select 22,20 from dual union all
select 22,27 from dual union all
select 22,34 from dual union all
select 22,33 from dual union all
select 22,21 from dual union all
select 22,38 from dual union all
select 22,36 from dual union all
select 232,22 from dual union all
select 232,26 from dual union all
select 232,27 from dual union all
select 233,88 from dual union all
select 233,86 from dual union all
select 233,85 from dual union all
select 233,87 from dual union all
select 233,89 from dual union all
select 233,82 from dual union all
select 235,56 from dual union all
select 235,53 from dual union all
select 235,87 from dual union all
select 235,86 from dual
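For comparison, the same row-number-then-pivot idea can be sketched in pandas (a hypothetical cross-check outside Oracle, on a small subset of the data above):

```python
import pandas as pd

# Small subset of the area-code/prefix data from the question.
df = pd.DataFrame({
    "AREA_CODE": [21, 21, 21, 22, 22, 232],
    "PREFIX":    [48, 66, 80, 26, 20, 22],
})

# Equivalent of ROW_NUMBER() OVER (PARTITION BY AREA_CODE): number
# each prefix within its area code, starting at 1.
df["rn"] = df.groupby("AREA_CODE").cumcount() + 1

# Equivalent of the PIVOT/decode step: one column per row number,
# with NaN where an area code has fewer prefixes.
wide = df.pivot(index="AREA_CODE", columns="rn", values="PREFIX")
wide.columns = [f"PREFIX{n}" for n in wide.columns]
print(wide)
```

Area codes with fewer prefixes than the widest one simply get nulls in the trailing columns, just as in the SQL output.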

Related

Pandas: keep first row of duplicated indices of second level of multi index

I found lots of drop_duplicates examples for the case where both levels of the MultiIndex are the same, but I would like to keep the first row of a MultiIndex when the second level has duplicates. So here:
| (date, ID) | col_0 | col_1 | col_2 | col_3 | col_4 |
|:-------------------------------|--------:|--------:|--------:|--------:|--------:|
| ('2022-01-01', 'identifier_0') | 26 | 46 | 44 | 21 | 10 |
| ('2022-01-01', 'identifier_1') | 25 | 45 | 83 | 23 | 45 |
| ('2022-01-01', 'identifier_2') | 42 | 79 | 55 | 5 | 78 |
| ('2022-01-01', 'identifier_3') | 32 | 4 | 57 | 19 | 61 |
| ('2022-01-01', 'identifier_4') | 30 | 25 | 5 | 93 | 72 |
| ('2022-01-02', 'identifier_0') | 42 | 14 | 56 | 43 | 42 |
| ('2022-01-02', 'identifier_1') | 90 | 27 | 46 | 58 | 5 |
| ('2022-01-02', 'identifier_2') | 33 | 39 | 53 | 94 | 86 |
| ('2022-01-02', 'identifier_3') | 32 | 65 | 98 | 81 | 64 |
| ('2022-01-02', 'identifier_4') | 48 | 31 | 25 | 58 | 15 |
| ('2022-01-03', 'identifier_0') | 5 | 80 | 33 | 96 | 80 |
| ('2022-01-03', 'identifier_1') | 15 | 86 | 45 | 39 | 62 |
| ('2022-01-03', 'identifier_2') | 98 | 3 | 42 | 50 | 83 |
I'd like to keep the first row for each unique ID.
If your index is a MultiIndex:
>>> df.loc[~df.index.get_level_values('ID').duplicated()]
col_0 col_1 col_2 col_3 col_4
date ID
2022-01-01 identifier_0 26 46 44 21 10
identifier_1 25 45 83 23 45
identifier_2 42 79 55 5 78
identifier_3 32 4 57 19 61
identifier_4 30 25 5 93 72
# Or
>>> df.groupby(level='ID').first()
col_0 col_1 col_2 col_3 col_4
ID
identifier_0 26 46 44 21 10
identifier_1 25 45 83 23 45
identifier_2 42 79 55 5 78
identifier_3 32 4 57 19 61
identifier_4 30 25 5 93 72
If your index is an Index:
>>> df.loc[~df.index.str[1].duplicated()]
col_0 col_1 col_2 col_3 col_4
(2022-01-01, identifier_0) 26 46 44 21 10
(2022-01-01, identifier_1) 25 45 83 23 45
(2022-01-01, identifier_2) 42 79 55 5 78
(2022-01-01, identifier_3) 32 4 57 19 61
(2022-01-01, identifier_4) 30 25 5 93 72
>>> df.groupby(df.index.str[1]).first()
col_0 col_1 col_2 col_3 col_4
identifier_0 26 46 44 21 10
identifier_1 25 45 83 23 45
identifier_2 42 79 55 5 78
identifier_3 32 4 57 19 61
identifier_4 30 25 5 93 72
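A minimal runnable sketch of the first approach, on a made-up three-row frame where identifier_0 appears on two dates:

```python
import pandas as pd

# Toy frame with a (date, ID) MultiIndex; identifier_0 appears twice.
idx = pd.MultiIndex.from_tuples(
    [("2022-01-01", "identifier_0"),
     ("2022-01-01", "identifier_1"),
     ("2022-01-02", "identifier_0")],
    names=["date", "ID"],
)
df = pd.DataFrame({"col_0": [26, 25, 42]}, index=idx)

# Keep only the first row seen for each ID, regardless of date.
first = df.loc[~df.index.get_level_values("ID").duplicated()]
print(first)
```

duplicated() marks every occurrence after the first as True, so negating it keeps exactly one row per ID, preserving the original order.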

How to create a SQL query for the below scenario

I am using Snowflake SQL, but I imagine this can be solved in any SQL dialect. I have data like this:
RA_MEMBER_ID YEAR QUARTER MONTH Monthly_TOTAL_PURCHASE CATEGORY
1000 2020 1 1 105 CAT10
1000 2020 1 1 57 CAT13
1000 2020 1 2 107 CAT10
1000 2020 1 2 59 CAT13
1000 2020 1 3 109 CAT11
1000 2020 1 3 61 CAT14
1000 2020 2 4 111 CAT11
1000 2020 2 4 63 CAT14
1000 2020 2 5 113 CAT12
1000 2020 2 5 65 CAT15
1000 2020 2 6 115 CAT12
1000 2020 2 6 67 CAT15
And I need data like this:
RA_MEMBER_ID YEAR QUARTER MONTH Monthly_TOTAL_PURCHASE CATEGORY Monthly_rank Quarterly_Total_purchase Quarter_category Quarter_rank Yearly_Total_purchase Yearly_category Yearly_rank
1000 2020 1 1 105 CAT10 1 105 CAT10 1 105 CAT10 1
1000 2020 1 1 57 CAT13 2 57 CAT13 2 57 CAT13 2
1000 2020 1 2 107 CAT10 1 212 CAT10 1 212 CAT10 1
1000 2020 1 2 59 CAT13 2 116 CAT13 2 116 CAT13 2
1000 2020 1 3 109 CAT11 1 212 CAT10 1 212 CAT10 1
1000 2020 1 3 61 CAT14 2 116 CAT13 2 116 CAT13 2
1000 2020 2 4 111 CAT11 1 111 CAT11 1 212 CAT10 1
1000 2020 2 4 63 CAT14 2 63 CAT14 2 124 CAT14 2
1000 2020 2 5 113 CAT12 1 113 CAT12 1 212 CAT10 1
1000 2020 2 5 65 CAT15 2 65 CAT15 2 124 CAT14 2
1000 2020 2 6 115 CAT12 1 228 CAT12 1 228 CAT12 1
1000 2020 2 6 67 CAT15 2 132 CAT15 2 132 CAT15 2
So basically, I have the top two categories by purchase amount for the first 6 months. I need the same for quarterly based on which month of the quarter it is. So let's say it is February, then the top 2 categories and amounts should be calculated based on both January and February. For March we have to get the quarter data by taking all three months. From April it will be the same as monthly rank, for May again calculate based on April and May. Similarly for Yearly also.
I have tried a lot of things but nothing seems to give me what I want.
The solution should be generic enough because there can be many other months and years.
I really need help in this.
Not sure if the query below is what you are after. I assume that everything is category-based:
create or replace table test (
ra_member_id int,
year int,
quarter int,
month int,
monthly_purchase int,
category varchar
);
insert into test values
(1000, 2020, 1,1, 105, 'cat10'),
(1000, 2020, 1,1, 57, 'cat13'),
(1000, 2020, 1,2, 107, 'cat10'),
(1000, 2020, 1,2, 59, 'cat13'),
(1000, 2020, 1,3, 109, 'cat11'),
(1000, 2020, 1,3, 61, 'cat14'),
(1000, 2020, 2,4, 111, 'cat11'),
(1000, 2020, 2,4, 63, 'cat14'),
(1000, 2020, 2,5, 113, 'cat12'),
(1000, 2020, 2,5, 65, 'cat15'),
(1000, 2020, 2,6, 115, 'cat12'),
(1000, 2020, 2,6, 67, 'cat15');
WITH BASE as (
select
RA_MEMBER_ID,
YEAR,
QUARTER,
MONTH,
CATEGORY,
MONTHLY_PURCHASE,
LAG(MONTHLY_PURCHASE) OVER (PARTITION BY QUARTER, CATEGORY ORDER BY MONTH) AS QUARTERLY_PURCHASE_LAG,
IFNULL(QUARTERLY_PURCHASE_LAG, 0) + MONTHLY_PURCHASE AS QUARTERLY_PURCHASE,
LAG(MONTHLY_PURCHASE) OVER (PARTITION BY YEAR, CATEGORY ORDER BY MONTH) AS YEARLY_PURCHASE_LAG,
IFNULL(YEARLY_PURCHASE_LAG, 0) + MONTHLY_PURCHASE AS YEARLY_PURCHASE
FROM
TEST
),
BASE_RANK AS (
SELECT
RA_MEMBER_ID,
YEAR,
QUARTER,
MONTH,
CATEGORY,
MONTHLY_PURCHASE,
RANK() OVER (PARTITION BY MONTH ORDER BY MONTHLY_PURCHASE DESC) as MONTHLY_RANK,
QUARTERLY_PURCHASE,
RANK() OVER (PARTITION BY QUARTER ORDER BY QUARTERLY_PURCHASE DESC) as QUARTERLY_RANK,
YEARLY_PURCHASE,
RANK() OVER (PARTITION BY YEAR ORDER BY YEARLY_PURCHASE DESC) as YEARLY_RANK
FROM BASE
),
MAIN AS (
SELECT
RA_MEMBER_ID,
YEAR,
QUARTER,
MONTH,
CATEGORY,
MONTHLY_PURCHASE,
MONTHLY_RANK,
QUARTERLY_PURCHASE,
QUARTERLY_RANK,
YEARLY_PURCHASE,
YEARLY_RANK
FROM BASE_RANK
)
SELECT * FROM MAIN
ORDER BY YEAR, QUARTER, MONTH
;
Result:
+--------------+------+---------+-------+----------+------------------+--------------+--------------------+----------------+-----------------+-------------+
| RA_MEMBER_ID | YEAR | QUARTER | MONTH | CATEGORY | MONTHLY_PURCHASE | MONTHLY_RANK | QUARTERLY_PURCHASE | QUARTERLY_RANK | YEARLY_PURCHASE | YEARLY_RANK |
|--------------+------+---------+-------+----------+------------------+--------------+--------------------+----------------+-----------------+-------------|
| 1000 | 2020 | 1 | 1 | cat10 | 105 | 1 | 105 | 4 | 105 | 9 |
| 1000 | 2020 | 1 | 1 | cat13 | 57 | 2 | 57 | 6 | 57 | 12 |
| 1000 | 2020 | 1 | 2 | cat10 | 107 | 1 | 212 | 1 | 212 | 3 |
| 1000 | 2020 | 1 | 2 | cat13 | 59 | 2 | 116 | 2 | 116 | 6 |
| 1000 | 2020 | 1 | 3 | cat11 | 109 | 1 | 109 | 3 | 109 | 8 |
| 1000 | 2020 | 1 | 3 | cat14 | 61 | 2 | 61 | 5 | 61 | 11 |
| 1000 | 2020 | 2 | 4 | cat11 | 111 | 1 | 111 | 4 | 220 | 2 |
| 1000 | 2020 | 2 | 4 | cat14 | 63 | 2 | 63 | 6 | 124 | 5 |
| 1000 | 2020 | 2 | 5 | cat12 | 113 | 1 | 113 | 3 | 113 | 7 |
| 1000 | 2020 | 2 | 5 | cat15 | 65 | 2 | 65 | 5 | 65 | 10 |
| 1000 | 2020 | 2 | 6 | cat12 | 115 | 1 | 228 | 1 | 228 | 1 |
| 1000 | 2020 | 2 | 6 | cat15 | 67 | 2 | 132 | 2 | 132 | 4 |
+--------------+------+---------+-------+----------+------------------+--------------+--------------------+----------------+-----------------+-------------+
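The running-total-plus-rank idea behind the query can be sketched in pandas as a hypothetical cross-check (not part of the Snowflake answer); here the quarterly figure is a true cumulative sum per category, and the rank is computed within each month:

```python
import pandas as pd

# First quarter of the sample data from the question.
df = pd.DataFrame({
    "month":    [1, 1, 2, 2, 3, 3],
    "quarter":  [1, 1, 1, 1, 1, 1],
    "category": ["cat10", "cat13", "cat10", "cat13", "cat11", "cat14"],
    "monthly":  [105, 57, 107, 59, 109, 61],
})

# Running total per category within the quarter (the month-to-date
# quarterly purchase the question asks about).
df["quarterly"] = df.groupby(["quarter", "category"])["monthly"].cumsum()

# Rank the categories inside each month by monthly amount.
df["monthly_rank"] = (
    df.groupby("month")["monthly"]
      .rank(ascending=False, method="dense")
      .astype(int)
)
print(df)
```

For February, cat10's quarterly figure is 105 + 107 = 212, matching the expected output in the question.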

SQL query, create groups by dates

This is my initial table (the dates are in DD/MM/YY format):
ID DAY TYPE_ID TYPE NUM START_DATE END_DATE
---- --------- ------- ---- ---- --------- ---------
4241 15/09/15 2 1 66 01/01/00 31/12/99
4241 16/09/15 2 1 66 01/01/00 31/12/99
4241 17/09/15 9 1 59 17/09/15 18/09/15
4241 18/09/15 9 1 59 17/09/15 18/09/15
4241 19/09/15 2 1 66 01/01/00 31/12/99
4241 20/09/15 2 1 66 01/01/00 31/12/99
4241 15/09/15 3 2 63 01/01/00 31/12/99
4241 16/09/15 8 2 159 16/09/15 17/09/15
4241 17/09/15 8 2 159 16/09/15 17/09/15
4241 18/09/15 3 2 63 01/01/00 31/12/99
4241 19/09/15 3 2 63 01/01/00 31/12/99
4241 20/09/15 3 2 63 01/01/00 31/12/99
2134 15/09/15 2 1 66 01/01/00 31/12/99
2134 16/09/15 2 1 66 01/01/00 31/12/99
2134 17/09/15 9 1 59 17/09/15 18/09/15
2134 18/09/15 9 1 59 17/09/15 18/09/15
2134 19/09/15 2 1 66 01/01/00 31/12/99
2134 20/09/15 2 1 66 01/01/00 31/12/99
2134 15/09/15 3 2 63 01/01/00 31/12/99
2134 16/09/15 8 2 159 16/09/15 17/09/15
2134 17/09/15 8 2 159 16/09/15 17/09/15
2134 18/09/15 3 2 63 01/01/00 31/12/99
2134 19/09/15 3 2 63 01/01/00 31/12/99
2134 20/09/15 3 2 63 01/01/00 31/12/99
And I have to create groups with an initial DAY and an end DAY for the same ID and TYPE.
I don't want to group by day; I need to create a new group every time TYPE_ID changes, based on the initial order (ID, TYPE, DAY ASC).
This is the result that I want to achieve:
ID DAY_INI DAY_END TYPE_ID TYPE NUM START_DATE END_DATE
---- --------- --------- ------- ---- ---- --------- ---------
4241 15/09/15 16/09/15 2 1 66 01/01/00 31/12/99
4241 17/09/15 18/09/15 9 1 59 17/09/15 18/09/15
4241 19/09/15 20/09/15 2 1 66 01/01/00 31/12/99
4241 15/09/15 15/09/15 3 2 63 01/01/00 31/12/99
4241 16/09/15 17/09/15 8 2 159 16/09/15 17/09/15
4241 18/09/15 20/09/15 3 2 63 01/01/00 31/12/99
2134 15/09/15 16/09/15 2 1 66 01/01/00 31/12/99
2134 17/09/15 18/09/15 9 1 59 17/09/15 18/09/15
2134 19/09/15 20/09/15 2 1 66 01/01/00 31/12/99
2134 15/09/15 15/09/15 3 2 63 01/01/00 31/12/99
2134 16/09/15 17/09/15 8 2 159 16/09/15 17/09/15
2134 18/09/15 20/09/15 3 2 63 01/01/00 31/12/99
Could you please give me any clue about how to do it? Thanks!
SQL Fiddle
Oracle 11g R2 Schema Setup:
CREATE TABLE TEST ( ID, DAY, TYPE_ID, TYPE, NUM, START_DATE, END_DATE ) AS
SELECT 4241, DATE '2015-09-15', 2, 1, 66, DATE '2000-01-01', DATE '1999-12-31' FROM DUAL
UNION ALL SELECT 4241, DATE '2015-09-16', 2, 1, 66, DATE '2000-01-01', DATE '1999-12-31' FROM DUAL
UNION ALL SELECT 4241, DATE '2015-09-17', 9, 1, 59, DATE '2015-09-17', DATE '2015-09-18' FROM DUAL
UNION ALL SELECT 4241, DATE '2015-09-18', 9, 1, 59, DATE '2015-09-17', DATE '2015-09-18' FROM DUAL
UNION ALL SELECT 4241, DATE '2015-09-19', 2, 1, 66, DATE '2000-01-01', DATE '1999-12-31' FROM DUAL
UNION ALL SELECT 4241, DATE '2015-09-20', 2, 1, 66, DATE '2000-01-01', DATE '1999-12-31' FROM DUAL
UNION ALL SELECT 4241, DATE '2015-09-15', 3, 2, 63, DATE '2000-01-01', DATE '1999-12-31' FROM DUAL
UNION ALL SELECT 4241, DATE '2015-09-16', 8, 2, 159, DATE '2015-09-16', DATE '2015-09-17' FROM DUAL
UNION ALL SELECT 4241, DATE '2015-09-17', 8, 2, 159, DATE '2015-09-16', DATE '2015-09-17' FROM DUAL
UNION ALL SELECT 4241, DATE '2015-09-18', 3, 2, 63, DATE '2000-01-01', DATE '1999-12-31' FROM DUAL
UNION ALL SELECT 4241, DATE '2015-09-19', 3, 2, 63, DATE '2000-01-01', DATE '1999-12-31' FROM DUAL
UNION ALL SELECT 4241, DATE '2015-09-20', 3, 2, 63, DATE '2000-01-01', DATE '1999-12-31' FROM DUAL
UNION ALL SELECT 2134, DATE '2015-09-15', 2, 1, 66, DATE '2000-01-01', DATE '1999-12-31' FROM DUAL
UNION ALL SELECT 2134, DATE '2015-09-16', 2, 1, 66, DATE '2000-01-01', DATE '1999-12-31' FROM DUAL
UNION ALL SELECT 2134, DATE '2015-09-17', 9, 1, 59, DATE '2015-09-17', DATE '2015-09-18' FROM DUAL
UNION ALL SELECT 2134, DATE '2015-09-18', 9, 1, 59, DATE '2015-09-17', DATE '2015-09-18' FROM DUAL
UNION ALL SELECT 2134, DATE '2015-09-19', 2, 1, 66, DATE '2000-01-01', DATE '1999-12-31' FROM DUAL
UNION ALL SELECT 2134, DATE '2015-09-20', 2, 1, 66, DATE '2000-01-01', DATE '1999-12-31' FROM DUAL
UNION ALL SELECT 2134, DATE '2015-09-15', 3, 2, 63, DATE '2000-01-01', DATE '1999-12-31' FROM DUAL
UNION ALL SELECT 2134, DATE '2015-09-16', 8, 2, 159, DATE '2015-09-16', DATE '2015-09-17' FROM DUAL
UNION ALL SELECT 2134, DATE '2015-09-17', 8, 2, 159, DATE '2015-09-16', DATE '2015-09-17' FROM DUAL
UNION ALL SELECT 2134, DATE '2015-09-18', 3, 2, 63, DATE '2000-01-01', DATE '1999-12-31' FROM DUAL
UNION ALL SELECT 2134, DATE '2015-09-19', 3, 2, 63, DATE '2000-01-01', DATE '1999-12-31' FROM DUAL
UNION ALL SELECT 2134, DATE '2015-09-20', 3, 2, 63, DATE '2000-01-01', DATE '1999-12-31' FROM DUAL
Query 1:
WITH group_changes AS (
SELECT t.*,
CASE TYPE_ID WHEN LAG( TYPE_ID ) OVER ( PARTITION BY ID, TYPE ORDER BY DAY ) THEN 0 ELSE 1 END AS HAS_CHANGED_GROUP
FROM TEST t
),
groups AS (
SELECT ID, DAY, TYPE_ID, TYPE, NUM, START_DATE, END_DATE,
SUM( HAS_CHANGED_GROUP ) OVER ( PARTITION BY ID, TYPE ORDER BY DAY ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW ) AS GRP
FROM group_changes
)
SELECT ID,
MIN( DAY ) AS DAY_INI,
MAX( DAY ) AS DAY_END,
MIN( TYPE_ID ) AS TYPE_ID,
TYPE,
MIN( NUM ) AS NUM,
MIN( START_DATE ) AS START_DATE,
MIN( END_DATE ) AS END_DATE
FROM groups
GROUP BY ID, TYPE, GRP
Results:
| ID | DAY_INI | DAY_END | TYPE_ID | TYPE | NUM | START_DATE | END_DATE |
|------|-----------------------------|-----------------------------|---------|------|-----|-----------------------------|-----------------------------|
| 4241 | September, 17 2015 00:00:00 | September, 18 2015 00:00:00 | 9 | 1 | 59 | September, 17 2015 00:00:00 | September, 18 2015 00:00:00 |
| 2134 | September, 15 2015 00:00:00 | September, 15 2015 00:00:00 | 3 | 2 | 63 | January, 01 2000 00:00:00 | December, 31 1999 00:00:00 |
| 2134 | September, 18 2015 00:00:00 | September, 20 2015 00:00:00 | 3 | 2 | 63 | January, 01 2000 00:00:00 | December, 31 1999 00:00:00 |
| 4241 | September, 15 2015 00:00:00 | September, 16 2015 00:00:00 | 2 | 1 | 66 | January, 01 2000 00:00:00 | December, 31 1999 00:00:00 |
| 4241 | September, 19 2015 00:00:00 | September, 20 2015 00:00:00 | 2 | 1 | 66 | January, 01 2000 00:00:00 | December, 31 1999 00:00:00 |
| 4241 | September, 15 2015 00:00:00 | September, 15 2015 00:00:00 | 3 | 2 | 63 | January, 01 2000 00:00:00 | December, 31 1999 00:00:00 |
| 4241 | September, 16 2015 00:00:00 | September, 17 2015 00:00:00 | 8 | 2 | 159 | September, 16 2015 00:00:00 | September, 17 2015 00:00:00 |
| 2134 | September, 17 2015 00:00:00 | September, 18 2015 00:00:00 | 9 | 1 | 59 | September, 17 2015 00:00:00 | September, 18 2015 00:00:00 |
| 2134 | September, 15 2015 00:00:00 | September, 16 2015 00:00:00 | 2 | 1 | 66 | January, 01 2000 00:00:00 | December, 31 1999 00:00:00 |
| 2134 | September, 19 2015 00:00:00 | September, 20 2015 00:00:00 | 2 | 1 | 66 | January, 01 2000 00:00:00 | December, 31 1999 00:00:00 |
| 2134 | September, 16 2015 00:00:00 | September, 17 2015 00:00:00 | 8 | 2 | 159 | September, 16 2015 00:00:00 | September, 17 2015 00:00:00 |
| 4241 | September, 18 2015 00:00:00 | September, 20 2015 00:00:00 | 3 | 2 | 63 | January, 01 2000 00:00:00 | December, 31 1999 00:00:00 |
Add an enumeration to the original data set (using Row_Number or rownum). Add the MIN(Enumeration) for each group. Then sort the groups by the enumeration.
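The change-flag-plus-running-sum idea in the query is not specific to SQL; a minimal Python sketch of the same grouping logic, on a made-up single-partition sequence of TYPE_ID values:

```python
# One ordered partition of TYPE_ID values, as in (ID, TYPE, DAY ASC) order.
type_ids = [2, 2, 9, 9, 2, 2]

# LAG + CASE: flag 1 whenever the value differs from the previous row
# (the first row always starts a group).
flags = [1 if i == 0 or v != type_ids[i - 1] else 0
         for i, v in enumerate(type_ids)]

# Running SUM of the flags yields a stable group number per island,
# so the two runs of TYPE_ID = 2 land in different groups.
grp, groups = 0, []
for f in flags:
    grp += f
    groups.append(grp)

print(groups)  # [1, 1, 2, 2, 3, 3]
```

Grouping by these numbers (together with ID and TYPE) and taking MIN/MAX of DAY gives the DAY_INI/DAY_END pairs in the expected output.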

Crosstab or transpose query

I have a result set from a query like this
mon-yar Count EB VC
Apr-11 34 1237 428
May-11 54 9834 87
Jun-11 23 9652 235
Jul-11 567 10765 1278
Aug-11 36 10234 1092
Sep-11 78 8799 987
Oct-11 23 10923 359
Nov-11 45 11929 346
Dec-11 67 9823 874
Jan-12 45 2398 245
Feb-12 90 3487 937
Mar-12 123 7532 689
Apr-12 109 1256 165
What I wish is this:
monthyear Apr-11 May-11 Jun-11 Jul-11 Aug-11 Sep-11 Oct-11 Nov-11 Dec-11 Jan-12 Feb-12 Mar-12 Apr-12
Count 34 54 23 567 36 78 23 45 67 45 90 123 109
EB 1237 9834 9652 10765 10234 8799 10923 11929 9823 2398 3487 7532 1256
VC 428 87 235 1278 1092 987 359 346 874 245 937 689 165
The Month Year values are dynamic. What can I do to generate it this way?
If you don't want to use PIVOT, you can use the solution below, as long as you don't mind applying Excel's Text to Columns to the result.
If you were to run:
with tbl as(
select 'Apr-11' as monyar, 34 as cnt, 1237 as eb, 428 as vc from dual union all
select 'May-11' as monyar, 54 as cnt, 9834 as eb, 87 as vc from dual union all
select 'Jun-11' as monyar, 23 as cnt, 9652 as eb, 235 as vc from dual union all
select 'Jul-11' as monyar, 567 as cnt, 10765 as eb, 1278 as vc from dual union all
select 'Aug-11' as monyar, 36 as cnt, 10234 as eb, 1092 as vc from dual union all
select 'Sep-11' as monyar, 78 as cnt, 8799 as eb, 987 as vc from dual union all
select 'Oct-11' as monyar, 23 as cnt, 10923 as eb, 359 as vc from dual union all
select 'Nov-11' as monyar, 45 as cnt, 11929 as eb, 346 as vc from dual union all
select 'Dec-11' as monyar, 67 as cnt, 9823 as eb, 874 as vc from dual union all
select 'Jan-12' as monyar, 45 as cnt, 2398 as eb, 245 as vc from dual union all
select 'Feb-12' as monyar, 90 as cnt, 3487 as eb, 937 as vc from dual union all
select 'Mar-12' as monyar, 123 as cnt, 7532 as eb, 689 as vc from dual union all
select 'Apr-12' as monyar, 109 as cnt, 1256 as eb, 165 as vc from dual
)
select 'Month' as lbl, listagg(monyar,' | ') within group (order by monyar) as list from tbl
union all
select 'Count' as lbl, listagg(cnt,' | ') within group (order by monyar) as list from tbl
union all
select 'EB' as lbl, listagg(eb,' | ') within group (order by monyar) as list from tbl
union all
select 'VC' as lbl, listagg(vc,' | ') within group (order by monyar) as list from tbl
Result:
LBL LIST
Month Apr-11 | Apr-12 | Aug-11 | Dec-11 | Feb-12 | Jan-12 | Jul-11 | Jun-11 | Mar-12 | May-11 | Nov-11 | Oct-11 | Sep-11
Count 34 | 109 | 36 | 67 | 90 | 45 | 567 | 23 | 123 | 54 | 45 | 23 | 78
EB 1237 | 1256 | 10234 | 9823 | 3487 | 2398 | 10765 | 9652 | 7532 | 9834 | 11929 | 10923 | 8799
VC 428 | 165 | 1092 | 874 | 937 | 245 | 1278 | 235 | 689 | 87 | 346 | 359 | 987
Using the pipe as the delimiter, you can then split the second column into however many columns there are.
LISTAGG is an Oracle function and I'm not sure there is a 1:1 equivalent in SQL Server, so if this has to run in SQL Server you would have to mimic the vertical concatenation one way or another.
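Outside the database, the same transpose is a one-liner in pandas (a hypothetical alternative, not part of the Oracle answer), shown here on the first three rows of the result set:

```python
import pandas as pd

# First three rows of the result set from the question.
df = pd.DataFrame({
    "monthyear": ["Apr-11", "May-11", "Jun-11"],
    "Count":     [34, 54, 23],
    "EB":        [1237, 9834, 9652],
    "VC":        [428, 87, 235],
})

# Make the month labels the columns and the measures the rows.
wide = df.set_index("monthyear").T
print(wide)
```

Because the month labels come straight from the data, this handles a dynamic set of months with no query changes.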

SQL Server partitioning when null

I have a sql server table like this:
Value RowID Diff
153 48 1
68 49 1
50 57 NULL
75 58 1
65 59 1
70 63 NULL
66 64 1
79 66 NULL
73 67 1
82 68 1
85 69 1
66 70 1
118 88 NULL
69 89 1
67 90 1
178 91 1
How can I make it like this (note the partition after each null in 3rd column):
Value RowID Diff
153 48 1
68 49 1
50 57 NULL
75 58 2
65 59 2
70 63 NULL
66 64 3
79 66 NULL
73 67 4
82 68 4
85 69 4
66 70 4
118 88 NULL
69 89 5
67 90 5
178 91 5
It looks like you are partitioning over sequential values of RowID. There is a trick to do this directly by grouping on RowID - Row_Number():
select
value,
rowID,
Diff,
RowID - row_number() over (order by RowID) Diff2
from
Table1
Notice how this gets you similar groupings, except with distinct Diff values (in Diff2):
| VALUE | ROWID | DIFF | DIFF2 |
|-------|-------|--------|-------|
| 153 | 48 | 1 | 47 |
| 68 | 49 | 1 | 47 |
| 50 | 57 | (null) | 54 |
| 75 | 58 | 1 | 54 |
| 65 | 59 | 1 | 54 |
| 70 | 63 | (null) | 57 |
| 66 | 64 | 1 | 57 |
| 79 | 66 | (null) | 58 |
| 73 | 67 | 1 | 58 |
| 82 | 68 | 1 | 58 |
| 85 | 69 | 1 | 58 |
| 66 | 70 | 1 | 58 |
| 118 | 88 | (null) | 75 |
| 69 | 89 | 1 | 75 |
| 67 | 90 | 1 | 75 |
| 178 | 91 | 1 | 75 |
Then, to get ordered values for Diff, you can use Dense_Rank() to produce a numbering over each separate partition, except where the value is NULL:
select
value,
rowID,
case when Diff = 1
then dense_rank() over (order by Diff2)
else Diff end as Diff
from (
select
value,
rowID,
Diff,
RowID - row_number() over (order by RowID) Diff2
from
Table1
) T
The result is the expected result, except keyed off of RowID directly rather than off of the existing Diff column.
| VALUE | ROWID | DIFF |
|-------|-------|--------|
| 153 | 48 | 1 |
| 68 | 49 | 1 |
| 50 | 57 | (null) |
| 75 | 58 | 2 |
| 65 | 59 | 2 |
| 70 | 63 | (null) |
| 66 | 64 | 3 |
| 79 | 66 | (null) |
| 73 | 67 | 4 |
| 82 | 68 | 4 |
| 85 | 69 | 4 |
| 66 | 70 | 4 |
| 118 | 88 | (null) |
| 69 | 89 | 5 |
| 67 | 90 | 5 |
| 178 | 91 | 5 |
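The RowID - row_number() trick can be checked outside SQL; a small Python sketch on the first seven RowIDs from the question (ignoring the NULL handling, which the CASE in the final query deals with):

```python
# RowIDs from the question, already sorted.
row_ids = [48, 49, 57, 58, 59, 63, 64]

# ROW_NUMBER() is just the 1-based position, so RowID - row_number()
# is constant within each run of consecutive RowIDs.
diff2 = [rid - (i + 1) for i, rid in enumerate(row_ids)]
print(diff2)  # [47, 47, 54, 54, 54, 57, 57]

# DENSE_RANK over diff2 turns those constants into sequential
# partition numbers.
rank = {v: n + 1 for n, v in enumerate(sorted(set(diff2)))}
groups = [rank[v] for v in diff2]
print(groups)  # [1, 1, 2, 2, 2, 3, 3]
```

Each gap in RowID shifts the difference, which is exactly what starts a new partition.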