group by case condition followed by union of two columns - sql

I have the following columns in table Sales:
Category1, priceA, priceB, Category2, costA, costB, type (some items in Category1 are the same as in Category2).
sum(priceA) and sum(priceB) are to be grouped by Category1, type.
sum(costA) and sum(costB) are to be grouped by Category2, type.
I need the final output as
Union(Category1, Category2) as Category3, sum(priceA)+sum(costA), sum(priceB)+sum(costB), type
grouped by the union of Category1 and Category2, plus type.
(sum(priceA)+sum(costA) applies whenever an item in Category1 matches one in Category2, and the same goes for sum(priceB)+sum(costB).)
I tried to do it with
select category1, sum(priceA), sum(priceB), type from Sales group by category1, type
UNION ALL
select category2, sum(costA), sum(costB), type from Sales group by category2, type
I then follow that up with another sum and group by. But I want to know how to do it without selecting from the table twice and unioning what are basically two copies of it. Can I use GROUP BY with a CASE expression here? The table I referred to as Sales is actually an inner join of multiple tables, hence the motivation not to select from it on two separate occasions (in my real case it would be a union of four select queries on that join, which also makes the query look really big). I also don't have permission to create procedures, so no PL/SQL. Is there any fancy way to shorten the query and improve performance in this situation?
EDIT - sample data:
+-----------+--------+--------+-----------+-------+-------+------+
| Category1 | PriceA | PriceB | Category2 | CostA | CostB | Type |
+-----------+--------+--------+-----------+-------+-------+------+
| AUS       | 20     | 25     | UK        | 35    | 40    | X    |
| UK        | 30     | 26     | SA        | 32    | 40    | Y    |
| USA       | 22     | 24     | NZ        | 38    | 36    | Z    |
| BRA       | 16     | 10     | USA       | 25    | 25    | Z    |
| RUS       | 20     | 15     | UK        | 20    | 30    | X    |
+-----------+--------+--------+-----------+-------+-------+------+
I divided this into a union of two tables like these:
+-----------+-------------+-------------+------+
| Category1 | sum(PriceA) | sum(PriceB) | Type |
+-----------+-------------+-------------+------+
| AUS       | 20          | 25          | X    |
| UK        | 30          | 26          | Y    |
| USA       | 22          | 24          | Z    |
| BRA       | 16          | 10          | Z    |
| RUS       | 20          | 15          | X    |
+-----------+-------------+-------------+------+
And
+-----------+------------+------------+------+
| Category2 | sum(CostA) | sum(CostB) | Type |
+-----------+------------+------------+------+
| UK        | 55         | 70         | X    |
| SA        | 32         | 40         | Y    |
| NZ        | 38         | 36         | Z    |
| USA       | 25         | 25         | Z    |
+-----------+------------+------------+------+
The final output would look like this:
+-----------+------------------------+------------------------+------+
| Category3 | sum(PriceA)+sum(CostA) | sum(PriceB)+sum(CostB) | Type |
+-----------+------------------------+------------------------+------+
| UK        | 55                     | 70                     | X    |
| UK        | 30                     | 26                     | Y    |
| NZ        | 38                     | 36                     | Z    |
| USA       | 47                     | 49                     | Z    |
| AUS       | 20                     | 25                     | X    |
| SA        | 32                     | 40                     | Y    |
| BRA       | 16                     | 10                     | Z    |
| RUS       | 20                     | 15                     | X    |
+-----------+------------------------+------------------------+------+

This will give you what you want. SQLFiddle.
with sample_data1 as (
    select "Category1", "PriceA", "PriceB", "Type"
    from sample_data
    union all
    select "Category2", "CostA", "CostB", "Type"
    from sample_data
)
select "Category1", sum("PriceA"), sum("PriceB"), "Type"
from sample_data1 sd1
group by "Category1", "Type"
You will have to have a union at some point, because you need to increase the number of rows from your original source table. You can't do it with just a CASE.
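That said, since your real source is an inner join of several tables, you can at least write that join only once by factoring it into its own CTE and unioning from the CTE; whether the engine evaluates the CTE once or once per branch depends on your database. A rough sketch, where table1, table2 and the join condition are placeholders for your actual tables:
with sales as (
    -- your existing multi-table inner join, written once
    select t1.category1, t1.pricea, t1.priceb,
           t2.category2, t2.costa, t2.costb, t1.type
    from table1 t1
    inner join table2 t2 on t2.id = t1.id
),
unpivoted as (
    select category1 as category3, pricea as a, priceb as b, type from sales
    union all
    select category2, costa, costb, type from sales
)
select category3, sum(a), sum(b), type
from unpivoted
group by category3, type;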

Related

How to make rows shuffle?

There is a table with 10+ rows, and I need to shuffle all rows randomly and create a new table from the result. Any ideas?
Using select * from table order by random() seems slow.
The raw table looks like this, and the target column separates it into two parts:
+--------+------+--------+------+-----+--------+
| cst_id | name | salary | fund | age | target |
+--------+------+--------+------+-----+--------+
| 1 | a | 100 | Y | 33 | 0 |
| 2 | b | 200 | Y | 21 | 0 |
| 3 | c | 300 | Y | 45 | 0 |
| 4 | d | 400 | N | 26 | 0 |
| 5 | e | 500 | N | 37 | 0 |
| 6 | f | 600 | Y | 56 | 0 |
| 7 | g | 700 | Y | 44 | 0 |
| 8 | h | 800 | N | 22 | 1 |
| 9 | i | 900 | N | 38 | 1 |
| 10 | j | 1000 | Y | 61 | 1 |
| 11 | k | 1100 | N | 51 | 1 |
| 12 | l | 1200 | N | 21 | 1 |
| 13 | m | 1300 | Y | 32 | 1 |
| 14 | n | 1400 | N | 17 | 1 |
+--------+------+--------+------+-----+--------+
after:
+--------+------+--------+------+-----+--------+
| cst_id | name | salary | fund | age | target |
+--------+------+--------+------+-----+--------+
| 1 | a | 100 | Y | 33 | 0 |
| 2 | b | 200 | Y | 21 | 0 |
| 8 | h | 800 | N | 22 | 1 |
| 9 | i | 900 | N | 38 | 1 |
| 3 | c | 300 | Y | 45 | 0 |
| 13 | m | 1300 | Y | 32 | 1 |
| 14 | n | 1400 | N | 17 | 1 |
| 5 | e | 500 | N | 37 | 0 |
| 6 | f | 600 | Y | 56 | 0 |
| 7 | g | 700 | Y | 44 | 0 |
| 10 | j | 1000 | Y | 61 | 1 |
| 11 | k | 1100 | N | 51 | 1 |
| 4 | d | 400 | N | 26 | 0 |
+--------+------+--------+------+-----+--------+
The following explains how to create a NEW table from an existing one, with the same data and schema but with shuffled rows.
Create a new table and import all rows and records from the first table, ordered randomly by the RAND() SQL function:
CREATE TABLE new_table SELECT * FROM old_table ORDER BY RAND()
Or, if you have already created a table identical in structure to the old one, use INSERT INTO instead:
INSERT INTO new_table SELECT * FROM old_table ORDER BY RAND()
That is of course if you want to preserve the primary key identification of each row, which is most likely what you want to do with old tables because of legacy code and data entity relationships. However, if you want a brand-new table with all the shuffled records completely rearranged, as if for a different application, you can ignore the primary key or ID by not importing the ID field of the old table.
For instance, say the old table has ID, col1 and col2 as data fields. To create a brand-new, shuffled version of the old table without the ID:
CREATE TABLE new_table SELECT col1, col2 FROM old_table ORDER BY RAND()
And a new primary key ID will be automatically assigned to each of the rows in the new table.
But note that in SQL, relations have no inherent order. Rows in a relational database are not stored sorted, so you may get a different order when retrieving them.
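Since the question uses order by random(), here is the PostgreSQL form of the same idea (a sketch; old_table and new_table are placeholder names). Materializing the shuffle once into a new table also addresses the "order by random() seems slow" complaint, because the sort cost is paid a single time instead of on every query:
-- PostgreSQL: shuffle once into a new table
CREATE TABLE new_table AS
SELECT *
FROM old_table
ORDER BY random();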

how to split table in postgresql

I have a data set of about 70 thousand rows and I want to split this table into three tables with exact numbers of rows (the code was first written in SAS and is now being moved to PostgreSQL): the first with rows 1-5000, the second with rows 5001-25000, and the last with the remaining rows, with no duplicated rows in any of them.
like:
+--------+-----+--------+-----+
| cst_id | age | salary | sex |
+--------+-----+--------+-----+
| 1 | 44 | 2000 | M |
| 2 | 23 | 3000 | F |
| 3 | 34 | 4000 | M |
| 4 | 51 | 5000 | M |
| 5 | 26 | 6000 | F |
| 6 | 28 | 7000 | F |
| 7 | 39 | 8000 | M |
+--------+-----+--------+-----+
Finally I want three tables with the exact number of rows I assign (such as 3 rows / 2 rows / the remaining rows), and all of them distinct, like:
table1:
+--------+-----+--------+-----+
| cst_id | age | salary | sex |
+--------+-----+--------+-----+
| 1 | 44 | 2000 | M |
| 2 | 23 | 3000 | F |
| 3 | 34 | 4000 | M |
+--------+-----+--------+-----+
table2:
+--------+-----+--------+-----+
| cst_id | age | salary | sex |
+--------+-----+--------+-----+
| 4 | 51 | 5000 | M |
| 5 | 26 | 6000 | F |
+--------+-----+--------+-----+
table3:
+--------+-----+--------+-----+
| cst_id | age | salary | sex |
+--------+-----+--------+-----+
| 6 | 28 | 7000 | F |
| 7 | 39 | 8000 | M |
+--------+-----+--------+-----+
How can I do this in PostgreSQL?
The window function NTILE can do this:
-- add a column to help split the rows
create temp table help_table as
select *
      ,ntile(3) over (order by cst_id) as batch_nbr
from your_table;

create table table_1 as
select * from help_table where batch_nbr = 1;

create table table_2 as
select * from help_table where batch_nbr = 2;

create table table_3 as
select * from help_table where batch_nbr = 3;
Alternatively, you can split the process into steps (or wrap them in a function):
Get the total number of distinct rows.
Divide that value by 3 and store the value as a DECLAREd variable (_size).
Create table_1, table_2, and table_3.
INSERT INTO table_1 with LIMIT (_size).
INSERT INTO table_2 with LIMIT (_size) WHERE id > table_1's greatest id.
INSERT INTO table_3 with LIMIT (_size) WHERE id > table_2's greatest id.
Hopefully this helps; a sketch of a fixed-size split follows below.
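Note that NTILE(3) produces three roughly equal buckets, so it will not give the exact 5000 / 20000 / rest split on its own. A sketch of a fixed-size split using row_number() instead (your_table and its columns are the names from the question):
-- assign every row a position, then cut at the requested boundaries
create temp table numbered as
select *
      ,row_number() over (order by cst_id) as rn
from your_table;

create table table_1 as
select cst_id, age, salary, sex from numbered where rn <= 5000;

create table table_2 as
select cst_id, age, salary, sex from numbered where rn between 5001 and 25000;

create table table_3 as
select cst_id, age, salary, sex from numbered where rn > 25000;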

How do I select columns whenever they change?

I'm trying to create a slowly changing dimension (type 2 dimension) and am a bit lost on how to logically write it out. Say that we have a source table with a grain of Person | Country | Department | Login Time. I want to create this dimension table with Person | Country | Department | Eff Start time | Eff End Time.
Data could look like this:
Person | Country | Department | Login Time
------------------------------------------
Bob | CANADA | Marketing | 2009-01-01
Bob | CANADA | Marketing | 2009-02-01
Bob | USA | Marketing | 2009-03-01
Bob | USA | Sales | 2009-04-01
Bob | MEX | Product | 2009-05-01
Bob | MEX | Product | 2009-06-01
Bob | MEX | Product | 2009-07-01
Bob | CANADA | Marketing | 2009-08-01
What I want in the Type 2 dimension would look like this:
Person | Country | Department | Eff Start time | Eff End Time
------------------------------------------------------------------
Bob | CANADA | Marketing | 2009-01-01 | 2009-03-01
Bob | USA | Marketing | 2009-03-01 | 2009-04-01
Bob | USA | Sales | 2009-04-01 | 2009-05-01
Bob | MEX | Product | 2009-05-01 | 2009-08-01
Bob | CANADA | Marketing | 2009-08-01 | NULL
Assume that Bob's Country and Department haven't changed since 2009-08-01, so the effective end time is left as NULL.
What function would work best here? This is on Netezza, which uses a flavor of Postgres.
Obviously GROUP BY would not work here because the same grouping can recur later on (I added Bob | CANADA | Marketing as the last row to show this).
EDIT
Would including a hash column on Person, Country, and Department make sense? I'm thinking of using logic along the lines of
SELECT PERSON, COUNTRY, DEPARTMENT
FROM table t1
where
person = person
AND t1.hash <> hash_function(person, country, department)
Answer
create table so (
person varchar(32)
,country varchar(32)
,department varchar(32)
,login_time date
) distribute on random;
insert into so values ('Bob','CANADA','Marketing','2009-01-01');
insert into so values ('Bob','CANADA','Marketing','2009-02-01');
insert into so values ('Bob','USA','Marketing','2009-03-01');
insert into so values ('Bob','USA','Sales','2009-04-01');
insert into so values ('Bob','MEX','Product','2009-05-01');
insert into so values ('Bob','MEX','Product','2009-06-01');
insert into so values ('Bob','MEX','Product','2009-07-01');
insert into so values ('Bob','CANADA','Marketing','2009-08-01');
/* ************************************************************************** */
with prm as ( --Create an ordinal primary key.
select
*
,row_number() over (
partition by person
order by login_time
) rwn
from
so
), chn as ( --Chain events to their previous and next event.
select
cur.rwn
,cur.person
,cur.country
,cur.department
,cur.login_time cur_login
,case
when
cur.country = prv.country
and cur.department = prv.department
then 1
else 0
end prv_equal
,case
when
(
cur.country = nxt.country
and cur.department = nxt.department
) or nxt.rwn is null --No next record should be equivalent to matching.
then 1
else 0
end nxt_equal
,case prv_equal
when 0 then cur_login
else null
end eff_login_start_sparse
,case
when eff_login_start_sparse is null
then max(eff_login_start_sparse) over (
partition by cur.person
order by rwn
rows unbounded preceding --The secret sauce.
)
else eff_login_start_sparse
end eff_login_start
,case nxt_equal
when 0 then cur_login
else null
end eff_login_end
from
prm cur
left outer join prm nxt on
cur.person = nxt.person
and cur.rwn + 1 = nxt.rwn
left outer join prm prv on
cur.person = prv.person
and cur.rwn - 1 = prv.rwn
), grp as ( --Group by login starts.
select
person
,country
,department
,eff_login_start
,max(eff_login_end) eff_login_end
from
chn
group by
person
,country
,department
,eff_login_start
), led as ( --Change the effective end to be the next start, if desired.
select
person
,country
,department
,eff_login_start
,case
when eff_login_end is null
then null
else
lead(eff_login_start) over (
partition by person
order by eff_login_start
)
end eff_login_end
from
grp
)
select * from led order by eff_login_start;
This code returns the following table.
PERSON | COUNTRY | DEPARTMENT | EFF_LOGIN_START | EFF_LOGIN_END
--------+---------+------------+-----------------+---------------
Bob | CANADA | Marketing | 2009-01-01 | 2009-03-01
Bob | USA | Marketing | 2009-03-01 | 2009-04-01
Bob | USA | Sales | 2009-04-01 | 2009-05-01
Bob | MEX | Product | 2009-05-01 | 2009-08-01
Bob | CANADA | Marketing | 2009-08-01 |
Explanation
I must have solved this four or five times in the past few years and keep neglecting to write it down formally. I'm glad to have the chance to do it, so this is a great question.
When attempting this, I like writing down the problem in matrix form. Here's the input, presuming that all values have the same key in the SCD.
Cv | Ce
----|----
A | 10
A | 11
B | 14
C | 16
D | 18
D | 25
D | 34
A | 40
Where Cv is the value that we'll need to compare against (again, presuming that the key value for the SCD is equal in this data; we'll be partitioning over the key value the entire time so it's irrelevant to the solution) and Ce is the event time.
First, we need an ordinal primary key. I've designated this Ck in the table. This will allow us to join the table to itself to get the previous and next events. I've called these columns Pk (previous key), Nk (next key), Pv, and Nv.
Cv | Ce | Ck | Pk | Pv | Nk | Nv |
----|----|----|----|----|----|----|
A | 10 | 1 | | | 2 | A |
A | 11 | 2 | 1 | A | 3 | B |
B | 14 | 3 | 2 | A | 4 | C |
C | 16 | 4 | 3 | B | 5 | D |
D | 18 | 5 | 4 | C | 6 | D |
D | 25 | 6 | 5 | D | 7 | D |
D | 34 | 7 | 6 | D | 8 | A |
A | 40 | 8 | 7 | D | | |
Now we need some columns to see if we're at the beginning or end of a contiguous event block. I'll call these Pc and Nc, for contiguous. Pc is defined as Pv = Cv => true. 1 represents true and 0 represents false. Nc is defined similarly, except that the null case defaults to true (we'll see why in a minute)
Cv | Ce | Ck | Pk | Pv | Nk | Nv | Pc | Nc |
----|----|----|----|----|----|----|----|----|
A | 10 | 1 | | | 2 | A | 0 | 1 |
A | 11 | 2 | 1 | A | 3 | B | 1 | 0 |
B | 14 | 3 | 2 | A | 4 | C | 0 | 0 |
C | 16 | 4 | 3 | B | 5 | D | 0 | 0 |
D | 18 | 5 | 4 | C | 6 | D | 0 | 1 |
D | 25 | 6 | 5 | D | 7 | D | 1 | 1 |
D | 34 | 7 | 6 | D | 8 | A | 1 | 0 |
A | 40 | 8 | 7 | D | | | 0 | 1 |
Now you can start to see how the 1,1 combination of Pc,Nc is a completely useless record. We know this intuitively, since Bob's Mex/Product combination on the 6th row is pretty much useless information when building an SCD.
So let's get rid of the useless information. I'll add two new columns here: an almost-complete effective start time called Sn and an actually-complete effective end time called Ee. Sn is populated with Ce when Pc is 0, and Ee is populated with Ce when Nc is 0.
Cv | Ce | Ck | Pk | Pv | Nk | Nv | Pc | Nc | Sn | Ee |
----|----|----|----|----|----|----|----|----|----|----|
A | 10 | 1 | | | 2 | A | 0 | 1 | 10 | |
A | 11 | 2 | 1 | A | 3 | B | 1 | 0 | | 11 |
B | 14 | 3 | 2 | A | 4 | C | 0 | 0 | 14 | 14 |
C | 16 | 4 | 3 | B | 5 | D | 0 | 0 | 16 | 16 |
D | 18 | 5 | 4 | C | 6 | D | 0 | 1 | 18 | |
D | 25 | 6 | 5 | D | 7 | D | 1 | 1 | | |
D | 34 | 7 | 6 | D | 8 | A | 1 | 0 | | 34 |
A | 40 | 8 | 7 | D | | | 0 | 1 | 40 | |
This looks really close, but we still have the problem that we can't group by Cv (person/country/department). What we need is for Sn to fill all those nulls with the previous non-null value of Sn. You could join this table to itself on a smaller rwn and take the maximum, but I'm going to be lazy and use Netezza's analytic functions and the rows unbounded preceding clause. It's a shortcut to the method I just described. So we're going to create another column called Es, effective start, defined as follows.
case
when Sn is null
then max(Sn) over (
partition by k --key value of the SCD
order by Ck
rows unbounded preceding
)
else Sn
end Es
With that definition, we get this.
Cv | Ce | Ck | Pk | Pv | Nk | Nv | Pc | Nc | Sn | Ee | Es |
----|----|----|----|----|----|----|----|----|----|----|----|
A | 10 | 1 | | | 2 | A | 0 | 1 | 10 | | 10 |
A | 11 | 2 | 1 | A | 3 | B | 1 | 0 | | 11 | 10 |
B | 14 | 3 | 2 | A | 4 | C | 0 | 0 | 14 | 14 | 14 |
C | 16 | 4 | 3 | B | 5 | D | 0 | 0 | 16 | 16 | 16 |
D | 18 | 5 | 4 | C | 6 | D | 0 | 1 | 18 | | 18 |
D | 25 | 6 | 5 | D | 7 | D | 1 | 1 | | | 18 |
D | 34 | 7 | 6 | D | 8 | A | 1 | 0 | | 34 | 18 |
A | 40 | 8 | 7 | D | | | 0 | 1 | 40 | | 40 |
The rest is trivial. Group by Es and grab the max of Ee to obtain this table.
Cv | Es | Ee |
----|----|----|
A | 10 | 11 |
B | 14 | 14 |
C | 16 | 16 |
D | 18 | 34 |
A | 40 | |
If you want to populate the effective end time with the next start, join the table again to itself or use the lead() window function to grab it.
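For what it's worth, on platforms that support lag()/lead() the same change-point idea can be written without the self-joins: flag a row as the start of a block when its value differs from the previous row, turn the flags into block ids with a running sum, then group by block. A condensed sketch against the so table above (assuming PostgreSQL-style window function behaviour; this is not the exact query from the answer):
with marked as (
    select person, country, department, login_time,
           case
               when lag(country)    over (partition by person order by login_time) = country
                and lag(department) over (partition by person order by login_time) = department
               then 0 else 1
           end as is_change  -- 1 marks the start of a new block
    from so
), blocks as (
    select *,
           sum(is_change) over (partition by person order by login_time
                                rows unbounded preceding) as block_id
    from marked
)
select person, country, department,
       min(login_time) as eff_login_start,
       lead(min(login_time)) over (partition by person
                                   order by min(login_time)) as eff_login_end
from blocks
group by person, country, department, block_id
order by eff_login_start;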

How to generate merit list from exam results in SQL Server

I'm using SQL Server 2008 R2. I have a table called tstResult in my database.
AI SubID StudID StudName TotalMarks ObtainedMarks
--------------------------------------------------------
1 | 1 | 1 | Jakir | 100 | 90
2 | 1 | 2 | Rubel | 100 | 75
3 | 1 | 3 | Ruhul | 100 | 82
4 | 1 | 4 | Beauty | 100 | 82
5 | 1 | 5 | Bulbul | 100 | 96
6 | 1 | 6 | Ripon | 100 | 82
7 | 1 | 7 | Aador | 100 | 76
8 | 1 | 8 | Jibon | 100 | 80
9 | 1 | 9 | Rahaat | 100 | 82
Now I want a SELECT query that generates a merit list according to the obtained marks. In this list the obtained mark 96 will be at the top, and all the rows with 82 will be placed one after another with the same merit position. Something like this:
StudID StudName TotalMarks ObtainedMarks Merit List
----------------------------------------------------------
| 5 | Bulbul | 100 | 96 | 1
| 1 | Jakir | 100 | 90 | 2
| 9 | Rahaat | 100 | 82 | 3
| 3 | Ruhul | 100 | 82 | 3
| 4 | Beauty | 100 | 82 | 3
| 6 | Ripon | 100 | 82 | 3
| 8 | Jibon | 100 | 80 | 4
| 7 | Aador | 100 | 76 | 5
| 2 | Rubel | 100 | 75 | 6
;with cte as
(
    select *, dense_rank() over (order by ObtainedMarks desc) as Merit_List
    from tstResult
)
select * from cte order by Merit_List
You need to use DENSE_RANK() for this.
select columns from tstResult order by ObtainedMarks desc
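For comparison, RANK() would leave gaps after ties (the row after the four 82s would get 7 instead of 4), which is why DENSE_RANK() matches the expected output. A quick side-by-side against the same table:
select StudID, StudName, TotalMarks, ObtainedMarks,
       dense_rank() over (order by ObtainedMarks desc) as Merit_List,    -- 1,2,3,3,3,3,4,5,6
       rank()       over (order by ObtainedMarks desc) as Rank_With_Gaps -- 1,2,3,3,3,3,7,8,9
from tstResult
order by ObtainedMarks desc;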

SQL Joining 2 Tables

I would like to merge two tables into one and also add a counter next to that. What I have now is:
SELECT [CUCY_DATA].*, [DIM].[Col1], [DIM].[Col2],
       (SELECT COUNT([Cut Counter]) FROM [MSD]
        WHERE [CUCY_DATA].[Cut Counter] = [MSD].[Cut Counter]
       ) AS [Nr Of Errors]
FROM [CUCY_DATA]
FULL JOIN [DIM]
    ON [CUCY_DATA].[Cut Counter] = [DIM].[Cut Counter]
This way the data comes through, but NULLs appear where the values don't match. What I want, for instance, is this:
Table CUCY_DATA
|_Cut Counter_|_Data1_|_Data2_|
| 1 | 12 | 24 |
| 2 | 13 | 26 |
| 3 | 10 | 20 |
| 4 | 11 | 22 |
Table DIM
|_Cut Counter_|_Col1_|_Col2_|
| 1 | 25 | 40 |
| 3 | 50 | 45 |
And they need to be merged into:
|_Cut Counter_|_Data1_|_Data2_|_Col1_|_Col2_|
| 1 | 12 | 24 | 25 | 40 |
| 2 | 13 | 26 | 25 | 40 |
| 3 | 10 | 20 | 50 | 45 |
| 4 | 11 | 22 | 50 | 45 |
SO THIS IS WRONG:
|_Cut Counter_|_Data1_|_Data2_|_Col1__|_Col2__|
| 1 | 12 | 24 | 25 | 40 |
| 2 | 13 | 26 | NULL | NULL |
| 3 | 10 | 20 | 50 | 45 |
| 4 | 11 | 22 | NULL | NULL |
Kind regards, Bob
How are you getting the Col1 and Col2 values where there is no corresponding row in your DIM table (rows 2 and 4)? Your "wrong" result is exactly correct; that is what an outer join does.
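If the intent really is to carry the most recent preceding DIM row forward into the gaps (Cut Counter 2 taking row 1's values, 4 taking row 3's), that is not a plain join. Assuming SQL Server (suggested by the bracketed names), one way is OUTER APPLY with TOP 1, which reproduces the desired table above; treat this as a sketch rather than the only option:
SELECT c.*, d.[Col1], d.[Col2],
       (SELECT COUNT([Cut Counter]) FROM [MSD]
        WHERE [MSD].[Cut Counter] = c.[Cut Counter]) AS [Nr Of Errors]
FROM [CUCY_DATA] c
OUTER APPLY (
    -- latest DIM row at or before this Cut Counter
    SELECT TOP (1) [Col1], [Col2]
    FROM [DIM]
    WHERE [DIM].[Cut Counter] <= c.[Cut Counter]
    ORDER BY [DIM].[Cut Counter] DESC
) d;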