How to separate one column into two with CASE - SQL

I have a table and I want to separate the data into multiple columns. How can I do it?
I tried this:
select a.[batch],a.[Doc_Type],
[Soaking Out] =
CASE a.[Doc_Type]
WHEN 'BB' THEN 'Soaking Out'
END,
[Soaking In] =
CASE a.[Doc_Type]
WHEN 'AA' THEN 'Soaking In'
END
FROM Transaction_Hdr a JOIN Transaction_dtl b
on a.Doc_Number=b.Doc_Number

Your original query would output the strings 'Soaking Out' or 'Soaking In', but what is needed in those case expressions (after THEN) is the column [Qty]; it is that value which will be returned from the case expression.
What I don't know is which table [Qty] comes from, but I assume it is the detail table (b); otherwise there isn't much point in joining that detail table.
SELECT
a.[Doc_Type]
, a.[batch]
, CASE a.[Doc_Type] WHEN 'BB' THEN b.Qty END [soaking out]
, CASE a.[Doc_Type] WHEN 'AA' THEN b.Qty END [soaking in]
FROM Transaction_Hdr a
JOIN Transaction_dtl b ON a.Doc_Number = b.Doc_Number
ORDER BY
a.[Doc_Type]
, a.[batch]
But: a "detail" table and a "header" table usually indicate many rows of detail for a single header, so you might need a SUM() and GROUP BY:
SELECT
h.[Doc_Type]
, h.[batch]
, SUM(CASE h.[Doc_Type] WHEN 'BB' THEN d.Qty END) [soaking out]
, SUM(CASE h.[Doc_Type] WHEN 'AA' THEN d.Qty END) [soaking in]
FROM Transaction_Hdr h
JOIN Transaction_dtl d ON h.Doc_Number = d.Doc_Number
GROUP BY
h.[Doc_Type]
, h.[batch]
ORDER BY
h.[Doc_Type]
, h.[batch]
Note I have now used the aliases "h" = "header" and "d" = "detail", as I am really not keen on aliases that rely on a sequence within the query (that sequence can get messed with very easily). I find it far easier when an alias identifies its table by the first letter of each word in the table's name, or similar.

select a.[batch], a.[Doc_Type],
isnull(CASE WHEN a.[Doc_Type] = 'AA' THEN convert(real, a.Qty) END, 0) as [Soaking In],
isnull(CASE WHEN a.[Doc_Type] = 'BB' THEN convert(real, a.Qty) END, 0) as [Soaking Out]
FROM Transaction_Hdr a

I think you are looking for the quantity in the result table, so you should use that instead of the strings 'Soaking In' and 'Soaking Out', as follows:
select a.[batch],a.[Doc_Type],
SoakingOut =
CASE a.[Doc_Type]
WHEN 'BB' THEN Qty
END ,
SoakingIn =
CASE a.[Doc_Type]
WHEN 'AA' THEN Qty
END
FROM #temp a

BEGIN TRAN
CREATE TABLE #Data (
Doc_Type VARCHAR(10),
Batch INT,
Qty DECIMAL(4,2)
);
INSERT INTO #Data VALUES
('AA', 1, 20.5),
('BB', 2, 10 ),
('AA', 3, 6 ),
('BB', 4, 7 ),
('AA', 5, 8 );
SELECT ISNULL(CASE WHEN Doc_Type = 'AA' THEN CONVERT(NVARCHAR(10), QTY) END, '') Soaking_In,
ISNULL(CASE WHEN Doc_Type = 'BB' THEN CONVERT(NVARCHAR(10), QTY) END, '') Soaking_Out
FROM #Data
ROLLBACK TRAN

Use CASE and modulus as below, assuming that Batch is always incremented by 1 and Doc_Type always alternates between the two values AA and BB in the same order:
CREATE TABLE Data (
Doc_Type VARCHAR(10),
Batch INT,
Qty DECIMAL(4,2)
);
INSERT INTO Data VALUES
('AA', 1, 20.5),
('BB', 2, 10 ),
('AA', 3, 6 ),
('BB', 4, 7 ),
('AA', 5, 8 );
SELECT D.Doc_Type, D.Batch,
CASE WHEN D.Batch % 2 = 0 Then 0 ELSE D.Qty END AS Soaking_In,
CASE WHEN D.Batch % 2 = 1 Then 0 ELSE D.Qty END AS Soaking_Out
FROM Data D;
Results:
+----------+-------+------------+-------------+
| Doc_Type | Batch | Soaking_In | Soaking_Out |
+----------+-------+------------+-------------+
| AA       |     1 |      20.50 |        0.00 |
| BB       |     2 |       0.00 |       10.00 |
| AA       |     3 |       6.00 |        0.00 |
| BB       |     4 |       0.00 |        7.00 |
| AA       |     5 |       8.00 |        0.00 |
+----------+-------+------------+-------------+

Related

Formatting multiple SELECT statements with PIVOT

I currently have a script that unions around a dozen SELECT statements; an example of two of these, along with an example of the results, is shown below.
DECLARE @Age TABLE (name VARCHAR(30), total FLOAT, percentage FLOAT)
INSERT INTO @Age
SELECT '0-18', (SELECT COUNT(*) FROM tblPerson p
INNER JOIN tblClient c ON c.intPersonID = p.intPersonID
WHERE ISNULL(dbo.fncReportClient_Age(p.dteBirthdate, GETDATE()), '') >= 0 AND ISNULL(dbo.fncReportClient_Age(p.dteBirthdate, GETDATE()), '') <= 18), ''
UPDATE @Age
SET percentage = ROUND((SELECT total FROM @Age WHERE name = '0-18')/(SELECT SUM(total) FROM @Age) * 100, 2)
FROM @Age
WHERE name = '0-18'
Etc.
SELECT
g.nvhGenderName,
COUNT(*),
ROUND(COUNT(*) * 1.0 / SUM(COUNT(*)) OVER () * 100, 2)
FROM
tblClient c
LEFT JOIN tblPerson p ON p.intPersonID = c.intPersonID
LEFT JOIN tblGender g ON g.intGenderID = p.intGenderID
GROUP BY g.nvhGenderName
UNION ALL
SELECT * FROM @Age
Results example below:
Name | Total | % |
---------------------------------
Male | 6514 | 60.32 |
Female | 4285 | 39.68 |
0-18 | 279 | 1.58 |
19-24 | 1748 | 9.93 |
25-34 | 5423 | 30.80 |
35-64 | 9546 | 54.21 |
65+ | 614 | 3.50 |
I would like to display these results horizontally as opposed to vertically. I think it is possible to do this with PIVOT, but I have never really used it. An example of how I want the data to be displayed is shown below:
Gender | Total | % | Age | Total | % |
-------------------------------------------------------------
Male | 6514 | 60.32 | 0-18 | 279 | 1.58 |
Female | 4285 | 39.68 | 19-24 | 1748 | 9.93 |
| | | 25-34 | 5423 | 30.80 |
| | | 35-64 | 9546 | 54.21 |
| | | 65+ | 614 | 3.50 |
In particular I am not sure how I would use a pivot to combine the multiple (12) SELECT statements that require it.
Any help on how to format this would be much appreciated.
Whilst I believe the layout really should be achieved elsewhere, the following may work for you. Clearly I cannot test it, so, without the benefit of testing, here goes:
WITH myCTE AS (
SELECT
COUNT(CASE WHEN oa.age >= 0 AND oa.age < 19 THEN p.intPersonID ELSE NULL END) c0019
, COUNT(CASE WHEN oa.age >= 19 AND oa.age < 25 THEN p.intPersonID ELSE NULL END) c1925
, COUNT(CASE WHEN oa.age >= 25 AND oa.age < 35 THEN p.intPersonID ELSE NULL END) c2535
, COUNT(CASE WHEN oa.age >= 35 AND oa.age < 65 THEN p.intPersonID ELSE NULL END) c3565
, COUNT(CASE WHEN oa.age >= 65 THEN p.intPersonID ELSE NULL END) c65on
, COUNT(CASE WHEN g.nvhGenderName = 'Male' THEN p.intPersonID ELSE NULL END) cmale
, COUNT(CASE WHEN g.nvhGenderName = 'Female' THEN p.intPersonID ELSE NULL END) cfemale
, COUNT(*) ctotal
FROM tblClient c
LEFT JOIN tblPerson p ON p.intPersonID = c.intPersonID
OUTER APPLY (
SELECT
dbo.fncReportClient_Age(p.dteBirthdate, GETDATE()) AS age
) AS oa
LEFT JOIN tblGender g ON g.intGenderID = p.intGenderID
)
SELECT
ca.Gender, ca.Total, ca.Pct, ca.Age, ca.Total2, ca.Pct2
FROM myCTE
CROSS APPLY (
VALUES
(1, 'Male' , cmale , (cmale * 100.0 / ctotal), '0-18' , c0019, (c0019 * 100.0 / ctotal))
, (2, 'Female', cfemale, (cfemale * 100.0 / ctotal), '19-24', c1925, (c1925 * 100.0 / ctotal))
, (3, NULL, NULL, NULL, '25-34', c2535, (c2535 * 100.0 / ctotal))
, (4, NULL, NULL, NULL, '35-64', c3565, (c3565 * 100.0 / ctotal))
, (5, NULL, NULL, NULL, '65+' , c65on, (c65on * 100.0 / ctotal))
) AS ca (rn, Gender, Total, Pct, Age, Total2, Pct2)
ORDER BY ca.rn
;
The second (lower) part of the query above uses a technique for unpivoting data that combines CROSS APPLY with VALUES; this allows us to lay out each row of the wanted final result, row by row.
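Stripped of the report-specific columns, that pattern looks like this; the one-row derived table and the names src/ca/Label/Amount are invented purely for illustration:

```sql
-- Minimal sketch of unpivoting via CROSS APPLY (VALUES).
SELECT ca.Label, ca.Amount
FROM (SELECT 10 AS ColA, 20 AS ColB, 30 AS ColC) AS src
CROSS APPLY (
    VALUES ('A', src.ColA)
         , ('B', src.ColB)
         , ('C', src.ColC)
) AS ca (Label, Amount);
-- the single source row becomes three result rows: (A,10), (B,20), (C,30)
```

Each row in the VALUES list becomes one output row per source row, which is exactly how the query above builds the final report rows.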
Using a CTE (the upper part of the query above) isn't essential; it could be moved into a subquery instead. The query inside the CTE should be trialled standalone first, and it should produce all the numbers you need in one pass of the data (assuming that the gender table doesn't disturb the count results). Note that using COUNT(case expression here) removes the need for multiple separate queries. I suspect you don't need left joins, by the way, and if that is true you could also change the outer apply to a cross apply. Note that the apply is used to execute your function; by doing it this way we are able to reference the result of that function by an alias in the remainder of the query (I used "age" as that alias).
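The COUNT(case expression) idea can be demonstrated in isolation; this is a sketch against an invented table variable, not the report tables above:

```sql
-- COUNT ignores NULLs, and a CASE with no ELSE yields NULL, so each
-- COUNT(CASE ...) counts only the rows matching its condition - in one pass.
DECLARE @People TABLE (Age INT);
INSERT INTO @People VALUES (10), (22), (30), (70);

SELECT
      COUNT(CASE WHEN Age >= 0  AND Age < 19 THEN 1 END) AS c0018  -- 1
    , COUNT(CASE WHEN Age >= 19 AND Age < 25 THEN 1 END) AS c1924  -- 1
    , COUNT(CASE WHEN Age >= 65 THEN 1 END)              AS c65on  -- 1
    , COUNT(*)                                           AS ctotal -- 4
FROM @People;
```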
I am unsure why you involve a client table when counting the persons table. I suspect it isn't needed. If my suspicions are correct the CTE detail could be replaced by this:
SELECT
COUNT(CASE WHEN oa.age >= 0 AND oa.age < 19 THEN 1 ELSE NULL END) c0019
, COUNT(CASE WHEN oa.age >= 19 AND oa.age < 25 THEN 1 ELSE NULL END) c1925
, COUNT(CASE WHEN oa.age >= 25 AND oa.age < 35 THEN 1 ELSE NULL END) c2535
, COUNT(CASE WHEN oa.age >= 35 AND oa.age < 65 THEN 1 ELSE NULL END) c3565
, COUNT(CASE WHEN oa.age >= 65 THEN 1 ELSE NULL END) c65on
, COUNT(CASE WHEN g.nvhGenderName = 'Male' THEN 1 ELSE NULL END) cmale
, COUNT(CASE WHEN g.nvhGenderName = 'Female' THEN 1 ELSE NULL END) cfemale
, COUNT(*) ctotal
FROM tblPerson p
CROSS APPLY (
SELECT
dbo.fncReportClient_Age(p.dteBirthdate, GETDATE()) AS age
) AS oa
INNER JOIN tblGender g ON g.intGenderID = p.intGenderID
Create VIEW view_name AS
SELECT * FROM (
select ColumnName1, ColumnName2, ColumnName3 from TableName
) as s
PIVOT (
Sum(ColumnName2)
FOR ColumnName3 in ([Row1], [Row2], [Row3])
) As pvt
GO
Select * from view_name;
DROP VIEW view_name;

How to trim a string after and before a certain character for one column, with different WHERE conditions, in SQL Server?

For example, in my dbo.table I have a column named Gas.
I need to sum values nested inside this column.
The Gas column holds values like:
1:0.5;2:0.455;3:0.578;
I need the values that sit after each number's colon and before the next ; - 0.5, 0.455, 0.578, etc.
So the desired query would be something like:
SELECT user,
SUM(CASE WHEN 1: ; then trim(1: to ;) else 0 end) as case1
SUM(CASE WHEN 2: ; then trim(2: to ;) else 0 end) as case2
SUM(CASE WHEN 3: ; then trim(3: to ;) else 0 end) as case3
SUM(CASE WHEN 4: ; then trim(4: to ;) else 0 end) as case4
SUM(CASE WHEN 5: ; then trim(5: to ;) else 0 end) as case5
from table group by user;
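The placeholder trim(n: to ;) in the sketch above maps to CHARINDEX plus SUBSTRING in T-SQL. A minimal illustration for one key (the variable and sample string are invented, and it assumes the key appears at most once):

```sql
DECLARE @gas VARCHAR(100) = '1:0.5;2:0.455;3:0.578;';

DECLARE @start INT = CHARINDEX('2:', @gas) + 2;             -- position just after '2:'
DECLARE @len   INT = CHARINDEX(';', @gas, @start) - @start; -- up to the next ';'

SELECT SUBSTRING(@gas, @start, @len) AS val;                -- returns '0.455'
```

A real query would also need to guard against the key being absent (CHARINDEX returning 0), which the answers handle in different ways.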
Described table is like:
+---------+-------------------------+
| user | gas |
+---------+-------------------------+
| adaf | 1:0.5;2:0.455;3:0.578;|
| rich | 4:0.5;1:0.5;2:0.455; |
| call | 4:0.5;1:0.5;2:0.455; |
| alen | 6:0.78;7:89;1:789; |
| courney| 3:0.34;5:0.44; |
+---------+-------------------------+
What I want to get is:
+---------+--------+---------+
| user | case 1| case 2 | //etc
+---------+--------+---------+
| adaf | 0.5 | 0.455 |
| rich | 0.5 | 0.455 |
| call | 0.5 | 0.455 |
| alen | 789 | 0 |
| courney| 0 | 0 |
+---------+--------+---------+
Any ideas?
This will produce the desired output for your sample data. It uses the splitter from Jeff Moden, which is super fast and has a feature some other splitters lack: the ordinal position of each value. You can find it here: http://www.sqlservercentral.com/articles/Tally+Table/72993/
declare @Something table
(
userName varchar(10)
, gas varchar(100)
)
insert @Something values
('adaf', '1:0.5;2:0.455;3:0.578;')
, ('rich', '4:0.5;1:0.7;2:0.455;')
, ('call', '4:0.5;1:0.5;2:0.455;')
, ('alen', '6:0.78;7:89;1:789;')
, ('courney', '3:0.34;5:0.44;')
select userName
, max(case when left(x.Item, 1) = '1' and x2.ItemNumber = 2 then x2.Item end) as case1
, max(case when left(x.Item, 1) = '2' and x2.ItemNumber = 2 then x2.Item end) as case2
, max(case when left(x.Item, 1) = '3' and x2.ItemNumber = 2 then x2.Item end) as case3
, max(case when left(x.Item, 1) = '4' and x2.ItemNumber = 2 then x2.Item end) as case4
, max(case when left(x.Item, 1) = '5' and x2.ItemNumber = 2 then x2.Item end) as case5
, max(case when left(x.Item, 1) = '6' and x2.ItemNumber = 2 then x2.Item end) as case6
, max(case when left(x.Item, 1) = '7' and x2.ItemNumber = 2 then x2.Item end) as case7
, max(case when left(x.Item, 1) = '8' and x2.ItemNumber = 2 then x2.Item end) as case8
from @Something s
cross apply dbo.DelimitedSplit8K(left(s.gas, len(gas) - 1), ';') x --have to use left here to remove the last delimiter
cross apply dbo.DelimitedSplit8K(x.Item, ':') x2
group by s.userName
No problem, as long as you know how the data is stored in your table.
Just try this sort of SQL command:
SELECT [user],
SUBSTRING([case 1], CHARINDEX(':', [case 1])+1, LEN([case 1])) [case 1],
SUBSTRING([case 2], CHARINDEX(':', [case 2])+1, LEN([case 2])) [case 2],
SUBSTRING([case 3], CHARINDEX(':', [case 3])+1, LEN([case 3])) [case 3]
FROM
(
SELECT DISTINCT
[user],
split.a.value('/M[1]', 'NVARCHAR(MAX)') [case 1],
split.a.value('/M[2]', 'NVARCHAR(MAX)') [case 2],
split.a.value('/M[3]', 'NVARCHAR(MAX)') [case 3]
FROM
(
SELECT [user],
CAST('<M>'+REPLACE(gas, ';', '</M><M>')+'</M>' AS XML) AS String
FROM <table_name>
) AS A
CROSS APPLY String.nodes('/M') AS split(a)
) AS A;
Output :
user case 1 case 2 case 3
adaf 0.5 0.455 0.578
alen 0.78 89 789
call 0.5 0.5 0.455
courney 0.34 0.44
rich 0.5 0.5 0.455
With this type of problem (columns which contain internally structured data) you have a number of options and techniques:
Build or re-build your database model so the structure lives in separate columns/tables rather than inside a single field (as suggested by Sean Lange in the comments).
Build a view which parses the data (often useful for simple parsing, like splitting first and last name).
Transform the data to XML (or JSON), which databases are better at parsing and querying. You can also change your source system to deliver the data to the DB as XML if you need to send structured data.
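As a sketch of the parsing-view idea: on SQL Server 2016+ the built-in STRING_SPLIT can shred the column without a custom splitter. The table and column names below are assumed to match the question, and note that STRING_SPLIT does not guarantee ordinal position:

```sql
-- Split each gas string on ';', then break every 'n:value' fragment apart.
SELECT [user]
     , LEFT(s.value, CHARINDEX(':', s.value) - 1)                    AS num
     , SUBSTRING(s.value, CHARINDEX(':', s.value) + 1, LEN(s.value)) AS val
FROM dbo.[table]
CROSS APPLY STRING_SPLIT([gas], ';') AS s
WHERE s.value <> '';  -- the trailing ';' produces an empty fragment
```

From there, a conditional aggregate (MAX(CASE WHEN num = '1' THEN val END) and so on, grouped by [user]) gives the case1/case2 columns.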
I will not say anything about perfect table design in your case, but this is what you can do (however, if you want separate columns for "Case1", "Case2" and so on, you will need to change it):
create table omg (
[user] varchar(20),
[gas] varchar(100)
);
insert into omg VALUES
('adaf','1:0.5;2:0.455;3:0.578;'),
('rich','4:0.5;1:0.5;2:0.455;'),
('call','4:0.5;1:0.5;2:0.455;'),
('alen','6:0.78;7:89;1:789;'),
('courney','3:0.34;5:0.44;')
GO
CREATE FUNCTION [dbo].[fnSplitString]
(
@string NVARCHAR(MAX),
@delimiter CHAR(1)
)
RETURNS @output TABLE(splitdata NVARCHAR(MAX)
)
BEGIN
DECLARE @start INT, @end INT
SELECT @start = 1, @end = CHARINDEX(@delimiter, @string)
WHILE @start < LEN(@string) + 1 BEGIN
IF @end = 0
SET @end = LEN(@string) + 1
INSERT INTO @output (splitdata)
VALUES(SUBSTRING(@string, @start, @end - @start))
SET @start = @end + 1
SET @end = CHARINDEX(@delimiter, @string, @start)
END
RETURN
END
GO
SELECT [user], f.final_result, REPLACE(REPLACE(spl.splitdata, f.final_result , ''),':','') cases
FROM omg o
CROSS APPLY [dbo].[fnSplitString] (SUBSTRING(o.gas, 1, LEN(o.gas) -1),';') spl
CROSS APPLY (SELECT SUBSTRING(spl.splitdata,CHARINDEX(':',spl.splitdata) + 1, LEN(spl.splitdata) - CHARINDEX(':',spl.splitdata)) final_result ) f
I think the idea of transforming the data to XML is a good one. It's better than that mess of a table.

In T-SQL, what is the best way to find the % of male customers by area

Suppose I have a table with area, customer and customer's sex info, and I want to find out the % of male customers in each area. What's the best way to come up with that?
create table temp(area_id varchar(10),customer_id varchar(10),customer_sex varchar(10))
insert into temp select 1,1,'male'
insert into temp select 1,1,'male'
insert into temp select 1,1,'female'
insert into temp select 1,1,'female'
insert into temp select 2,1,'male'
insert into temp select 2,1,'female'
insert into temp select 2,1,'female'
insert into temp select 3,1,'male'
insert into temp select 3,1,'female'
insert into temp select 4,1,'male'
insert into temp select 5,1,'female'
select * from temp
The result should be like below:
; WITH x AS
(
select
area_id
, count(*) AS total_customers
, SUM(CASE WHEN customer_sex = 'male' THEN 1 ELSE 0 END) AS total_male_customers
FROM temp
GROUP BY area_id
)
SELECT
area_id
, total_customers
, total_male_customers
, CASE WHEN total_male_customers > 0 THEN CAST( (total_male_customers * 100.0) / total_customers AS DECIMAL(6,2)) ELSE 0 END AS Male_percentage
From x
GROUP BY and CASE will provide your results:
SELECT area_id, count(customer_id) as Total_Customers, Total_Male_Customers = sum(case when customer_sex = 'male' then 1 else 0 end),
Format(sum(case when customer_sex = 'male' then 1 else 0 end)/(count(customer_id)*1.0),'P') as MaleCustomers
FROM dbo.temp
GROUP BY area_id
HAVING sum(case when customer_sex = 'male' then 1 else 0 end) > 0
FORMAT is fine here for a smaller dataset, but it has performance issues on larger ones; in that case, go with the manual multiplication and concatenate the % symbol yourself.
Output as below:
+---------+-----------------+----------------------+---------------+
| area_id | Total_Customers | Total_Male_Customers | MaleCustomers |
+---------+-----------------+----------------------+---------------+
| 1 | 4 | 2 | 50.00 % |
| 2 | 3 | 1 | 33.33 % |
| 3 | 2 | 1 | 50.00 % |
| 4 | 1 | 1 | 100.00 % |
+---------+-----------------+----------------------+---------------+
Use IIF (SQL Server 2012+), otherwise CASE, with GROUP BY: sum of males / count of all customers * 100.
The + 0.0 makes the male count and the total count decimal, so the division is not integer division and gives the correct result.
select area_id,count(customer_id) [Total Customers],
sum(iif(customer_sex='male',1,0)) [Total Males],
cast(cast(((sum(iif(customer_sex='male',1,0)) + 0.0) / (count(customer_sex) + 0.0)) * 100 as decimal(18,1)) as varchar(10)) + '%' [percentage of males]
from temp
group by area_id
This will do:
select x.area_id, x.total, x.m, cast(CONVERT(DECIMAL(10,2), x.m * 100.0 / x.total) as nvarchar(max)) + '%'
from
(
select t.area_id, count(1) total, sum(iif(t.customer_sex = 'male', 1, 0)) m
from temp t
group by t.area_id
)x

Is there a better way to flatten out a table to take up fewer rows by moving fields from rows with duplicate keys into empty (NULL) fields?

I have a table with the recorded date, time and quantity of each item a child was given. My end goal is to pivot on that data, but preserve each individual quantity being given out according to date/time and child.
This is easy to achieve without a pivot, but it still takes up an entire row for each instance. What I want is to flatten out the results to take up fewer rows. There isn't a huge functional difference; I'm just doing this to take up less real estate in the report that will end up using this data.
Updated to include a query for sample data:
DECLARE @Items TABLE (Name VARCHAR(10), Date DATETIME, ItemID INT, Quantity INT)
INSERT INTO @Items VALUES ('Jimmy', '01/23/2017 10:00:00', 1, 2),
('Jimmy', '01/23/2017 12:00:00', 1, 1),
('Jimmy', '01/23/2017 15:00:00', 2, 2),
('Billy', '01/23/2017 09:00:00', 1, 1),
('Billy', '01/23/2017 10:00:00', 2, 3)
This is what my starting table looks like:
Name Date ItemID Quantity
Jimmy 2017-01-23 10:00:00.000 1 2
Jimmy 2017-01-23 12:00:00.000 1 1
Jimmy 2017-01-23 15:00:00.000 2 2
Billy 2017-01-23 09:00:00.000 1 1
Billy 2017-01-23 10:00:00.000 2 3
I use a join to sum up the quantities for each day, sort the quantities into their own respective columns, and then drop the time:
SELECT d.Name, CAST(d.Date AS DATE) AS Date,
SUM(CASE WHEN s.ItemID = 1 THEN s.Quantity ELSE NULL END) AS SumBooks,
SUM(CASE WHEN s.ItemID = 2 THEN s.Quantity ELSE NULL END) AS SumPencils,
MAX(CASE WHEN d.ItemID = 1 THEN d.Quantity ELSE NULL END) AS Books,
MAX(CASE WHEN d.ItemID = 2 THEN d.Quantity ELSE NULL END) AS Pencils
FROM @Items d
INNER JOIN @Items s ON s.Name = d.Name AND CAST(s.Date AS DATE) = CAST(d.Date AS DATE)
GROUP BY d.Name, d.Date
This is the resulting data:
Name Date SumBooks SumPencils Books Pencils
Billy 2017-01-23 1 3 1 NULL
Billy 2017-01-23 1 3 NULL 3
Jimmy 2017-01-23 3 2 2 NULL
Jimmy 2017-01-23 3 2 1 NULL
Jimmy 2017-01-23 3 2 NULL 2
This is the structure I am trying to achieve:
Name Date SumBooks SumPencils Books Pencils
Billy 2017-01-23 1 3 1 3
Jimmy 2017-01-23 3 2 2 2
Jimmy 2017-01-23 3 2 1 NULL
I was able to do this using a cursor to iterate over each row and check a new table for any matches of Date, Name, and Books = NULL. If a match was found, I update that row with the quantity. Else, I insert a new row with the Books quantity and a NULL value in the Pencils field, later to be updated with a Pencils quantity, and so on.
So, I am able to get the results I need, but this check has to be done for every item column. For just a couple items, it isn't a big deal. When there's a dozen or more items and the result has 30+ columns, it ends up being a lot of declared variables and large, repeating IF/ELSE statements.
I'm not sure if this is commonly done, but if it is, I'm lacking the proper verbiage to find it on my own. Thanks in advance for any suggestions.
If we trade the inner join for an outer apply() or a left join, and include those values in the group by, we can get the results you are looking for based on the test data provided.
;with cte as (
select
i.Name
, [Date] = convert(date,i.[Date])
, SumBooks = sum(case when ItemId = 1 then Quantity else null end)
, SumPencils = sum(case when ItemId = 2 then Quantity else null end)
, Books = b.Books
, Pencils = max(case when ItemId = 2 then Quantity else null end)
, rn = row_number() over (
partition by i.Name, convert(varchar(10),i.[Date],120)
order by b.booksdate
)
from @Items i
outer apply (
select Books = Quantity, BooksDate = b.[Date]
from @Items b
where b.ItemId = 1
and b.Name = i.Name
and convert(date,b.[Date])=convert(date,i.[Date])
) as b
group by
i.Name
, convert(date,i.[Date])
, b.Books
, b.BooksDate
)
select
Name
, Date
, SumBooks
, SumPencils
, Books
, Pencils = Pencils + case when rn > 1 then null else 0 end
from cte
alternate left join for b:
left join (
select Books = Quantity, BooksDate = b.[Date], Name, Date
from Items b
where b.ItemId = 1
) as b on b.Name = i.Name and convert(date,b.[Date])=convert(date,i.[Date])
test setup: http://rextester.com/IXHU81911
create table Items (
Name varchar(64)
, Date datetime
, ItemID int
, Quantity int
);
insert into Items values
('Jimmy','2017-01-23 10:00:00.000',1,2)
, ('Jimmy','2017-01-23 12:00:00.000',1,1)
, ('Jimmy','2017-01-23 13:00:00.000',1,1) /* Another 1 Book */
, ('Jimmy','2017-01-23 15:00:00.000',2,2)
, ('Billy','2017-01-23 09:00:00.000',1,1)
, ('Billy','2017-01-23 10:00:00.000',2,3)
, ('Zim' ,'2017-01-23 10:00:00.000',2,1) /* No books */
query:
;with cte as (
select
i.Name
, [Date] = convert(varchar(10),i.[Date],120)
, SumBooks = sum(case when ItemId = 1 then Quantity else null end)
, SumPencils = sum(case when ItemId = 2 then Quantity else null end)
, Books = b.Books
, Pencils = max(case when ItemId = 2 then Quantity else null end)
, rn = row_number() over (
partition by i.Name, convert(varchar(10),i.[Date],120)
order by b.booksdate
)
from Items i
outer apply (
select Books = Quantity, BooksDate = b.[Date]
from Items b
where b.ItemId = 1
and b.Name = i.Name
and convert(date,b.[Date])=convert(date,i.[Date])
) as b
group by
i.Name
, convert(varchar(10),i.[Date],120)
, b.Books
, b.BooksDate
)
select
Name
, Date
, SumBooks
, SumPencils
, Books
, Pencils = Pencils + case when rn > 1 then null else 0 end
from cte
note: convert(varchar(10),i.[Date],120) is used on rextester to override default formatting of date. Use convert(date,i.[Date]) or cast(i.[Date] as date) outside of rextester.
results:
+-------+------------+----------+------------+-------+---------+
| Name | Date | SumBooks | SumPencils | Books | Pencils |
+-------+------------+----------+------------+-------+---------+
| Billy | 2017-01-23 | 1 | 3 | 1 | 3 |
| Jimmy | 2017-01-23 | 4 | 2 | 1 | 2 |
| Jimmy | 2017-01-23 | 4 | 2 | 1 | NULL |
| Jimmy | 2017-01-23 | 4 | 2 | 2 | NULL |
| Zim | 2017-01-23 | NULL | 1 | NULL | 1 |
+-------+------------+----------+------------+-------+---------+

SQL: Add a column and classify into categories

I have a table which has transaction_id as the primary key and also contains customer_id which is a foreign key.
Now there is a column type which has two values: 'Card' and 'Cash'.
Some of the customers have used both methods of payment. I want to add a new column and classify the customers as "Only card", "Only cash" and "Both".
Transaction id Customer id Type
1 100 Card
2 101 Cash
3 102 Card
4 103 Cash
5 101 Card
So in this table I want a new column 'Type of payment' which classifies customer 101 as Both since he has used both the methods of payment.
You can use window functions:
select t.*,
(case when min(type) over (partition by customerid) = max(type) over (partition by customerid)
then 'Only ' + min(type) over (partition by customerid)
else 'Both'
end)
from transactions t;
You can do better and remove a bit of redundancy: the values 'cash only' and 'card only' would otherwise be repeated throughout the table, and in this case we prefer repeating an ID. So create a table, for example payement_methods, with two columns, say id and method; populate it with the three options you just mentioned (cash only, card only, both); and give your transaction table a column payment_method_id, for example, instead of the type column you were using.
example
|id | method |
|1 | Cash only |
|2 | Card Only |
|3 | Both |
transaction table
|id | other columns ...|payement method |
|1 | other columns ...|1 |
|2 | other columns ...|3 |
//...
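A sketch of that lookup table in DDL; the lookup name follows the answer's example, and the transactions table name is an assumption, not the asker's actual schema:

```sql
CREATE TABLE payement_methods (        -- name as used in the answer above
    id     INT PRIMARY KEY,
    method VARCHAR(20) NOT NULL
);

INSERT INTO payement_methods (id, method)
VALUES (1, 'Cash only'), (2, 'Card only'), (3, 'Both');

-- the transaction table (name assumed) then references the lookup
ALTER TABLE transactions
    ADD payment_method_id INT REFERENCES payement_methods (id);
```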
sorry for my english, good luck.
Rather than adding a column to the table, if what you want to do is analyze the payment methods, then doing something like this might be better:
SELECT DISTINCT Table1.[Customer ID], T1.*
FROM Table1
CROSS APPLY (SELECT SUM(CASE WHEN [Type] = 'Cash' THEN 1 ELSE 0 END) AS Cash,
SUM(CASE WHEN [Type] = 'Card' THEN 1 ELSE 0 END) AS Card
FROM Table1 T WHERE T.[Customer ID] = Table1.[Customer ID]) T1
Gives you results like this:
CUSTOMER ID CASH CARD
100 0 1
101 1 1
102 0 1
103 1 0
Create table tran1(Transactionid int , Customerid int , Type varchar(100))
insert into tran1(Transactionid , Customerid , Type ) values
(1 , 100 , 'Card' ),
(2 , 101 , 'Cash' ),
(3 , 102 , 'Card' ),
(4 , 103 , 'Cash' ),
(5 , 101 , 'Card' )
alter table tran1 add NewType varchar(100)
Update tran1 set NewType ='Only card' where Customerid IN (
select d.custid from (
select Customerid as custid,SUM(case when [Type]='Card' then 1 else 0 end) card
,SUM(case when [Type]='Cash' then 1 else 0 end) cash
from tran1
group by Customerid)d
where d.card >= 1 and d.cash = 0
)
Update tran1 set NewType ='Only Cash' where Customerid IN (
select d.custid from (
select Customerid as custid,SUM(case when [Type]='Card' then 1 else 0 end) card
,SUM(case when [Type]='Cash' then 1 else 0 end) cash
from tran1
group by Customerid)d
where d.cash >= 1 and d.card = 0
)
Update tran1 set NewType ='Both' where Customerid IN (
select d.custid from (
select Customerid as custid,SUM(case when [Type]='Card' then 1 else 0 end) card
,SUM(case when [Type]='Cash' then 1 else 0 end) cash
from tran1
group by Customerid)d
where d.card >= 1 and d.cash >= 1
)
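For what it's worth, the three UPDATEs above can be collapsed into a single pass; this is a sketch against the same tran1 table, deriving both counts once and choosing the label with one CASE:

```sql
UPDATE t
SET NewType = CASE WHEN d.card > 0 AND d.cash > 0 THEN 'Both'
                   WHEN d.card > 0 THEN 'Only card'
                   ELSE 'Only Cash'
              END
FROM tran1 t
INNER JOIN (
    SELECT Customerid
         , SUM(CASE WHEN [Type] = 'Card' THEN 1 ELSE 0 END) AS card
         , SUM(CASE WHEN [Type] = 'Cash' THEN 1 ELSE 0 END) AS cash
    FROM tran1
    GROUP BY Customerid
) d ON d.Customerid = t.Customerid;
```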