I have a table like this:
Id Date Price Item Type
1 2009-09-21 25 1 M
2 2009-08-31 16 2 T
1 2009-09-23 21 1 M
2 2009-09-03 12 3 T
I am trying to get output with the ID, one column with the sum of price multiplied by item for type='M', and another column with the same logic for type='T'.
The only way I have found to do it is with multiple CTEs, but it is rather complex and long:
with cte as (
select distinct a.id, a.date,
sum(price*a.item) as numm
from table a
where a.type='M'
group by a.id),
crx as (
select cte.id, cte.numm, sum(a.price*a.item) as numm_1 from cte
join table a on a.id=cte.id and a.date=cte.date
where a.type='T'
group by cte.id)
select * from crx
I have a feeling that it can be done better (for example, using subqueries), so I am asking how it can be done.
p.s.
SQLite-specific answers would be greatly appreciated!
Thanks!
Perhaps this will help
Declare @YourTable table (Id int,Date date,Price money,Item int,Type varchar(25))
Insert into @YourTable values
(1,'2009-09-21',25,1,'M'),
(2,'2009-08-31',16,2,'T'),
(1,'2009-09-23',21,1,'M'),
(2,'2009-09-03',12,3,'T')
Select ID
,sum(case when Type='M' then Price*Item else 0 end) as M
,sum(case when Type='T' then Price*Item else 0 end) as T
From @YourTable
Group By ID
Returns
ID M T
1 46.00 0.00
2 0.00 68.00
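Since the question asks for SQLite, the same conditional-aggregation idea ports directly; a minimal sketch, assuming the table is called items (a hypothetical name, since the question only refers to it as "table"):
-- items is an assumed table name with the question's columns (id, date, price, item, type)
SELECT id,
       SUM(CASE WHEN type = 'M' THEN price * item ELSE 0 END) AS m_total,
       SUM(CASE WHEN type = 'T' THEN price * item ELSE 0 END) AS t_total
FROM items
GROUP BY id;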
I am having trouble finding an efficient solution to my problem. I think this is a fairly common issue, which is why I am asking it here.
Here is the problem.
Let's say I have multiple selects such as the following ones:
SELECT CREATE
date        num_created
01-01-2021  10
01-02-2021  2
01-04-2021  13

SELECT Update
date        num_update
01-01-2021  14
01-02-2021  2
01-03-2021  9

SELECT Delete
date        num_delete
01-02-2021  2
01-05-2021  40

I want to have this final output:
Final output
date        num_created  num_update  num_deleted
01-01-2021  10           14          0
01-02-2021  2            2           2
01-03-2021  0            9           0
01-04-2021  13           0           0
01-05-2021  0            0           40
*I can't assume that any table has all the dates or that the tables have matching dates.
It looks like you want counts grouped by day:
select
tbls.date,
sum(tbls.num_created) as num_created,
sum(tbls.num_updated) as num_updated,
sum(tbls.num_deleted) as num_deleted
from (
select date, num_created, 0 as num_updated, 0 as num_deleted from tbl_create
union all
select date, 0 as num_created, num_updated, 0 as num_deleted from tbl_update
union all
select date, 0 as num_created, 0 as num_updated, num_deleted from tbl_delete) as tbls
group by tbls.date
Please don't abuse the WITH statement when it is not needed (as seen in the other answers).
Edit - extra simple query:
if object_id('tempdb..#tmp_dml_statement') is not null drop table #tmp_dml_statement
select date, num_created, 0 as num_updated, 0 as num_deleted
into #tmp_dml_statement
from tbl_create
union all
select date, 0 as num_created, num_updated, 0 as num_deleted
from tbl_update
union all
select date, 0 as num_created, 0 as num_updated, num_deleted
from tbl_delete
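The edit above only loads the rows into the temp table; presumably the same GROUP BY from the first query still has to run over it. A sketch of that last step:
-- Aggregate the staged rows per date (same column names as above)
select date,
       sum(num_created) as num_created,
       sum(num_updated) as num_updated,
       sum(num_deleted) as num_deleted
from #tmp_dml_statement
group by date
order by date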
You can perform a full outer join between the tables and use a CASE expression to choose the date, as below (assume the 3 tables/queries are a, b, c):
With temp as
(
select case when a.date is not null then a.date else b.date end as date,
       num_created, num_update
from a full join b on (a.date=b.date)
),
temp1 as
(
select case when temp.date is not null then temp.date else c.date end as date,
       num_created, num_update, num_delete
from temp full join c on (temp.date=c.date)
)
select * from temp1;
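A slightly shorter variant of the same idea, assuming the same a, b, c aliases: COALESCE picks the first non-null date and also turns missing counters into zeros, which the plain full join above leaves as NULL. A sketch only:
with temp as
(
  select coalesce(a.date, b.date) as date,
         coalesce(a.num_created, 0) as num_created,
         coalesce(b.num_update, 0) as num_update
  from a full join b on (a.date = b.date)
)
select coalesce(temp.date, c.date) as date,
       coalesce(temp.num_created, 0) as num_created,
       coalesce(temp.num_update, 0) as num_update,
       coalesce(c.num_delete, 0) as num_delete
from temp full join c on (temp.date = c.date);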
assume the tables are:
tbl_create(dt, num_created)
tbl_update(dt, num_updated)
tbl_delete(dt, num_deleted)
Start with a (temporary) table that contains all dates from the 3 source tables: all_dates.
Then (left outer) join the counters from the 3 source tables.
Note: 'union' removes duplicates, so there is no need to worry about them (in contrast to 'union all').
Note: coalesce(a,b) returns the first non-null parameter, so you get zeroes rather than NULLs.
with all_dates(dt) as (
select dt from tbl_create
union
select dt from tbl_update
union
select dt from tbl_delete
)
select all_dates.dt
, coalesce(tbl_create.num_created,0)
, coalesce(tbl_update.num_updated,0)
, coalesce(tbl_delete.num_deleted,0)
from all_dates
left outer join tbl_create on all_dates.dt = tbl_create.dt
left outer join tbl_update on all_dates.dt = tbl_update.dt
left outer join tbl_delete on all_dates.dt = tbl_delete.dt
I have a table with following format:
ID ID1 ID2 DATE
1 1 1 2018-03-01
2 1 1 2018-03-02
3 1 1 2018-03-05
4 1 1 2018-03-06
5 1 1 2018-03-07
6 2 2 2018-03-05
7 2 2 2018-03-05
8 2 2 2018-03-06
9 2 2 2018-03-07
10 2 2 2018-03-08
From this table I have to get all records where ID1 and ID2 are the same and where DATE falls on 5 consecutive work days (5 dates in a row, ignoring the missing dates for Saturday/Sunday; holidays can be ignored).
I really have no idea how to achieve this. I searched around but couldn't find anything that helped me. So my question is: how can I achieve the following output?
ID ID1 ID2 DATE
1 1 1 2018-03-01
2 1 1 2018-03-02
3 1 1 2018-03-05
4 1 1 2018-03-06
5 1 1 2018-03-07
SQLFiddle to mess around
Assuming you have no duplicates and work is only on weekdays, then there is a simplish solution for this particular case. We can identify the date 4 rows ahead. For a complete week, it is either 4 days ahead or 6 days ahead:
select t.*
from (select t.*, lead(dat, 4) over (partition by id1, id2 order by dat) as dat_4
from t
) t
where datediff(day, dat, dat_4) in (4, 6);
This happens to work because you are looking for a complete week.
Here is the SQL Fiddle.
select t.*
from (select id1, id2, count(distinct dat) as cnt
      from t
      group by id1, id2
      having count(distinct dat) = 5) t1
right join t
  on t.id1 = t1.id1 and t.id2 = t1.id2
where t1.cnt = 5
Check these:
Dates of Two weeks with 10 valid dates
http://sqlfiddle.com/#!18/76556/1
Dates of Two weeks with 10 non-unique dates
http://sqlfiddle.com/#!18/b4299/1
and
Dates of Two weeks with less than 10 but unique
http://sqlfiddle.com/#!18/f16cb/1
This query is very verbose without LEAD or LAG and it is the best I could do on my lunch break. You can probably improve on it given the time.
DECLARE @T TABLE
(
ID INT,
ID1 INT,
ID2 INT,
TheDate DATETIME
)
INSERT @T SELECT 1,1,1,'03/01/2018'
INSERT @T SELECT 2,1,1,'03/02/2018'
INSERT @T SELECT 3,1,1,'03/05/2018'
INSERT @T SELECT 4,1,1,'03/06/2018'
INSERT @T SELECT 5,1,1,'03/07/2018'
--INSERT @T SELECT 5,1,1,'03/09/2018'
INSERT @T SELECT 6,2,2,'03/02/2018'
INSERT @T SELECT 7,2,2,'03/05/2018'
INSERT @T SELECT 8,2,2,'03/05/2018'
--INSERT @T SELECT 9,2,2,'03/06/2018'
INSERT @T SELECT 10,2,2,'03/07/2018'
INSERT @T SELECT 11,2,2,'03/08/2018'
INSERT @T SELECT 12,2,2,'03/15/2018'
INSERT @T SELECT 13,1,1,'04/01/2018'
INSERT @T SELECT 14,1,1,'04/02/2018'
INSERT @T SELECT 15,1,1,'04/05/2018'
--SELECT * FROM @T
DECLARE @LowDate DATETIME = DATEADD(DAY,-1,(SELECT MIN(TheDate) FROM @T))
DECLARE @HighDate DATETIME = DATEADD(DAY,1,(SELECT MAX(TheDate) FROM @T))
DECLARE @DaysThreshold INT = 5
;
WITH Dates AS
(
SELECT DateValue=@LowDate
UNION ALL
SELECT DateValue + 1 FROM Dates
WHERE DateValue + 1 < @HighDate
),
Joined AS
(
SELECT * FROM Dates LEFT OUTER JOIN @T T ON T.TheDate=Dates.DateValue
),
Calculations AS
(
SELECT
ID=MAX(J1.ID),
J1.ID1,J1.ID2,
J1.TheDate,
LastDate=MAX(J2.TheDate),
LastDateWasWeekend = CASE WHEN ((DATEPART(DW,DATEADD(DAY,-1,J1.TheDate) ) + @@DATEFIRST) % 7) NOT IN (0, 1) THEN 0 ELSE 1 END,
Offset = DATEDIFF(DAY,MAX(J2.TheDate),J1.TheDate)
FROM
Joined J1
LEFT OUTER JOIN Joined J2 ON J2.ID1=J1.ID1 AND J2.ID2=J1.ID2 AND J2.TheDate<J1.TheDate
WHERE
NOT J1.ID IS NULL
GROUP BY J1.ID1,J1.ID2,J1.TheDate
)
,FindValid AS
(
SELECT
ID,ID1,ID2,TheDate,
IsValid=CASE
WHEN LastDate=TheDate THEN 0
WHEN LastDate IS NULL THEN 1
WHEN Offset=1 THEN 1
WHEN Offset>3 THEN 0
WHEN Offset<=3 THEN
LastDateWasWeekend
END
FROM
Calculations
UNION
SELECT DISTINCT ID=NULL,ID1,ID2, TheDate=@HighDate,IsValid=0 FROM @T
),
FindMax As
(
SELECT
This.ID,This.ID1,This.ID2,This.TheDate,MaxRange=MIN(Next.TheDate)
FROM
FindValid This
LEFT OUTER JOIN FindValid Next ON Next.ID2=This.ID2 AND Next.ID1=This.ID1 AND This.TheDate<Next.TheDate AND Next.IsValid=0
GROUP BY
This.ID,This.ID1,This.ID2,This.TheDate
),
FindMin AS
(
SELECT
This.ID,This.ID1,This.ID2,This.TheDate,This.MaxRange,MinRange=MIN(Next.TheDate)
FROM
FindMax This
LEFT OUTER JOIN FindMax Next ON Next.ID2=This.ID2 AND Next.ID1=This.ID1 AND This.TheDate<Next.MaxRange-- AND Next.IsValid=0 OR Next.TheDate IS NULL
GROUP BY
This.ID,This.ID1,This.ID2,This.TheDate,This.MaxRange
)
,Final AS
(
SELECT
ID1,ID2,MinRange,MaxRange,SequentialCount=COUNT(*)
FROM
FindMin
GROUP BY
ID1,ID2,MinRange,MaxRange
)
SELECT
T.ID,
T.ID1,
T.ID2,
T.TheDate
FROM @T T
INNER JOIN Final ON T.TheDate>= Final.MinRange AND T.TheDate < Final.MaxRange AND T.ID1=Final.ID1 AND T.ID2=Final.ID2
WHERE
SequentialCount>=@DaysThreshold
OPTION (MAXRECURSION 0)
I am trying to determine how to group records together based on the cumulative total of the Qty column, so that each group's total doesn't exceed 50. The desired group is given in the Group column of the sample data below.
Is there a way to accomplish this in SQL (specifically SQL Server 2012)?
Thank you for any assistance.
ID Qty Group
1 10 1
2 20 1
3 30 2 <- 60 greater than 50 so new group
4 40 3
5 2 3
6 3 3
7 10 4
8 25 4
9 15 4
10 5 5
You can use a recursive CTE to achieve the goal.
If a single item exceeds Qty 50, a group is still assigned to it:
DECLARE @Data TABLE (ID int identity(1,1) primary key, Qty int)
INSERT @Data VALUES (10), (20), (30), (40), (2), (3), (10), (25), (15), (5)
;WITH cte AS
(
SELECT ID, Qty, 1 AS [Group], Qty AS RunningTotal FROM @Data WHERE ID = 1
UNION ALL
SELECT data.ID, data.Qty,
-- The group limits to 50 Qty
CASE WHEN cte.RunningTotal + data.Qty > 50 THEN cte.[Group] + 1 ELSE cte.[Group] END,
-- Reset the running total for each new group
data.Qty + CASE WHEN cte.RunningTotal + data.Qty > 50 THEN 0 ELSE cte.RunningTotal END
FROM @Data data INNER JOIN cte ON data.ID = cte.ID + 1
)
SELECT ID, Qty, [Group] FROM cte
The following query gives you most of what you want. One more self-join of the result would compute the group sizes:
select a.ID, G, sum(b.Qty) as Total
from (
select max(ID) as ID, G
from (
select a.ID, sum(b.Qty) / 50 as G
from T as a join T as b
where a.ID >= b.ID
group by a.ID
) as A
group by G
) as a join T as b
where a.ID >= b.ID
group by a.ID
ID G Total
---------- ---------- ----------
2 0 30
3 1 60
8 2 140
10 3 160
The two important tricks:
Use a self-join with an inequality to get running totals
Use integer division to calculate group numbers.
I discuss this and other techniques on my canonical SQL page.
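To see the first trick on its own, here is a minimal sketch of the running total via a self-join with an inequality, assuming the same table T(ID, Qty) used above:
-- Running total of Qty up to and including each row
select a.ID, sum(b.Qty) as RunningTotal
from T as a join T as b on a.ID >= b.ID
group by a.ID;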
You need to create a stored procedure for this.
If you have a Group column in your database, then you have to maintain it when inserting a new record, by fetching the max Group value and the sum of the Qty column for that group; otherwise, if you want the Group column computed in a SELECT statement, you have to code the stored procedure accordingly.
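A rough sketch of that insert-time approach; the table name dbo.Items and the procedure name are hypothetical, and it assumes single-row inserts with no concurrency handling:
CREATE PROCEDURE dbo.InsertQtyWithGroup  -- hypothetical procedure name
    @Qty int
AS
BEGIN
    DECLARE @LastGroup int, @LastTotal int;

    -- Current highest group and the quantity it already contains (dbo.Items is an assumed table)
    SELECT @LastGroup = MAX([Group]) FROM dbo.Items;
    SELECT @LastTotal = SUM(Qty) FROM dbo.Items WHERE [Group] = @LastGroup;

    -- Start a new group when adding this row would push the total over 50
    IF @LastGroup IS NULL
        SET @LastGroup = 1;
    ELSE IF @LastTotal + @Qty > 50
        SET @LastGroup = @LastGroup + 1;

    INSERT dbo.Items (Qty, [Group]) VALUES (@Qty, @LastGroup);
END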
My table looks like this:
id staus
1 p
1 p
1 c
2 p
2 c
I need to produce counts of rows with the statuses of 'p' and 'c' for each id, so the result I expect should look like this:
id p c
1 2 1 <-- id 1 has two rows with 'p' and one row with 'c'
2 1 1 <-- id 2 has one row with 'p' and one row with 'c'
How can I achieve this?
You can do it like this:
SELECT
id
, SUM (CASE STATUS WHEN 'p' THEN 1 ELSE 0 END) as p
, SUM (CASE STATUS WHEN 'c' THEN 1 ELSE 0 END) as c
FROM my_table
GROUP BY id
When you have more than just a few fixed items like 'p' and 'c' to aggregate, pivoting may provide a better option.
Pivot solution; works from SQL Server 2008:
declare @t table(id int, staus char(1))
insert @t values( 1,'p'),( 1,'p'),( 1,'c'),( 2,'p'),( 2,'c')
SELECT id, [p], [c]
from @t
PIVOT
(count([staus])
FOR staus
in([p],[c])
)AS p
Result:
id p c
1 2 1
2 1 1
It seems that you need to do a pivot of your table. There is a simple article that I used when I faced the same problem: pivot table sql server
Given following table:
rowId AccountId Organization1 Organization2
-----------------------------------------------
1 1 20 10
2 1 10 20
3 1 40 30
4 2 15 10
5 2 20 15
6 2 10 20
How do I identify the records where Organization2 doesn't exist in Organization1 for a particular account?
For instance, in the given data above my result would be a single record, AccountId 1, because row 3's Organization2 value of 30 doesn't exist in Organization1 for that particular account.
SELECT rowId, AccountId, Organization1, Organization2
FROM yourTable yt
WHERE NOT EXISTS (SELECT 1 FROM yourTable yt2 WHERE yt.AccountId = yt2.AccountId AND yt.Organization1 = yt2.Organization2)
There are two possible interpretations of your question. The first (where the Organization1 and Organization2 columns are not equal) is trivial:
SELECT AccountID FROM Table WHERE Organization1 <> Organization2
But I suspect you're asking the slightly more difficult interpretation (where Organization2 does not appear in ANY Organization1 value for the same account):
SELECT AccountID From Table T1 WHERE Organization2 NOT IN
(SELECT Organization1 FROM Table T2 WHERE T2.AccountID = T1.AccountID)
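One caveat: if Organization1 can contain NULLs, NOT IN will never match for that account, so the same per-account check is often written with NOT EXISTS instead. A sketch, using the same placeholder names as above:
SELECT DISTINCT T1.AccountID
FROM Table T1
WHERE NOT EXISTS (SELECT 1
                  FROM Table T2
                  WHERE T2.AccountID = T1.AccountID
                    AND T2.Organization1 = T1.Organization2)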
Here is how you could do it:
Test data:
CREATE TABLE #T(rowid int, acc int, org1 int, org2 int)
INSERT #T
SELECT 1,1,10,10 UNION
SELECT 2,1,20,20 UNION
SELECT 3,1,40,30 UNION
SELECT 4,2,10,10 UNION
SELECT 5,2,15,15 UNION
SELECT 6,2,20,20
Then perform a self-join to discover missing org2:
SELECT
*
FROM #T T1
LEFT JOIN
#T T2
ON t1.org1 = t2.org2
AND t1.acc = t2.acc
WHERE t2.org1 IS NULL
SELECT
*
FROM
[YourTable]
WHERE
[Organization1] <> [Organization2] -- The '<>' is read "Does Not Equal".
Use left join as Noel Abrahams presented.
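For completeness, a sketch of that left-join form against the question's column names (yourTable is an assumed name, mirroring the self-join answer above):
-- Accounts that have an Organization2 value with no matching Organization1 in the same account
SELECT DISTINCT T1.AccountId
FROM yourTable T1
LEFT JOIN yourTable T2
  ON T2.AccountId = T1.AccountId
 AND T2.Organization1 = T1.Organization2
WHERE T2.rowId IS NULL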