Update SQL with CASE and SUM functions - sql

I know this seems like a question that has been answered before, and it may have been. However, I have looked through the answers and they all seem to work along the lines of the table.x = SUM(CASE WHEN (x)) example. None of them seem to deal with getting a SUM inside the CASE expression.
My script before attempting to create an update script is
SELECT
Item
,Loc
,MinDRPQty
,NetNeed
,CASE
WHEN (NetNeed/MinDRPQty) <= 1 THEN MinDRPQty --use existing multiplier if sum is less than multiplier
ELSE CEIL(NetNeed/MinDRPQty)*MinDRPQty --determine appropriate multiple, convert to int, and multiply
END as "ExpectedOrder"
FROM
(
select
s.Item as Item,
s.Loc as Loc,
p.MinDRPQty as MinDRPQty,
SUM (s.OH + s.UDC_ActualIT + s.UDC_CommitIT - s.UDC_AllCustOrd - s.UDC_ADJ_AvgDailyDmd* (p.DRPCovDur/1440) - s.UDC_SafetyStock) as NetNeed
from SKU s, SKUPlanningParam p
where s.Item = p.Item and s.Loc = p.Loc group by s.Item, s.Loc, p.MinDRPQty
)
I am looking to update a field called UDC_NetNeed. So I need the NetNeed from this statement. Any help would be greatly appreciated. If this has been answered before, and I have missed it, I apologize.

update SKU s
set UDC_NetNeed = (
SELECT
CASE
WHEN (NetNeed/nullif(MinDRPQty, 0)) <= 1 THEN MinDRPQty --use existing multiplier if sum is less than multiplier
ELSE CEIL(NetNeed/nullif(MinDRPQty, 0))*MinDRPQty --determine appropriate multiple, convert to int, and multiply
END
FROM
(
select
s.Item as Item,
s.Loc as Loc,
p.MinDRPQty as MinDRPQty,
SUM (s.OH + s.UDC_ActualIT + s.UDC_CommitIT - s.UDC_AllCustOrd - s.UDC_ADJ_AvgDailyDmd* (p.DRPCovDur/1440) - s.UDC_SafetyStock) as NetNeed
from SKU s, SKUPlanningParam p
where s.Item = p.Item and s.Loc = p.Loc group by s.Item, s.Loc, p.MinDRPQty
) a where a.Item = s.Item and a.Loc = s.Loc and ROWNUM = 1
);
It's not clear what you want to update; I suppose it's the SKU table.
The "and ROWNUM = 1" condition was added to prevent the correlated subquery from returning multiple rows.
Some info
select
s.Item as Item,
s.Loc as Loc,
p.MinDRPQty as MinDRPQty,
SUM (...)
...
group by s.Item, s.Loc, p.MinDRPQty
This means the result can look something like this:
Row 1: item=1, loc=1, MinDRPQty=1, netNeed=100500
Row 2: item=1, loc=1, MinDRPQty=2, netNeed=500
Suppose there is one row with item=1, loc=1 in the table SKU.
When you run the update described above there is a problem: for the row "item=1, loc=1" Oracle doesn't know which netNeed to choose (100500 or 500). That's why I added "and ROWNUM = 1" (take whichever row is found first). But I'm not sure that makes sense in your case. Maybe you need some extra condition here!
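If the ROWNUM = 1 pick feels too arbitrary, here is a hedged sketch of one alternative, assuming the row with the smallest MinDRPQty should win per Item/Loc; it reuses the same inline view as above, with the inner SKU alias renamed to s2 to keep it distinct from the outer s:
update SKU s
set UDC_NetNeed = (
    select case
               when (a.NetNeed / nullif(a.MinDRPQty, 0)) <= 1 then a.MinDRPQty
               else ceil(a.NetNeed / nullif(a.MinDRPQty, 0)) * a.MinDRPQty
           end
    from (
        select s2.Item as Item,
               s2.Loc as Loc,
               p.MinDRPQty as MinDRPQty,
               SUM(s2.OH + s2.UDC_ActualIT + s2.UDC_CommitIT - s2.UDC_AllCustOrd - s2.UDC_ADJ_AvgDailyDmd * (p.DRPCovDur/1440) - s2.UDC_SafetyStock) as NetNeed,
               row_number() over (partition by s2.Item, s2.Loc order by p.MinDRPQty) as rn -- deterministic pick per Item/Loc
        from SKU s2, SKUPlanningParam p
        where s2.Item = p.Item and s2.Loc = p.Loc
        group by s2.Item, s2.Loc, p.MinDRPQty
    ) a
    where a.Item = s.Item and a.Loc = s.Loc and a.rn = 1
);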

Related

postgres: COUNT, DISTINCT is not implemented for window functions

I am trying to use COUNT(DISTINCT column) OVER (PARTITION BY column), i.e. COUNT combined with a window function (OVER).
I get the error from the title and can't get it to work.
I have looked into how to deal with this error, but I have not found an example that handles a query as complex as the one below, and I am not sure how to proceed.
The problematic COUNT is in the query below, marked with "<--- here".
How can such a complex query be fixed without slowing it down?
WITH RECURSIVE "cte" AS((
SELECT
"videos_productvideocomment"."id",
"videos_productvideocomment"."user_id",
"videos_productvideocomment"."video_id",
"videos_productvideocomment"."parent_id",
"videos_productvideocomment"."text",
"videos_productvideocomment"."commented_at",
"videos_productvideocomment"."edited_at",
"videos_productvideocomment"."created_at",
"videos_productvideocomment"."updated_at",
"videos_productvideocomment"."id" AS "root_id"
FROM
"videos_productvideocomment"
WHERE
(
"videos_productvideocomment"."parent_id" IS NULL
AND "videos_productvideocomment"."video_id" = 'f264433c-c0af-49cc-8b40-84453da71b2d'
)
) UNION(
SELECT
"videos_productvideocomment"."id",
"videos_productvideocomment"."user_id",
"videos_productvideocomment"."video_id",
"videos_productvideocomment"."parent_id",
"videos_productvideocomment"."text",
"videos_productvideocomment"."commented_at",
"videos_productvideocomment"."edited_at",
"videos_productvideocomment"."created_at",
"videos_productvideocomment"."updated_at",
"cte"."root_id" AS "root_id"
FROM
"videos_productvideocomment"
INNER JOIN
"cte"
ON "videos_productvideocomment"."parent_id" = "cte"."id"
))
SELECT
*,
EXISTS(
SELECT
(1) AS "a"
FROM
"videos_productvideolikecomment" U0
WHERE
(
U0."comment_id" = t."id"
AND U0."user_id" = '3bd3bc86-0335-481e-9fd2-eb2fb1168f48'
)
LIMIT 1
) AS "liked"
FROM
(
SELECT DISTINCT
"cte"."id",
"cte"."created_at",
"cte"."updated_at",
"cte"."user_id",
"cte"."text",
"cte"."commented_at",
"cte"."edited_at",
"cte"."parent_id",
"cte"."video_id",
"cte"."root_id" AS "root_id",
COUNT(DISTINCT "cte"."root_id") OVER(PARTITION BY "cte"."root_id") AS "reply_count", <--- here
COUNT("videos_productvideolikecomment"."id") OVER(PARTITION BY "cte"."id") AS "liked_count"
FROM
"cte"
LEFT OUTER JOIN
"videos_productvideolikecomment"
ON (
"cte"."id" = "videos_productvideolikecomment"."comment_id"
)
) t
WHERE
t."id" = t."root_id"
ORDER BY
CASE
WHEN t."user_id" = '3bd3bc86-0335-481e-9fd2-eb2fb1168f48' THEN 0
ELSE 1
END ASC,
"liked_count" DESC
DISTINCT looks for duplicates and removes them, but on big data this query will take a long time to process. I think it would be faster to compute that part of the result in your application code instead. Thanks.
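If you do want to keep it in SQL, here is a hedged sketch of the usual workaround, assuming reply_count is meant to be the number of distinct comment ids per root_id: since PostgreSQL does not support COUNT(DISTINCT ...) as a window function, pre-aggregate in a derived table and join it back in place of the window expression:
SELECT "cte".*, r."reply_count"
FROM "cte"
LEFT JOIN (
    SELECT "root_id", COUNT(DISTINCT "id") AS "reply_count" -- DISTINCT is allowed in a plain aggregate
    FROM "cte"
    GROUP BY "root_id"
) r ON r."root_id" = "cte"."root_id"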

Combine 2 complex queries into 1

I am trying to figure out if there's a way to combine these 2 queries into a single one. I've run into the limits of what I know and can't figure out if this is possible or not.
This is the 1st query that gets last year sales for each day per location (for one month):
if object_id('tempdb..#LY_Data') is not null drop table #LY_Data
select
[LocationId] = ri.LocationId,
[LY_Date] = convert(date, ri.ReceiptDate),
[LY_Trans] = count(distinct ri.SalesReceiptId),
[LY_SoldQty] = convert(money, sum(ri.Qty)),
[LY_RetailAmount] = convert(money, sum(ri.ExtendedPrice)),
[LY_NetSalesAmount] = convert(money, sum(ri.ExtendedAmount))
into #LY_Data
from rpt.SalesReceiptItem ri
join #Location l
on ri.LocationId = l.Id
where ri.Ignored = 0
and ri.LineType = 1 /*Item*/
and ri.ReceiptDate between #_LYDateFrom and #_LYDateTo
group by
ri.LocationId,
ri.ReceiptDate
Then the 2nd query computes a ratio based on the total sales for that month for each day (to be used later):
if object_id('tempdb..#LY_Data2') is not null drop table #LY_Data2
select
[LocationId] = ly.LocationId,
[LY_Date] = ly.LY_Date,
[LY_Trans] = ly.LY_Trans,
[LY_RetailAmount] = ly.LY_RetailAmount,
[LY_NetSalesAmount] = ly.LY_NetSalesAmount,
[Ratio] = ly.LY_NetSalesAmount / t.MonthlySales
into #LY_Data2
from (
select
[LocationId] = ly.LocationId,
[MonthlySales] = sum(ly.LY_NetSalesAmount)
from #LY_Data ly
group by
ly.LocationId
) t
join #LY_Data ly
on t.LocationId = ly.LocationId
I've tried using the first query as a subquery in the FROM clause of the 2nd query, but that won't let me select those columns in the outermost SELECT statement ("multi-part identifier could not be bound").
I also tried putting the first query into the join clause at the end of the 2nd query, with the same issue.
There's probably something I'm missing, but I'm still pretty new to SQL so any help or just a pointer in the right direction would be greatly appreciated! :)
You can try using a Common Table Expression (CTE) and window function:
if object_id('tempdb..#LY_Data') is not null drop table #LY_Data
;with
cte AS
(
select
[LocationId] = ri.LocationId,
[LY_Date] = convert(date, ri.ReceiptDate),
[LY_Trans] = count(distinct ri.SalesReceiptId),
[LY_SoldQty] = convert(money, sum(ri.Qty)),
[LY_RetailAmount] = convert(money, sum(ri.ExtendedPrice)),
[LY_NetSalesAmount] = convert(money, sum(ri.ExtendedAmount))
from rpt.SalesReceiptItem ri
join #Location l
on ri.LocationId = l.Id
where ri.Ignored = 0
and ri.LineType = 1 /*Item*/
and ri.ReceiptDate between #_LYDateFrom and #_LYDateTo
group by
ri.LocationId,
ri.ReceiptDate
)
select
[LocationId] = cte.LocationId,
[LY_Date] = cte.LY_Date,
...
[Ratio] = cte.LY_NetSalesAmount / sum(cte.LY_NetSalesAmount) over (partition by cte.LocationId)
into #LY_Data
from cte
sum(cte.LY_NetSalesAmount) over (partition by cte.LocationId) gives you the sum for each LocationId. The code assumes that this sum is always non-zero; otherwise a divide-by-zero error will occur.
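If a zero total is possible, a small hedged tweak is to guard the divisor with nullif, so the ratio comes back NULL instead of raising an error:
[Ratio] = cte.LY_NetSalesAmount / nullif(sum(cte.LY_NetSalesAmount) over (partition by cte.LocationId), 0)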
Seems like all you need to do is calculate the ratio in the first query.
You can do this with a correlated subquery.
SELECT
...
convert(money, sum(ri.ExtendedAmount)/(SELECT sum(ri2.ExtendedAmount)
FROM rpt.SalesReceiptItem ri2
WHERE ri2.LocationId=ri.LocationId
)
) AS ratio --extended amount/total extended amount for this location

Adding a new computed variable back to main dataset in SQL

I am trying to compute a variable (say last_week) and add it back to my main dataset (say new_j). I managed to join it to new_j. However, if I want to use that variable (last_week) now for further calculations, it does not recognise it. Here's my code:
SELECT [Weekkey] AS weekkey
,[article / colour] as prod_id
,[Current MP Department No/Desc] as prod_dept
,[Total Stock] as total_stock
INTO #new_j
FROM [J_20160831] --(that's the source table on the server; #new_j is the temp table I create)
SELECT prod_id, max(weekkey) as last_week
into #lastweeksales
FROM #new_j
group by prod_id
select *
from #new_j
left join #lastweeksales
on #lastweeksales.prod_id = #new_j.prod_id
So, I joined both successfully and if I run this code, I see column last_week. Now what I want to do is this:
select *
,case
when last_week = max(weekkey) then total_stock
else 0
end as last_stock_position
from #new_j
But it says last_week is not found in #new_j. I also tried #lastweeksales.last_week instead of just last_week in the last bit of code, but that didn't work either. What's the best way out here? Moreover, is there a better way to do this altogether? The output I am looking for at the end is a table with these variables: WeekKey, prod_dept, prod_id, total_stock, last_week, last_stock_position.
Thanks for the help! Much appreciated.
This is normal behaviour of joins.
By selecting this
select * from #new_j left join #lastweeksales
on #lastweeksales.prod_id = #new_j.prod_id
all the columns of #new_j and #lastweeksales will be displayed in that order (first the #new_j columns, then the #lastweeksales columns). So last_week is the last column, coming from #lastweeksales.
Secondly,
select *,
case when last_week = max(weekkey) then total_stock
else 0
end as last_stock_position
from #new_j
In the above query, you are selecting the last_week column, which belongs to the table #lastweeksales, not to #new_j.
Be careful while selecting the columns.
I guess you're expecting something like this:
select a.WeekKey, a.prod_dept, a.prod_id, a.total_stock, b.last_week,
case
when b.last_week = max(a.weekkey) then total_stock
else 0
end as last_stock_position
from #new_j as a
left join #lastweeksales as b
on b.prod_id = a.prod_id
group by a.weekkey,a.prod_dept,a.prod_id,a.total_stock,b.last_week
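A hedged alternative sketch: since last_week in #lastweeksales is already MAX(weekkey) per prod_id, you could compare each row's weekkey to it directly and skip the extra MAX/GROUP BY:
select a.WeekKey, a.prod_dept, a.prod_id, a.total_stock, b.last_week,
       case when a.WeekKey = b.last_week then a.total_stock else 0 end as last_stock_position -- keep stock only for the latest week
from #new_j as a
left join #lastweeksales as b
  on b.prod_id = a.prod_id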

troubles with next and previous query

I have a list and the returned table looks like this (I included a preview of only one car, but there are many more).
What I need to do now is check that the current KM value is larger than the previous one and smaller than the next one. If this is not the case, I need to make a field called Trustworthy and fill it with either 1 or 0 (true/false).
The result that I have so far is this:
validKMstand and validkmstand2 are how I calculate it. It did not work in one list, so that is why I separated it.
In both of my tries, my code does not work.
Here is the code that I have so far.
;WITH FullList as (
SELECT
*
FROM
eMK_Mileage as Mileage
)
, ValidChecked1 as (
SELECT
UL1.*,
CASE WHEN EXISTS(
SELECT TOP(1)UL2.*
FROM FullList AS UL2
WHERE
UL2.FK_CarID = UL1.FK_CarID AND
UL1.KM_Date > UL2.KM_Date AND
UL1.KM > UL2.KM
ORDER BY UL2.KM_Date DESC
)
THEN 1
ELSE 0
END AS validkmstand
FROM FullList as UL1
)
, ValidChecked2 as (
SELECT
List1.*,
(CASE WHEN List1.KM > ulprev.KM
THEN 1
ELSE 0
END
) AS validkmstand2
FROM ValidChecked1 as List1 outer apply
(SELECT TOP(1)UL3.*
FROM ValidChecked1 AS UL3
WHERE
UL3.FK_CarID = List1.FK_CarID AND
UL3.KM_Date <= List1.KM_Date AND
List1.KM > UL3.KM
ORDER BY UL3.KM_Date DESC) ulprev
)
SELECT * FROM ValidChecked2 order by FK_CarID, KM_Date
Maybe something like this is what you are looking for?
;with data as
(
select *, rn = row_number() over (partition by fk_carid order by km_date)
from eMK_Mileage
)
select
d.FK_CarID, d.KM, d.KM_Date,
valid =
case
when (d.KM > d_prev.KM /* or d_prev.KM is null */)
and (d.KM < d_next.KM /* or d_next.KM is null */)
then 1 else 0
end
from data d
left join data d_prev on d.FK_CarID = d_prev.FK_CarID and d_prev.rn = d.rn - 1
left join data d_next on d.FK_CarID = d_next.FK_CarID and d_next.rn = d.rn + 1
order by d.FK_CarID, d.KM_Date
With SQL Server versions 2012+ you could have used the lag() and lead() analytical functions to access the previous/next rows, but in versions before you can accomplish the same thing by numbering rows within partitions of the set. There are other ways too, like using correlated subqueries.
I left a couple of conditions commented out that deal with the first and last rows for every car - maybe those should be considered valid if they fulfill only one part of the comparison (since the previous/next rows are null)?
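For reference, a hedged sketch of how the same check could look on SQL Server 2012+ with lag()/lead(); the first and last row per car will get 0 here for the same NULL-comparison reason mentioned above:
select FK_CarID, KM, KM_Date,
       valid =
           case
               when KM > lag(KM)  over (partition by FK_CarID order by KM_Date)  -- previous reading for this car
                and KM < lead(KM) over (partition by FK_CarID order by KM_Date)  -- next reading for this car
               then 1 else 0
           end
from eMK_Mileage
order by FK_CarID, KM_Date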

SQL issue with NULL values on SUM

I'm currently working on some SQL stuff, but I'm running into a bit of an issue.
I've got this method that looks for cash transactions and takes off the cashback, but sometimes there are no cash transactions, so that value turns into NULL, and you can't subtract from NULL.
I've tried to put an ISNULL around it, but it still turns into NULL.
Can anyone help me with this?
;WITH tran_payment AS
(
SELECT 1 AS payment_method, NULL AS payment_amount, null as tran_header_cid
UNION ALL
SELECT 998 AS payment_method, 2 AS payment_amount, NULL as tran_header_cid
),
paytype AS
(
SELECT 1 AS mopid, 2 AS mopshort
),
tran_header AS
(
SELECT 1 AS cid
)
SELECT p.mopid AS mopid,
p.mopshort AS descript,
payment_value AS PaymentValue,
ISNULL(DeclaredValue, 0.00) AS DeclaredValue
from paytype p
LEFT OUTER JOIN (SELECT CASE
When (tp.payment_method = 1)
THEN
(ISNULL(SUM(tp.payment_amount), 0)
- (SELECT ISNULL(SUM(ABS(tp.payment_amount)), 0)
FROM tran_payment tp
INNER JOIN tran_header th on tp.tran_header_cid = th.cid
WHERE payment_method = 998
) )
ELSE SUM(tp.payment_amount)
END as payment_value,
tp.payment_method,
0 as DeclaredValue
FROM tran_header th
LEFT OUTER JOIN tran_payment tp
ON tp.tran_header_cid = th.cid
GROUP BY payment_method) pmts
ON p.mopid = pmts.payment_method
Maybe COALESCE() can help you?
You can try this:
SUM(COALESCE(tp.payment_amount, 0))
or
COALESCE(SUM(tp.payment_amount), 0)
COALESCE(arg1, arg2, ..., argN) returns the first non-null argument from the list.
try to put ISNULL inside SUM and ABS, i.e. around the actual field, like this
SUM(ISNULL(tp.payment_amount, 0))
SUM(ABS(ISNULL(tp.payment_amount, 0)))
I don't have MS SQL to test here, but would it work to put the ISNULL around the SELECT? Maybe ISNULL isn't triggered at all if there are no matching rows...
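A hedged sketch of what that would look like for the cashback branch above (the tp2/th2 aliases are mine, to avoid reusing the outer tp):
When (tp.payment_method = 1)
THEN (ISNULL(SUM(tp.payment_amount), 0)
      - ISNULL((SELECT SUM(ABS(tp2.payment_amount))                 -- ISNULL wraps the whole subquery,
                FROM tran_payment tp2                               -- so "no rows -> NULL" becomes 0
                INNER JOIN tran_header th2 ON tp2.tran_header_cid = th2.cid
                WHERE tp2.payment_method = 998), 0))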