Calculate how long a process took, taking into account opening hours - sql

I have two tables. An opening hours table that says, for each seller and store, the opening and closing times for each day of the week. The second table is the operation one, which has all the information about the processes.
What I need is to calculate how many seconds each process took, considering only the hours when the store was open.
I tried to solve this with CASE WHEN. I solved the problem for when the process takes less than 2 days, but I don't know how to handle processes that take more days. The other problem I had with this code is that the CASE WHEN logic takes a lot of time to process. Can anybody help me with these issues?
Opening hours table:

sellerid  sellerstoreid  day  dayweek   opening   closing   next_day  opening_next_day  days_to_next
--------  -------------  ---  --------  --------  --------  --------  ----------------  ------------
123       abc            1    monday    09:00:00  17:00:00  2         09:00:00          1
123       abc            2    tuesday   09:00:00  17:00:00  4         09:00:00          2
123       abc            4    thursday  09:00:00  17:00:00  5         09:30:00          1
123       abc            5    friday    09:30:00  17:00:00  1         09:00:00          3
Where:
sellerid + sellerstoreid + day works as a primary key;
dayweek translates day from number to name;
opening and closing are the opening and closing times for that day;
opening_next_day shows the opening time of the next available date for that store and seller;
days_to_next indicates in how many days the store will reopen.
Process table:

delivery_id  sellerid  sellerstoreid  process  end_time
-----------  --------  -------------  -------  -----------------------
a1           123       abc            p1       05/12/2022 16:00:00.000
a1           123       abc            p2       06/12/2022 16:00:00.000
a1           123       abc            p3       06/12/2022 16:00:00.000
a1           123       abc            p4       08/12/2022 16:00:00.000
a1           123       abc            p5       13/12/2022 16:00:00.000
Where:
The end_time of the previous process is the start time of the current process.
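For reference, a minimal reproducible setup for the two tables; the PostgreSQL types are an assumption, inferred from the extract(dow ...) and ::time syntax in my query below.

-- Assumed DDL (the question does not give column types)
create table opening_hours (
    sellerid         int,
    sellerstoreid    text,
    day              int,    -- 1 = monday, matching extract(dow from ...)
    dayweek          text,
    opening          time,
    closing          time,
    next_day         int,
    opening_next_day time,
    days_to_next     int,
    primary key (sellerid, sellerstoreid, day)
);

create table process_table (
    delivery_id   text,
    sellerid      int,
    sellerstoreid text,
    process       text,
    end_time      timestamp
);

My attempt so far: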
with
table_1 as (
    select
        delivery_id
        , sellerid
        , sellerstoreid
        , process
        , lag(end_time, 1) over (partition by delivery_id order by end_time) as start_time
        , extract(dow from lag(end_time, 1) over (partition by delivery_id order by end_time)) as dow_start_time
        , end_time
        , extract(dow from end_time) as dow_end_time
    from process_table
),
table_2 as (
    select
        tb1.*
        , oh_start.opening as start_opening
        , oh_start.closing as start_closing
        , oh_end.opening as end_opening
        , oh_end.closing as end_closing
    from table_1 tb1
    left join opening_hours oh_start
        on oh_start.sellerid = tb1.sellerid
        and oh_start.sellerstoreid = tb1.sellerstoreid
        and oh_start.day = tb1.dow_start_time
    left join opening_hours oh_end
        on oh_end.sellerid = tb1.sellerid
        and oh_end.sellerstoreid = tb1.sellerstoreid
        and oh_end.day = tb1.dow_end_time
)
select
    *
    , case
        -- start and end fall on the same weekday: clamp both ends to [opening, closing]
        when dow_start_time = dow_end_time then
            extract(epoch from
                (case
                    when end_time::time > start_opening then
                        case when end_time::time > start_closing then start_closing
                             else end_time::time end
                    else start_opening
                end)
                -
                (case
                    when start_time::time > start_opening then
                        case when start_time::time < start_closing then start_time::time
                             else start_closing end
                    else start_opening
                end))
        -- start and end fall on different days: remainder of the first day
        -- plus the open part of the last day (intervening days are not handled)
        when dow_start_time <> dow_end_time then
            extract(epoch from
                (start_closing
                 -
                 case
                    when start_time::time > start_opening then
                        case when start_time::time < start_closing then start_time::time
                             else start_closing end
                    else start_opening
                 end)
                +
                (case
                    when end_time::time > end_opening then
                        case when end_time::time > end_closing then end_closing
                             else end_time::time end
                    else end_opening
                 end
                 -
                 end_opening))
      end as status_duration
from table_2
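One way to handle a process that spans any number of days is to stop special-casing the first and last day and instead expand each process into one row per calendar day. Below is a sketch of that idea, assuming PostgreSQL and the table names above: generate_series() produces the days, each day is joined to its opening hours, the process window is clamped to that day's [opening, closing], and the per-day overlaps are summed. Days on which the store is closed find no match and drop out, and the least()/greatest() clamping replaces the nested CASE expressions, which should also help the runtime problem.

-- Sketch only: expand each process into days, clamp to opening hours, sum.
with intervals as (
    select
        delivery_id
        , sellerid
        , sellerstoreid
        , process
        , lag(end_time) over (partition by delivery_id order by end_time) as start_time
        , end_time
    from process_table
),
days as (
    select i.*, d::date as work_day
    from intervals i
    cross join lateral generate_series(
        i.start_time::date, i.end_time::date, interval '1 day') as d
    where i.start_time is not null   -- the first process of a delivery has no start
)
select
    d.delivery_id
    , d.process
    , sum(greatest(0, extract(epoch from
          least(d.end_time, d.work_day + oh.closing)          -- clamp the end to closing
          - greatest(d.start_time, d.work_day + oh.opening)   -- clamp the start to opening
      ))) as status_duration
from days d
join opening_hours oh
    on oh.sellerid = d.sellerid
    and oh.sellerstoreid = d.sellerstoreid
    and oh.day = extract(dow from d.work_day)  -- assumes day uses extract(dow) numbering (1 = monday)
group by d.delivery_id, d.process;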

Related

Create sql Key based on datetime that is persistent overnight

I have a time series with a table like this:

CarId  EventDateTime     Event  SessionFlag  ExpectedKey
-----  ----------------  -----  -----------  -------------
1      2022-01-01 7:00   Start  1            1-20220101-7
1      2022-01-01 7:05   Drive  1            1-20220101-7
1      2022-01-01 8:00   Park   1            1-20220101-7
1      2022-01-01 10:00  Drive  1            1-20220101-7
1      2022-01-01 18:05  End    0            1-20220101-7
1      2022-01-01 23:00  Start  1            1-20220101-23
1      2022-01-01 23:05  Drive  1            1-20220101-23
1      2022-01-02 2:00   Park   1            1-20220101-23
1      2022-01-02 3:00   Drive  1            1-20220101-23
1      2022-01-02 15:00  End    0            1-20220101-23
1      2022-01-02 16:00  Start  1            1-20220102-16
Other CarIds do exist.
What I am attempting to do is create the last column, ExpectedKey.
The problem I face though is midnight, as the same session can exist over two days.
The record above with ExpectedKey 1-20220101-23 is the prime example of what I'm trying to achieve.
I've played with using:
CASE
    WHEN SessionFlag <> 0
     AND SessionFlag = LAG(SessionFlag) OVER (PARTITION BY CarId ORDER BY EventDateTime)
    THEN FIRST_VALUE(CarId + '-' + CONVERT(CHAR(8), EventDateTime, 112) + '-'
             + CAST(DATEPART(HOUR, EventDateTime) AS VARCHAR))
         OVER (PARTITION BY CarId ORDER BY EventDateTime)
    ELSE CarId + '-' + CONVERT(CHAR(8), EventDateTime, 112) + '-'
             + CAST(DATEPART(HOUR, EventDateTime) AS VARCHAR)
END AS SessionId
But I can't seem to make it partition correctly overnight.
Can anyone offer advice?
This is a classic gaps-and-islands problem. There are a number of solutions.
The simplest (if not that efficient) is partitioning over a windowed conditional count
WITH Groups AS (
SELECT *,
GroupId = COUNT(CASE WHEN t.Event = 'Start' THEN 1 END)
OVER (PARTITION BY t.CarId ORDER BY t.EventDateTime)
FROM YourTable t
)
SELECT *,
NewKey = CONCAT_WS('-',
t.CarId,
CONVERT(varchar(8), EventDateTime, 112),
FIRST_VALUE(DATEPART(hour, t.EventDateTime))
OVER (PARTITION BY t.CarId, t.GroupId ORDER BY t.EventDateTime
ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)
)
FROM Groups t;
db<>fiddle
using APPLY to get the Start event datetime and form the key with concat_ws
select *
from time_series t
cross apply
(
select top 1
ExpectedKey = concat_ws('-',
CarId,
convert(varchar(10), EventDateTime, 112),
datepart(hour, EventDateTime))
from time_series x
where x.Event = 'Start'
and x.EventDateTime <= t.EventDateTime
order by x.EventDateTime desc
) k

Split current date into hourly intervals and get count of production

How can I split the current date into hourly intervals, like 00:00 - 01:00, for all 24 hours, and based on that get the count of production, which is another column?
This is the code for the date and count columns that I want to group by hour interval.
select count(*),order_start_time_T
from UDA_Order UDA INNER JOIN WORK_Order WO ON WO.order_key = UDA.object_key
where order_state = 'BOOKED' OR order_state = 'CLOSED'
GROUP BY order_start_time_T
This returns me:

Count  order_start_time_T
-----  -----------------------
2      2019-07-02 10:54:27.000
7      2019-07-02 10:55:27.000
1      2019-07-02 11:51:58.000
1      2019-07-02 11:58:41.000
1      2019-07-02 12:19:13.000
The result I expect is:

Count  Hour interval
-----  -------------
2      00:00 - 01:00
7      01:00 - 02:00
1      02:00 - 03:00
1      03:00 - 04:00
1      04:00 - 05:00
1      05:00 - 06:00
and so on, up to 24 hours for the current day.
You need to use the DATEPART function, which returns the part of the date that you need (hours in your case).
select count(*), CAST(order_start_time_T AS DATE) StartDate, DATEPART(HOUR, order_start_time_T) StartHr
from UDA_Order UDA INNER JOIN WORK_Order WO ON WO.order_key = UDA.object_key
where order_state = 'BOOKED' OR order_state = 'CLOSED'
GROUP BY CAST(order_start_time_T AS DATE), DATEPART(HOUR, order_start_time_T)
But this will not return the results as you wish. It will return them like this (for example):

Count  StartDate   StartHr
-----  ----------  -------
2      2019-07-02  10
7      2019-07-02  11
1      2019-07-02  12
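If only the label format is missing from the result above, it can also be built inline from the grouped hour. A sketch (FORMAT needs SQL Server 2012+; the filter to the current day is an assumption based on the question's wording):

select count(*) as [Count],
       format(datepart(hour, order_start_time_T), '00') + ':00 - '
     + format(datepart(hour, order_start_time_T) + 1, '00') + ':00' as [Hour interval]
from UDA_Order UDA
inner join WORK_Order WO on WO.order_key = UDA.object_key
where (order_state = 'BOOKED' or order_state = 'CLOSED')
  and cast(order_start_time_T as date) = cast(getdate() as date)  -- current day only (assumption)
group by datepart(hour, order_start_time_T)
order by datepart(hour, order_start_time_T)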
I would try a helper table, which holds a start hour (h1 column) and an end hour (h2 column). I used a temporary table, but it can be a standard table or a table variable. The display column is just for display purposes.
First, I populate the table with start and end hours, starting from 0.
Second, I use DATEPART to identify the hour of an order (order_start_time_T) and check which period that hour belongs to.
h1  h2  display
--  --  -------------
0   1   00:00 - 01:00
1   2   01:00 - 02:00
...
23  24  23:00 - 24:00
Query:
-- Populate time table
if object_id('tempdb..#t') is not null drop table #t
create table #t (
    h1 tinyint,
    h2 tinyint,
    display varchar(30)
);
declare @i tinyint = 0
while @i < 24 begin
    insert into #t (h1, h2, display) values(@i, @i + 1
        , case when @i < 10 then '0' else '' end + cast(@i as varchar)
        + ':00 - ' + case when @i < 9 then '0' else '' end + cast(@i + 1 as varchar) + ':00')
    set @i = @i + 1
end
-- Group per period (h2 is excluded from the range; BETWEEN would count an
-- order at the top of the hour in two periods)
select count(*) [Count], t.display
from UDA_Order UDA INNER JOIN WORK_Order WO ON WO.order_key = UDA.object_key
JOIN #t t ON datepart(hour, order_start_time_T) >= t.h1
         AND datepart(hour, order_start_time_T) < t.h2
where order_state = 'BOOKED' OR order_state = 'CLOSED'
GROUP BY t.display
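Note that the inner join drops intervals in which no orders fall, while the expected output lists all 24 rows. A sketch of a variant that keeps empty intervals with a zero count (the current-day filter is again an assumption):

select t.display, count(o.order_start_time_T) as [Count]
from #t t
left join (
    select order_start_time_T
    from UDA_Order UDA
    inner join WORK_Order WO on WO.order_key = UDA.object_key
    where (order_state = 'BOOKED' or order_state = 'CLOSED')
      and cast(order_start_time_T as date) = cast(getdate() as date)
) o on datepart(hour, o.order_start_time_T) >= t.h1
   and datepart(hour, o.order_start_time_T) < t.h2
group by t.h1, t.display
order by t.h1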

Partition rows where dates are between the previous dates

I have the below table.
I want to identify overlapping intervals of start_date and end_date.
*edit: I would like to remove the row that has the least amount of days between the start and end date where those rows overlap.
Example:
pgid 1 & pgid 2 have overlapping days. Remove the row that has the least amount of days between start_date and end_date.
Table A

id  pgid  Start_date  End_date    Days
--  ----  ----------  ----------  ----
1   1     8/4/2018    9/10/2018   37
1   2     9/8/2018    9/8/2018    0
1   3     10/29/2018  11/30/2018  32
1   4     12/1/2018   sysdate     123

Expected Results:

id  Start_date  End_date    Days
--  ----------  ----------  ----
1   8/4/2018    9/10/2018   37
1   10/29/2018  11/30/2018  32
1   12/1/2018   sysdate     123
I am thinking exists:
select t.*,
(case when exists (select 1
from t t2
where t2.start_date < t.start_date and
t2.end_date > t.end_date and
t2.id = t.id
)
then 2 else 1
end) as overlap_flag
from t;
Maybe lead and lag:
SELECT
CASE
WHEN END_DATE > LEAD (START_DATE) OVER (PARTITION BY id ORDER BY START_DATE) THEN 1
WHEN START_DATE < LAG (END_DATE) OVER (PARTITION BY id ORDER BY START_DATE) THEN 1
ELSE 0
END OVERLAP_FLAG
FROM A
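Putting the two ideas together, here is a sketch that actually removes the shorter member of each overlapping pair, as the edit asks (Oracle-flavoured, given the sysdate in the sample; the overlap test and the keep-the-longer-row tie-break are read off the example, and ties on Days would keep both rows):

-- Sketch: flag a row when a longer overlapping row exists for the same id,
-- then keep only the unflagged rows.
select id, start_date, end_date, days
from (
    select a.*,
           case when exists (
                    select 1
                    from a t2
                    where t2.id = a.id
                      and t2.pgid <> a.pgid
                      and t2.start_date <= a.end_date   -- intervals overlap
                      and t2.end_date >= a.start_date
                      and t2.days > a.days              -- and the other row is longer
                )
                then 1 else 0
           end as drop_flag
    from a
) flagged
where drop_flag = 0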

Tolerance with Min Max

I am trying to adjust the below code by adding a 2 week tolerance piece.
What it does is take the first time a customer (identifier) created a request and the first time it was completed, and count the days in between.
However, I am trying to add a tolerance piece, which says: count the number of NCOs which occurred between those dates, and if there were further requests past the completion date which happened within 2 weeks of it, count those as well (as part of the same request). Anything past 2 weeks of the completion date is considered a new request.
CREATE TABLE #temp
(
    Identifier varchar(40) NOT NULL
    ,Created_Date DATETIME NOT NULL
    ,Completed_Date DATETIME NULL
    ,SN_Type varchar(20) NOT NULL
    ,SN_Status varchar(20) NOT NULL
)
;
INSERT INTO #temp
VALUES ('3333333','2017-02-14 15:00:40.000','2017-02-15 00:00:00.000','Re-Activattion', 'COMP');
INSERT INTO #temp
VALUES ('3333333','2017-05-24 16:41:04.000','2017-06-05 00:00:00.000','Re-Activattion', 'N-CO');
INSERT INTO #temp
VALUES ('3333333','2017-05-25 11:49:54.000','2017-05-26 00:00:00.000','Re-Activattion', 'COMP');
INSERT INTO #temp
VALUES ('3333333','2017-06-27 10:24:29.000',NULL,'Re-Activattion', 'ACC');
@Alex, your code is accurate; I would just like to select the min date when the record is created a 2nd time, so line 2 of the result should return a min date of 2017-05-24 16:41:04.000.
select identifier
,case
when sum(case when SN_STATUS='COMP' and SN_TYPE = 'Re-Activattion' then 1 else 0 end)>0
then str(datediff(day
,MIN(case
when SN_TYPE = 'Re-Activattion'
then Created_Date
else null
end
)
,min(case
when (SN_TYPE = 'Re-Activattion'
and SN_STATUS='COMP'
)
then Completed_Date
else null
end
)
)
)
when sum(case when SN_TYPE='Re-Activattion' then 1 else 0 end)>0
then 'NOT COMP'
else 'NO RE-ACT'
end
as RE_ACT_COMPLETION_TIME
,Sum(CASE WHEN SN_STATUS = 'N-CO' THEN 1 ELSE 0 END) as [RE-AN NCO #]
from #temp
group by identifier
;
RESULTS I AM AFTER:
Your table design is not optimal for these kinds of queries, as there is no definitive record that specifies order start and order end. Additionally, multiple orders are stored with the same identifier.
To work around this you need to calculate/identify Order start and Order End records yourself.
One way to do it is using Common Table Expressions.
Note: I have added comments to code to explain what each section does.
-- calculate/identify Order start and Order End records
WITH cte AS
(
-- 1st Order start record i.e. earliest record in the table for a given "Identifier"
SELECT Identifier, MIN( Created_Date ) AS Created_Date, CONVERT( VARCHAR( 30 ), 'Created' ) AS RecordType, 1 AS OrderNumber
FROM #temp
GROUP BY Identifier
UNION ALL
-- All records with "COMP" status are treated as order completed events. Add 2 weeks to the completed date to create a "dummy" Order End Date
SELECT Identifier, DATEADD( WEEK, 2, Created_Date ) AS Created_Date, 'Completed' AS RecordType, ROW_NUMBER() OVER( PARTITION BY Identifier ORDER BY Created_Date ) AS OrderNumber
FROM #temp
WHERE SN_STATUS = 'COMP'
UNION ALL
-- Set the start period of the next order to be right after (3 ms) the previous Order End Date
SELECT Identifier, DATEADD( ms, 3, DATEADD( WEEK, 2, Created_Date )) AS Created_Date, 'Created' AS RecordType, ROW_NUMBER() OVER( PARTITION BY Identifier ORDER BY Created_Date ) + 1 AS OrderNumber
FROM #temp
WHERE SN_STATUS = 'COMP'
),
-- Combine Start / End records into one record
OrderGroups AS(
SELECT Identifier, OrderNumber, MIN( Created_Date ) AS OrderRangeStartDate, MAX( Created_Date ) AS OrderRangeEndDate
FROM cte
GROUP BY Identifier, OrderNumber
)
SELECT a.Identifier, a.OrderNumber, OrderRangeStartDate, OrderRangeEndDate,
case
when sum(case when SN_STATUS='COMP' and SN_TYPE = 'Re-Activattion' then 1 else 0 end)>0
then str(datediff(day
,MIN(case
when SN_TYPE = 'Re-Activattion'
then Created_Date
else null
end
)
,min(case
when (SN_TYPE = 'Re-Activattion'
and SN_STATUS='COMP'
)
then Completed_Date
else null
end
)
)
)
when sum(case when SN_TYPE='Re-Activattion' then 1 else 0 end)>0
then 'NOT COMP'
else 'NO RE-ACT'
end as RE_ACT_COMPLETION_TIME,
Sum(CASE WHEN SN_STATUS = 'N-CO' THEN 1 ELSE 0 END) as [RE-AN NCO #]
FROM OrderGroups AS a
INNER JOIN #Temp AS b ON a.Identifier = b.Identifier AND a.OrderRangeStartDate <= b.Created_Date AND b.Created_Date <= a.OrderRangeEndDate
GROUP BY a.Identifier, a.OrderNumber, OrderRangeStartDate, OrderRangeEndDate
Output:
Identifier OrderNumber OrderRangeStartDate OrderRangeEndDate RE_ACT_COMPLETION_TIME RE-AN NCO #
-------------- ------------- ----------------------- ----------------------- ---------------------- -----------
200895691 1 2016-01-27 14:25:00.000 2016-02-10 15:15:00.000 0 2
200895691 2 2016-02-10 15:15:00.003 2017-01-16 12:15:00.000 1 1
Output for the updated data set:
Identifier OrderNumber OrderRangeStartDate OrderRangeEndDate RE_ACT_COMPLETION_TIME RE-AN NCO #
------------ ------------ ----------------------- ----------------------- ---------------------- -----------
200895691 1 2017-01-11 00:00:00.000 2017-03-27 00:00:00.000 61 4
200895691 2 2017-03-27 00:00:00.003 2017-04-20 00:00:00.000 1 1
3333333 1 2017-01-27 00:00:00.000 2017-02-10 00:00:00.000 0 2
44454544 1 2017-01-27 00:00:00.000 2017-01-27 00:00:00.000 NOT COMP 1
7777691 1 2017-02-08 09:36:44.000 2017-02-22 09:36:44.000 63 1
Update 2017-10-05 in response to the comment
Input:
INSERT INTO #temp VALUES
('11111','20170203','20170203','Re-Activattion', 'COMP'),
('11111','20170206','20170202','Re-Activattion', 'N-CO');
Output:
Identifier OrderNumber OrderRangeStartDate OrderRangeEndDate RE_ACT_COMPLETION_TIME RE-AN NCO #
---------- ------------ ----------------------- ----------------------- ---------------------- -----------
11111 1 2017-02-03 00:00:00.000 2017-02-17 00:00:00.000 0 1

Different where condition for each column

Is there a way to write a query like this in SQL Server, without using SELECT two times and then a join?
select trans_date, datepart(HOUR,trans_time) as hour,
(datepart(MINUTE,trans_time)/30)*30 as minute,
case
when paper_number = 11111/*paperA*/
then sum(t1.price*t1.amount)/SUM(t1.amount)*100
end as avgA,
case
when paper_number = 22222/*PaperB*/
then sum(t1.price*t1.amount)/SUM(t1.amount)*100
end as avgB
from dbo.transactions t1
where trans_date = '2006-01-01' and (paper_number = 11111 or paper_number = 22222)
group by trans_date, datepart(HOUR,trans_time), datepart(MINUTE,trans_time)/30
order by hour, minute
SQL Server asks me to add paper_number to the GROUP BY, and returns NULLs when I do so:
trans_date  hour  minute  avgA              avgB
----------  ----  ------  ----------------  ----------------
2006-01-01  9     30      1802.57199725463  NULL
2006-01-01  9     30      NULL              169125.886524823
2006-01-01  10    0       1804.04742534103  NULL
2006-01-01  10    0       NULL              169077.777777778
2006-01-01  10    30      1806.18773535637  NULL
2006-01-01  10    30      NULL              170274.550381867
2006-01-01  11    0       1804.43466045433  NULL
2006-01-01  11    0       NULL              170743.4
2006-01-01  11    30      1807.04532012137  NULL
2006-01-01  11    30      NULL              171307.00280112
Try:
with cte as
(select trans_date,
datepart(HOUR,trans_time) as hour,
(datepart(MINUTE,trans_time)/30)*30 as minute,
sum(case when paper_number = 11111/*paperA*/
then t1.price*t1.amount else 0 end) as wtdSumA,
sum(case when paper_number = 11111/*paperA*/
then t1.amount else 0 end) as amtSumA,
sum(case when paper_number = 22222/*PaperB*/
then t1.price*t1.amount else 0 end) as wtdSumB,
sum(case when paper_number = 22222/*PaperB*/
then t1.amount else 0 end) as amtSumB
from dbo.transactions t1
where trans_date = '2006-01-01'
group by trans_date, datepart(HOUR,trans_time), datepart(MINUTE,trans_time)/30)
select trans_date, hour, minute,
case amtSumA when 0 then 0 else 100 * wtdSumA / amtSumA end as avgA,
case amtSumB when 0 then 0 else 100 * wtdSumB / amtSumB end as avgB
from cte
order by hour, minute
(SQLFiddle here)
You can derive this without the CTE, like so:
select trans_date,
datepart(HOUR,trans_time) as hour,
(datepart(MINUTE,trans_time)/30)*30 as minute,
case sum(case when paper_number = 11111/*paperA*/ then t1.amount else 0 end)
when 0 then 0
else 100 * sum(case when paper_number = 11111 then t1.price*t1.amount else 0 end)
/ sum(case when paper_number = 11111 then t1.amount else 0 end) end as avgA,
case sum(case when paper_number = 22222/*paperB*/ then t1.amount else 0 end)
when 0 then 0
else 100 * sum(case when paper_number = 22222 then t1.price*t1.amount else 0 end)
/ sum(case when paper_number = 22222 then t1.amount else 0 end) end as avgB
from dbo.transactions t1
where trans_date = '2006-01-01'
group by trans_date, datepart(HOUR,trans_time), datepart(MINUTE,trans_time)/30
order by 1,2,3
Use SUM() function on the entire CASE expression
select trans_date, datepart(HOUR,trans_time) as hour, (datepart(MINUTE,trans_time)/30)*30 as minute,
sum(case when paper_number = 11111/*paperA*/ then t1.price*t1.amount end) * 1.00
/ sum(case when paper_number = 11111/*paperA*/ then t1.amount end) * 100 as avgA,
sum(case when paper_number = 22222/*PaperB*/ then t1.price*t1.amount end) * 1.00
/ sum(case when paper_number = 22222/*paperB*/ then t1.amount end) * 100 as avgB
from dbo.transactions t1
where trans_date = '2006-01-01'
group by trans_date, datepart(HOUR,trans_time), datepart(MINUTE,trans_time)/30
order by hour, minute
Demo on SQLFiddle
You could also try using UNPIVOT and PIVOT like below:
WITH prepared AS (
SELECT
trans_date,
trans_time = DATEADD(MINUTE, DATEDIFF(MINUTE, '00:00', trans_time) / 30 * 30, CAST('00:00' AS time)),
paper_number,
total = price * amount,
amount
FROM transactions
),
unpivoted AS (
SELECT
trans_date,
trans_time,
attribute = attribute + CAST(paper_number AS varchar(10)),
value
FROM prepared
UNPIVOT (value FOR attribute IN (total, amount)) u
),
pivoted AS (
SELECT
trans_date,
trans_time,
avgA = total11111 * 100 / amount11111,
avgB = total22222 * 100 / amount22222
FROM unpivoted
PIVOT (
SUM(value) FOR attribute IN (total11111, amount11111, total22222, amount22222)
) p
)
SELECT *
FROM pivoted
;
As an attempt at explaining how the above query works, below is a description of transformations that the original dataset undergoes in the course of the query's execution, using the following example:
trans_date trans_time paper_number price amount
---------- ---------- ------------ ----- ------
2013-04-09 11:12:35 11111 10 15
2013-04-09 11:13:01 22222 24 10
2013-04-09 11:28:44 11111 12 5
2013-04-09 11:36:20 22222 20 11
The prepared CTE produces the following column set:
trans_date trans_time paper_number total amount
---------- ---------- ------------ ----- ------
2013-04-09 11:00:00 11111 150 15
2013-04-09 11:00:00 22222 240 10
2013-04-09 11:00:00 11111 60 5
2013-04-09 11:30:00 22222 220 11
where trans_time is the original trans_time rounded down to the nearest half-hour and total is price multiplied by amount.
The unpivoted CTE unpivots the total and amount values to produce attribute and value:
trans_date trans_time paper_number attribute value
---------- ---------- ------------ --------- -----
2013-04-09 11:00:00 11111 total 150
2013-04-09 11:00:00 11111 amount 15
2013-04-09 11:00:00 22222 total 240
2013-04-09 11:00:00 22222 amount 10
2013-04-09 11:00:00 11111 total 60
2013-04-09 11:00:00 11111 amount 5
2013-04-09 11:30:00 22222 total 220
2013-04-09 11:30:00 22222 amount 11
Then paper_number is combined with attribute to form a single column, also called attribute:
trans_date trans_time attribute value
---------- ---------- ----------- -----
2013-04-09 11:00:00 total11111 150
2013-04-09 11:00:00 amount11111 15
2013-04-09 11:00:00 total22222 240
2013-04-09 11:00:00 amount22222 10
2013-04-09 11:00:00 total11111 60
2013-04-09 11:00:00 amount11111 5
2013-04-09 11:30:00 total22222 220
2013-04-09 11:30:00 amount22222 11
Finally, the pivoted CTE pivots the value data back aggregating them along the way with SUM() and using the attribute values for column names:
trans_date trans_time total11111 amount11111 total22222 amount22222
---------- ---------- ---------- ----------- ---------- -----------
2013-04-09 11:00:00 210 20 240 10
2013-04-09 11:30:00 NULL NULL 220 11
The pivoted values are then additionally processed (every totalNNN is multiplied by 100 and divided by the corresponding amountNNN) to form the final output:
trans_date trans_time avgA avgB
---------- ---------- ---- ----
2013-04-09 11:00:00 1050 2400
2013-04-09 11:30:00 NULL 2000
There's a couple of issues that may need to be addressed:
If price and amount are different data types, total and amount may end up as different data types as well. For UNPIVOT, it is mandatory that the values being unpivoted are of exactly the same type, and so you'll need to add an explicit conversion of total and amount to some common type, possibly one which would prevent data/precision loss. That could be done in the prepared CTE like this (assuming the common type to be decimal(10,2)):
total = CAST(price * amount AS decimal(10,2)),
amount = CAST(amount AS decimal(10,2))
If aggregated amounts may ever end up 0, you'll need to account for the division by 0 issue. One way to do that could be to substitute the 0 amount with NULL, which would make the result of the division NULL as well. Applying ISNULL or COALESCE to that result would allow you to transform it to some default value, 0 for instance. So, change this bit in the pivoted CTE:
avgA = ISNULL(total11111 * 100 / NULLIF(amount11111, 0), 0),
avgB = ISNULL(total22222 * 100 / NULLIF(amount22222, 0), 0)