Transposing a number from one column into another where the value = 0 - SQL

My question is this:
I have a query I'm working on, and some of the values it returns are 0. I want to be able to take a value from a previous month that is not zero and put it in place of the zero. See the example below.
SELECT item,
stock,
sold,
level,
month,
year
FROM agingdata
GROUP BY item,
stock,
sold,
month,
year,
level
HAVING ( item = #Item )
ORDER BY year,
month
So I want to take the number 2455 and put it into the stock column where it says 0, using last month's balance number as the current month's stock level. Is that even possible?

You can find the previous non-zero level per item (for zero level items) using this query:
SELECT find.item, find.month, find.year, result.level
FROM AgingData result
JOIN (
SELECT original.item, original.month, original.year, max(cast(cast(previous.year as varchar) + '-' + cast(previous.month as varchar) + '-1' as datetime)) previous_date
FROM AgingData original
JOIN AgingData previous
ON original.item = previous.item
AND ((original.year > previous.year)
OR (original.year = previous.year AND original.month > previous.month))
WHERE original.level = 0
AND previous.level != 0
GROUP BY original.item, original.month, original.year ) find
ON result.item = find.item
AND cast(cast(result.year as varchar) + '-' + cast(result.month as varchar) + '-1' as datetime) = find.previous_date
This will work even if the previous non-zero level is several months before.
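If you also want the looked-up value to appear in place of the zero in the output, a minimal sketch (reusing the lookup above as a CTE and assuming the AgingData columns from the question) might look like this:
WITH previous_level AS (
    -- the lookup from above: previous non-zero level for each zero-level row
    SELECT find.item, find.month, find.year, result.level AS prev_level
    FROM AgingData result
    JOIN (
        SELECT original.item, original.month, original.year,
               MAX(CAST(CAST(previous.year AS varchar) + '-' + CAST(previous.month AS varchar) + '-1' AS datetime)) AS previous_date
        FROM AgingData original
        JOIN AgingData previous
          ON original.item = previous.item
         AND ((original.year > previous.year)
              OR (original.year = previous.year AND original.month > previous.month))
        WHERE original.level = 0
          AND previous.level != 0
        GROUP BY original.item, original.month, original.year
    ) find
      ON result.item = find.item
     AND CAST(CAST(result.year AS varchar) + '-' + CAST(result.month AS varchar) + '-1' AS datetime) = find.previous_date
)
SELECT a.item,
       a.stock,
       a.sold,
       COALESCE(p.prev_level, a.level) AS level,  -- fall back to the previous non-zero value for zero rows
       a.month,
       a.year
FROM AgingData a
LEFT JOIN previous_level p
       ON p.item = a.item AND p.month = a.month AND p.year = a.year;
The LEFT JOIN keeps rows that never had a previous non-zero value, and COALESCE leaves them unchanged.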

Something along these lines; note that this only works if the previous month's value is non-zero....
SELECT cur.Item,
CASE
WHEN (cur.Stock<>0)
THEN cur.Stock
ELSE prev.Stock
END as Stock,
cur.Sold,
cur.Level,
cur.Month,
cur.Year
FROM AgingData cur
LEFT OUTER JOIN AgingData prev
ON prev.item=cur.item and prev.Month = cur.Month - 1
GROUP BY cur.Item, cur.Stock, prev.Stock, cur.Sold, cur.Month, cur.Year, cur.Level
HAVING (cur.Item = #Item)
ORDER BY cur.Year, cur.Month


How to optimize my query speed (avoid using subselect for every row)?

I have a table called CisLinkLoadedData. It has Distributor, Network, Product, DocumentDate, Weight, AmountCP and Quantity columns. It is used to store daily product sales. AmountCP / Quantity is the price of the product on a certain date. There are promo and regular sales, but no flag for them. We can tell whether a record is regular or promo by comparing its price with the maximum price recorded within the month. I explained it in this picture.
I need to write a query to display summarized regular and promo sales of a certain product per month. Well, I made it, but it is very slow (6 minutes to execute on 1.6 million records).
I suspect this is because I use a subquery to determine the max price for every record, but I don't know how to do it another way.
This is what I made:
SELECT
Distributor,
Network,
Product,
cast(month(DocumentDate) as VARCHAR) + '.' + cast(year(DocumentDate) as VARCHAR) AS MonthYear,
SUM(Weight) AS MonthlyWeight,
IsPromo
FROM (SELECT
main_clld.Distributor,
main_clld.Network,
main_clld.Product,
main_clld.DocumentDate,
main_clld.Weight,
main_clld.Quantity,
main_clld.AmountCP,
CASE WHEN (main_clld.AmountCP / main_clld.Quantity) <
    (SELECT MAX(sub_clld.AmountCP / NULLIF(sub_clld.Quantity, 0))
     FROM CisLinkLoadedData AS sub_clld
     WHERE sub_clld.Distributor = main_clld.Distributor
       AND sub_clld.Network = main_clld.Network
       AND sub_clld.Product = main_clld.Product
       AND cast(month(sub_clld.DocumentDate) as VARCHAR) + '.' + cast(year(sub_clld.DocumentDate) as VARCHAR)
           = cast(month(main_clld.DocumentDate) as VARCHAR) + '.' + cast(year(main_clld.DocumentDate) as VARCHAR)
       AND sub_clld.Quantity > 0
       AND sub_clld.GCRecord IS NULL)
    THEN 1 ELSE 0 END AS IsPromo
FROM CisLinkLoadedData AS main_clld
WHERE main_clld.Quantity > 0 AND main_clld.GCRecord IS NULL) AS bad_query
GROUP BY
Distributor,
Network,
Product,
cast(month(DocumentDate) as VARCHAR) + '.' + cast(year(DocumentDate) as VARCHAR),
IsPromo;
What can be done in such a case? By the way, if you can produce a result table with another structure, like (Distributor, Network, Product, MonthYear, RegularWeight, PromoWeight), that's even better. This is what I tried initially, but failed.
I use Microsoft SQL Server.
Rather than a correlated subquery, you could use a windowed function to retrieve the maximum price per group (each group is defined by the partition by clause):
MAX(main_clld.AmountCP / NULLIF(main_clld.Quantity, 0))
OVER(PARTITION BY main_clld.Distributor, main_clld.Network,
main_clld.Product, EOMONTH(main_clld.DocumentDate))
I think your full query would end up something like:
SELECT
Distributor,
Network,
Product,
MonthYear,
SUM(Weight) AS MonthlyWeight,
IsPromo
FROM (SELECT
main_clld.Distributor,
main_clld.Network,
main_clld.Product,
main_clld.DocumentDate,
main_clld.Weight,
main_clld.Quantity,
main_clld.AmountCP,
CAST(MONTH(DocumentDate) AS VARCHAR(2)) + '.' + CAST(YEAR(DocumentDate) AS VARCHAR(4)) AS MonthYear,
CASE WHEN (main_clld.AmountCP / main_clld.Quantity) < MAX(main_clld.AmountCP / NULLIF(main_clld.Quantity, 0))
OVER(PARTITION BY main_clld.Distributor, main_clld.Network,
main_clld.Product, EOMONTH(main_clld.DocumentDate))
THEN 1 ELSE 0 END AS IsPromo
FROM CisLinkLoadedData AS main_clld
WHERE main_clld.Quantity > 0
AND main_clld.GCRecord IS NULL
) AS bad_query
GROUP BY
Distributor,
Network,
Product,
MonthYear,
IsPromo;
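To get the (Distributor, Network, Product, MonthYear, RegularWeight, PromoWeight) shape mentioned in the question, one possible sketch keeps the same windowed IsPromo flag but pivots it in the outer query with conditional aggregation instead of grouping by IsPromo (the subquery alias flagged is just illustrative):
SELECT
    Distributor,
    Network,
    Product,
    MonthYear,
    SUM(CASE WHEN IsPromo = 0 THEN Weight ELSE 0 END) AS RegularWeight,
    SUM(CASE WHEN IsPromo = 1 THEN Weight ELSE 0 END) AS PromoWeight
FROM (SELECT
        main_clld.Distributor,
        main_clld.Network,
        main_clld.Product,
        main_clld.Weight,
        CAST(MONTH(main_clld.DocumentDate) AS VARCHAR(2)) + '.' + CAST(YEAR(main_clld.DocumentDate) AS VARCHAR(4)) AS MonthYear,
        CASE WHEN (main_clld.AmountCP / main_clld.Quantity) < MAX(main_clld.AmountCP / NULLIF(main_clld.Quantity, 0))
                 OVER(PARTITION BY main_clld.Distributor, main_clld.Network,
                      main_clld.Product, EOMONTH(main_clld.DocumentDate))
             THEN 1 ELSE 0 END AS IsPromo
      FROM CisLinkLoadedData AS main_clld
      WHERE main_clld.Quantity > 0
        AND main_clld.GCRecord IS NULL
     ) AS flagged
GROUP BY
    Distributor,
    Network,
    Product,
    MonthYear;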

How to build logic to find the history of Red and Amber records with below example

with archer_summary_base as (select * from kri.archer_kri_latest_summary
where segment='All Segments' and Level_I_Risk!='' and metric_identifier in ('Mandatory') and (metric_results_status='Amber' or metric_results_status='Red' or metric_results_status is null) and
(month='January' or month='February' or month='March' or month='December' or month='November' or month='April' or month is null) AND (year=2021 or year=2020 or year is null)
and status='Active' and originated_source ='Archer'),
--select * from archer_summary_base
archer_summary_main as (select * from kri.archer_kri_latest_summary where metric_id in (select metric_id from archer_summary_base)
and (month='January' or month='February' or month='March' or month='December' or month='November' or month='April' or month is null)
AND (year=2021 or year=2020 or year is null) and segment='All Segments' and Level_I_Risk !='' and originated_source ='Archer' and
metric_identifier in ('Mandatory')),
breach_kri_report AS (select metric_id,
trim(Level_I_Risk) as Level_I_Risk,
trim(Metric_Name) as Metric_Name,
(case when format in ('Percentage') then concat(regexp_replace(regexp_replace(Green_threshold_min,'\\.0+$',''),'(\\d+\\.\\d+?)0+$','$1'),'%','-',regexp_replace(regexp_replace(Green_threshold_max,'\\.0+$',''),'(\\d+\\.\\d+?)0+$','$1'),'%')
when format in ('Monetary Amount') then concat('$',format_number(cast(Green_threshold_min as bigint),0),' -','$',format_number(cast(Green_threshold_max as bigint),0))
when format in ('Number') then concat(cast(Green_threshold_min as bigint),'-',cast(Green_threshold_max as bigint))
end) as Green_Threshold,
trim(PATH_TO_GREEN_INDICATOR) as P2G_Indicator,
concat(substring(Month,1,3), '-', substring(year,3,4)) as Month_Name,
(case when format in ('Percentage') then concat(cast(metric_value as bigint),'%')
when format in ('Monetary Amount') then concat('$',(format_number(cast(metric_value as bigint),0)))
when format in ('Number') then format_number(cast(metric_value as bigint),0) end) as DerivedValue_Avg
from archer_summary_main group by metric_id,Level_I_Risk,Metric_Name,Green_threshold_min,Green_threshold_max,PATH_TO_GREEN_INDICATOR,month,year,format,metric_value),
--Select * from breach_kri_report,
exsummary as (SELECT b.Level_I_Risk,
b.Metric_Name,
b.Green_Threshold,
b.P2G_Indicator,
case when Month_Name='Nov-20' then DerivedValue_Avg end as Nov_20,
case when Month_Name='Dec-20' then DerivedValue_Avg end as Dec_20,
case when Month_Name='Jan-21' then DerivedValue_Avg end as Jan_21,
case when Month_Name='Feb-21' then DerivedValue_Avg end as Feb_21,
case when Month_Name='Mar-21' then DerivedValue_Avg end as Mar_21,
case when Month_Name='Apr-21' then DerivedValue_Avg end as Apr_21
FROM breach_kri_report b)
select
t.Level_I_Risk,
t.Metric_Name,
t.Green_Threshold,
t.P2G_Indicator,
collect_list(Nov_20)[0] as Nov_20,
collect_list(Dec_20)[0] as Dec_20,
collect_list(Jan_21)[0] as Jan_21,
collect_list(Feb_21)[0] as Feb_21,
collect_list(Mar_21)[0] as Mar_21,
collect_list(Apr_21)[0] as Apr_21
from exsummary t
group by t.Level_I_Risk, t.Metric_Name, t.Green_Threshold, t.P2G_Indicator
Below are the results I am getting, shown in the screenshot.
The 'Green' records which are circled should not appear when I run the query, because I want the history for Red and Amber. I am OK with getting Green records as part of the history, but not at the beginning or in the latest months.
Below is what I am trying to achieve.
Here is an outline of a possible query:
Issue a subquery using a window function: partition by year and month, pick the lowest pk_key by sorting by year, month and pk_key ascending, and select the first result for each partition.
Then use the results in combination with a where clause and a second constraint:
... where not(pk_key in (first_query.pk_key) and metric_results_status = 'GREEN')
Sorry, but I can't provide anything more specific, since the details you gave are not very clear as to what your database structure actually looks like.
May God have mercy upon your soul!
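Purely for illustration, the outline above might translate to something like the following; pk_key is the hypothetical key mentioned in the outline and the 'Green' status value is an assumption, so adjust the names to your actual schema:
-- sketch only: exclude rows that are both first-in-month (lowest pk_key) and Green
WITH first_per_month AS (
    SELECT pk_key,
           ROW_NUMBER() OVER (PARTITION BY year, month
                              ORDER BY year, month, pk_key ASC) AS rn
    FROM kri.archer_kri_latest_summary
)
SELECT s.*
FROM kri.archer_kri_latest_summary s
LEFT JOIN first_per_month f
       ON s.pk_key = f.pk_key
      AND f.rn = 1
WHERE NOT (f.pk_key IS NOT NULL AND s.metric_results_status = 'Green')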

Can't figure out how to get duplicate values out of table in SQL redshift

I am trying to return the forecasted value per item, per warehouse, per day and then add them up for the week. I am pulling from two tables depending on the demand date, but the issue is that both tables have a "creation_date" column with timestamps, so it's creating multiple raw_forecast entries per warehouse/item/day when I only want one. I tried to join on the creation dates, but because each table has different timestamps on the creation dates, SQL returns both forecast quantities for that day. I just want whatever the largest forecast amount was for the day. Any help is much appreciated!
Output columns: demand_date, item, fulfillment center, type, quantity, raw_forecast.
There are multiple quantity and raw_forecast rows.
SELECT
DISTINCT d.demand_date,
d.item,
r.fulfillment_center_external_id,
d.type,
d.quantity,
CASE WHEN d.type IN ('RAW') THEN MAX(DISTINCT d.quantity) ELSE 0 END as Raw_Forecast
FROM
f3_rsc.fab_reporting_demand_forecasts d
Left join f3_rsc.runs r on d.output_id = r.output_id
and TRUNC(d.creation_date) = TRUNC(r.creation_date)
where
1 = 1
and d.demand_date between to_date('{RUN_DATE_YYYY-MM-DD}', 'YYYY-MM-DD') + 11
and to_date('{RUN_DATE_YYYY-MM-DD}', 'YYYY-MM-DD') + 17
and d.type in ('RAW')
and requester_id = 'SWF-PRODUCTION'
and po_placement_status = 'SHOULD_CUT_PO'
and TRUNC(d.creation_date) > to_date('{RUN_DATE_YYYY-MM-DD}', 'YYYY-MM-DD') -3
GROUP BY
1,2,3,4,5
You are getting multiple rows because you are grouping on quantity and the quantities are different. Based on your description stop grouping on quantity (5 in your group by list) and take the MAX() of quantity in your select line. (You also don't need DISTINCT if the column is in the group by list.)
SELECT
d.demand_date,
d.item,
r.fulfillment_center_external_id,
d.type,
MAX(d.quantity) AS quantity,
CASE WHEN d.type IN ('RAW') THEN MAX(DISTINCT d.quantity) ELSE 0 END as Raw_Forecast
FROM
f3_rsc.fab_reporting_demand_forecasts d
Left join f3_rsc.runs r on d.output_id = r.output_id
and TRUNC(d.creation_date) = TRUNC(r.creation_date)
where
1 = 1
and d.demand_date between to_date('{RUN_DATE_YYYY-MM-DD}', 'YYYY-MM-DD') + 11
and to_date('{RUN_DATE_YYYY-MM-DD}', 'YYYY-MM-DD') + 17
and d.type in ('RAW')
and requester_id = 'SWF-PRODUCTION'
and po_placement_status = 'SHOULD_CUT_PO'
and TRUNC(d.creation_date) > to_date('{RUN_DATE_YYYY-MM-DD}', 'YYYY-MM-DD') -3
GROUP BY
1,2,3,4
Let me know if I have misread your situation.
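If you ever need other columns from the row that carried the largest forecast for the day (rather than just the MAX of quantity), a window-function sketch along these lines should also work in Redshift; the date and requester filters from the original query are left out here for brevity:
SELECT demand_date,
       item,
       fulfillment_center_external_id,
       type,
       quantity AS raw_forecast
FROM (
    SELECT d.demand_date,
           d.item,
           r.fulfillment_center_external_id,
           d.type,
           d.quantity,
           -- rank rows so the largest quantity per day/item/warehouse comes first
           ROW_NUMBER() OVER (PARTITION BY d.demand_date, d.item, r.fulfillment_center_external_id
                              ORDER BY d.quantity DESC) AS rn
    FROM f3_rsc.fab_reporting_demand_forecasts d
    LEFT JOIN f3_rsc.runs r
           ON d.output_id = r.output_id
          AND TRUNC(d.creation_date) = TRUNC(r.creation_date)
    WHERE d.type = 'RAW'
) ranked
WHERE rn = 1;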

How to calculate prior year sales data in SQL

I'm attempting to build a table summarizing sales data by week. In it, I'm trying to have one of the adjacent columns show the sales figures for the same fiscal week during the prior year (which, due to my organization's fiscal calendar, had a 53rd week last year). I also need to compare (Comp Units/Comp Sales) to a period 52 weeks ago, which is an entirely different fiscal week (think week 9 of 2019 compared to week 10 of 2018).
I've tried using both unions and full outer joins, but given the way my data is structured, they're either inefficient (because this is weekly data, unions required leaving the date information out of the initial query and then updating columns in my table to reflect the week the data is for, which is obviously rife with opportunity for error and also time consuming to do 105 times) or just didn't work (a full outer join returned the wrong answers for all columns). I've also tried utilizing CTEs, and that's not working for me either. I'm currently trying a CASE statement, but that's also returning a null value. I'm not quite sure where to go next.
#STANDARDSQL
SELECT
DTL.SKU_NBR AS SKU_NBR
, SLS.STR_NBR AS STR_NBR
, CONCAT(TRIM(CAST(SKU_HIER.SKU_NBR AS STRING)), ' ', '-', ' ', TRIM(SKU_HIER.SKU_DESC)) AS SKU
, CONCAT(TRIM(CAST(SKU_HIER.EXT_SUB_CLASS_NBR AS STRING)), ' ', '-', ' ', TRIM(SKU_HIER.SUB_CLASS_DESC)) AS SUB_CLASS
, CONCAT(TRIM(CAST(SKU_HIER.EXT_SUB_SC_NBR AS STRING)), ' ', '-', ' ', TRIM(SKU_HIER.SUB_SC_DESC)) AS SUB_SUB_CLASS
, LOCATION.MKT_NM AS MARKET_NAME
, LOCATION.RGN_NM AS REGION_NAME
, LOCATION.DIV_NM AS DIVISION_NAME
, LOCATION.DIV_NBR AS DIVISION_NUMBER
, LOCATION.RGN_NBR AS REGION_NUMBER
, LOCATION.MKT_NBR AS MARKET_NUMBER
, COMP.STR_COMP_IND AS COMP_IND
, COMP.PY_STR_COMP_IND AS PRIOR_COMP_IND
, CALENDAR.FSCL_WK_DESC AS FISCAL_WEEK
, CALENDAR.FSCL_PRD_DESC AS FISCAL_PERIOD
, CALENDAR.FSCL_WK_END_DT AS END_DATE
, CALENDAR.FSCL_WK_BGN_DT AS BEGIN_DATE
, CALENDAR.FSCL_YR AS FISCAL_YEAR_NBR
, CALENDAR.FSCL_WK_NBR AS WEEK_NUMBER
, CALENDAR.FSCL_YR_WK_KEY_VAL AS FISCAL_KEY
, CALENDAR.LY_FYR_WK_KEY_VAL AS LY_FISCAL_KEY
, SUM(COALESCE(DTL.UNT_SLS,0)) AS UNITS
, SUM(COALESCE(DTL.EXT_RETL_AMT,0) + COALESCE(DTL.TOT_GDISC_DTL_AMT,0))
AS SALES
, SUM(CASE
WHEN 1=1 THEN (COALESCE(DTL.EXT_RETL_AMT,0) + COALESCE(DTL.TOT_GDISC_DTL_AMT,0)) * COMP.STR_COMP_IND
ELSE 0 END) AS COMP_SALES
, SUM(CASE
WHEN 1=1 THEN (COALESCE(DTL.UNT_SLS,0)) * COMP.STR_COMP_IND
ELSE 0 END) AS COMP_UNITS
, SUM(CASE
WHEN 1=1 AND SLS.SLS_DT = DATE_SUB(SLS.SLS_DT, INTERVAL 364 DAY)
THEN (COALESCE(DTL.EXT_RETL_AMT,0) +
COALESCE(DTL.TOT_GDISC_DTL_AMT,0)) * COMP.PY_STR_COMP_IND
ELSE NULL END)
AS LY_COMP_SALES
, SUM(CASE
WHEN 1=1 AND SLS.SLS_DT = DATE_SUB(SLS.SLS_DT, INTERVAL 364 DAY)
THEN (COALESCE(DTL.UNT_SLS,0)) * COMP.PY_STR_COMP_IND
ELSE NULL END)
AS LY_COMP_UNITS
, SUM(CASE
WHEN SLS.SLS_DT = DATE_SUB(SLS.SLS_DT, INTERVAL 371 DAY)
THEN (COALESCE(DTL.EXT_RETL_AMT,0) +
COALESCE(DTL.TOT_GDISC_DTL_AMT,0))
ELSE NULL END)
AS LY_SALES
, SUM(CASE
WHEN SLS.SLS_DT = DATE_SUB(SLS.SLS_DT, INTERVAL 371 DAY)
THEN (COALESCE(DTL.UNT_SLS,0))
ELSE NULL END)
AS LY_UNITS
FROM `pr-edw-views.SLS.POS_SLS_TRANS_DTL` AS SLS
INNER JOIN
UNNEST (SLS.DTL) AS DTL
JOIN `pr-edw-views.SHARED.MVNDR_HIER` AS MVNDR
ON DTL.DERIV_MVNDR.MVNDR_NBR = MVNDR.MVNDR_NBR
JOIN `pr-edw-views.SHARED.SKU_HIER_FD` AS SKU_HIER
ON SKU_HIER.SKU_NBR = DTL.SKU_NBR
AND SKU_HIER.SKU_CRT_DT = DTL.SKU_CRT_DT
JOIN `pr-edw-views.SHARED.LOC_HIER_FD` AS LOCATION
ON LOCATION.LOC_NBR = SLS.STR_NBR
JOIN `pr-edw-views.SHARED.CAL_PRD_HIER_FD` AS CALENDAR
ON CALENDAR.CAL_DT = SLS_DT
JOIN `pr-edw-views.SLS.STR_COMP_DAY` AS COMP
ON COMP.CAL_DT = CALENDAR.CAL_DT
AND COMP.STR_NBR = SLS.STR_NBR
WHERE CALENDAR.FSCL_WK_END_DT BETWEEN '2018-01-29' AND '2019-04-07'
AND SLS.SLS_DT BETWEEN '2018-01-29' AND '2019-04-07'
AND POS_TRANS_TYP_CD in ('S', 'R')
AND SKU_HIER.EXT_CLASS_NBR = '025-004'
AND MVNDR.MVNDR_NBR IN (74798, 60002238, 73059, 206820, 76009, 40263, 12879, 76722, 10830, 206823, 87752, 60052261, 70401, 51415, 51414)
AND SKU_HIER.LATEST_SKU_CRT_DT_FLG = TRUE
GROUP BY
1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20
I'm currently getting null values in my LY_SALES, LY_UNITS, LY_COMP_SALES and LY_COMP_UNITS columns, though I know there should have been locations with sales of those items during the same period the previous year. What I'm trying to get to is having those prior year values show up alongside the current year values. Any help would be hugely appreciated!
Thanks!
Such a condition can never be fulfilled: SLS.SLS_DT = DATE_SUB(SLS.SLS_DT, INTERVAL 371 DAY). Simply because a SLS_DT is never equal to SLS_DT minus 371 days.
You can pre-aggregate the table in a CTE (adding SLS_DT to the GROUP BY columns) and then replace the CASE with a join to the pre-aggregated table. Aim for something like this (notice: no SUM in the CASE):
CASE WHEN AGGSLS.SLS_DT = DATE_SUB(SLS.SLS_DT, INTERVAL 371 DAY)
THEN (COALESCE(AGGSLS.SUM_EXT_RETL_AMT,0) +
COALESCE(AGGSLS.SUM_TOT_GDISC_DTL_AMT,0))
ELSE NULL END
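As a rough sketch of that pattern (the AGGSLS alias and its column names are illustrative, not part of the original schema), the pre-aggregation plus self-join could look like:
WITH AGGSLS AS (
  -- pre-aggregate once per store / SKU / sales date
  SELECT SLS.STR_NBR,
         DTL.SKU_NBR,
         SLS.SLS_DT,
         SUM(COALESCE(DTL.EXT_RETL_AMT, 0)) AS SUM_EXT_RETL_AMT,
         SUM(COALESCE(DTL.TOT_GDISC_DTL_AMT, 0)) AS SUM_TOT_GDISC_DTL_AMT,
         SUM(COALESCE(DTL.UNT_SLS, 0)) AS SUM_UNT_SLS
  FROM `pr-edw-views.SLS.POS_SLS_TRANS_DTL` AS SLS
  INNER JOIN UNNEST(SLS.DTL) AS DTL
  GROUP BY SLS.STR_NBR, DTL.SKU_NBR, SLS.SLS_DT
)
SELECT cur.STR_NBR,
       cur.SKU_NBR,
       cur.SLS_DT,
       cur.SUM_EXT_RETL_AMT + cur.SUM_TOT_GDISC_DTL_AMT AS SALES,
       -- same store and SKU, exactly 371 days earlier
       ly.SUM_EXT_RETL_AMT + ly.SUM_TOT_GDISC_DTL_AMT AS LY_SALES
FROM AGGSLS cur
LEFT JOIN AGGSLS ly
       ON ly.STR_NBR = cur.STR_NBR
      AND ly.SKU_NBR = cur.SKU_NBR
      AND ly.SLS_DT = DATE_SUB(cur.SLS_DT, INTERVAL 371 DAY)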
Two things:
1) WHEN 1=1 can be expressed simply as WHEN TRUE; this way it is easier to move statements around without breaking the AND/OR chaining.
2) To get last year's sales, you can either omit the year from the final query and limit the output with a WHERE clause, or create a smaller table that has this year's sales and last year's sales per week.
In my humble opinion, last year's sales per week number is the best option, as you can use it elsewhere. But it's pretty similar to what you wrote.
It would look something like:
SELECT CALENDAR.FSCL_WK_DESC as week_num,
sum(case when year = year(current_date()) then (COALESCE(DTL.UNT_SLS,0)) * COMP.STR_COMP_IND else 0 end) as this_year,
sum(case when year = year(current_date())-1 then (COALESCE(DTL.UNT_SLS,0)) * COMP.STR_COMP_IND else 0 end) as last_year
And then you join back to the original table using week_num
Hope you find it useful
Cheers!
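A rough sketch of that per-week helper, using the tables and comp indicators already joined in the question (the hard-coded fiscal years 2019 and 2018 are assumptions based on the question's date range):
-- one row per fiscal week, with comp units for this year and last year,
-- which can then be joined back to the main query on week_num
SELECT CALENDAR.FSCL_WK_NBR AS week_num,
       SUM(CASE WHEN CALENDAR.FSCL_YR = 2019
                THEN COALESCE(DTL.UNT_SLS, 0) * COMP.STR_COMP_IND ELSE 0 END) AS this_year,
       SUM(CASE WHEN CALENDAR.FSCL_YR = 2018
                THEN COALESCE(DTL.UNT_SLS, 0) * COMP.PY_STR_COMP_IND ELSE 0 END) AS last_year
FROM `pr-edw-views.SLS.POS_SLS_TRANS_DTL` AS SLS
INNER JOIN UNNEST(SLS.DTL) AS DTL
JOIN `pr-edw-views.SHARED.CAL_PRD_HIER_FD` AS CALENDAR
  ON CALENDAR.CAL_DT = SLS.SLS_DT
JOIN `pr-edw-views.SLS.STR_COMP_DAY` AS COMP
  ON COMP.CAL_DT = CALENDAR.CAL_DT
 AND COMP.STR_NBR = SLS.STR_NBR
GROUP BY CALENDAR.FSCL_WK_NBR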

SQL - Value difference between specific rows

My query is as follows
SELECT
LEFT(TimePeriod,6) Period, -- string field with YYYYMMDD
SUM(Value) Value
FROM
f_Trans_GL
WHERE
Account = 228
GROUP BY
TimePeriod
And it returns
Period Value
---------------
201412 80
201501 20
201502 30
201506 50
201509 100
201509 100
I'd like to know the Value difference between rows where the period is 1 month apart. The calculation being [value period] - [value period-1].
The desired output being;
Period Value Calculated
-----------------------------------
201412 80 80 - null = 80
201501 20 20 - 80 = -60
201502 30 30 - 20 = 10
201506 50 50 - null = 50
201509 100 (100 + 100) - null = 200
This illustrates a second challenge, as the period needs to be evaluated if the year changes (the difference between 201501 and 201412 is one month).
And the third challenge being a duplicate Period (201509), in which case the sum of that period needs to be evaluated.
Any indicators on where to begin, if this is possible, would be great!
Thanks in advance
===============================
After I accepted the answer, I tailored this a little to suit my needs, the end result is:
WITH cte
AS (SELECT
ISNULL(CAST(TransactionID AS nvarchar), '_nullTransactionId_') + ISNULL(Description, '_nullDescription_') + CAST(Account AS nvarchar) + Category + Currency + Entity + Scenario AS UID,
LEFT(TimePeriod, 6) Period,
SUM(Value1) Value1,
CAST(LEFT(TimePeriod, 6) + '01' AS date) ord_date
FROM MyTestTable
GROUP BY LEFT(TimePeriod, 6),
TransactionID,
Description,
Account,
Category,
Currency,
Entity,
Scenario,
TimePeriod)
SELECT
a.UID,
a.Period,
--a.Value1,
ISNULL(a.Value1, 0) - ISNULL(b.Value1, 0) Periodic
FROM cte a
LEFT JOIN cte b
ON a.ord_date = DATEADD(MONTH, 1, b.ord_date)
ORDER BY a.UID
I have to get the new value (Periodic) for each UID. This UID must be determined as done here because the PK on the table won't work.
But the issue is that this will return many more rows than I actually have to begin with in my table. If I don't add a GROUP BY and ORDER by UID (as done above), I can tell that the first result for each combination of UID and Period is actually correct, the subsequent rows for that combination, are not.
I'm not sure where to look for a solution, my guess is that the UID is the issue here, and that it will somehow iterate over the field... any direction appreciated.
As pointed out by others, the first mistake is in the GROUP BY: you need to group by LEFT(TimePeriod, 6) instead of TimePeriod.
For the remaining calculation, try something like this:
;WITH cte
AS (SELECT LEFT(timeperiod, 6) Period,
Sum(value) Value,
Cast(LEFT(timeperiod, 6) + '01' AS DATE) ord_date
FROM f_trans_gl
WHERE account = 228
GROUP BY LEFT(timeperiod, 6))
SELECT a.period,
a.value,
a.value - Isnull(b.value, 0)
FROM cte a
LEFT JOIN cte b
ON a.ord_date = Dateadd(month, 1, b.ord_date)
If you are using SQL Server 2012 or later, this can also be done easily using the LAG analytic function.
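A rough sketch of that LAG variant (it checks that the previous period is exactly one month earlier, so gaps such as 201502 to 201506 are not subtracted):
;WITH cte AS (
    SELECT LEFT(TimePeriod, 6) AS Period,
           SUM(Value) AS Value,
           CAST(LEFT(TimePeriod, 6) + '01' AS date) AS ord_date
    FROM f_Trans_GL
    WHERE Account = 228
    GROUP BY LEFT(TimePeriod, 6)
)
SELECT Period,
       Value,
       -- subtract the previous month's value only when it is exactly one month back
       Value - CASE WHEN LAG(ord_date) OVER (ORDER BY ord_date) = DATEADD(MONTH, -1, ord_date)
                    THEN LAG(Value) OVER (ORDER BY ord_date)
                    ELSE 0 END AS Calculated
FROM cte;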
Using a derived table, you can join the data to itself to find rows that are in the preceding period. I have converted your Period to a Date value so you can use SQL Server's dateadd function to check for rows in the previous month:
;WITH cte AS
(
SELECT
LEFT(TimePeriod,6) Period, -- string field with YYYYMMDD
CAST(LEFT(TimePeriod,6) + '01' AS DATE) PeriodDate,
SUM(Value) Value
FROM f_Trans_GL
WHERE Account = 228
GROUP BY LEFT(TimePeriod,6)
)
SELECT c1.Period,
c1.Value,
c1.Value - ISNULL(c2.Value,0) AS Calculation
FROM cte c1
LEFT JOIN cte c2
ON c1.PeriodDate = DATEADD(m,1,c2.PeriodDate)
Without cte, you can also try something like this
SELECT A.Period, A.Value, A.Value - ISNULL(B.Value, 0) Calculated
FROM
(
SELECT LEFT(TimePeriod,6) Period,
DATEADD(M,-1,(CONVERT(date,LEFT(TimePeriod,6)+'01'))) PeriodDatePrev,SUM(Value) Value
FROM f_Trans_GL
WHERE Account = 228
GROUP BY LEFT(TimePeriod,6)
) AS A
LEFT OUTER JOIN
(
SELECT LEFT(TimePeriod,6) Period,
(CONVERT(date,LEFT(TimePeriod,6)+'01')) PeriodDate,SUM(Value) Value
FROM f_Trans_GL
WHERE Account = 228
GROUP BY LEFT(TimePeriod,6)
) AS B
ON (A.PeriodDatePrev = B.PeriodDate)
ORDER BY 1