Nested CASE statements in SQL

I am running the below SQL and I need to add a case statement for the svcState column.
I have a value defined for each number in that column, which I need to have in my query: for instance, 7 is OK, 4 is DOWN, etc. I tried adding this in the CASE statement as below, but it seems the syntax is incorrect. Any help will be greatly appreciated.
SELECT * FROM
(
SELECT
A.NodeName AS NodeName,
MAX(CASE WHEN Poller_Name='svcServiceName' THEN CAST(Status AS varchar) ELSE '' END) svcServiceName,
MAX(CASE (CASE WHEN Poller_Name='svcState' AND Status ='7' THEN 'OK'
WHEN Poller_Name='svcstate' AND Status ='4' THEN 'OUT OF SERVICE' END)
THEN CAST(Status AS bigint) ELSE '' END) svcState
FROM
(
SELECT
Nodes.Caption AS NodeName, CustomNodePollers_CustomPollers.UniqueName AS Poller_Name, CustomNodePollerStatus_CustomPollerStatus.Status AS Status, CustomNodePollerStatus_CustomPollerStatus.rowid as row, CustomNodePollerStatus_CustomPollerStatus.RawStatus as RawStatus
FROM
((Nodes INNER JOIN CustomPollerAssignment CustomNodePollerAssignment_CustomPollerAssignment ON (Nodes.NodeID = CustomNodePollerAssignment_CustomPollerAssignment.NodeID)) INNER JOIN CustomPollers CustomNodePollers_CustomPollers ON (CustomNodePollerAssignment_CustomPollerAssignment.CustomPollerID = CustomNodePollers_CustomPollers.CustomPollerID)) INNER JOIN CustomPollerStatus CustomNodePollerStatus_CustomPollerStatus ON (CustomNodePollerAssignment_CustomPollerAssignment.CustomPollerAssignmentID = CustomNodePollerStatus_CustomPollerStatus.CustomPollerAssignmentID)
WHERE
(
(CustomNodePollers_CustomPollers.UniqueName = 'svcServiceName') OR
(CustomNodePollers_CustomPollers.UniqueName = 'svcState')
)
AND
(
(CustomNodePollerAssignment_CustomPollerAssignment.InterfaceID = 0)
)
and Nodes.Caption = '101'
)A
GROUP BY NodeName, row
--ORDER BY svcServiceName
) B
Desired Output

MAX(CASE WHEN Poller_Name = 'svcState' THEN (CASE WHEN status = '7' THEN 'OK' ELSE 'DOWN' END) END)
Or...
MAX(CASE WHEN Poller_Name = 'svcState' AND status = '7' THEN 'OK'
WHEN Poller_Name = 'svcState' AND status = '4' THEN 'DOWN' END)
Or...
MAX(CASE WHEN Poller_Name != 'svcState' THEN NULL -- Assumes the poller_name is never NULL
WHEN status = '7' THEN 'OK'
WHEN status = '4' THEN 'DOWN'
END)
Where there is no ELSE specified, it is implicitly ELSE NULL, and NULL values are skipped by the MAX().
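That NULL-skipping behavior is easy to verify; here is a minimal runnable sketch (SQLite via Python, with a made-up poller table standing in for the real schema):

```python
import sqlite3

# Toy stand-in for the poller data: one name/status pair per row.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE poller(NodeName TEXT, Poller_Name TEXT, Status TEXT);
INSERT INTO poller VALUES
  ('101', 'svcServiceName', 'Web'),
  ('101', 'svcState', '7');
""")

# Each CASE has no ELSE, so non-matching rows contribute NULL,
# and MAX() skips those NULLs when pivoting.
row = con.execute("""
SELECT NodeName,
       MAX(CASE WHEN Poller_Name = 'svcServiceName' THEN Status END) AS svcServiceName,
       MAX(CASE WHEN Poller_Name = 'svcState' THEN
             (CASE WHEN Status = '7' THEN 'OK' ELSE 'DOWN' END)
           END) AS svcState
FROM poller
GROUP BY NodeName
""").fetchone()
# row == ('101', 'Web', 'OK')
```

The svcState column pivots to 'OK' even though the svcServiceName row contributes NULL to that aggregate.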

Related

SQL (BigQuery ANSI) Efficiently get last value that was updated before 1st of each month

My database has a changes-history table, which I look up to determine each user's status on the 1st of each month. Since change dates are arbitrary, I get the date of the last update before the 1st of that month (users stay in the same status unless the table records another change), then check what status the user had at that timestamp and regard that as the user's status on the first of the month. So I am doing something like this:
WITH converted_before_time_changed as (
SELECT dch.user_id,
max(CASE WHEN dch.time_changed <= '2022-01-01' THEN dch.time_changed ELSE NULL END) as time_changed_before_jan_1,
max(CASE WHEN dch.time_changed <= '2022-02-01' THEN dch.time_changed ELSE NULL END) as time_changed_before_feb_1,
max(CASE WHEN dch.time_changed <= '2022-03-01' THEN dch.time_changed ELSE NULL END) as time_changed_before_mar_1,
max(CASE WHEN dch.time_changed <= '2022-04-01' THEN dch.time_changed ELSE NULL END) as time_changed_before_apr_1,
max(CASE WHEN dch.time_changed <= '2022-05-01' THEN dch.time_changed ELSE NULL END) as time_changed_before_may_1,
max(CASE WHEN dch.time_changed <= '2022-06-01' THEN dch.time_changed ELSE NULL END) as time_changed_before_jun_1,
max(CASE WHEN dch.time_changed <= '2022-07-01' THEN dch.time_changed ELSE NULL END) as time_changed_before_jul_1,
FROM my_database.defacto_users_changes_history dch
WHERE dch.table = 'all_users' AND dch.column='status'
GROUP BY user_id
),
c2_before_flags as (SELECT
c2b.user_id,
jan_dch.new_value as status_on_jan_1,
feb_dch.new_value as status_on_feb_1,
mar_dch.new_value as status_on_mar_1,
apr_dch.new_value as status_on_apr_1,
may_dch.new_value as status_on_may_1,
jun_dch.new_value as status_on_jun_1,
jul_dch.new_value as status_on_jul_1
FROM
converted_before_time_changed c2b
LEFT JOIN my_database.defacto_users_changes_history jan_dch on jan_dch.time_changed = time_changed_before_jan_1 AND c2b.user_id = jan_dch.user_id
LEFT JOIN my_database.defacto_users_changes_history feb_dch on feb_dch.time_changed = time_changed_before_feb_1 AND c2b.user_id = feb_dch.user_id
LEFT JOIN my_database.defacto_users_changes_history mar_dch on mar_dch.time_changed = time_changed_before_mar_1 AND c2b.user_id = mar_dch.user_id
LEFT JOIN my_database.defacto_users_changes_history apr_dch on apr_dch.time_changed = time_changed_before_apr_1 AND c2b.user_id = apr_dch.user_id
LEFT JOIN my_database.defacto_users_changes_history may_dch on may_dch.time_changed = time_changed_before_may_1 AND c2b.user_id = may_dch.user_id
LEFT JOIN my_database.defacto_users_changes_history jun_dch on jun_dch.time_changed = time_changed_before_jun_1 AND c2b.user_id = jun_dch.user_id
LEFT JOIN my_database.defacto_users_changes_history jul_dch on jul_dch.time_changed = time_changed_before_jul_1 AND c2b.user_id = jul_dch.user_id
)
SELECT * FROM c2_before_flags
This already takes a lot of time, which increases steeply with each month added; plus it's not scalable, as I have to edit the query to add each month. What would be the ideal way of achieving the same result, dynamically and efficiently?
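The join-back pattern described above (each month matched to the last change strictly before its 1st) can be written once against a month list instead of once per month; a minimal runnable sketch of that shape using SQLite via Python, with hypothetical table and column names (BigQuery could drive the same join from a date array or calendar table):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE changes(user_id INT, time_changed TEXT, new_value TEXT);
INSERT INTO changes VALUES
  (1, '2021-12-15', 'active'),
  (1, '2022-02-10', 'churned'),
  (2, '2022-01-20', 'active');
CREATE TABLE months(first_of_month TEXT);
INSERT INTO months VALUES ('2022-01-01'), ('2022-02-01'), ('2022-03-01');
""")

# For each month, join to the change row whose timestamp is the latest
# one before that month's 1st. Adding a month means adding a row to
# months, not editing the query.
rows = con.execute("""
SELECT m.first_of_month, c.user_id, c.new_value
FROM months m
JOIN changes c
  ON c.time_changed = (SELECT MAX(c2.time_changed)
                       FROM changes c2
                       WHERE c2.user_id = c.user_id
                         AND c2.time_changed < m.first_of_month)
ORDER BY c.user_id, m.first_of_month
""").fetchall()
```

User 2 simply has no row for January, since no change precedes 2022-01-01.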

How to join two queries in SQL

I have two SQL queries that I want to join.
First query:
SELECT
rp_date, key_code,
sum(case when rp_id=15102 then rp_value else null end) as users_completed,
sum(case when rp_id=15108 then rp_value else null end) as users_inProgress
FROM
te_rp_pc_rate
WHERE
abc_code = 'A204'
AND organisation_id = '444-4'
AND key_code = '#KL0560'
GROUP BY
rp_date, key_code
ORDER BY
rp_date DESC, key_code
LIMIT 100;
Second query:
SELECT
cr_date,
sum(case when rp_id=23101 then rp_value else null end) AS prim_kfc
FROM
te_emk_rate
WHERE
abc_code = 'A204'
AND organisation_id = '444-4'
AND ref_value = 0
GROUP BY
cr_date
ORDER BY
cr_date DESC
LIMIT 100;
The dates should be used for joining: rp_date for the first query, cr_date for the second.
The goal is to get the columns in one row for the same date. I've tried, but the row counts come out too high.
You can put your queries into subqueries and join them. Something like this:
SELECT *
FROM (
SELECT rp_date, key_code,
sum(case when rp_id=15102 then rp_value else null end) as users_completed,
sum(case when rp_id=15108 then rp_value else null end) as users_inProgress
from te_rp_pc_rate
WHERE abc_code = 'A204'
AND organisation_id = '444-4'
AND key_code = '#KL0560'
group by rp_date, key_code
Order By rp_date DESC, key_code
LIMIT 100
) as q1
LEFT JOIN (
SELECT cr_date, key_code,
sum(case when rp_id=23101 then rp_value else null end) as prim_kfc
from te_emk_rate
WHERE abc_code = 'A204'
AND organisation_id = '444-4'
AND ref_value = 0
group by cr_date, key_code
Order By cr_date DESC, key_code
LIMIT 100
) as q2
on q1.rp_date = q2.cr_date
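As a sanity check of that subquery-join shape, here is a runnable sketch with toy tables and simplified columns (SQLite via Python):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE rate_a(rp_date TEXT, rp_id INT, rp_value INT);
CREATE TABLE rate_b(cr_date TEXT, rp_id INT, rp_value INT);
INSERT INTO rate_a VALUES ('2024-01-01', 15102, 5), ('2024-01-01', 15108, 2);
INSERT INTO rate_b VALUES ('2024-01-01', 23101, 7);
""")

# Each aggregate lives in its own subquery; the outer query joins them on date.
rows = con.execute("""
SELECT q1.rp_date, q1.users_completed, q2.prim_kfc
FROM (SELECT rp_date,
             SUM(CASE WHEN rp_id = 15102 THEN rp_value END) AS users_completed
      FROM rate_a GROUP BY rp_date) q1
LEFT JOIN (SELECT cr_date,
             SUM(CASE WHEN rp_id = 23101 THEN rp_value END) AS prim_kfc
      FROM rate_b GROUP BY cr_date) q2
  ON q1.rp_date = q2.cr_date
""").fetchall()
# rows == [('2024-01-01', 5, 7)]
```

The LEFT JOIN keeps every date from the first aggregate even when the second has no matching date.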

How to return a zero in SQL instead of no rows selected using CASE WHEN

If I run a query that matches nothing, I get no rows returned. I'm looking for a default (0) to be returned in that scenario.
select sum(case when a2.status='SUCCESS' THEN A2.a else 0 end) as success,
sum(case when a2.status='FAILED' THEN A2.a else 0 end) as failed,
sum(case when a2.status='ERROR' THEN A2.a else 0 end) as error
from
(select a.status,count(1) a
from table1 a,table2 b
where a.id=b.id
and a.date=sysdate
group by a.status)a2;
Note: there are no records for sysdate. I need the default value 0 to be returned for each status.
This query should always return one row, even if nothing matches:
select sum(case when a.status = 'SUCCESS' then 1 else 0 end) as success,
sum(case when a.status = 'FAILED' then 1 else 0 end) as failed,
sum(case when a.status = 'ERROR' then 1 else 0 end) as error
from table1 a join
table2 b
on a.id = b.id
where a.date = trunc(sysdate);
Note that I changed the where logic. sysdate (despite its name) has a time component. If date has a time component, you may want:
where a.date >= trunc(sysdate) and a.date < trunc(sysdate + 1)
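The half-open range matters whenever the column carries a time-of-day; a small runnable illustration (SQLite via Python, text timestamps standing in for Oracle DATEs):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(d TEXT)")
# A timestamp with a time component on today's (hypothetical) date.
con.execute("INSERT INTO t VALUES ('2024-05-01 09:30:00')")

# Equality against the bare date misses the row...
eq = con.execute("SELECT COUNT(*) FROM t WHERE d = '2024-05-01'").fetchone()[0]
# ...while the half-open range [day, next day) catches it.
rng = con.execute(
    "SELECT COUNT(*) FROM t WHERE d >= '2024-05-01' AND d < '2024-05-02'"
).fetchone()[0]
# eq == 0, rng == 1
```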
EDIT:
If the filter condition matches no rows, then you will get 0 using:
select count(case when a.status = 'SUCCESS' then 1 end) as success,
count(case when a.status = 'FAILED' then 1 end) as failed,
count(case when a.status = 'ERROR' then 1 end) as error
from table1 a join
table2 b
on a.id = b.id
where a.date = trunc(sysdate);
You could generate missing values:
WITH cte AS (
select a.status,count(1) a
from table1 a --JOIN syntax
join table2 b
on a.id=b.id
WHERE a.date=sysdate -- are you sure you want precision with time?
group by a.status
), placeholder AS (
SELECT *
FROM cte
UNION ALL
SELECT *
FROM (SELECT 'SUCCESS' AS status, 0 AS a FROM dual UNION ALL
SELECT 'ERROR', 0 FROM dual UNION ALL
SELECT 'FAILED', 0 FROM dual) p
WHERE NOT EXISTS (SELECT * FROM cte WHERE cte.status = p.status)
)
SELECT
sum(case when status='SUCCESS' THEN a else 0 end) as success,
sum(case when status='FAILED' THEN a else 0 end) as failed,
sum(case when status='ERROR' THEN a else 0 end) as error
FROM placeholder;
The only suggestion which comes to mind would be to use a left join in your subquery and move the entire WHERE logic to the ON clause:
SELECT
SUM(CASE WHEN a2.status = 'SUCCESS' THEN A2.a ELSE 0 END) AS success,
SUM(CASE WHEN a2.status = 'FAILED' THEN A2.a ELSE 0 END) AS failed,
SUM(CASE WHEN a2.status = 'ERROR' THEN A2.a ELSE 0 END) AS error
FROM
(
SELECT a.status, COUNT(1) a
FROM table1 a
LEFT JOIN table2 b
ON a.id = b.id AND
a.date = SYSDATE
GROUP BY a.status
) a2;
Your current query is using archaic join syntax which makes it hard to see what is actually happening. In particular, it makes it hard to see whether or not you might be discarding information during the join which you wish to retain.
If you use COUNT(), you don't need NVL() or COALESCE() to handle NULLs, unlike with SUM(). COUNT() always returns a row with the value 0 when the argument is NULL or when no rows match. GROUP BY wouldn't be required either.
SELECT COUNT(CASE WHEN a.status = 'SUCCESS' THEN 1 END) AS success,
COUNT(CASE WHEN a.status = 'FAILED' THEN 1 END) AS failed,
COUNT(CASE WHEN a.status = 'ERROR' THEN 1 END) AS error
FROM table1 a
JOIN table2 b ON a.id = b.id
WHERE a.date = TRUNC(SYSDATE);
To see the difference clearly, run these queries and note the results.
select SUM(1) FROM DUAL WHERE 1=0; --NULL
select SUM(NULL) FROM DUAL WHERE 1=0; --NULL
select SUM(NULL) FROM DUAL WHERE 1=1; --NULL
select COUNT(1) FROM DUAL WHERE 1=0; -- 0
select COUNT(NULL) FROM DUAL WHERE 1=0; -- 0
select COUNT(NULL) FROM DUAL WHERE 1=1; -- 0
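The same contrast, reproduced as a runnable sketch (SQLite via Python; SQLite has no DUAL, so a one-row subquery stands in):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# An aggregate with no GROUP BY always returns exactly one row:
# SUM over zero rows is NULL, COUNT is 0, and COALESCE turns NULL into 0.
s, c, fixed = con.execute("""
SELECT SUM(1), COUNT(1), COALESCE(SUM(1), 0)
FROM (SELECT 1 AS x) WHERE 1 = 0
""").fetchone()
# s is None, c == 0, fixed == 0
```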
Aggregation without GROUP BY always returns a row, so your existing query will return NULLs.
To change a NULL to zero simply apply COALESCE:
select
coalesce(sum(case when a2.status='SUCCESS' THEN A2.a end), 0) as success,
coalesce(sum(case when a2.status='FAILED' THEN A2.a end), 0) as failed,
coalesce(sum(case when a2.status='ERROR' THEN A2.a end), 0) as error
from
(
select a.status,count(1) a
from table1 a join table2 b
on a.id=b.id
where a.date=sysdate
group by a.status
) a2;
If I wanted to ensure there is always a result even for a query that wouldn't find any row to return, I would do a left join on dual table (for oracle):
select q.* FROM DUAL d LEFT JOIN ( your_query )q on 1=1
This way you will always get back a row, no matter what!

Even splitting of a group to within 1%

I have been tasked with taking a group of customers and splitting them into two equal groups for each store location. The result set requested would have the two groups for each store location within 1% of each other on customer count, within 1% of each other on order count, and within 1% of each other on amount ordered.
Below is the code I came up with. It works fairly well and most of the time gets the desired result, but sometimes (I think due to an outlier in the group) the split ends up further off than 1%.
If OBJECT_ID('tempdb.dbo.#Orders') IS NOT NULL DROP TABLE #Orders
Select
StoreID
,CustomerID
,Sum(OrderID) as Orders
,Sum(OrderAmount) as AmountSold
Into #Orders
From CustomerOrders
Group by StoreID,CustomerID
IF OBJECT_ID('tempdb.dbo.#OrderRanking') IS NOT NULL DROP TABLE #OrderRanking
Select
O.*
,ROW_NUMBER() Over(Partition by StoreID Order by AmountSold, Orders) as Ranking
Into #OrderRanking
From #Orders as O
Select
R.StoreID
,Count(CustomerID) as CustomerCount
,Sum(R.Orders) as Orders
,Sum(R.AmountSold) as Amountsold
,Case When Ranking%2 = 0 Then 'A' Else 'B' End as 'Grouping'
From #OrderRanking as R
Group by
R.StoreID
,Case When Ranking%2 = 0 Then 'A' Else 'B' End
Is there a better way to split the groups to ensure the 1% variance? Or maybe a way to loop through several different splits until it finds one within 1%? If looping, it would need a fail-safe to prevent an infinite loop in case of an impossible split; something like, after X loops, just take the closest split.
I am using SQL Server 2012 and SSMS 2016. Thanks for any help you can provide.
Edit:
When I tried to convert the code into something not company-specific, I messed it up. I realized that and adjusted the code to show what is truly being sought.
Edit2: I made some progress on my own and wanted to update the question.
So I was working on this some more, and I was able to get it to sort in a random order each time you run the code and to display the variance for each of the groups. Now all I want to add is a way to loop through X number of times and keep the run that has the lowest overall variance. This weekend I might try a few more things, but for now, below is the new code I spoke of.
If OBJECT_ID('tempdb.dbo.#Orders') IS NOT NULL DROP TABLE #Orders
Select
StoreID
,CustomerID
,Sum(OrderID) as Orders
,Sum(OrderAmount) as AmountSold
,Rand() as Random
Into #Orders
From CustomerOrders
Group by StoreID,CustomerID
IF OBJECT_ID('tempdb.dbo.#OrderRanking') IS NOT NULL DROP TABLE #OrderRanking
Select
O.*
,ROW_NUMBER() Over(Partition by StoreID Order by Random) as Ranking
Into #OrderRanking
From #Orders as O
If OBJECT_ID('tempdb.dbo.#Split') IS NOT NULL DROP TABLE #Split
Select
R.StoreID
,Count(CustomerID) as CustomerCount
,Sum(R.Orders) as Orders
,Sum(R.AmountSold) as Amountsold
,Case When Ranking%2 = 0 Then 'A' Else 'B' End as 'Grouping'
Into #Split
From #OrderRanking as R
Group by
R.StoreID
,Case When Ranking%2 = 0 Then 'A' Else 'B' End
Select
S.StoreID
,((Cast(Max(Case When S.[Grouping] = 'A' Then S.CustomerCount Else 0 End) as decimal(18,2))-Cast(Max(Case When S.[Grouping] = 'B' Then S.CustomerCount Else 0 End) as decimal(18,2)))
/ Cast(Max(Case When S.[Grouping] = 'B' Then S.CustomerCount Else 0 End) as decimal(18,2)))*100 as CustomerCountVar
,((Cast(Max(Case When S.[Grouping] = 'A' Then S.Orders Else 0 End) as decimal(18,2))-Cast(Max(Case When S.[Grouping] = 'B' Then S.Orders Else 0 End) as decimal(18,2)))
/ Cast(Max(Case When S.[Grouping] = 'B' Then S.Orders Else 0 End) as decimal(18,2)))*100 as OrderVar
,((Cast(Max(Case When S.[Grouping] = 'A' Then S.Amountsold Else 0 End) as decimal(18,2))-Cast(Max(Case When S.[Grouping] = 'B' Then S.Amountsold Else 0 End) as decimal(18,2)))
/ Cast(Max(Case When S.[Grouping] = 'B' Then S.Amountsold Else 0 End) as decimal(18,2)))*100 as AmountsoldVar
From #Split as S
Group by S.StoreID
So it is truly impossible to always get within 1%, as we all expected, but like I said, we were OK with getting as close as possible after X number of tries. I have figured out how to make this happen. Below is the code I am using, currently set at 10 tries, but that can be changed to whatever number works for the business.
If OBJECT_ID('tempdb.dbo.#TestB') IS NOT NULL DROP TABLE #TestB
Create Table #TestB
(
StoreID int
,CustomerID VarChar(11)
,Orders int
,AmountSold Float
,Random Float
,Ranking bigint
,CombinedVar Decimal(18,2)
)
If OBJECT_ID('tempdb.dbo.#BestPrep') IS NOT NULL DROP TABLE #BestPrep
Create Table #BestPrep
(
StoreID int
,CustomerID VarChar(11)
,Orders int
,AmountSold Float
,Random Float
,Ranking bigint
,CombinedVar Decimal(18,2)
)
Declare #Giveup int
Set #GiveUp = 10
WHILE #GiveUp > 0
BEGIN
If OBJECT_ID('tempdb.dbo.#Orders') IS NOT NULL DROP TABLE #Orders
Select
StoreID
,CustomerID
,Sum(OrderID) as Orders
,Sum(OrderAmount) as AmountSold
,Rand() as Random
Into #Orders
From CustomerOrders
Group by StoreID,CustomerID
IF OBJECT_ID('tempdb.dbo.#OrderRanking') IS NOT NULL DROP TABLE #OrderRanking
Select
O.*
,ROW_NUMBER() Over(Partition by StoreID Order by Random) as Ranking
Into #OrderRanking
From #Orders as O
If OBJECT_ID('tempdb.dbo.#Split') IS NOT NULL DROP TABLE #Split
Select
R.StoreID
,Count(CustomerID) as CustomerCount
,Sum(R.Orders) as Orders
,Sum(R.AmountSold) as Amountsold
,Case When Ranking%2 = 0 Then 'A' Else 'B' End as 'Grouping'
Into #Split
From #OrderRanking as R
Group by
R.StoreID
,Case When Ranking%2 = 0 Then 'A' Else 'B' End
If OBJECT_ID('Tempdb.dbo.#Var') IS NOT NULL DROP TABLE #Var
Select
S.StoreID
,ABS(((Cast(Max(Case When S.[Grouping] = 'A' Then S.CustomerCount Else 0 End) as decimal(18,2))-Cast(Max(Case When S.[Grouping] = 'B' Then S.CustomerCount Else 0 End) as decimal(18,2)))
/ Cast(Max(Case When S.[Grouping] = 'B' Then S.CustomerCount Else 0 End) as decimal(18,2)))*100) as CustomerCountVar
,ABS(((Cast(Max(Case When S.[Grouping] = 'A' Then S.Orders Else 0 End) as decimal(18,2))-Cast(Max(Case When S.[Grouping] = 'B' Then S.Orders Else 0 End) as decimal(18,2)))
/ Cast(Max(Case When S.[Grouping] = 'B' Then S.Orders Else 0 End) as decimal(18,2)))*100) as OrderVar
,ABS(((Cast(Max(Case When S.[Grouping] = 'A' Then S.Amountsold Else 0 End) as decimal(18,2))-Cast(Max(Case When S.[Grouping] = 'B' Then S.Amountsold Else 0 End) as decimal(18,2)))
/ Cast(Max(Case When S.[Grouping] = 'B' Then S.Amountsold Else 0 End) as decimal(18,2)))*100) as AmountsoldVar
,ABS(((Cast(Max(Case When S.[Grouping] = 'A' Then S.Orders Else 0 End) as decimal(18,2))-Cast(Max(Case When S.[Grouping] = 'B' Then S.Orders Else 0 End) as decimal(18,2)))
/ Cast(Max(Case When S.[Grouping] = 'B' Then S.Orders Else 0 End) as decimal(18,2)))*100)
+
ABS(((Cast(Max(Case When S.[Grouping] = 'A' Then S.Amountsold Else 0 End) as decimal(18,2))-Cast(Max(Case When S.[Grouping] = 'B' Then S.Amountsold Else 0 End) as decimal(18,2)))
/ Cast(Max(Case When S.[Grouping] = 'B' Then S.Amountsold Else 0 End) as decimal(18,2)))*100) as CombinedVar
INTO #Var
From #Split as S
Group by S.StoreID
If Exists (Select * From #Var Where (OrderVar < 1 and AmountSoldVar <1) Or CombinedVar < 2)
If Object_ID('tempdb.dbo.#TestA') IS NOT NULL DROP TABLE #TestA
Select
A.StoreID
,A.CustomerID
,A.Orders
,A.AmountSold
,A.Random
,A.Ranking
,V.CombinedVar
Into #TestA
From #OrderRanking as A
Join #var as V
on A.StoreID = V.StoreID
Where A.StoreID in
(Select StoreID From #Var Where (OrderVar < 1 and AmountSoldVar <1) Or CombinedVar < 2)
Insert Into #TestB
Select
A.StoreID
,A.CustomerID
,A.Orders
,A.AmountSold
,A.Random
,A.Ranking
,A.CombinedVar
From #TestA as A
Left Join #TestB as B
on A.CustomerID = B.CustomerID
Where
B.CustomerID is null
Insert Into #BestPrep
Select
A.StoreID
,A.CustomerID
,A.Orders
,A.AmountSold
,A.Random
,A.Ranking
,V.CombinedVar
From #OrderRanking as A
Join #Var as V
on A.StoreID = V.StoreID
Left Join #BestPrep as B
on A.CustomerID = B.CustomerID
and V.CombinedVar > B.CombinedVar
Where
B.CustomerID is null
Set #Giveup = #Giveup-1
END
If Object_ID('tempdb.dbo.#bestPrep2') IS NOT NULL DROP TABLE #bestPrep2
Select
A.StoreID
,Min(CombinedVar) as CombinedVar
Into #BestPrep2
From #BestPrep as A
Group by
A.StoreID
Select A.*
From #BestPrep as A
Join #BestPrep2 as B
on A.StoreID = B.StoreID
and A.CombinedVar = B.CombinedVar
Union
Select * From #TestB
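For comparison with the random-retry loop above, a greedy heuristic (sort customers by amount descending and always assign to the currently lighter group) often gets close without looping; a minimal Python sketch with made-up per-store data:

```python
# Greedy split: biggest customers placed first, each into whichever
# group currently has the smaller AmountSold total.
customers = [(1, 120.0), (2, 110.0), (3, 95.0), (4, 60.0), (5, 40.0), (6, 35.0)]

groups = {"A": [], "B": []}
totals = {"A": 0.0, "B": 0.0}
for cust_id, amount in sorted(customers, key=lambda c: -c[1]):
    g = "A" if totals["A"] <= totals["B"] else "B"
    groups[g].append(cust_id)
    totals[g] += amount
# totals -> {'A': 220.0, 'B': 240.0}
```

The same idea could be expressed in T-SQL as an iterative assignment per store, though set-based SQL has no direct equivalent of this running comparison.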

SQL Query to Calculate two Amounts on the same row

Probably a very simple answer, but I'm new to T-SQL, so I could do with some help!
I need a third column that works out TotalInc minus TotEx to give me a TotalDisposableIncome.
Here is my SQL:
--This gives me the Total Income and Total Expenditure on the same row
SELECT
SUM(CASE WHEN Type = '1' THEN Amount ELSE 0 END) as TotalInc,
SUM(CASE WHEN Type = '2' THEN Amount ELSE 0 END) as TotEx
FROM ClaimFinancials
Thanks!
You could use a Common Table Expression (CTE):
WITH T1 AS
(
SELECT
SUM(CASE WHEN Type = '1' THEN Amount ELSE 0 END) as TotalInc,
SUM(CASE WHEN Type = '2' THEN Amount ELSE 0 END) as TotEx
FROM ClaimFinancials
)
SELECT TotalInc, TotEx, TotalInc - TotEx AS TotalDisposableIncome
FROM T1
Or an ordinary subquery:
SELECT TotalInc, TotEx, TotalInc - TotEx AS TotalDisposableIncome
FROM
(
SELECT
SUM(CASE WHEN Type = '1' THEN Amount ELSE 0 END) as TotalInc,
SUM(CASE WHEN Type = '2' THEN Amount ELSE 0 END) as TotEx
FROM ClaimFinancials
) T1
You can't reference your column aliases elsewhere in your SELECT clause. Here is one alternative.
SELECT TotalInc, TotEx, TotalInc - TotEx as TotalDisposable
FROM (
SELECT
SUM(CASE WHEN Type = '1' THEN Amount ELSE 0 END) as TotalInc,
SUM(CASE WHEN Type = '2' THEN Amount ELSE 0 END) as TotEx
FROM ClaimFinancials
) AS Total
There's more than one way to do it.
WITH T1 AS
(
SELECT
Type,
SUM(Amount) as Total
FROM ClaimFinancials
GROUP BY Type
)
SELECT
Inc.Total as TotalInc,
Ex.Total as TotEx,
Inc.Total - Ex.Total AS TotalDisposableIncome
FROM T1 Inc, T1 Ex
WHERE Inc.Type = '1' AND Ex.Type = '2'
SELECT
SUM(CASE WHEN Type = '1' THEN Amount ELSE 0 END) as TotalInc,
SUM(CASE WHEN Type = '2' THEN Amount ELSE 0 END) as TotEx,
SUM(CASE
WHEN Type = '1' THEN Amount
WHEN Type = '2' Then -Amount
ELSE 0
END) AS TotalDisposableIncome
FROM ClaimFinancials
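All of these variants should produce the same numbers; a quick runnable check of the CTE form (SQLite via Python, with made-up amounts):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE ClaimFinancials(Type TEXT, Amount REAL);
INSERT INTO ClaimFinancials VALUES ('1', 1000), ('1', 500), ('2', 400);
""")

# The CTE computes both totals once; the outer SELECT can then
# reference the aliases, which a single SELECT clause cannot do.
row = con.execute("""
WITH T1 AS (
  SELECT SUM(CASE WHEN Type = '1' THEN Amount ELSE 0 END) AS TotalInc,
         SUM(CASE WHEN Type = '2' THEN Amount ELSE 0 END) AS TotEx
  FROM ClaimFinancials
)
SELECT TotalInc, TotEx, TotalInc - TotEx AS TotalDisposableIncome
FROM T1
""").fetchone()
# row == (1500.0, 400.0, 1100.0)
```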