I'm looking for advice on the best way to build a compound interest module in SQL Server. Basic setup:
Table: Transactions {TransID, MemberID, Trans_Date, Trans_Code, Trans_Value}
Table: Interest {IntID, Int_Eff_Date, Int_Rate}
In the interest table, there may be different rates, each with an effective date; the dates can never overlap. For example:
Int_Eff_Date Int_Rate
01/01/2016 7%
01/10/2016 7.5%
10/01/2017 8%
I want to calculate the interest based on the transaction date and transaction value, where the correct interest rate is applied relative to transaction date.
So if Table transaction had:
TransID MemberID Trans_Date Trans_Value
1 1 15/04/2016 150
2 1 18/10/2016 200
3 1 24/11/2016 200
4 1 15/01/2017 250
For TransID 1 it would use 7% from 15/04/2016 until 30/09/2016 (168 days), then 7.5% from 01/10/2016 to 09/01/2017, and then 8% from 10/01/2017 to the calculation date (an input parameter).
It would apply the same methodology to all transactions, add them up, and display the total interest value.
I'm not sure if I should use cursors, a UDF, etc.
This should provide an outline of what you're trying to do.
--Build Test Data
CREATE TABLE #Rates(Int_Eff_Date DATE
, Int_Rate FLOAT)
CREATE TABLE #Transactions(TransID INT
,MemberID INT
,Trans_Date DATE
,Trans_Value INT)
INSERT INTO #Rates
VALUES ('20160101',7)
,('20161001',7.5)
,('20170110',8)
INSERT INTO #Transactions
VALUES
(1,1,'20160415',150)
,(2,1,'20161018',200)
,(3,1,'20161124',200)
,(4,1,'20170115',250)
;WITH cte_Date_Rates
AS
(
SELECT
S.Int_Eff_Date
,ISNULL(E.Int_Eff_Date,'20490101') AS "Expire"
,S.Int_Rate
FROM
#Rates S
OUTER APPLY (SELECT TOP 1 Int_Eff_Date
FROM #Rates E
WHERE E.Int_Eff_Date > S.Int_Eff_Date
ORDER BY E.Int_Eff_Date) E
)
SELECT
T.*
,R.Int_Rate
FROM
#Transactions T
LEFT JOIN cte_Date_Rates R
ON
T.Trans_Date >= R.Int_Eff_Date
AND
T.Trans_Date < R.Expire
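To sanity-check the day-count logic the question describes, here is a minimal sketch using Python's `sqlite3` (assuming simple, non-compounded daily interest on an ACT/365 basis; adjust the day-count convention and compounding to your actual rules):

```python
import sqlite3
from datetime import date

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Rates(Int_Eff_Date TEXT, Int_Rate REAL)")
conn.executemany("INSERT INTO Rates VALUES (?,?)",
                 [("2016-01-01", 7.0), ("2016-10-01", 7.5), ("2017-01-10", 8.0)])

# Same idea as the OUTER APPLY above: pair each rate with the next effective date.
segments = conn.execute("""
    SELECT Int_Eff_Date,
           COALESCE((SELECT MIN(e.Int_Eff_Date) FROM Rates e
                     WHERE e.Int_Eff_Date > s.Int_Eff_Date), '2049-01-01') AS Expire,
           Int_Rate
    FROM Rates s
    ORDER BY Int_Eff_Date
""").fetchall()

def interest(trans_date, value, calc_date):
    """Sum daily interest over each rate segment the transaction overlaps."""
    total = 0.0
    for eff, exp, rate in segments:
        # Overlap of [trans_date, calc_date) with [eff, exp)
        start = max(trans_date, date.fromisoformat(eff))
        end = min(calc_date, date.fromisoformat(exp))
        days = (end - start).days
        if days > 0:
            total += value * (rate / 100.0) * days / 365.0
    return total

# TransID 1: 150 deposited 2016-04-15, calculated up to 2017-02-01
amt = interest(date(2016, 4, 15), 150, date(2017, 2, 1))
```

The `calc_date` of 2017-02-01 is just an illustrative input parameter; the same function can be applied per transaction and summed per member.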
Please help me figure out how to find the start date of the current debt and the number of days since its inception. I have this table:
Date        Customer  Deal    Sum
20.11.2009  220000    222221   25000
27.11.2009  220001    222221  -30000
20.12.2009  220000    222221   20000
31.12.2009  220001    222221  -10000
12.12.2009  111110    111111   12000
25.12.2009  111110    111111    5000
12.01.2010  111110    111111  -10100
12.12.2009  111110    122222   10000
29.12.2009  111110    122222  -10000
On the loan, payments can be made by co-borrowers. If a client with a loan misses the next scheduled payment, he has a debt. In this case, a corresponding record appears in the table, where Sum is the unpaid amount (with a positive sign). If the client then makes a payment (the full amount or part of it), a new record appears, where Sum is the amount paid (with a “-” sign). Note that the client's payment does not necessarily extinguish the accumulated debt completely; it may cover only part of it.
DROP TABLE IF EXISTS #PDCL
set dateformat dmy
CREATE TABLE #PDCL
(
Payment_dt date,
Customer int,
Deal int,
Currency varchar(5),
Sum_payment int
)
INSERT INTO #PDCL VALUES ('12.12.2009', 111110, 111111, 'RUR', 12000)
INSERT INTO #PDCL VALUES ('25.12.2009', 111110, 111111, 'RUR', 5000)
INSERT INTO #PDCL VALUES ('12.12.2009', 111110, 122222, 'RUR', 10000)
INSERT INTO #PDCL VALUES ('12.01.2010', 111110, 111111, 'RUR', -10100)
INSERT INTO #PDCL VALUES ('20.11.2009', 220000, 222221, 'RUR', 25000)
INSERT INTO #PDCL VALUES ('20.12.2009', 220000, 222221, 'RUR', 20000)
INSERT INTO #PDCL VALUES ('31.12.2009', 220001, 222221, 'RUR', -10000)
INSERT INTO #PDCL VALUES ('29.12.2009', 111110, 122222, 'RUR', -10000)
INSERT INTO #PDCL VALUES ('27.11.2009', 220001, 222221, 'RUR', -30000)
--Start date of the current debt
SELECT Deal
, MIN(Payment_dt) AS Start_date_current_debt
FROM #PDCL
WHERE Sum_payment > 0
GROUP BY Deal
--Number of days of current debt
SELECT Deal
, DATEDIFF(d, MIN(Payment_dt), MAX(Payment_dt)) AS Num_days_current_debt
FROM #PDCL
GROUP BY Deal
The dataset has many different Customers and Deals. I gave an illustrative example, which is what prompted the question. In it, the client was in debt twice.
My desired answer:
Deal    Start_date_current_debt
111111  2009-12-12
122222  2009-12-12
222221  2009-12-20

Deal    Num_days_current_debt
111111  todate - 2009-12-12
122222  17
222221  todate - 2009-12-20
After reading the comments on this answer, here is an approach that solves the question asked. I have taken a slightly verbose approach so that you can follow the logic, but feel free to collapse some of the common table expressions to make it shorter.
We can compute the running SUM for each deal and number the rows for each deal. We can then compare the SUM for the current row of a deal to the SUM of the previous row using LAG. When the SUM goes from negative to positive, or the SUM is positive and the previous SUM is NULL, we have found a debt crossing. I multiply the row number by -1 in these situations so that the MIN row number for each deal identifies the most recent date when money started being owed. As I mentioned, this can be shortened, but I left it a bit verbose so you can follow the logic:
;WITH sums AS (
SELECT Deal,
Payment_Dt,
SUM(Sum_payment) OVER (PARTITION BY Deal ORDER BY Payment_dt) AS [currentSum],
ROW_NUMBER() OVER (PARTITION BY Deal ORDER BY Payment_dt) AS [num]
FROM #PDCL
), sumsWithLag AS (
SELECT Deal, Payment_dt,
currentSum,
LAG(currentSum) OVER (PARTITION BY Deal ORDER BY Payment_dt) AS [prevSum],
num
FROM sums
), markedCrossings AS (
SELECT Deal, Payment_dt,
CASE WHEN currentSum > 0 AND (prevSum IS NULL OR prevSum < 0) THEN -1 ELSE 1 END * num AS num
FROM sumsWithLag
), debtCrossings AS (
SELECT Deal, MIN(num) AS num
FROM markedCrossings
GROUP BY Deal
)
SELECT s.Deal, s.Payment_dt AS Start_date_current_debt
FROM debtCrossings AS c
INNER JOIN sums AS s ON s.Deal = c.Deal and s.num = ABS(c.num)
And it gives this result:
Deal    Start_date_current_debt
111111  2009-12-12
122222  2009-12-12
222221  2009-12-20
Those are the expected values. At this point, we can use the same common table expressions to answer the number of days in debt. We know the start date, so we just have to see if the deal has a positive amount at the most recent sum.
;WITH sums AS (
SELECT Deal,
Payment_Dt,
SUM(Sum_payment) OVER (PARTITION BY Deal ORDER BY Payment_dt) AS [currentSum],
ROW_NUMBER() OVER (PARTITION BY Deal ORDER BY Payment_dt) AS [num]
FROM #PDCL
), sumsWithLag AS (
SELECT Deal, Payment_dt,
currentSum,
LAG(currentSum) OVER (PARTITION BY Deal ORDER BY Payment_dt) AS [prevSum],
num
FROM sums
), markedCrossings AS (
SELECT Deal, Payment_dt,
CASE WHEN currentSum > 0 AND (prevSum IS NULL OR prevSum < 0) THEN -1 ELSE 1 END * num AS num
FROM sumsWithLag
), debtCrossings AS (
SELECT Deal, MIN(num) AS num
FROM markedCrossings
GROUP BY Deal
), startDates AS (
SELECT s.Deal, s.Payment_dt AS Start_date_current_debt
FROM debtCrossings AS c
INNER JOIN sums AS s ON s.Deal = c.Deal and s.num = ABS(c.num)
), balances AS (
SELECT Deal, SUM(Sum_payment) AS balance, MAX(Payment_dt) AS Payment_dt
FROM #PDCL
GROUP BY Deal
)
SELECT s.Deal,
DATEDIFF(day, s.Start_date_current_debt, CASE WHEN b.balance > 0 THEN GETDATE() ELSE b.Payment_dt END) AS Num_days_current_debt
FROM startDates AS s
INNER JOIN balances AS b ON s.Deal = b.Deal;
And the result is:
Deal    Num_days_current_debt
111111  4274
122222  17
222221  4266
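The crossing logic can be cross-checked outside the database. Here is a plain-Python sketch of the same rule (running balance per deal; a crossing is where the balance goes above zero from a non-positive or empty history, and the latest crossing wins), using the question's data:

```python
from collections import defaultdict
from datetime import date

# (deal, date, amount): positive = missed payment, negative = repayment
payments = [
    ("111111", date(2009, 12, 12),  12000),
    ("111111", date(2009, 12, 25),   5000),
    ("111111", date(2010, 1, 12),  -10100),
    ("122222", date(2009, 12, 12),  10000),
    ("122222", date(2009, 12, 29), -10000),
    ("222221", date(2009, 11, 20),  25000),
    ("222221", date(2009, 11, 27), -30000),
    ("222221", date(2009, 12, 20),  20000),
    ("222221", date(2009, 12, 31), -10000),
]

def debt_start(rows):
    """Date of the most recent crossing where the running balance went
    from <= 0 (or no history) to > 0 -- mirrors the LAG/CASE logic."""
    rows = sorted(rows, key=lambda r: r[0])
    balance, prev, start = 0, None, None
    for d, amount in rows:
        balance += amount
        if balance > 0 and (prev is None or prev < 0):
            start = d        # keep overwriting: the last crossing wins
        prev = balance
    return start

by_deal = defaultdict(list)
for deal, d, amount in payments:
    by_deal[deal].append((d, amount))
starts = {deal: debt_start(rows) for deal, rows in by_deal.items()}
```

For deal 222221 this correctly skips the first (fully repaid) debt episode and reports the start of the current one.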
I have a database with transactions of accounts. The relevant columns for me are: Account, Amount, Date, Description and Transaction_Code.
My goal is to extract rows for a given account which meet my trigger points.
The trigger points which I've succeeded in writing are Amount greater than 200 and Transaction_Code in ('1','2','3').
The only trigger point I'm struggling with is: the account has no other transactions with this counterparty in the last 21 days. I've only succeeded in taking the range of dates I need.
Example for the Dataset:
Account  Amount  Date        Description  Transaction_Code
555      280     2019-10-06  amt_fee      1
555      700     2019-09-20  refund       2
555      250     2019-10-01  amt_fee      1
A snippet of the SQL I wrote for the example, for better understanding:
select Account, Amount, Date, Description
from MyTable
where Account = '555' and Date between '2019-09-15' and '2019-10-06'
and Amount >= 200
and Transaction_Code in ('1','2','3')
The problem I have is how to implement the condition ''The account has no other transactions with this counterparty in the last 21 days.'' Counterparty refers to the Description or Transaction_Code columns.
How should I write that condition for my true, larger dataset? With GROUP BY and COUNT(DISTINCT)?
You could add a not exists condition with a correlated subquery that ensures that the same Account did not have a transaction with the same Description or Transaction_Code within the last 21 days.
select Account, Amount, Date, Description
from MyTable t
where
Account = '555' and Date between '2019-09-15' and '2019-10-06'
and Amount >= 200
and Transaction_Code in (1, 2, 3)
and not exists (
select 1
from MyTable t1
where
t1.Account = t.Account
and (t1.Description = t.Description or t1.Transaction_Code = t.Transaction_Code)
and t1.date < t.date
and t1.date >= dateadd(day, -21, t.date)
)
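Here is a runnable sketch of this NOT EXISTS filter using Python's `sqlite3` (dates kept as ISO-8601 text so they compare correctly as strings; SQLite's `date(..., '-21 days')` stands in for T-SQL's `DATEADD`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE MyTable(
    Account TEXT, Amount INT, Date TEXT, Description TEXT, Transaction_Code TEXT)""")
conn.executemany("INSERT INTO MyTable VALUES (?,?,?,?,?)", [
    ("555", 280, "2019-10-06", "amt_fee", "1"),
    ("555", 700, "2019-09-20", "refund",  "2"),
    ("555", 250, "2019-10-01", "amt_fee", "1"),
])

rows = conn.execute("""
    SELECT Account, Amount, Date, Description
    FROM MyTable t
    WHERE Account = '555'
      AND Date BETWEEN '2019-09-15' AND '2019-10-06'
      AND Amount >= 200
      AND Transaction_Code IN ('1','2','3')
      AND NOT EXISTS (
          -- same account, same counterparty, strictly earlier but
          -- within the preceding 21 days
          SELECT 1 FROM MyTable t1
          WHERE t1.Account = t.Account
            AND (t1.Description = t.Description
                 OR t1.Transaction_Code = t.Transaction_Code)
            AND t1.Date < t.Date
            AND t1.Date >= date(t.Date, '-21 days')
      )
    ORDER BY Date
""").fetchall()
```

The 2019-10-06 `amt_fee` row is excluded because the same counterparty appears five days earlier; the other two rows survive.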
Situation: I have the exchange rate table like this:
date_from cur1 coef1 cur2 coef2
2017-01-01 CZK 27.000000000 EUR 1.000000000
2017-07-03 EUR 1.000000000 CZK 26.150000000
2017-07-03 JPY 100.000000000 CZK 19.500000000
2017-10-05 JPY 1000.0000000 EUR 7.54761885
Notice that sometimes cur1 and cur2 can be switched for the same pair. The table also contains other currency pairs. The reason for the two coefficients is that the table is filled manually (to keep the numbers more comprehensible to a human brain -- see the JPY conversion).
Then I have another table with invoice rows where the price is expressed in the local currency (that is, each row has it own currency unit near the price value).
I need to do some SELECT over the invoice-row table and transform the price to be shown in the chosen target currency (say, everything in Euro). How to do that efficiently?
My first attempts: I know the target currency in advance. That means it is probably better to build a temporary table with a simplified structure that can be joined easily. Let the target currency be EUR. Then only a subset of the above table will be used, some pairs will be switched, and the two coefficients will be converted to a single rate. The target currency will be fixed or implicit. From the above table, the JPY-CZK pair would not be part of the table:
date_from cur rate
2017-01-01 CZK 27.000000000
2017-07-03 CZK 26.150000000
2017-10-05 JPY 0.00754761885
To join the rows with another table I need not only date_from but also date_to. To be able to use BETWEEN in the join condition, I would like date_to to be the day just before the next period starts. Here, for CZK, I need a record like:
date_from date_to cur rate
2017-01-01 2017-07-02 CZK 27.000000000
Notice the one day off in the date_to from the next date_from.
However, I also need to automatically add boundary values for the dates before and after the explicitly given intervals. I need something like this:
date_from date_to cur rate
1900-01-01 2016-12-31 CZK 27.000000000 <-- guessed rate from the next; fixed start at the year 1900
2017-01-01 2017-07-02 CZK 27.000000000
2017-07-03 3000-01-01 CZK 26.150000000 <-- explicit rate; boundary year to 3000
Plus similarly for the other currencies in the same temporary table...
1900-01-01 2017-10-04 JPY 0.00754761885 <-- rate guessed from the next; fictional date_from
2017-10-05 3000-01-01 JPY 0.00754761885 <-- explicit rate; fictional date_to
How can I efficiently construct such temporary table?
Do you have any other suggestions related to the problem?
Update: I have posted my solution to Code Review https://codereview.stackexchange.com/q/177517/16189 Please, have a look to find the flaws.
Suppose the exchange rate table is as follows, with exchange rates to your target currency:
CREATE TABLE currency_rate (
currency_id INT NOT NULL,
update_date DATE NOT NULL,
rate DECIMAL(18,6) NOT NULL,
CONSTRAINT PK_currency_rate PRIMARY KEY(currency_id,update_date)
);
You can use a correlated subquery to link invoices to the exchange rate:
SELECT
i.*,
cr.rate
FROM
invoice AS i
INNER JOIN currency_rate AS cr ON
cr.currency_id=i.currency_id AND
cr.update_date=(
SELECT
MAX(cr_i.update_date)
FROM
currency_rate AS cr_i
WHERE
cr_i.currency_id=i.currency_id AND
cr_i.update_date<=i.invoice_date
);
If you do have a lot of invoices and a lot of rates, a solution based on a temporary table might improve performance. Best to measure which one wins. Based on the same currency_rate table definition:
CREATE TABLE #cr (
date_from DATETIME,
date_to DATETIME,
currency_id INT,
rate DECIMAL(18,6)
);
CREATE CLUSTERED INDEX IX_tcr_curr_dt ON #cr(currency_id,date_from);
INSERT INTO #cr (
date_from,
date_to,
currency_id,
rate
)
SELECT
date_from=CASE WHEN LAG(update_date) OVER (PARTITION BY currency_id ORDER BY update_date) IS NULL THEN '17530101' ELSE update_date END,
date_to=ISNULL(DATEADD(DAY,-1,LEAD(update_date) OVER (PARTITION BY currency_id ORDER BY update_date)),'99991231'),
currency_id,
rate
FROM
currency_rate AS cr;
SELECT
i.*,
c.rate
FROM
invoice AS i
INNER JOIN #cr AS c ON
c.currency_id=i.currency_id AND
c.date_from<=i.invoice_date AND
c.date_to>=i.invoice_date;
DROP TABLE #cr;
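The interval construction can be checked end to end with SQLite's window functions (available in SQLite 3.25+; the version bundled with Python is usually recent enough). A minimal sketch, assuming each rate is effective from its own update_date, with the first row back-filled and the last row open-ended, as in the question's desired table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE currency_rate(currency_id INT, update_date TEXT, rate REAL)")
conn.executemany("INSERT INTO currency_rate VALUES (?,?,?)", [
    (1, "2017-01-01", 27.0),    # hypothetical CZK rows from the question
    (1, "2017-07-03", 26.15),
])

# A rate applies from its own update_date until the day before the next one;
# the first interval is back-filled to 1900, the last is open-ended.
intervals = conn.execute("""
    SELECT
        CASE WHEN LAG(update_date) OVER
                  (PARTITION BY currency_id ORDER BY update_date) IS NULL
             THEN '1900-01-01' ELSE update_date END AS date_from,
        COALESCE(date(LEAD(update_date) OVER
                      (PARTITION BY currency_id ORDER BY update_date), '-1 day'),
                 '3000-01-01') AS date_to,
        rate
    FROM currency_rate
    ORDER BY date_from
""").fetchall()
```

This reproduces exactly the CZK rows the asker sketched: 1900-01-01..2017-07-02 at 27.0 and 2017-07-03..3000-01-01 at 26.15.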
I don't think that you will need a temporary table.
You first need to get the rates that have the highest date_from value for every invoice. That is simply a MAX on date_from with the limitation of the rate's date being smaller than the invoice's date. For the example I used CZK as the currency to convert to:
SELECT
invoices.id
, invoices.cur
, MAX(date_from) AS current
FROM invoices
JOIN rates
ON rates.cur1 = invoices.cur
AND invoices.date > rates.date_from
AND rates.cur2 = 'CZK'
GROUP BY invoices.id, invoices.cur, invoices.date
Because of the limitations on columns available for SELECT caused by the GROUP BY we now have to join the two tables again and then join it with our effort of getting the current rate:
SELECT
invoices.id
, invoices.cur
, invoices.amount
, 'CZK' AS otherCurrency
, invoices.amount / rates.coef1 * rates.coef2 AS converted
FROM invoices
JOIN
(SELECT
invoices.id
, invoices.cur
, MAX(date_from) AS current
FROM invoices
JOIN rates
ON rates.cur1 = invoices.cur
AND invoices.date > rates.date_from
AND rates.cur2 = 'CZK'
GROUP BY invoices.id, invoices.cur, invoices.date) AS current_rate
ON invoices.id = current_rate.id
JOIN rates
ON current_rate.current = rates.date_from
AND rates.cur1 = invoices.cur
AND rates.cur2 = 'CZK'
I prepared a fiddle to show the SQL in action.
I hope someone can help with this issue I have, which is I am trying to work out a weekly average from the following data example:
Practice ID Cost FiscalWeek
1 10.00 1
1 33.00 2
1 55.00 3
1 18.00 4
1 36.00 5
1 24.00 6
13 56.00 1
13 10.00 2
13 24.00 3
13 30.00 4
13 20.00 5
13 18.00 6
What I want is to group by Practice ID but work out the average for each practice (there are over 500 of these, not just those above), and work this out for each week. So, for example, at Week 1 there will be no average, but Week 2 will be the average of Weeks 1 and 2, Week 3 will be the average of Weeks 1, 2 and 3, and so on. I then need to show this by Practice ID and for each fiscal week.
At the moment I have some code that is not pretty, and there has to be an easier way. I pass all the data into a table variable, then in a CTE I use CASE statements to split out each individual week, like:
CASE WHEN fiscalweek = 1 THEN cost ELSE 0 END AS [1],
CASE WHEN fiscalweek = 2 THEN cost ELSE 0 END AS [2],
CASE WHEN fiscalweek = 3 THEN cost ELSE 0 END AS [3]
This would then bring back the week 1 cost and so on, each into its own column (e.g. [1], [2], [3]). I've then used a second CTE to sum the columns for each week; for example, to work out week 6 I would use this code:
sum([1]) as 'Average Wk 1',
sum([1]+[2])/2 as 'Average Wk 2',
sum([1]+[2]+[3])/3 as 'Average Wk 3',
sum([1]+[2]+[3]+[4])/4 as 'Average Wk 4',
sum([1]+[2]+[3]+[4]+[5])/5 as 'Average Wk 5',
sum([1]+[2]+[3]+[4]+[5]+[6])/6 as 'Average Wk 6'
I've thought about various ways of working out this average accurately in T-SQL so I can eventually drop it into SSRS. I've considered a WHILE loop or a cursor, but I'm failing to see an easy way of doing this.
You are looking for the cumulative average of the averages. In databases that support window/analytic functions, you can do:
select fiscalweek, avg(cost) as avgcost,
avg(avg(cost)) over (order by fiscalweek) as cumavg
from practices p
group by fiscalweek
order by 1;
If you don't have window functions, then you need to use some form of correlated subquery or join:
select p1.fiscalweek, avg(p2.avgcost)
from (select fiscalweek, avg(cost) as avgcost
      from practices p
      group by fiscalweek
     ) p1 join
     (select fiscalweek, avg(cost) as avgcost
      from practices p
      group by fiscalweek
     ) p2
     on p2.fiscalweek <= p1.fiscalweek
group by p1.fiscalweek
order by 1;
I do want to caution you that you are calculating the "average of averages". This is different from the cumulative average, which could be calculated as:
select fiscalweek,
(sum(sum(cost)) over (order by fiscalweek) /
sum(count(*)) over (order by fiscalweek)
) avgcost
from practices p
group by fiscalweek
order by 1;
One treats every week as one data point in the final average (what you seem to want). The other weights each week by the number of points during the week (the latter solution). These can produce very different results when weeks have different numbers of points.
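The divergence between the two definitions appears as soon as weeks have different numbers of data points. A tiny plain-Python example with made-up numbers:

```python
# Hypothetical data: week 1 has one point, week 2 has three.
weeks = {1: [10.0], 2: [20.0, 40.0, 60.0]}      # week -> list of costs

weekly_avgs = {w: sum(v) / len(v) for w, v in weeks.items()}  # {1: 10.0, 2: 40.0}

# Cumulative "average of averages" at week 2 (each week = one data point):
avg_of_avgs_wk2 = (weekly_avgs[1] + weekly_avgs[2]) / 2

# Weighted cumulative average over the raw points (weeks weighted by count):
all_points = weeks[1] + weeks[2]
weighted_wk2 = sum(all_points) / len(all_points)
```

Here the average of averages gives 25.0 while the point-weighted cumulative average gives 32.5, so the choice of definition genuinely matters.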
I don't know if I fully understand the question, but try executing this; it should help you:
create table #practice(PID int,cost decimal,Fweek int)
insert into #practice values (1,10,1)
insert into #practice values (1,33,2)
insert into #practice values (1,55,3)
insert into #practice values (1,18,4)
insert into #practice values (1,36,5)
insert into #practice values (1,24,6)
insert into #practice values (13,56,1)
insert into #practice values (13,10,2)
insert into #practice values (13,24,3)
insert into #practice values (13,30,4)
insert into #practice values (13,20,5)
insert into #practice values (13,18,6)
select * from #practice
select pid,Cost,
(select AVG(cost) from #practice p2 where p2.Fweek <= p1.Fweek and p1.pid = p2.pid) WeeklyAVG,
Fweek,AVG(COST) over (Partition by PID) as PIDAVG
from #practice p1;
I think this would work:
SELECT t1.pid,
t1.fiscalweek,
(
SELECT SUM(t.cost)/COUNT(t.cost)
FROM tablename AS t
WHERE t.pid = t1.pid
AND t.fiscalweek <= t1.fiscalweek
) AS average
FROM tablename AS t1
GROUP BY t1.pid, t1.fiscalweek
EDIT
To take into account for fiscal weeks without an entry you can simply exchange
SELECT SUM(t.cost)/COUNT(t.cost)
for
SELECT SUM(t.cost)/t1.fiscalweek
to calculate from week 1 or
SELECT SUM(t.cost)/(t1.fiscalweek - MIN(t.fiscalweek) + 1)
to calculate from the first week of this practice.
If all practice averages should start the same week (and not necessarily week no 1) then you'd have to find the minimum of all week numbers.
Also, this won't work if you're calculating across multiple years, but I assume that is not the case.
For a development aid project I am helping a small town in Nicaragua improve their water-network administration.
There are about 150 households, and every month a person checks the meter and charges the household according to the consumed water (reading from this month minus reading from last month). Today it is all done on paper, and I would like to digitize the administration to avoid calculation errors.
I have an MS Access Table in mind - e.g.:
HousholdID  Date      Meter
0           1/1/2013  100
1           1/1/2013  130
0           1/2/2013  120
1           1/2/2013  140
...
From this data I would like to create a query that calculates the consumed water (the meter-difference of one household between two months)
HouseholdID  Date      Consumption
0            1/2/2013  20
1            1/2/2013  10
...
Please, how would I approach this problem?
This query returns every date with previous date, even if there are missing months:
SELECT TabPrev.*, Tab.Meter as PrevMeter, TabPrev.Meter-Tab.Meter as Diff
FROM (
SELECT
Tab.HousholdID,
Tab.Data,
Max(Tab_1.Data) AS PrevData,
Tab.Meter
FROM
Tab INNER JOIN Tab AS Tab_1 ON Tab.HousholdID = Tab_1.HousholdID
AND Tab.Data > Tab_1.Data
GROUP BY Tab.HousholdID, Tab.Data, Tab.Meter) As TabPrev
INNER JOIN Tab
ON TabPrev.HousholdID = Tab.HousholdID
AND TabPrev.PrevData=Tab.Data
Here's the result:
HousholdID Data PrevData Meter PrevMeter Diff
----------------------------------------------------------
0 01/02/2013 01/01/2013 120 100 20
1 01/02/2013 01/01/2013 140 130 10
The query above will return every delta, for every households, for every month (or for every interval). If you are just interested in the last delta, you could use this query:
SELECT
MaxTab.*,
TabCurr.Meter as CurrMeter,
TabPrev.Meter as PrevMeter,
TabCurr.Meter-TabPrev.Meter as Diff
FROM ((
SELECT
Tab.HousholdID,
Max(Tab.Data) AS CurrData,
Max(Tab_1.Data) AS PrevData
FROM
Tab INNER JOIN Tab AS Tab_1
ON Tab.HousholdID = Tab_1.HousholdID
AND Tab.Data > Tab_1.Data
GROUP BY Tab.HousholdID) As MaxTab
INNER JOIN Tab TabPrev
ON TabPrev.HousholdID = MaxTab.HousholdID
AND TabPrev.Data=MaxTab.PrevData)
INNER JOIN Tab TabCurr
ON TabCurr.HousholdID = MaxTab.HousholdID
AND TabCurr.Data=MaxTab.CurrData
and (depending on what you are after) you could filter only the current month:
WHERE
DateSerial(Year(CurrData), Month(CurrData), 1)=
DateSerial(Year(DATE()), Month(DATE()), 1)
this way if you miss a check for a particular household, it won't show.
Or you might be interested in showing last month present in the table (which can be different than current month):
WHERE
DateSerial(Year(CurrData), Month(CurrData), 1)=
(SELECT MAX(DateSerial(Year(Data), Month(Data), 1))
FROM Tab)
(here I am taking in consideration the fact that checks might be on different days)
I think the best approach is to use a correlated subquery to get the previous date and join back to the original table. This ensures that you get the previous record, even if there is more or less than a 1 month lag.
So the right query looks like:
select t.HousholdID, t.Date, t.Meter,
       tprev.Date as prevDate, tprev.Meter as prevMeter,
       t.Meter - tprev.Meter as Consumption
from (select t.*,
             (select top 1 t2.Date
              from Tab t2
              where t2.HousholdID = t.HousholdID
                and t2.Date < t.Date
              order by t2.Date desc) as prevDate
      from Tab t) as t
join Tab as tprev
  on tprev.HousholdID = t.HousholdID
 and tprev.Date = t.prevDate
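The previous-reading lookup can be sanity-checked with Python's `sqlite3` (SQLite has no `TOP 1`, so this sketch uses an equivalent `MAX()` correlated subquery; table and column names follow the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Tab(HousholdID INT, Date TEXT, Meter INT)")
conn.executemany("INSERT INTO Tab VALUES (?,?,?)", [
    (0, "2013-01-01", 100), (1, "2013-01-01", 130),
    (0, "2013-02-01", 120), (1, "2013-02-01", 140),
])

# For every reading after the first, find the latest earlier reading of the
# SAME household and take the difference -- no fixed one-month assumption.
rows = conn.execute("""
    SELECT t.HousholdID, t.Date, t.Meter - tprev.Meter AS Consumption
    FROM Tab t
    JOIN Tab tprev
      ON tprev.HousholdID = t.HousholdID
     AND tprev.Date = (SELECT MAX(t2.Date) FROM Tab t2
                       WHERE t2.HousholdID = t.HousholdID
                         AND t2.Date < t.Date)
    ORDER BY t.Date, t.HousholdID
""").fetchall()
```

This reproduces the expected consumption of 20 and 10 for the two households in February.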
In an environment such as the one you describe, it is very important not to make assumptions about the frequency of reading the meter. Although they may be read on average once per month, there will always be exceptions.
Testing with the following data:
HousholdID Date Meter
0 01/12/2012 100
1 01/12/2012 130
0 01/01/2013 120
1 01/01/2013 140
0 01/02/2013 120
1 01/02/2013 140
The following query:
SELECT a.housholdid,
a.date,
b.date,
a.meter,
b.meter,
a.meter - b.meter AS Consumption
FROM (SELECT *
FROM water
WHERE Month([date]) = Month(Date())
AND Year([date])=year(Date())) a
LEFT JOIN (SELECT *
FROM water
WHERE DateSerial(Year([date]),Month([date]),Day([date]))
=DateSerial(Year(Date()),Month(Date())-1,Day([date])) ) b
ON a.housholdid = b.housholdid
The above query selects the records for the current month (Month([date]) = Month(Date()) and Year([date]) = Year(Date())) and compares them to the records whose date falls in the previous month (the DateSerial expression with Month(Date()) - 1).
Please do not use Date as a field name.
Returns the following result.
housholdid a.date b.date a.meter b.meter Consumption
0 01/02/2013 01/01/2013 120 100 20
1 01/02/2013 01/01/2013 140 130 10
Try
select t.householdID
, max(s.theDate) as billingMonth
, max(s.meter)-max(t.meter) as waterUsed
from myTbl t join (
select householdID, max(theDate) as theDate, max(meter) as meter
from myTbl
group by householdID ) s
on t.householdID = s.householdID and t.theDate <> s.theDate
group by t.householdID
This works in SQL Server; I'm not sure about Access.
You can use the LAG() function in certain SQL dialects. I found this to be much faster and easier to read than joins.
Source: http://blog.jooq.org/2015/05/12/use-this-neat-window-function-trick-to-calculate-time-differences-in-a-time-series/
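For dialects that support it, the LAG() version is indeed compact. A quick sketch with Python's `sqlite3` (window functions require SQLite 3.25+), using the question's sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Tab(HousholdID INT, Date TEXT, Meter INT)")
conn.executemany("INSERT INTO Tab VALUES (?,?,?)", [
    (0, "2013-01-01", 100), (1, "2013-01-01", 130),
    (0, "2013-02-01", 120), (1, "2013-02-01", 140),
])

# LAG() fetches the previous reading per household; the first reading of
# each household has no predecessor, so its consumption is NULL.
rows = conn.execute("""
    SELECT HousholdID, Date,
           Meter - LAG(Meter) OVER (PARTITION BY HousholdID ORDER BY Date)
               AS Consumption
    FROM Tab
    ORDER BY Date, HousholdID
""").fetchall()
```

No self-join is needed, and irregular reading intervals are handled automatically.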