Calculating interest using SQL

I am using PostgreSQL and have a table for billing cycles and another for payments made in a billing cycle.
I am trying to figure out how to calculate interest based on the amount left after each billing cycle's last payment date. The problem is that every time a repayment is made, interest has to be calculated on the amount remaining after that repayment.
My thinking on building this query is: generate rows for all dates from the billing cycle's last pay date to today; using partitioning, get the remaining amount for the first date; for the second date, take the amount from the previous row, add the previous day's interest to it, and then calculate interest on that new amount; and so on.
Unfortunately I am stuck at the idea and can't figure out how to turn it into a query!
Here's some sample data to make things easier to understand.
Billing Cycles:
 id |  ends_at
----+------------
  1 | 2017-11-30
  2 | 2017-11-30
Payments:
  amount   | billing_cycle_id |   type    |         created_at
-----------+------------------+-----------+----------------------------
 6000.0000 |                1 | payment   | 2017-11-15 18:40:22.151713
 2000.0000 |                1 | repayment | 2017-11-19 11:45:15.6167
 2000.0000 |                1 | repayment | 2017-12-02 11:46:40.757897
So, as you can see, the user made a repayment on the 19th, which means the amount due for interest after the cycle's end date (30th Nov 2017) is only 4000. From the 30th to the 2nd, interest is calculated daily on 4000. However, from the 2nd, interest needs to be calculated only on the 2000 that remains (plus the interest accrued so far).
Interest Calculations (today being 2017-12-04):
    date    |  amount   | interest
------------+-----------+----------
 2017-12-01 | 4000      | 100       // first day of pending dues
 2017-12-02 | 2100      | 52.5      // second day of pending dues
 2017-12-03 | 2152.5    | 53.8125   // third day of pending dues
 2017-12-04 | 2206.3125 |           // the fourth day's interest will be added tomorrow
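
For what it's worth, the day-by-day carry-forward described above can be expressed as a recursive CTE in PostgreSQL. This is only a sketch: it assumes a flat 2.5% daily rate (inferred from the sample numbers), hard-codes billing cycle 1 and its post-cycle balance of 4000, and uses the payments table shown above.

WITH RECURSIVE daily AS (
    -- start with the balance left over after the cycle's end date
    SELECT date '2017-12-01' AS calc_date,
           4000::numeric     AS amount
    UNION ALL
    SELECT d.calc_date + 1,
           d.amount
           + d.amount * 0.025                    -- capitalise yesterday's interest
           - COALESCE((SELECT sum(p.amount)      -- minus any repayment made on that day
                       FROM payments p
                       WHERE p.billing_cycle_id = 1
                         AND p.type = 'repayment'
                         AND p.created_at::date = d.calc_date + 1), 0)
    FROM daily d
    WHERE d.calc_date < current_date
)
SELECT calc_date, amount, amount * 0.025 AS interest
FROM daily;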

Your data is too sparse. It doesn't make sense to have to write this query, because over time it will only get more complicated. What happens when interest rates change?
The table itself (or a secondary table, depending on how you want to structure it) should carry a running balance that you append to every time a deposit or withdrawal is made (I suggest this table be append-only). Otherwise you are making both the calculation and the accounting far harder than they need to be. Even as the problem is presented here, there isn't enough information to do the calculation: the interest rate is missing. When that's the case, your stored procedure is going to be too complicated. Complicated means bugs, and people get irritated about bugs when you're talking about their money.
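
As a sketch of what that append-only ledger could look like in PostgreSQL (all table and column names here are illustrative, not taken from the question):

CREATE TABLE ledger_entries (
    id               bigserial     PRIMARY KEY,
    billing_cycle_id integer       NOT NULL,
    entry_type       text          NOT NULL,   -- 'payment', 'repayment', 'interest', ...
    amount           numeric(12,4) NOT NULL,   -- signed amount of this entry
    balance_after    numeric(12,4) NOT NULL,   -- running balance written at insert time
    rate_applied     numeric(8,6),             -- daily rate in force when an interest row is posted
    created_at       timestamptz   NOT NULL DEFAULT now()
);

With each day's interest posted as its own row, the current balance is simply the latest balance_after, and a rate change just shows up in rate_applied from that day forward.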


HR Cube in SSAS

I have to design a cube for student attendance. We have four statuses (Present, Absent, Late, On Vacation). The cube has to tell me the number of students who are not present over a given span of time (day, month, year, etc.) and that number as a percentage of the total.
I built a fact table like this:
City ID | Class ID | Student ID | Attendance Date | Attendance State | Total Students Number
--------+----------+------------+-----------------+------------------+----------------------
      1 |        1 |          1 | 2016-01-01      | ABSENT           | 20
But in my SSRS project I couldn't use this to get the correct numbers. I have to filter by date, city and attendance status.
For example, I must be able to see that on date X there are 12 students not present, which corresponds to 11% of the total.
Any suggestions for a good structure to achieve this?
I assume this is homework.
Your fact table is wrong.
Don't store aggregated data (Total Students) in the fact as it can make calculations difficult.
Don't store text values like 'Absent' in the fact table. Attributes belong in the dimension.
Reading homework for you:
Difference between a Fact and Dimension and how they work together
What is the grain of a Fact and how does that affect aggregations and calculations.
There is a wealth of information on the Kimball Group's pages. Start with the lower-numbered tips, as they get more advanced as you go.
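
To make that concrete, a minimal star-schema sketch (all table and column names are illustrative, not from the question):

CREATE TABLE DimAttendanceStatus (
    AttendanceStatusKey int IDENTITY(1,1) PRIMARY KEY,
    StatusName          varchar(20) NOT NULL    -- 'Present', 'Absent', 'Late', 'Vacation'
);

CREATE TABLE FactAttendance (
    DateKey             int NOT NULL,           -- references a DimDate dimension
    CityKey             int NOT NULL,           -- references DimCity
    ClassKey            int NOT NULL,           -- references DimClass
    StudentKey          int NOT NULL,           -- references DimStudent
    AttendanceStatusKey int NOT NULL,           -- references DimAttendanceStatus
    AttendanceCount     int NOT NULL DEFAULT 1  -- grain: one row per student per day
);

"12 not present on date X" is then a count of fact rows filtered through the date and status dimensions, and the denominator (total students that day) comes from counting all fact rows for that date rather than from a Total Students column repeated on every row.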

SQL SUM expression and Lock

I am struggling to find the right SQL solution.
Current situation:
My database contains a table with bank transactions (credits and debits). Credit transactions are stored as positive amounts (+), and debit transactions as negative amounts (-).
The application that uses the DB is a multi-user web app, so the Transactions table contains many rows, which reference different users.
Some web app actions need to check the current balance of the logged-in user, using the Transactions table, and then save a debit transaction (the action's price).
I am thinking about the architecture of this mechanism and have some questions:
Is it a good idea to calculate the balance as a SUM of credit and debit transactions every time the user requests it? I know it may be inefficient for the DB. Maybe I should save a snapshot somewhere?
How do I ensure consistency when one user reads the "balance" as a SUM of credit/debit transactions while another user saves a debit transaction at the same time (because he/she was faster)? I am thinking about a pessimistic lock, but what should I lock? I know that a lock combined with an aggregation (SUM) may be impossible in PostgreSQL (the database I use).
Sorry for my English, I hope my problem is understandable. :)
I would consider EITHER:
Storing a balance on the account record, along with the date for which the balance is accurate.
Getting the current balance is a matter of reading the account balance, and then including any transactions since that date.
You can have a scheduled job that recalculates and timestamps that balance at an hour past midnight.
OR (and this is my preferred solution):
Every time a transaction or batch of transactions is loaded, lock the relevant account records and update them with the values from the insert as part of the same transaction.
This has the advantage of serialising access to the account, which can then help with determining whether a transaction can go ahead or not because of decisions based on the balance calculation.
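
A rough sketch of that second approach in PostgreSQL; the accounts/transactions table names and the literal values are placeholders, not from the question:

BEGIN;

-- Serialise access to this account: a concurrent writer blocks here until we commit
SELECT balance
FROM   accounts
WHERE  id = 42
FOR UPDATE;

-- Decide in application code whether the debit is allowed, then record it
INSERT INTO transactions (account_id, amount, created_at)
VALUES (42, -150.00, now());

UPDATE accounts
SET    balance = balance - 150.00
WHERE  id = 42;

COMMIT;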
If you want to avoid keeping the balance on the user account, an approach I would experiment with (and which could perform better) is:
Each transaction would be related to only one account.
Each transaction would have the account balance after that transaction.
Therefore, the last transaction for that account would have the current balance.
Ex.:
TransactionId | AccountId | Datetime | Amount | Balance
--------------+-----------+----------+--------+--------
            1 |         1 | 7/11/16  |      0 |       0
            2 |         1 | 7/11/16  |    500 |     500
            3 |         1 | 7/11/16  |    -20 |     480
            4 |         1 | 8/11/16  |     50 |     530
            5 |         1 | 8/11/16  |   -200 |     330
This way you would be able to get the account balance (the last transaction for that AccountId), and you would also be able to provide a better view of how the balance changes over time.
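
Fetching the current balance under that design is then a single-row lookup, for example (column names as in the example table):

SELECT balance
FROM   transactions
WHERE  accountid = 1
ORDER  BY transactionid DESC
LIMIT  1;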

Run a query to check consistency in SQL Server

I need some help with a SQL query and logic in general. (Using MSSQL Server)
I need to check the consistency of payments at certain retailers over a period of three months.
So I've got a table with all my transactions and the following columns:
TransactionID, AccountNumber, Retailer, Date ... (and a few other irrelevant ones)
Now one AccountNumber could have many transaction IDs (one account could decide to make several payments during one month).
I have 4 unique retailer IDs; let's call them 101, 102, 103 and 104.
Now for consistency I want to get the following data:
The count of transactions where there was only one payment per account for the month at each retailer.
So I'd have:
| # Payments For Month | Retailer | Number of Transactions
| 1 Payment | 101 | 5000
...
But I also want to see how many transactions there were from accounts that made payments at multiple retailers
So I'd want something like:
| 2 Payments | 102 & 104 | 20
Which would mean that an account made 20 payments at retailer 102 & 104.
I don't care as much about how many accounts; it's more about the number of transactions.
I also want it broken down by month, but I've decided to run a separate query for each month.
I've imported the data into a local DB on my personal laptop so I can go crazy with it, and I'll be able to try any method.
The goal of this query is to check the consistency of payments by people (accounts) at certain retailers: how many transactions do they loyally make at one retailer every month, and how many transactions come from accounts that have gone to two retailers, or three, or all four?
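
One possible shape for the "which combination of retailers did each account pay at in a month" breakdown is sketched below. It assumes SQL Server 2017+ (for STRING_AGG), a table named Transactions, and the column names from the question, so treat it as a starting point rather than a tested answer.

-- Count transactions by the set of retailers each account paid at in a month
WITH PerRetailer AS (
    SELECT AccountNumber,
           EOMONTH([Date])  AS MonthEnd,
           Retailer,
           COUNT(*)         AS Txns
    FROM   Transactions
    GROUP  BY AccountNumber, EOMONTH([Date]), Retailer
),
PerAccount AS (
    SELECT AccountNumber,
           MonthEnd,
           COUNT(*)         AS RetailerCount,   -- how many distinct retailers that month
           STRING_AGG(CONVERT(varchar(10), Retailer), ' & ')
               WITHIN GROUP (ORDER BY Retailer) AS Retailers,
           SUM(Txns)        AS TransactionCount
    FROM   PerRetailer
    GROUP  BY AccountNumber, MonthEnd
)
SELECT MonthEnd, RetailerCount, Retailers, SUM(TransactionCount) AS NumberOfTransactions
FROM   PerAccount
GROUP  BY MonthEnd, RetailerCount, Retailers
ORDER  BY MonthEnd, RetailerCount, Retailers;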

Designing a scalable points leaderboard system using SQL Server

I'm looking for suggestions for scaling a points leaderboard system. I already have a working version using a very normalized strategy. This first version was essentially a table which looked something like this.
UserPoints - PK: (UserId,Date)
+------------+--------+---------------------+
| UserId | Points | Date |
+------------+--------+---------------------+
| 1 | 10 | 2011-03-17 07:16:36 |
| 2 | 35 | 2011-03-17 08:09:26 |
| 3 | 40 | 2011-03-17 08:05:36 |
| 1 | 65 | 2011-03-17 09:01:37 |
| 2 | 16 | 2011-03-17 10:12:35 |
| 3 | 64 | 2011-03-17 12:51:33 |
| 1 | 300 | 2011-03-17 12:19:21 |
| 2 | 1200 | 2011-03-17 13:24:13 |
| 3 | 510 | 2011-03-17 17:29:32 |
+------------+--------+---------------------+
I then have a stored procedure which basically does a GROUP BY UserId and SUMs the Points. I can also pass @StartDate and @EndDate parameters to create a leaderboard for a specific time period, for example Top Users for the Day / Week / Month / Lifetime.
This seemed to work well with a moderate amount of data, but things became noticeably slower as the number of points records passed a million or so. The test data I'm working with is just over a million point records created by about 500 users distributed over a timespan of 3 months.
Is there a different way to approach this? I have experimented with denormalizing the data by pre-grouping the points into hour datetime buckets to reduce the number of rows. But I'm starting to think the real problem I need to worry about is the increasing number of users that need to be accounted for in the leaderboard. The time window sizes will generally be small but more and more users will start generating points within any given window.
Unfortunately I don't have access to 'Jobs' since I'm using SQL Azure and the Agent is not available (yet). But, I am open to the idea of scaling this using a different storage system if you are convincing enough.
My past work experience tells me I should look into data warehousing since this is almost a reporting problem. But at the same time I need it to be as real-time as possible.
Update
Ultimately, I would like to support custom leaderboards that could span from Monday 8am - Friday 6pm every week. But that's down the road and why I'm trying to not get too fancy with the aggregation. I'm willing to settle with basic Day/Week/Month/Year/AllTime windows for now.
The tricky part is that I really can't store them denormalized, because I need these windows to be convertible across time zones. The system is multi-tenant, and therefore all data is stored as UTC. The problem is that a week starts at different hours for different customers, so aggregating the sums together will cause some points to fall into the wrong buckets.
Here are a few thoughts:
Sticking with SQL Azure: you could have another table, PointsTotals. Every time you add a row to your UserPoints table, also increment the TotalPoints value for the given UserId in PointsTotals (or insert a new row if they don't have one yet). Now you always have totals computed for each UserId (see the sketch after these suggestions).
Going with Azure Table Storage: create a UserPoints table with the Partition Key being UserId. This keeps all of a user's points rows together, where you'd easily be able to sum them. And you can borrow the idea from suggestion #1, creating a separate PointsTotals table with PartitionKey being UserId and RowKey probably being the total points.
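
A T-SQL sketch of suggestion #1; table, column and variable names are illustrative, with @UserId/@Points standing in for the row being added to UserPoints:

CREATE TABLE PointsTotals (
    UserId      int    NOT NULL PRIMARY KEY,
    TotalPoints bigint NOT NULL
);

-- Run in the same transaction as the INSERT into UserPoints
DECLARE @UserId int = 1, @Points int = 10;

MERGE PointsTotals AS t
USING (SELECT @UserId AS UserId, @Points AS Points) AS s
    ON t.UserId = s.UserId
WHEN MATCHED THEN
    UPDATE SET TotalPoints = t.TotalPoints + s.Points
WHEN NOT MATCHED THEN
    INSERT (UserId, TotalPoints) VALUES (s.UserId, s.Points);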
If it were my problem, I'd ignore the timestamps and store user and points totals by day.
I decided to go with the idea of storing points along with a timespan (StartDate and EndDate columns) localized to the customer's current TimeZone setting. An extra benefit I realized is that I can 'purge' old leaderboard round data after a few months without affecting the lifetime total of points.
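
A possible shape for that round-based storage (names are illustrative, not from the post):

CREATE TABLE LeaderboardRoundPoints (
    TenantId  int      NOT NULL,
    UserId    int      NOT NULL,
    StartDate datetime NOT NULL,   -- round boundaries already localized to the tenant's time zone
    EndDate   datetime NOT NULL,
    Points    bigint   NOT NULL,
    CONSTRAINT PK_LeaderboardRoundPoints PRIMARY KEY (TenantId, UserId, StartDate)
);

Old rounds can then be purged by StartDate once they are no longer needed.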

Is there a set based solution for this problem?

We have a table set up as follows:
|ID|EmployeeID|Date |Category |Hours|
|1 |1 |1/1/2010 |Vacation Earned|2.0 |
|2 |2 |2/12/2010|Vacation Earned|3.0 |
|3 |1 |2/4/2010 |Vacation Used |1.0 |
|4 |2 |5/18/2010|Vacation Earned|2.0 |
|5 |2 |7/23/2010|Vacation Used |4.0 |
The business rules are:
Vacation balance is calculated by vacation earned minus vacation used.
Vacation used is always applied against the oldest vacation earned amount first.
We need to return the rows for Vacation Earned that have not been offset by vacation used. If vacation used has only offset part of a vacation earned record, we need to return that record showing the difference. For example, using the above table, the result set would look like:
|ID|EmployeeID|Date |Category |Hours|
|1 |1 |1/1/2010 |Vacation Earned|1.0 |
|4 |2 |5/18/2010|Vacation Earned|1.0 |
Note that record 2 was eliminated because it was completely offset by used time, but records 1 and 4 were only partially used, so they were calculated and returned as such.
The only way we have thought of to do this is to get all of the vacation earned records in a temporary table. Then, get the total vacation used and loop through the temporary table, deleting the oldest record and subtracting that value from the total vacation used until the total vacation used is zero. We could clean it up for when the remaining vacation used is only part of the oldest vacation earned record. This would leave us with just the outstanding vacation earned records.
This works, but it is very inefficient and performs poorly. Also, the performance will just degrade over time as more and more records are added.
Are there any suggestions for a better solution, preferable set based? If not, we'll just have to go with this.
EDIT: This is a vendor database. We cannot modify the table structure in any way.
The following should do it:
(But as others mention, the best solution would be to adjust remaining vacation as it is spent.)
select
    id, employeeid, date, category,
    case
        when earned_so_far + hours - total_spent > hours
            then hours
        else earned_so_far + hours - total_spent
    end as hours
from
(
    select
        id, employeeid, date, category, hours,
        (
            select isnull(sum(hours), 0)
            from vacations
            where category = 'Vacation Earned'
              and date < v.date
              and employeeid = v.employeeid
        ) as earned_so_far,
        (
            select isnull(sum(hours), 0)
            from vacations
            where category = 'Vacation Used'
              and employeeid = v.employeeid
        ) as total_spent
    from vacations v
    where category = 'Vacation Earned'
) earned
where earned_so_far + hours > total_spent
The logic is:
calculate, for each earned row, the hours earned so far (earned_so_far)
calculate the total hours used for this employee (total_spent)
select the record if earned_so_far + this record's hours - total_spent > 0
In thinking about the problem, it occurred to me that the only reason you need to care about when vacation is earned is if it expires. And if that's the case, the simplest solution is to add 'vacation expired' records to the table, such that the amount of vacation remaining for an employee is always just sum(vacation earned) - (sum(vacation expired) + sum(vacation used)). You can even show the exact records you want by using the last vacation expired record as a starting point for the query.
But I'm guessing that's not an option. To address the problem as asked, keep in mind that whenever you find yourself using a temporary table, try putting that data into a CTE (common table expression) instead. Unfortunately I have a meeting right now and don't have time to write the query (maybe later, it sounds like fun), but this should get you started.
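For reference, a rough sketch of what that CTE version might look like, using window functions (SQL Server 2012+). It follows the same logic as the correlated-subquery answer above, though it orders ties by row position, so it isn't guaranteed to match exactly when two earned rows share a date:

WITH earned AS (
    SELECT id, employeeid, [date], category, hours,
           ISNULL(SUM(hours) OVER (PARTITION BY employeeid
                                   ORDER BY [date]
                                   ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING), 0) AS earned_so_far
    FROM   vacations
    WHERE  category = 'Vacation Earned'
),
used AS (
    SELECT employeeid, SUM(hours) AS total_spent
    FROM   vacations
    WHERE  category = 'Vacation Used'
    GROUP  BY employeeid
)
SELECT e.id, e.employeeid, e.[date], e.category,
       CASE WHEN e.earned_so_far + e.hours - ISNULL(u.total_spent, 0) > e.hours
            THEN e.hours
            ELSE e.earned_so_far + e.hours - ISNULL(u.total_spent, 0)
       END AS hours
FROM   earned e
LEFT JOIN used u ON u.employeeid = e.employeeid
WHERE  e.earned_so_far + e.hours > ISNULL(u.total_spent, 0);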
I find your whole result set confusing and inaccurate, and I can see employees saying, "No, I earned 2 hours on Jan 25th, not 1." It is not true that they earned 1 hour on that date that was only partially offset, and you will have no end of problems if you choose to display it this way. I'd look at a different way to present the information. Typically you either present a list of all leave actions (earned, expired and used) with a total at the bottom, or you present a summary of what is available for use and what has been used.
In over 30 years in the workforce, and having been under many different timekeeping systems (as well as having studied even more when I was a management analyst), I have never seen anyone want to display timekeeping information this way. I'm thinking there is a reason. If this is a requirement, I'd suggest pushing back on it and explaining how confusing it will be to read the data this way, as well as how difficult it will be to get a well-performing solution. I would not accept this as a requirement without trying to convince the client that it is a poor idea.
As time passes and records are added, performance will get worse and worse unless you do something about it, such as:
Purge old rows once they're "cancelled out" (e.g. vacation earned that has had equivalent vacation used rows applied against it, and vacation used that has already been used to mark earned time as "expended")
Add a column that flags whether a row has been "cancelled out", and incorporate this column into your indexes
Tracking how the data changes in this fashion is an argument for modifying your table structures (having several tables, not just one), but that's outside the scope of your current problem.
As for the query itself, I'd build two aggregates, do some subtraction, make that a subquery, then join it to some clever use of one of the ranking functions. Smells like a correlated subquery in there somewhere, too. I may try to hash this out later (I'm short on time), but I bet someone beats me to it.
I'd suggest modifying the table to keep track of Balance in its own column. That way, you only need to grab the most recent record to know where the employee stands.
That way, you can satisfy the simple case ("How much vacation time do I have"), while still being able to do the awkward rollup you're looking for in your "Which bits of vacation time don't line up with other bits" report, which I'd hope is something you don't need very often.