I'm working on a finance application through which people can send money.
Each user can deposit money into our service and send money from their balance to other people.
These two transaction types affect the amount of the user's balance in our application.
I'm wondering what the best way is to get the value of a user's balance. Should I store the value directly in a column and change it whenever the user makes a transaction, or should I infer the balance from all of the transactions the user has made?
The cons of each method that I have thought of so far are:
Store the value directly:
Data consistency: the values may diverge when one record is saved successfully but another is not.
Infer from the transaction data:
Slower(?): Whenever I want to get the balance, I have to query all of the transaction data and sum it. Many functions in my application need to know a user's balance, so this kind of query may be run a lot. And there is also a concern for when a user's transaction data becomes large.
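Effectively, every balance check would run something like this (table and column names are just illustrative):

select sum(amount) from transactions where user_id = 123;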
I built my application with PHP, MySQL, and the Yii2 framework.
What do you think is the best method for this kind of problem, one that is efficient but also keeps data integrity and has no problems with a lot of data in the future?
Thank you.
Combine the two.
Either:
Store the value
Use transactions to make sure no partial results are ever persisted (see the sketch after these lists)
Schedule a regular (daily?) task that will verify consistency using the last verified consistent value and the increments
Or:
Store yesterday's value
Compute the current value from yesterday's value and the increments
Schedule a daily task that will update the stored value
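For the first option ("store the value"), a minimal sketch of the money-transfer transaction in MySQL; the users and transactions tables here are illustrative, not your actual schema:

START TRANSACTION;

-- record the transfer itself
INSERT INTO transactions (from_user_id, to_user_id, amount)
VALUES (1, 2, 100.00);

-- adjust both stored balances within the same transaction
UPDATE users SET balance = balance - 100.00 WHERE id = 1;
UPDATE users SET balance = balance + 100.00 WHERE id = 2;

-- everything above is persisted atomically, or not at all
COMMIT;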
Consider the first approach.
If the DB is MySQL, you could use its table-locking (or row-locking) abilities. Use InnoDB tables on your MySQL instance; otherwise your system won't be fully ACID-compliant, meaning you won't get the atomicity you need.
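For example, row-level locking inside an InnoDB transaction could look like this (same illustrative schema as above):

START TRANSACTION;

-- lock the sender's row so a concurrent transfer cannot read a stale balance
SELECT balance FROM users WHERE id = 1 FOR UPDATE;

UPDATE users SET balance = balance - 100.00 WHERE id = 1;

COMMIT;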
For an Android launcher (home screen) app project I want to implement a feature called "Sort by usage". This will sort apps by their launch count within a user-settable timeframe.
The current idea for the implementation is to store an array of unix epoch timestamps, one for each launch.
Additionally, it will store a counter caching the current number of launches within the selected timeframe, incremented with every launch. Of course, this counter would regularly have to be rebuilt as time passes, but only every few hours or after some percentage of the selected timeframe, so the computation definitely wouldn't run as often as it would without the counter. This information is required every time any app entries on screen need to be sorted, though I'm not quite sure whether it matters in any way during actual use.
I am now unsure how to store the timestamp array in the SQL database. As there is a table holding one record with information about each launcher entry, I thought about the following options:
Store the array of unix epochs in serialized form (maybe a JSON array) in one field of the entry's record
Create a separate table for launch times with
a. each record starting with an id associated with an entry, followed by all launch times, one per field
b. each record being a combination of entry id and one launch time
These options would obviously have the advantage of storing the timestamps using an appropriate type.
I probably didn't quite understand why you need a second piece of data for your launch counter: the fact that you saved a timestamp already means a launch, so why not just count timestamps? Less updating, less record locking, more concurrency.
Now, let's say you've got a separate table with timestamps in a classic one-to-many setting.
Pros of this setup: you never need to update anything, just keep inserting. You can easily cluster your table by timestamp, run a filter on your timeframe, and issue a GROUP BY to count rows (see the sketch below). The client then gets the numbers and sorts by count (I believe it's generally better not to sort in SQL). Cons: you need a join to the parent table and probably need to get your indexes right.
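A sketch of what that could look like; table and column names are invented, and on Android this would be SQLite:

-- one row per launch, insert-only
CREATE TABLE launch (
    entry_id    INTEGER NOT NULL,  -- references the launcher entry table
    launched_at INTEGER NOT NULL   -- unix epoch timestamp
);

-- launch counts within the selected timeframe; the client sorts by count
SELECT entry_id, COUNT(*) AS launch_count
FROM launch
WHERE launched_at >= :timeframe_start
GROUP BY entry_id;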
Alternatively, you store the timestamps in a text blob (JSON, CSV, whatever) with your main records. This definitely means you'll have to update your records a lot, which potentially opens you up to locking issues. Also, I'm not entirely sure what you'd have to do to get your final launch counts: read all entities, deserialise all timestamps, filter by timeframe, and then count? It does feel a bit more convoluted in your case.
I don't think there's such a thing as a "best" way; you have to consider pros and cons. From what I gather, you might be better off with the classic SQL approach, unless there's something I didn't catch that outweighs the points above.
I am designing a database which will hold transaction level data. It will work the same way as a bank account - debits/credits to an Account Number.
What is the best / most efficient way of obtaining the aggregation of these transactions?
I was thinking about using a summary table and then adding its values to today's transactions in order to derive how much each account has (i.e. their balance).
I want this to be scalable (i.e. 1 billion transactions), so I don't want to perform database hits against the main fact table, as finding all the debits/credits associated with a desired account number could mean scanning potentially a billion rows.
Thanks, any help or resources would be awesome.
(I have been working in banks for almost 10 years. Here is how it is actually done.)
TL;DR: your idea is good.
Every now and then you store the balance somewhere else (a "carry-forward balance"), e.g. every month or after a given number of transactions. To calculate the current balance (or any balance in the past), you accumulate all relevant transactions going back in time until you reach the most recent carry-forward balance, which you then add, of course.
The "current" balance is not kept anywhere, if only because of the locking problems you would have if you updated that balance all the time. (In real banks, almost every single transaction hits some bank-internal account. There are plenty of bank-internal accounts, needed to produce the figures required by law. These accounts are hit very often and would therefore cause locking issues if you updated them with every transaction. Instead, every transaction is just an insert; even the carry-forward balances are just inserts.)
Real banks also have many use cases which make this approach more favourable:
Being able to get back-dated balances at any time
Being able to get balances based on different dates (e.g. value date vs. transaction date)
Reversals/cancellations are fun of their own. Imagine reversing a transaction from two weeks ago while still keeping all of the above intact.
You see, this is a long story. However, the answer to your question is yes: you cannot accumulate an ever-increasing number of transactions, so you need to keep intermediate balances to limit the number that has to be accumulated. Hitting the main table for a limited number of rows should be no issue.
Make sure your main query uses an Index-Only Scan.
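A minimal sketch of that balance query, assuming illustrative carry_forward and transactions tables:

-- current balance = most recent carry-forward balance
--                 + all transactions booked since then
SELECT cf.balance
     + COALESCE((SELECT SUM(t.amount)
                 FROM transactions t
                 WHERE t.account_no = cf.account_no
                   AND t.booked_at > cf.as_of), 0) AS current_balance
FROM carry_forward cf
WHERE cf.account_no = :account_no
  AND cf.as_of = (SELECT MAX(as_of)
                  FROM carry_forward
                  WHERE account_no = :account_no);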
Do an object-oriented design: create a table for each object, e.g. Account, Transaction, etc. Here's a good website for your reference, but there's a lot more on the web discussing OODBMSs. The reference I gave is just what I based my work on when I started doing an OODBMS.
This is an overall question of best practice when designing a database, and I have been unable to word my question well enough to find an answer on the net.
I have a database that holds information about employees' safety training. The information about requirements needs to be stored for auditing purposes, but the requirements may change from year to year. I still need to keep a record of what an employee was required to do and how much of it they completed.
My first reaction was to create a table for historical data and store basically a "snapshot" every time an employee completes an action or the time allowed to complete the action expires; that way, if the requirement changes years down the road, the historical table won't be pointing at the wrong information.
I altered this upon implementation and have set up the application so that it doesn't change existing requirements; it creates a new one every time the requirements change, so that the old requirement is still in the database and pointed to by the historical table. In the main requirement table, the old requirement is marked as inactive and can never be deleted through the application.
I am sure this problem has been encountered before, but I have been unable to articulate it well enough to search for viable answers on my own. My question is: what is the best way to proceed in this situation? Is my current implementation a good one? Was my first reaction better? Is there another way that is better than both?
On a side note, if there is a word or phrase for this type of problem (the storing of redundant data, storing historical data, storing auditing data; I'm not sure what to call it), please let me know.
As far as I understand, you have employees and associated tasks. I would create one table for employees and one for tasks, with another many-to-many table linking the two. In the task table you will have a status and effective-from/to date columns. If a task is currently effective, you can leave the effective-to date null or set it to 1/1/2999. When querying the tasks you will need to provide a date, so you will be able to get the tasks effective as of a specific date. If you decide to use a null value, the where clause will be like:
where t.DateEffectiveFrm<=#AsOfDate and (#AsOfDate < t.DateEffectiveTo or t.DateEffectiveTo is null)
If for currently effective tasks you use a date value in the far future (1/1/2999), the where clause will be even simpler:
where t.DateEffectiveFrm<=#AsOfDate and #AsOfDate < t.DateEffectiveTo
You might also consider adding a status table, which will keep a history of the status changes.
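A sketch of the tables and the as-of query described above; the names are assumptions, #AsOfDate stands for a bind parameter, and the default date uses the far-future variant:

CREATE TABLE Employee (
    EmployeeID INT PRIMARY KEY,
    Name       VARCHAR(100) NOT NULL
);

CREATE TABLE Task (
    TaskID           INT PRIMARY KEY,
    Description      VARCHAR(255) NOT NULL,
    Status           VARCHAR(20)  NOT NULL,
    DateEffectiveFrm DATE NOT NULL,
    DateEffectiveTo  DATE DEFAULT '2999-01-01' NOT NULL
);

-- many-to-many link between employees and tasks
CREATE TABLE EmployeeTask (
    EmployeeID INT NOT NULL REFERENCES Employee (EmployeeID),
    TaskID     INT NOT NULL REFERENCES Task (TaskID),
    PRIMARY KEY (EmployeeID, TaskID)
);

-- tasks effective for each employee as of a given date
SELECT e.Name, t.Description, t.Status
FROM Employee e
JOIN EmployeeTask et ON et.EmployeeID = e.EmployeeID
JOIN Task t ON t.TaskID = et.TaskID
WHERE t.DateEffectiveFrm <= #AsOfDate
  AND #AsOfDate < t.DateEffectiveTo;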
Let's pretend I've got a social network.
I always show the user how many users are registered and have activated their profile.
So, every time a single user logs in, the app goes to the DB and runs:
select count(*) from users where status = 'activated'
So if 5,000 users log in, or simply refresh the page, the SQL above will be executed 5,000 times.
I was wondering if it would be better to have a variable somewhere (I still have no idea where to put it) that is incremented every time a user activates their profile; then, when I want to show how many users are registered on the social network, I only read the value of this variable.
How can I do this? Is it really a better solution than what I've got?
You could use an indexed view, that SQL Server will automatically maintain:
create table dbo.users (
ID int not null,
Activated bit not null
)
go
create view dbo.user_status_stats (Activated,user_count)
with schemabinding
as
select Activated,COUNT_BIG(*) from dbo.users group by Activated
go
create unique clustered index IX_user_status_stats on dbo.user_status_stats (Activated)
go
This just has two possible statuses, but it could expand to more using a different data type. As I say, in this case SQL Server will maintain the counts behind the scenes, so you can just query the view:
SELECT user_count from user_status_stats with (NOEXPAND) where Activated = 1
and it won't have to query the underlying table. Note that you need the WITH (NOEXPAND) hint on editions below Enterprise (Developer edition behaves like Enterprise here).
Although, as #Jim suggested, doing a COUNT(*) against an index should also be pretty quick when the index column(s) can satisfy the query criteria using equality comparisons.
As you've already guessed - it's not a great idea to calculate this value every time someone hits the site.
You could do as you suggest, and update a central value as users are added, although you'll have to ensure that you don't end up with two processes updating the number simultaneously.
Alternatively you could have a job which runs your SQL routinely and updates the central 'user count' value.
Alternatively #2, you could use something like MemCache to hold the calculated value for a period of time, and then recalculate it when the cache expires.
There are a few options you could consider:
1) As you say, maintain a global count each time a profile is activated to save the hit on the users table each time. You could store that count in a "Stats" table and then query the value from there.
2) Don't show the actual "live" count; show a count that's "pretty much up to date". E.g. cache the count in your application and have the value expire periodically, so you then re-query the count less frequently. Or, if you store the count in a "Stats" table per the above, you could have a scheduled job that updates the count every hour instead of on every profile activation (see the sketch below).
It depends whether you want to show the exact figure in real time or whether you can live with a delay. Obviously, data volumes matter too: if you have a large database, a slightly out-of-date cached value could be worthwhile.
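A sketch of the "Stats" table variant, with a scheduled job refreshing the cached count (table and column names invented):

-- hypothetical table holding the cached value
CREATE TABLE Stats (
    StatName  VARCHAR(50) PRIMARY KEY,
    StatValue BIGINT NOT NULL
);

-- run hourly by a scheduled job instead of on every activation
UPDATE Stats
SET StatValue = (SELECT COUNT(*) FROM users WHERE status = 'activated')
WHERE StatName = 'activated_users';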
From a purely SQL Server standpoint, no, you are not going to find a better way of doing this. Unless, perhaps, your social network is Facebook-sized. Denormalizing your data design (such as keeping a count in a separate table) introduces possible sources of the data getting out of sync. It doesn't have to get out of sync if it is coded properly, but it can...
Just make sure that you have an index on status. SQL will then not scan the table for the count; it will scan the index instead. The index is much smaller (that is, more data will fit in a disk page). If you were to convert your status to an int, smallint, or tinyint, you would fit even more index leaves in a disk page and thus need much less IO. To get your descriptions ('activated', etc.), use a reference table. The reference table would be so small that SQL would just keep the whole thing in RAM after the first access.
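For example (index name assumed):

-- narrow index so the count can be satisfied from the index alone
CREATE INDEX IX_users_status ON users (status);

-- this now scans the small index rather than the whole table
SELECT COUNT(*) FROM users WHERE status = 'activated';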
Now, if you still think this is too much overhead (and it shouldn't be), you could come up with a hybrid method. You could store your count in a separate table (which SQL would keep in RAM if it is just the one record) or, assuming your site is in ASP.NET, you could create an Application variable to keep track of the count, incrementing it in Session_Start and decrementing it in Session_End. But you will have to come up with a way of making the increment and decrement thread-safe so two sessions don't try to update the value at the same time.
You can also use a global temporary table; you will always get fast retrieval, even with a 30-second refresh interval. Example Trigger Link1 and Example Trigger Link2 will maintain such activity in this table.
I'm working on a project in which we will need to determine certain types of statuses for a large body of people, stored in a database. The business rules for determining these statuses are fairly complex and may change.
For example,
if a person is part of group X
and (if they have attribute O) has either attribute P or attribute Q,
or (if they don't have attribute O) has attribute P but not Q,
and don't have attribute R,
and aren't part of group Y (unless they also are part of group Z),
then status A is true.
Multiply by several dozen statuses and possibly hundreds of groups and attributes. The people, groups, and attributes are all in the database.
Though this will be consumed by a Java app, we also want to be able to run reports directly against the database, so it would be best if the set of computed statuses were available at the data level.
Our current design plan, then, is to have a table or view that consists of a set of boolean flags (hasStatusA? hasStatusB? hasStatusC?) for each person. This way, if I want to query for everyone who has status C, I don't have to know all of the rules for computing status C; I just check the flag.
(Note that, in real life, the flags will have more meaningful names: isEligibleForReview?, isPastDueForReview?, etc.).
So a) is this a reasonable approach, and b) if so, what's the best way to compute those flags?
Some options we're considering for computing flags:
Make the set of flags a view, and calculate the flag values from the underlying data in real time using SQL or PL/SQL (this is an Oracle DB). This way the values are always accurate, but performance may suffer, and the rules would have to be maintained by a developer.
Make the set of flags consist of static data, and use some type of rules engine to keep those flags up-to-date as the underlying data changes. This way the rules can be maintained more easily, but the flags could potentially be inaccurate at a given point in time. (If we go with this approach, is there a rules engine that can easily manipulate data within a database in this way?)
In a case like this I suggest applying Ward Cunningham's question: ask yourself "What's the simplest thing that could possibly work?".
In this case, the simplest thing might be to come up with a view that looks at the data as it exists and does the calculations to produce all the fields you care about (a partial sketch follows below). Now load up your database and try it out. Is it fast enough? If so, good: you did the simplest possible thing and it worked out fine. If it's NOT fast enough, also good: the first attempt didn't work, but you've got the rules mapped out in the view code. Now you can try the next iteration of "the simplest thing": perhaps you write a background task that watches for inserts and updates and jumps in to recompute the flags. If that works, fine and dandy. If not, go to the next iteration... and so on.
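As a partial sketch of such a view, using part of the status-A rule from the question (schema invented; the attribute O/P/Q conditions would follow the same EXISTS pattern):

CREATE OR REPLACE VIEW person_status_flags AS
SELECT p.person_id,
       CASE
         WHEN EXISTS (SELECT 1 FROM person_group pg          -- part of group X
                      WHERE pg.person_id = p.person_id AND pg.group_code = 'X')
          AND NOT EXISTS (SELECT 1 FROM person_attribute pa  -- no attribute R
                          WHERE pa.person_id = p.person_id AND pa.attr_code = 'R')
          AND (NOT EXISTS (SELECT 1 FROM person_group pg     -- not in Y, unless also in Z
                           WHERE pg.person_id = p.person_id AND pg.group_code = 'Y')
               OR EXISTS (SELECT 1 FROM person_group pg
                          WHERE pg.person_id = p.person_id AND pg.group_code = 'Z'))
         THEN 1 ELSE 0
       END AS has_status_a
FROM person p;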
Share and enjoy.
I would advise against making the statuses column names; rather, use a status id and value, such as a customer status table with columns ID and Value.
I would have two methods for updating statuses. The first is a stored procedure that either has all the logic or calls separate stored procs to figure out each status; you could make all this dynamic by having a function for each status evaluation, which the one stored proc then calls. The second method is to have whatever stored proc(s) update user info call a stored proc that updates all the users' statuses based upon the current data (sketched below). These two methods give you both real-time updates for the data that changed and, if you add a new status, a way to update all statuses with the new logic.
Hopefully you have one point of update to the user data, such as a user-update stored proc, and you can put the status-update stored proc call in that procedure. This would also save having to schedule a task every n seconds to update statuses.
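A hedged sketch of the second method in Oracle PL/SQL; the table, procedure, and function names are all invented, and evaluate_status stands in for the per-status evaluation functions mentioned above:

CREATE OR REPLACE PROCEDURE update_all_statuses IS
BEGIN
  -- crude full rebuild: one row per (person, status), recomputed from current data
  DELETE FROM customer_status;
  INSERT INTO customer_status (person_id, status_id, value)
  SELECT p.person_id, s.status_id, evaluate_status(p.person_id, s.status_id)
  FROM person p
  CROSS JOIN status_def s;
END;
/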
An option I'd consider would be for each flag to be backed by a deterministic function that returns the up-to-date value given the relevant data.
The function might not perform well enough, however, if you're calling it for many rows at a time (e.g. for reporting). If you're on Oracle 11g, you can solve this by adding virtual columns (search for "virtual column") to the relevant tables, based on the function. The Result Cache feature should improve the performance of the function as well.
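A sketch of how that could look on 11g; the function body and all names are placeholders, and note that virtual column expressions only accept user-defined functions declared DETERMINISTIC:

-- deterministic function backing the flag; the status rules go in the body
CREATE OR REPLACE FUNCTION has_status_a (p_person_id NUMBER)
  RETURN NUMBER DETERMINISTIC
IS
BEGIN
  RETURN 0; -- placeholder: evaluate the status-A rules for p_person_id
END;
/

-- virtual column derived from the function (adding RESULT_CACHE to the
-- function can further cut repeated evaluations)
ALTER TABLE person ADD (
  has_status_a_flag NUMBER GENERATED ALWAYS AS (has_status_a(person_id)) VIRTUAL
);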