POS (Shopping Cart) - Multiple items in a single Transaction No - sql

I'm creating a POS-like system and I'm not really sure how to handle the Shopping Cart part: after the cashier enters all of the customer's items (from the Inventory table), the items entered should share a single Transaction #, just like what we see on receipts.
Should I put a Trans_No column in the Cart table? If yes, how will I handle assigning a single Trans_No to multiple items? I'm thinking of getting the last Trans_No, incrementing it by 1, and assigning it to all the items in the cashier's shopping cart. But there's a huge possibility that if 2 cashiers are using the system simultaneously, they will both retrieve the same latest transaction # and both increment it by 1, resulting in 2 customers' orders being merged into a single transaction/receipt.
What's the best way to handle this?

Which data object your transaction id belongs to depends on the functional requirements of your application. If everything in a cart should share a transaction id, then the cart table is the right place for it.
Database systems offer a variety of features to prevent the concurrent-increment problem you describe. The easiest way to avoid it is to use a serial data type, as offered e.g. by PostgreSQL. If you declare a column as serial, the database takes care of generating a fresh value for each record you insert.
If no such data type is available, there might still be a mechanism for generating a unique primary key for a record. An example is the auto_increment directive for MySQL.
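For illustration, a minimal sketch of what that could look like, assuming PostgreSQL and hypothetical table and column names (carts, cart_items, cashier_id):
create table carts (
    trans_no serial primary key,   -- the database generates a fresh number for every insert
    cashier_id int not null
);
create table cart_items (
    trans_no int not null references carts(trans_no),
    item_id int not null,
    quantity int not null
);
-- insert the cart header once per customer, capture the generated number,
-- then stamp it on every item of that customer's order
insert into carts (cashier_id) values (42) returning trans_no;
In MySQL the trans_no column would instead be declared as int auto_increment primary key, and LAST_INSERT_ID() would take the place of returning.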
If none of these are viable for you, e.g. because you want some fancy logic for generating your transaction ids, then the logic of reading, incrementing, and storing the value needs to be enclosed in a database transaction. Statements like
start transaction;
select key from current_key for update;  -- lock the counter row so concurrent readers must wait
update current_key set key = :key + 1;
commit;
will prevent collisions on the key value. However, make sure that your transactions are short, in particular that you don't leave a transaction open during a wait for user input. Otherwise, other users' transactions may be blocked too long.

Related

Database design to store ordered user photos?

For example,
user has [image1.jpg, image2.jpg ,image3.jpg]
user could reorder them to [image2.jpg, image1.jpg, image3.jpg], add to the end, delete from any position
I can think of 2 methods to store them:
1. just store them as an Array type in the database; when the user adds/deletes/reorders photos, overwrite the entire array in the database
2. store each photo as its own row with a position column, belonging to 1 user; on insert, add it at last position + 1; on delete, shift the positions after the deleted position back by 1
What is the recommended design?
I think the most natural design in SQL would be a separate table:
create table userImages (
    userImageId serial,
    userId int references users(userId),
    image varchar(255),
    position int
);
As you have noticed, if you want positions to be gapless and ordered, then you need to update all the rows.
This has several features/advantages:
You can put the logic into a trigger or stored procedure so it is inside the database.
You can add additional information about the images, such as the date they were added or soft deletes.
The database can prevent duplicate images.
The alternative would be to store these as an array within a row in users. To maintain the ordering, you would basically need to do this in the application. That is, read the array, do deletes, inserts, and reorders, and then save the row again.
This has several features/disadvantages, such as:
The application has to be responsible for the column, instead of the database.
There is no place to put additional information about images.
I am generally biased toward the first approach, but there are some situations where the second is quite reasonable.
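As a rough sketch of the position maintenance described above (whether it lives in the application, a stored procedure, or a trigger), removing an image and closing the gap could look like this; the specific values (user 7, 'image4.jpg') are just placeholders:
begin;
-- remove the image at position 2 for user 7 ...
delete from userImages where userId = 7 and position = 2;
-- ... then shift everything after it back by one so positions stay gapless
update userImages set position = position - 1 where userId = 7 and position > 2;
-- appending a new image always goes to last position + 1
insert into userImages (userId, image, position)
    select 7, 'image4.jpg', coalesce(max(position), 0) + 1 from userImages where userId = 7;
commit;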
What I think is: if the photos are ordered by some attribute (e.g. last edit time), you can just select them with ORDER BY that attribute.
If the photos are ordered by the user's own choice, we have to use a 'position' column.

SQL DB structure - Draft orders and consideration of Order ID as identifier

I am upgrading a system for a client which was developed by myself around 10 years ago.
It is a standard (if there can be such a thing, of course) sales / inventory / accounting system.
One of the additions they have asked me about was the ability to create draft orders. As the company has grown, so have the sizes of the orders. They want the ability to begin entering an order for a client and have the option of saving and coming back to it later.
My initial thought would be to have an orders table which includes drafts and a field which signifies the status (draft / posted). This would prevent duplicating data across an Orders table and a DraftOrders table.
This seems correct to me but of course the OrderId field (auto-increment int) would no longer be a solid identifier for the Order (since a lot of the numbers in between orders may be missing).
The client would ideally like to keep the OrderId as an identifier so is there any solution which would enable this, rather than creating a draft order table?
Many thanks in advance for your help.
Kind regards
If you need to ensure that the identifier has no gaps for taxation purposes, you cannot use the PK in the first place. This is because the sequence may have gaps too: for example, if an INSERT fails due to some constraint violation, you lose the reserved sequence number.
In case you do not want to create a separate table, I would suggest adding a new column to store the tax order ID. It will remain NULL for drafts and will be filled programmatically when the order is placed. In the UI you will show this new column and possibly allow some searching on it (hint: good candidate for an index), yet internally you will still use the same FKs as before (for both orders and drafts).
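For illustration, a sketch of that extra column under assumed names (Orders, TaxOrderId); the exact syntax depends on your DBMS, SQL Server / PostgreSQL style shown:
-- drafts keep a NULL tax order ID; it is assigned programmatically when the order is posted
ALTER TABLE Orders ADD TaxOrderId INT NULL;
-- a filtered/partial unique index makes searching by the visible number cheap and keeps
-- posted numbers unique, while ignoring the NULLs on drafts
CREATE UNIQUE INDEX IX_Orders_TaxOrderId ON Orders (TaxOrderId) WHERE TaxOrderId IS NOT NULL;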

Database Schema Design: Tracking User Balance with concurrency

In an app that I am developing, we have users who will make deposits into the app and that becomes their balance.
They can use the balance to perform certain actions and also withdraw it. What schema design ensures that a user can never withdraw or spend more than he has, even under concurrency?
For example:
CREATE TABLE user_transaction (
    transaction_id SERIAL NOT NULL PRIMARY KEY,
    change_value BIGINT NOT NULL,
    user_id INT NOT NULL REFERENCES users
);
The above schema can keep track of the balance (select sum(change_value) from user_transaction); however, this does not hold under concurrency, because the user can post 2 requests simultaneously and two records could be inserted over 2 simultaneous database connections.
I can't do in-app locking either (to ensure only one transaction gets written at a time), because I run multiple web servers.
Is there a database schema design that can ensure correctness?
P.S. Off the top of my head, I can imagine leveraging the uniqueness constraint in SQL. By having later transaction reference earlier transactions, and since each earlier transaction can only be referenced once, that ensures correctness at the database level.
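For what it's worth, that P.S. idea could be sketched like this (prev_transaction_id is an assumed column name): each new transaction names its predecessor, and the UNIQUE constraint stops two concurrent requests from both claiming the same predecessor.
-- only one later transaction may ever reference a given earlier transaction
ALTER TABLE user_transaction
    ADD prev_transaction_id INT UNIQUE REFERENCES user_transaction (transaction_id);
-- note: the very first transaction of a user has a NULL predecessor, so it still
-- needs special handling, e.g. an initial seed row per user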
Relying on calculating an account balance every time you go to insert a new transaction is not a very good design - for one thing, as time goes by it will take longer and longer, as more and more rows appear in the transaction table.
A better idea is to store the current balance in another table - either a new table, or in the existing users table that you are already using as a foreign key reference.
It could look like this:
CREATE TABLE users (
    user_id INT PRIMARY KEY,
    balance BIGINT NOT NULL DEFAULT 0 CHECK(balance>=0)
);
Then, whenever you add a transaction, you update the balance like this:
UPDATE users SET balance=balance+$1 WHERE user_id=$2;
You must do this inside a transaction, in which you also insert the transaction record.
Concurrency issues are taken care of automatically: if you attempt to update the same record twice from two different transactions, then the second one will be blocked until the first one commits or rolls back. The default transaction isolation level of 'Read Committed' ensures this - see the manual section on concurrency.
You can issue the whole sequence from your application, or if you prefer you can add a trigger on the user_transaction table so that whenever a record is inserted, the balance is updated automatically.
That way, the CHECK clause ensures that no transactions can be entered into the database that would cause the balance to go below 0.
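Put together, the whole sequence could look like this (a sketch; $1 and $2 stand for the change value and user id supplied by the application):
BEGIN;
-- record the movement
INSERT INTO user_transaction (change_value, user_id) VALUES ($1, $2);
-- apply it to the running balance; a concurrent transaction touching the same user
-- blocks here until we commit, and CHECK(balance>=0) rejects any overdraft
UPDATE users SET balance = balance + $1 WHERE user_id = $2;
COMMIT;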

SQL Server Business Logic: Deleting Referenced Data

I'm curious on how some other people have handled this.
Imagine a system that simply has Products, Purchase Orders, and Purchase Order Lines. Purchase Orders are the parent in a parent-child relationship to Purchase Order Lines. Purchase Order Lines, reference a single Product.
This works happily until you delete a Product that is referenced by a Purchase Order Line. Suddenly, the Line knows it's selling 30 of something...but it doesn't know what.
What's a good way to anticipate the deletion of a referenced piece of data like this? I suppose you could just disallow a product from being deleted if any Purchase Order Lines reference it, but that sounds...clunky. I imagine it's likely that you would keep the Purchase Order in the database for years, essentially welding the product you want to delete into your database.
The parent entity should NEVER be deleted or the dependent rows cease to make sense, unless you delete them too. While it is "clunky" to display old records to users as valid selections, it is not clunky to have your database continue to make sense.
To address the clunkiness in the UI, some people create an Inactive column that is set to True when an item is no longer active, so that it can be kept out of dropdown lists in the user interface.
If the value is used in a display field (e.g. a readonly field) the inactive value can be styled in a different way (e.g. strike-through) to reflect its no-longer-active status.
I have StartDate and ExpiryDate columns in all entity tables where the entity can become inactive or where the entity will become active at some point in the future (e.g. a promotional discount).
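For example, a sketch of the inactive flag described above (column names are assumptions; SQL Server style):
ALTER TABLE Products ADD Inactive BIT NOT NULL DEFAULT 0;
-- dropdown lists only offer products that are still active
SELECT ProductId, Name FROM Products WHERE Inactive = 0;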
Enforce referential integrity. This basically means creating foreign keys between the tables and making sure that nothing "disappears".
You can also use this to cause referenced items to be deleted when the parent is deleted (cascading deletes).
For example, you can create a SQL Server table in such a way that if a PurchaseOrder is deleted, its child PurchaseOrderLines are also deleted.
Here is a good article that goes into that.
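As a sketch of both behaviours mentioned above (SQL Server syntax; constraint and column names are assumptions):
-- a Product referenced by any PurchaseOrderLine can no longer be deleted (default NO ACTION)
ALTER TABLE PurchaseOrderLines
    ADD CONSTRAINT FK_POL_Product FOREIGN KEY (ProductId)
    REFERENCES Products (ProductId);
-- deleting a PurchaseOrder automatically deletes its child lines
ALTER TABLE PurchaseOrderLines
    ADD CONSTRAINT FK_POL_PurchaseOrder FOREIGN KEY (PurchaseOrderId)
    REFERENCES PurchaseOrders (PurchaseOrderId)
    ON DELETE CASCADE;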
It doesn't seem clunky to keep this data (to me at least). If you remove it then your purchase order no longer has the meaning that it did when you created it, which is a bad thing. If you are worried about having old data in there you can always create an archive or warehouse database that contains stuff over a year old or something...
For data like this, where parts of it have to be kept for an unknown amount of time while other parts do not, you need to take a different approach.
Your Purchase Order Lines (POL) table needs to have all of the columns that the product table has. When a line item is added to the purchase order, copy all of product data into the POL. This includes the name, price, etc. If the product has options, then you'll have to create a corresponding PurchaseOrderLineOptions table.
This is the only real way of ensuring that you can recreate the purchase order on demand at any point. It also means that someone can change the pricing, name, description, and other information about the product at any time without impacting previous orders.
Yes, you end up with a LOT of duplicate information in your line item table; but that's okay.
For kicks, you might keep the product id in the POL table for referencing back, but you cannot depend on the product table to have any bearing on the paid-for product...
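A sketch of what such a denormalized POL table might look like (column names and types are assumptions, SQL Server style):
CREATE TABLE PurchaseOrderLines (
    PurchaseOrderLineId INT IDENTITY PRIMARY KEY,
    PurchaseOrderId INT NOT NULL REFERENCES PurchaseOrders (PurchaseOrderId),
    ProductId INT NULL,                  -- kept only for referencing back
    ProductName NVARCHAR(200) NOT NULL,  -- copied from Products at order time
    UnitPrice DECIMAL(18,2) NOT NULL,    -- copied, so later price changes don't rewrite history
    Quantity INT NOT NULL
);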

How can I get the Primary Key id of a file I just INSERTED?

Earlier today I asked this question which arose from A- My poor planning and B- My complete disregard for the practice of normalizing databases. I spent the last 8 hours reading about normalizing databases and the finer points of JOIN and worked my way through the SQLZoo.com tutorials.
I am enlightened. I understand the purpose of database normalization and how it can suit me. Except that I'm not entirely sure how to execute that vision from a procedural standpoint.
Here's my old vision: 1 table called "files" that held, let's say, a file id, a file url, and the appropriate grade levels for that file.
New vision!: 1 table for "files", 1 table for "grades", and a junction table to mediate.
But that's not my problem. This is a really basic Q that I'm sure has an obvious answer: when I create a record in "files", it gets assigned the incremented primary key automatically (file_id). However, from now on I'm going to need to write that file_id to the other tables as well. Because I don't assign that id manually, how do I know what it is?
If I upload text.doc and it gets file_id 123, how do I know it got 123 in order to write it to "grades" and the junction table? I can't do a max(file_id) because if you have concurrent users, you might nab a different id. I just don't know how to get the file_id value without having manually assigned it.
You may want to use LAST_INSERT_ID() as in the following example:
START TRANSACTION;
INSERT INTO files (file_id, url) VALUES (NULL, 'text.doc');
INSERT INTO grades (file_id, grade) VALUES (LAST_INSERT_ID(), 'some-grade');
COMMIT;
The transaction ensures that the operation remains atomic: either both inserts complete successfully or none at all. This is optional, but recommended in order to maintain the integrity of the data.
For LAST_INSERT_ID(), the most recently generated ID is maintained in the server on a per-connection basis. It is not changed by another client. It is not even changed if you update another AUTO_INCREMENT column with a nonmagic value (that is, a value that is not NULL and not 0).
Using LAST_INSERT_ID() and AUTO_INCREMENT columns simultaneously from multiple clients is perfectly valid. Each client will receive the last inserted ID for the last statement that client executed.
Source and further reading:
MySQL Reference: How to Get the Unique ID for the Last Inserted Row
MySQL Reference: START TRANSACTION, COMMIT, and ROLLBACK Syntax
In PHP, to get the automatically generated ID of a MySQL record, use the mysqli->insert_id property of your mysqli object.
How are you going to find the entry tomorrow, after your program has forgotten the value of last_insert_id()?
Using a surrogate key is fine, but your table still represents an entity, and you should be able to answer the question: what measurable properties define this particular entity? The set of these properties is the natural key of your table, and even if you use surrogate keys, such a natural key should always exist and you should use it to retrieve information from the table. Use the surrogate key to enforce referential integrity, for indexing purposes, and to make joins easier on the eye. But don't let them escape from the database.
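For example, assuming the url is what uniquely identifies a file in the questioner's schema:
ALTER TABLE files ADD UNIQUE (url);                  -- declare the natural key
SELECT file_id FROM files WHERE url = 'text.doc';    -- tomorrow, find it by what you know about it
-- and keep the surrogate file_id for joins and foreign keys only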