Database design for a multi-branch POS system - sql

I am building a POS system that supports multiple branches.
The system is going to support these features:
Each store should have a local database for its own inventory list and invoices (a local database avoids downtime during internet failures).
There is a reporting DB that contains information from all shops (inventory, invoices, etc.); the reporting DB can be synchronized asynchronously with the shop DBs.
Each shop has a unique shop code to identify record ownership; the code is also part of the primary key (to avoid primary key collisions between shops).
A shop's system can query the reporting DB for the inventory of other shops (a customer can place an order, the shop can query the full inventory list, and another branch can ship the item).
Currently I am building the system with Java, PostgreSQL, and Cayenne, but I am open to changing the DB or ORM tool if there is a technology limitation.
I have read a lot about replication and clustering, but neither appears to suit my needs.
Any clues on what I should look for? Or should I build the replication at the application layer instead of the DB layer?
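
For what it's worth, the shop-code-as-part-of-the-key idea described above could look like the following in PostgreSQL. This is only an illustrative sketch; all table and column names are my own, not from the original post:

    CREATE TABLE inventory (
        shop_code   varchar(8)  NOT NULL,          -- identifies the owning branch
        item_id     bigint      NOT NULL,          -- id assigned locally by that branch
        description text,
        qty_on_hand integer     NOT NULL DEFAULT 0,
        PRIMARY KEY (shop_code, item_id)           -- composite key: rows from different
                                                   -- shops never collide when merged
    );

    CREATE TABLE invoice (
        shop_code  varchar(8)  NOT NULL,
        invoice_no bigint      NOT NULL,
        created_at timestamptz NOT NULL DEFAULT now(),
        PRIMARY KEY (shop_code, invoice_no)
    );

Because the shop code is part of every key, rows from all branches can be copied into the reporting DB as-is, with no renumbering step.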

The thing that strikes me here is: what happens when shop A sells shop B's inventory while shop B sells the same inventory?
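
(Within a single shop's database, overselling is normally prevented with a conditional update, as in the hypothetical sketch below, which reuses the illustrative schema above. Across asynchronously synchronized branch databases there is no such guard, which is exactly the race this comment points out.)

    -- Decrements stock only if any is left; 0 rows affected means "already sold out".
    UPDATE inventory
       SET qty_on_hand = qty_on_hand - 1
     WHERE shop_code = 'B' AND item_id = 42
       AND qty_on_hand > 0;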
Why can't the application access the other shops' DBs?
Have you read about federated databases? http://dev.mysql.com/doc/refman/5.1/en/federated-description.html
http://en.wikipedia.org/wiki/Federated_database_system
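
The links above describe MySQL's FEDERATED engine; since the question uses PostgreSQL, the closest built-in analogue is the postgres_fdw foreign data wrapper. A minimal sketch, with host names and credentials as placeholders:

    CREATE EXTENSION postgres_fdw;

    CREATE SERVER shop_b FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'shop-b.example.com', dbname 'pos');

    CREATE USER MAPPING FOR CURRENT_USER SERVER shop_b
        OPTIONS (user 'pos_reader', password 'secret');

    -- Shop B's inventory is now queryable as if it were a local table.
    CREATE FOREIGN TABLE shop_b_inventory (
        shop_code   varchar(8),
        item_id     bigint,
        qty_on_hand integer
    ) SERVER shop_b OPTIONS (schema_name 'public', table_name 'inventory');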

Related

Designing a database for a multiple-shop ecom platform

I'm designing the database for a mobile-based ecommerce platform.
Currently, the system has one and only one shop. The database design has the tables below:
User
Product
Category
Review
Order
Now, I want to scale the system up so that it can support multiple shops. Users can sign up as sellers, create their own shops, and manage their own products and orders.
How can I design such a database, starting from the original one that was designed for only one shop?
I have two options in mind, but I have no idea whether they will work:
For each record in a table, I add a shopId field that references the id of the shop it belongs to. I will then index this field to improve query performance (see the sketch after this question).
For each shop, I create a new collection/table to store the data that belongs to it. For example, shop1_product, shop1_order, ... are the tables I would create for shop1.
Are the approaches above valid? Is there any better approach?
P.S.: I'm using MongoDB, and the system doesn't require operations across many shops.
Thank you guys!
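
For reference: option 1 (a shopId on every record, plus an index) is the conventional multi-tenant design, while option 2 (a table per shop) multiplies the schema by the number of shops and quickly becomes hard to maintain. A sketch of option 1 in relational terms, assuming a shop table exists; the same idea, a single index on shopId, applies directly to a MongoDB collection, and all names here are illustrative:

    -- Option 1: every row carries the id of its owning shop.
    CREATE TABLE product (
        id      bigserial PRIMARY KEY,
        shop_id bigint NOT NULL REFERENCES shop (id),
        name    text   NOT NULL
    );
    CREATE INDEX idx_product_shop ON product (shop_id);

    -- Every query for a shop's data filters on the indexed column.
    SELECT * FROM product WHERE shop_id = 42;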

Online Store and Microservices

I am working for a big online store. At the moment our architecture is somewhat odd: we have microservices which all share the same DB (it doesn't work well at all...).
I am considering improving that, but I have some challenges regarding how to make the services independent.
Here is a use case: I have customers, and customers purchase products. Let's say I have three microservices: customer authentication, order management, and product management.
An order is linked to a customer and a product.
Could you describe a solution to the following problems:
How do you make the link between an order and a customer?
Let's say both services share a customer ID; how do you handle data consistency? If you remove a customer on the customer-service side, you end up with inconsistency. If your service has to notify the other services, you end up with tightly coupled services, which to me sounds like exactly what you wanted to avoid in the first place. You could partly avoid that with an event mechanism that notifies everyone, but what about network errors, when you don't even know who was supposed to receive the event?
I want to do a simple query: retrieve the customers from the US that bought product A. Given that 3 million people bought product A and we have 1 million customers in the US, how could you make that reasonably performant? (Our current DB would execute that in a few milliseconds.)
I can't think of any part of our code where we don't have this kind of relation. One solution I can think of is duplicating data, e.g. when a customer purchases something, the order management service stores the customer details and the product details. You end up with massive data replication; I'm not sure that's a good thing, and I would still be worried about consistency.
I couldn't find a paper addressing those issues. What are the different options?
At the moment our architecture is somewhat odd: we have microservices which all share the same DB (it doesn't work well at all...). I am considering improving that, but I have some challenges regarding how to make the services independent.
IMHO the architecture is simpler with one OLTP database for orders, customers, and products, since it allows you to make use of JOINs and stored procedures. It could also be that the DB needs some configuration and tuning TLC rather than a software re-architecture; keep that door open when you consider how to fix performance problems.
How do you make the link between an order and a customer?
In the orders table, have a column for customer_id. The customer_id field in the orders table would be a foreign key to the id field in the customers table. This will give you the best performance.
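
A minimal sketch of that link (column names assumed):

    CREATE TABLE orders (
        id          bigserial   PRIMARY KEY,
        customer_id bigint      NOT NULL REFERENCES customers (id),
        product_id  bigint      NOT NULL REFERENCES products (id),
        created_at  timestamptz NOT NULL DEFAULT now()
    );

    -- Index the foreign key so customer -> orders lookups stay fast.
    CREATE INDEX idx_orders_customer ON orders (customer_id);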
You can do either periodic cleanup or event-based cleanup of deleted users (and their orders). But please make sure these old orders and customers are stored somewhere, such as archive tables or a back-end data warehouse where reports and analysis (OLAP) can be run on the data.
Let's say both services share a customer ID; how do you handle data consistency? If you remove a customer on the customer-service side, you end up with inconsistency. If your service has to notify the other services, you end up with tightly coupled services, which to me sounds like exactly what you wanted to avoid in the first place. You could partly avoid that with an event mechanism that notifies everyone, but what about network errors, when you don't even know who was supposed to receive the event?
There are various ways this can be done. As mentioned, you can either create an event to deal with customer deletions or do periodic DB cleanups. But one thing is certain: the orders service does not NEED to be notified when this cleanup is done, unless you want it to be; that is not a need, but it could be a want if you want order culling to be driven by the orders service. The naive way to do this is to create a stored procedure that takes a customer_id (or a list of customer_ids) as input and deletes all orders matching that customer_id from the orders table. Please make sure to back up the data for future analysis and auditing.
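
A hypothetical version of that naive stored procedure in PostgreSQL; the archive table is the "somewhere" mentioned earlier, and all names are illustrative:

    CREATE OR REPLACE FUNCTION purge_customer_orders(p_customer_id bigint)
    RETURNS void AS $$
    BEGIN
        -- Keep a copy for auditing and later analysis before deleting.
        INSERT INTO orders_archive
            SELECT * FROM orders WHERE customer_id = p_customer_id;
        DELETE FROM orders WHERE customer_id = p_customer_id;
    END;
    $$ LANGUAGE plpgsql;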
I want to do a simple query: retrieve the customers from the US that bought product A. Given that 3 million people bought product A and we have 1 million customers in the US, how could you make that reasonably performant? (Our current DB would execute that in a few milliseconds.)
Again, this is why it makes sense to keep the customers, products, and orders tables in the same DB: the query can more easily be made to execute quickly when they live together. You can take advantage of your DB's design and optimization tools, and of EXPLAIN/DESCRIBE output, to tweak your tables and indexes. If you are using MySQL, you can also change DB engines (I recommend the TokuDB engine).
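
In one DB, the example query is a plain join; a sketch, assuming a country column on customers, with EXPLAIN used to check that the indexes are actually hit:

    CREATE INDEX idx_customers_country ON customers (country);
    CREATE INDEX idx_orders_product   ON orders (product_id);

    EXPLAIN
    SELECT DISTINCT c.*
      FROM customers c
      JOIN orders    o ON o.customer_id = c.id
     WHERE o.product_id = 123          -- id of product A
       AND c.country    = 'US';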
In the end, my main suggestion is to stay in one DB for OLTP, as you will get more efficiency and performance for the same amount of hardware. Splitting the DB into multiple DBs carries an overhead cost for your code, architecture, network, and CPUs. The important thing is that your DB can scale horizontally and is finely tuned for the queries being run on it. Move OLAP to its own DB; this can be done with ETL that moves data from the OLTP DB to the OLAP DB. The query in your example sounds like something that would be done in an OLAP DB. For the OLAP database you can use a columnar DB, like Vertica or something equivalent that can easily scale horizontally. The important point is that by splitting OLAP from OLTP you can tune and configure each for its respective purpose.
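
A minimal sketch of that ETL step (an incremental watermark copy; all table names are illustrative):

    -- Run on a schedule: copy orders created since the last load into the OLAP store.
    INSERT INTO olap.orders_fact (order_id, customer_id, product_id, created_at)
    SELECT id, customer_id, product_id, created_at
      FROM orders
     WHERE created_at > (SELECT last_loaded_at FROM olap.load_watermark);

    -- Then advance olap.load_watermark.last_loaded_at to the newest created_at copied.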
Whether you run your customer, orders, and products services as a monolith (my recommendation) or as microservices, the DB design should not change. What will force the queries in your code to change is splitting the OLTP DB into multiple DBs, because then you can no longer do simple JOINs or stored procedures.
This is what Martin Fowler calls Monolith First: http://martinfowler.com/bliki/MonolithFirst.html

How do I create a supplier in Microsoft Dynamics CRM 2015?

Microsoft CRM 2015... a great little customer management system, but why did they leave suppliers out?
Anyway, how do I create and manage suppliers using Dynamics CRM?
Or do you need different software for that?
Cheers,
In previous versions I have created custom entities for Vendor and Supplier. Here is some of the setup I did; of course, it always depends on business needs.
1:N Supplier to Product relationship
Added fields for supplier details
1:N Contact to Supplier for the POC at the supplier's organization
1:N User to Supplier for POC within our organization
If you are not using the out-of-the-box quote-to-order-to-contract process, then you will need to accommodate product inventory and qty-on-hand limits.
Used Goals and Rollups to set minimum inventory levels and restock alerts
Make sure finance is involved in the design of this process, as they may be particular about how reordering is handled. While users may want it automated, there might be internal requirements, or specifics in contracts with suppliers, that require some additional review.
Created relationships for Contract and Opportunity records
This was needed to improve reporting, since you cannot report on second-tier relationships. For example: how many opportunities are related to this particular supplier who is late delivering? If the supplier relationship is not on the Opportunity record itself, you cannot get to it.
These were the key items I thought mattered. There is always more, but you should not need to purchase separate supplier-tracking software.

SQL vs. NoSQL database for 'tags-heavy' CRM application

I'm building a talent-management CRM application and I'm having trouble choosing between a SQL and a NoSQL database for my data.
The application will only have a few 'core' entities (Person, Job, Company, Interview) and will rely heavily on 'tagging' of those entities. You can add Tags and Notes to a Person, a Job, or a Company, and then sort/search data by those tags.
What I have learned about NoSQL is that I can just have a Person object (document) with an array of Tags and Notes, whereas in SQL I would need separate Tags and Notes tables and construct joins to gather all my data for a Person.
Could anyone give me some pointers on what would be the way to go for my particular scenario?
Thanks!
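
For comparison, the SQL side of that trade-off is one extra join table per taggable entity (or a single polymorphic one); a sketch for Person, with all names assumed:

    CREATE TABLE tag (
        id   bigserial PRIMARY KEY,
        name text NOT NULL UNIQUE
    );

    CREATE TABLE person_tag (
        person_id bigint NOT NULL REFERENCES person (id),
        tag_id    bigint NOT NULL REFERENCES tag (id),
        PRIMARY KEY (person_id, tag_id)
    );

    -- Everyone tagged 'java':
    SELECT p.*
      FROM person p
      JOIN person_tag pt ON pt.person_id = p.id
      JOIN tag        t  ON t.id = pt.tag_id
     WHERE t.name = 'java';

Note that PostgreSQL can also store tags as an array or JSONB column with a GIN index, which gives the document-style shape without leaving SQL.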
Our ERP system is based on UniData (NoSQL). It is okay for performing the standard tasks needed to do business, like entering customers, creating sales orders, invoicing, etc., but when it comes to creating reports that were not originally foreseen, it is quite cumbersome. The system only lets you create reports off one table; if you need data from another table, you have two options: 1. create what is called a virtual attribute for every field you need to look up from a different table, or 2. write a UniBasic program to retrieve the data needed.
To meet most of our business needs on the reporting front, it is more beneficial for us to export the data to SQL and then build the reports there. The result is that the reports run quicker from SQL, and most of the time a reporting tool can be used to create them; this can usually be done by a power user, as opposed to someone who needs quite a high level of programming ability just to build a report.
It would have been nice if it had already been in SQL in the first place.
But maybe some other NoSQL database has better functionality than UniData. That said, third-party support for NoSQL database engines usually comes at a higher premium, because there are fewer specialists available than for SQL engines.

Local SQL database interface to cloud database

Excuse me if the question is simple. We have multiple medical clinics, each running their own SQL database EHR.
Is there any way I can interface each local SQL database with a cloud system?
I essentially want to use the data of the patient currently being consulted to generate a pathology request that links to a cloud database (Google App Engine, perhaps).
As a medical student / software developer, this project of yours interests me greatly!
If you don't mind me asking, where are you based? I'm from the UK and unfortunately there's just no way a system like this would get off the ground as most data is locked in proprietary databases.
What you're talking about is fairly complex anyway; whatever country you're in, I assume there would have to be a lot of checks and security around any cloud system that dealt with patient data. Theoretically, though, what you would ideally do is create an online database (cloud, hosted, intranet, etc.) and scrap the local databases entirely.
You would then have one 'pool' of data each clinic can pull information from (i.e. ALL records for patient #3563). They could then edit that data and/or insert new records and SAVE them, exporting them back to the main database.
If there is a need to keep certain information private to one clinic only, this could still be achieved in one database in a number of ways; or you could retain parts of the local database and have them merge with the cloud data as they're requested by the clinic.
This might be a bit outdated, but you should check out https://www.firebase.com/. It would let you do what you want fairly easily. We just did this for a client in exactly the same business you are in.
Basically, Firebase lets you work with a central database in the cloud that is automatically synchronized with all of its front ends. It even handles losing the connection to the server automagically. It's the best solution I've found so far for keeping several systems running against one single cloud database.
We used to have our own backend that would try its best to sync changes, but you need to be really careful with inter-system unique IDs for your tables (i.e. creating a new user at one branch must not yield an id that already exists at another branch or in the central database). It becomes cumbersome very quickly.
CakePHP can generate this kind of unique ID pretty easily and automatically, but you still have to work on syncing all the local databases with the central repository.
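
As an aside on the inter-system unique ID problem mentioned above: the usual fix is UUIDs instead of sequential ids, so rows created independently at any clinic cannot collide when synced to the central database. A minimal PostgreSQL sketch (CakePHP's char(36) UUIDs are the same idea):

    -- gen_random_uuid() ships with the pgcrypto extension (built in from PostgreSQL 13).
    CREATE EXTENSION IF NOT EXISTS pgcrypto;

    CREATE TABLE patient (
        id   uuid PRIMARY KEY DEFAULT gen_random_uuid(),  -- safe to generate at any clinic
        name text NOT NULL
    );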