Handling mapping tables to an external API - sql

Our data model has users. We use an external API (I'll call it Z) to handle payments. We create users in Z and have a mapping table that links our internal IDs to Z IDs. This works fine when there is a 1-to-1 association between "environments."
The problem is that Z provides only one testing environment, called "staging," while we have multiple environments: "sandbox", "staging", each dev's local machine, etc. Ideally we could point all of these environments at Z's staging, but then the mapping tables would be wrong in each environment: each environment has a different user base, and emails could clash and point to the wrong Z IDs. Z provides no delete (or archive) functionality either.
How can we manage those mapping tables in this situation?

This is a common problem. It happens not only when dealing with external systems, but very often internal systems as well.
You really only have two choices. Disallow contact with the external system except from your Staging environment, or allow contact with the external system from multiple environments.
Since you want to do the latter, you have to accept that your mapping tables in each of your environments will not match ID for ID with the external Staging environment. This shouldn't be a problem unless you have some requirement that you have the exact same number of IDs in your mapping table as in the external environment. If this is your case, then you are stuck with option 1.
More likely, there is no real need that every ID in the external environment have a corresponding entry in the same mapping table. In this case, you are really only concerned that every ID in a mapping table have a corresponding ID in the external Staging environment.
You can prevent collisions by creating the ID in the external system before creating it in your mapping table. If the ID is already taken, require the user to pick a different one.
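For example, here is a minimal sketch of such a mapping table and flow (table and column names are hypothetical and the dialect is generic SQL, none of it from the original post): the unique constraint on the external ID makes a collision fail loudly, and the user is created in Z first, then recorded locally.

```sql
-- Hypothetical mapping table; assumes an existing users(id) table.
CREATE TABLE z_user_mapping (
    internal_user_id BIGINT      NOT NULL PRIMARY KEY REFERENCES users (id),
    z_user_id        VARCHAR(64) NOT NULL UNIQUE,   -- ID returned by the Z API
    z_environment    VARCHAR(32) NOT NULL DEFAULT 'staging',
    created_at       TIMESTAMP   NOT NULL DEFAULT CURRENT_TIMESTAMP
);

-- Flow: 1) call Z to create the user (outside SQL); 2) only if that succeeds,
-- record the returned ID here. A duplicate z_user_id violates the UNIQUE
-- constraint instead of silently pointing two local users at the same Z record.
INSERT INTO z_user_mapping (internal_user_id, z_user_id)
VALUES (42, 'z_abc123');
```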

Related

Two shops and sync clients between them with passwords

Is it possible to sync customers between two separate PrestaShop 1.7 shops? I don't want to use the multistore option. Is there a module for that, or maybe some database operations?
Customers are stored in a single database table (ps_customer), so if you can write a synchronization routine between the two shops' tables you should be able to achieve that.
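As a rough illustration only (the shop_a/shop_b database names, the cross-database query on one MySQL server, and the trimmed column list are all my assumptions), a one-way copy of customers that don't yet exist in the target shop could look something like this; a real routine would also have to cover updates, addresses, and the id_customer considerations discussed below.

```sql
-- Hypothetical one-way sync from shop_a to shop_b (MySQL, both schemas on the
-- same server). Column list is trimmed; ps_customer has more NOT NULL columns
-- that a real routine would need to copy or derive. id_customer is not copied,
-- so shop_b assigns its own auto_increment value -- see the id notes below.
INSERT INTO shop_b.ps_customer (firstname, lastname, email, passwd, date_add, date_upd)
SELECT a.firstname, a.lastname, a.email, a.passwd, a.date_add, NOW()
FROM shop_a.ps_customer a
LEFT JOIN shop_b.ps_customer b ON b.email = a.email
WHERE b.id_customer IS NULL;
```

Copying passwd raw only works because of the shared cookie_key requirement mentioned next.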
There are several additional considerations though:
Both stores must have the same "cookie_key" set in the site parameters for the same passwords to be valid in both shops, so you'll have to start with at least one empty store.
Customers are linked to other data through their id_customer auto_increment values (addresses, orders, third-party modules, etc.), so you'll need to know what you're doing and make sure the two shops can't have conflicting customer ids (e.g. you can start one of the two shops with a very high id_customer). I'm also not sure whether you need to handle address synchronization as well; that would add some complexity.
I hope I've given you some good starting points, but I would stick with the native "multishop" PS feature for this; it would be far easier, despite still having a lot of bugs :)

Extending a set of existing tables into a dynamic client defined structure?

We have an old repair database that has a lot of relational tables and it works as it should, but I need to update it to be able to handle different clients (areas); currently it handles a single client only.
So I need to extend the tables and the SQL statements so that, for example, user A can log in and see only his own system, and user B will have his own system too.
Is it correctly understood that you wouldn't create new tables for each client, but just add a clientID to every record in every (base) table and then filter by clientID in all SQL statements to achieve multiple clients?
Is this also something that would work (and how is it done) on hosted solutions? I'm worried about performance if that's an issue; say I had 500 clients (I won't, but from a theoretical viewpoint).
The normal approach is to add a client key to each table where appropriate. Many tables don't need one -- reference tables, for example.
This is preferred for many reasons:
You have the data for all clients in one place, so you can readily answer a question such as "what is the average X for each client?".
If you change the data structure, then it affects all clients at the same time.
Your backup and restore strategy is only implemented once.
Your optimization is only implemented once.
This is not always the best solution. You might have requirements that specify that data must be separated -- in which case, each client should be in a separate database. However, indexes on the additional keys are probably a minor consideration and you shouldn't worry about it.
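To make the idea concrete, here is a minimal sketch (table and column names are hypothetical): the client key appears in each base table, leads the key/index, and every statement filters on it.

```sql
-- Hypothetical example of the "clientID on every base table" pattern.
CREATE TABLE repair_order (
    client_id  INT          NOT NULL,
    order_id   INT          NOT NULL,
    device     VARCHAR(100) NOT NULL,
    status     VARCHAR(20)  NOT NULL,
    PRIMARY KEY (client_id, order_id)          -- client key leads the key/index
);

-- Every query is filtered by the client derived from the logged-in user.
SELECT order_id, device, status
FROM repair_order
WHERE client_id = 42;                          -- 42 = the current user's client

-- Cross-client reporting stays a single query over one table.
SELECT client_id, COUNT(*) AS open_orders
FROM repair_order
WHERE status = 'open'
GROUP BY client_id;
```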
This question has been asked before. The problem with adding the key to every table is that you say you have a working system, and this means every query needs to be updated.
Probably the easiest option is to create a new database for each client, so that the only thing you need to change is the connection string. It also means that automated query tools, for example, can work without any risk of cross-client data leakage.
It also allows you to back up, transfer, or delete a single client easily.
There are of course pros and cons to this approach, but it will simplify the development effort. Also remember that if you plan to spin it up in a cloud environment then spinning up databases like this is also very easy.
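A sketch of what that looks like operationally (assuming SQL Server syntax; the database names are made up): each client is one database, so provisioning, backing up, or dropping a single client is a one-liner, and the application simply swaps the database name in its connection string.

```sql
-- Hypothetical per-client databases (SQL Server syntax assumed).
CREATE DATABASE Repair_ClientA;
CREATE DATABASE Repair_ClientB;

-- Back up or remove one client without touching the others.
BACKUP DATABASE Repair_ClientA TO DISK = 'D:\backups\Repair_ClientA.bak';
DROP DATABASE Repair_ClientB;
```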

Database design ideas for "master and slave" nodes

We are currently in the design phase of a product. The idea is that we have a "master" database (as I'm calling it) which contains all user information (users, registration, roles, licences, etc.) and a "slave" database (again, my name for it) which contains the main application data.
Some columns from the master database will be used in the slave database; e.g. userid will be used everywhere in the slave database. There will be multiple versions of the slave database in different tiers (depending on the customer's subscription), so some customers will have a dedicated slave database for their application.
Some data/tables/columns from the slave will also be used in the master.
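To illustrate the structure being described (a hedged sketch; all table and column names are my own invention), the slave database carries the master's userid as a plain column, since a real foreign key cannot span the two databases:

```sql
-- Master database: users, roles, licences (hypothetical names).
CREATE TABLE app_user (
    user_id BIGINT       NOT NULL PRIMARY KEY,
    email   VARCHAR(255) NOT NULL UNIQUE,
    role    VARCHAR(50)  NOT NULL
);

-- Slave database (possibly one per customer tier): application data keyed by
-- the master's user_id, but with no cross-database FOREIGN KEY -- integrity
-- has to be checked by the application or a scheduled consistency job.
CREATE TABLE work_item (
    work_item_id BIGINT NOT NULL PRIMARY KEY,
    user_id      BIGINT NOT NULL,   -- copied from master.app_user, not enforced by the DB
    payload      TEXT
);
```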
How do we manage this scenario so that we have maximum referential integrity (I know it will not be possible all the time) without using linked servers? (We don't want to use linked servers because, with an improper design, they can be abused and hurt performance as a result.)
Or is this a bad idea? Should we just have a single database design (no master/slave) with different nodes, and put each customer's data in a different node depending on their subscription? The problem I see with this is that the registration/user tables are then fragmented across different database nodes, so e.g. userA will be in database01 and so on.
Any idea?

Table representation of different kinds of multicast machines and their connections

I am wondering what the best way is to organize the following into a relational DB structure (specifically an Oracle DB).
I have to represent many hosts; every host streams (multicast, with and without source filtering on the input) one or more streams, each containing an unknown number of components.
There are many kinds of machines that apply different kinds of modification based on some specific configuration (which has to be stored), and almost every combination is valid. Typically, based on that configuration, a machine will take some components, possibly change them in some way, and re-stream them; other machines will just regroup multiple multicast components (so they have multiple IP addresses).
My users will probably be looking for information about a specific host or component, but many views will have to be provided, navigating the data in every "relation direction".
I've come up with a lot of ideas, like creating a generic "host" table with two fields containing the specialized table's name and row (obviously the specialized tables differ a lot from each other). But this clashes with foreign keys (only one parent table can be defined), so maybe it is a better idea to make the specialized tables point to the host table with a foreign key; but then that relation has to be "reverse navigable" (possible, but it feels like a hack).
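That second idea is the usual supertype/subtype (class-table-inheritance) layout. A hedged sketch in Oracle-style SQL, with invented table names: every specialized table's primary key is also a foreign key to the common host table, so you can navigate from either side.

```sql
-- Hypothetical supertype/subtype layout: one common "host" table, one table
-- per machine kind, sharing the same primary key value.
CREATE TABLE host (
    host_id   NUMBER        NOT NULL PRIMARY KEY,
    host_name VARCHAR2(100) NOT NULL,
    host_kind VARCHAR2(30)  NOT NULL   -- discriminator: which subtype table to look in
);

CREATE TABLE transcoder_host (
    host_id NUMBER NOT NULL PRIMARY KEY REFERENCES host (host_id),
    config  CLOB                       -- kind-specific configuration
);

CREATE TABLE regrouper_host (
    host_id    NUMBER NOT NULL PRIMARY KEY REFERENCES host (host_id),
    max_inputs NUMBER NOT NULL
);
```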
Also, for the multicast streams I've come up with two alternatives. Since it is a many-to-many relationship it will need a third, junction table, but the problem is what to put in that table so it stays consistent when a multicast address or a machine IP changes.
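One common way to sidestep that consistency problem (a sketch under my own assumptions, continuing the invented names above) is to key the junction table on surrogate IDs rather than on the IP addresses themselves, so an address change touches exactly one row:

```sql
-- Multicast groups identified by a surrogate key; the address is just an
-- attribute that can change without breaking any relationships.
CREATE TABLE multicast_group (
    group_id   NUMBER       NOT NULL PRIMARY KEY,
    group_addr VARCHAR2(45) NOT NULL UNIQUE
);

-- Junction table: which host consumes/produces which group. Only surrogate
-- keys are stored, so updating host or group addresses never touches it.
CREATE TABLE host_multicast (
    host_id   NUMBER      NOT NULL REFERENCES host (host_id),
    group_id  NUMBER      NOT NULL REFERENCES multicast_group (group_id),
    direction VARCHAR2(3) NOT NULL CHECK (direction IN ('in', 'out')),
    PRIMARY KEY (host_id, group_id, direction)
);
```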
I'm really lost here, as I can't even find good keywords to point me to an example or discussion of an organization with this kind of complexity.

Multi-tenancy with SQL/WCF/Silverlight

We're building a Silverlight application which will be offered as SaaS. The end product is a Silverlight client that connects to a WCF service. As the number of clients is potentially large, updating needs to be easy, preferably so that all instances can be updated in one go.
Not having implemented multi tenancy before, I'm looking for opinions on how to achieve
Easy upgrades
Data security
Scalability
Three different models to consider are listed on MSDN:
Separate databases. This is not easy to maintain as all schema changes will have to be applied to each customer's database individually. Are there other drawbacks? A pro is data separation and security. This also allows for slight modifications per customer (which might be more hassle than it's worth!)
Shared Database, Shared Schema. A TenantID column is added to each table. Ensuring that each customer only ever sees their own data is the risky part. Easy to maintain and scales well (?).
Shared Database, Separate Schemas. Similar to the first model, but each customer has its own set of tables in the database. Hard to restore backups for a single customer. Maintainability otherwise similar to model 1 (?).
Any recommendations on articles on the subject? Has anybody explored something similar with a Silverlight SaaS app? What do I need to consider on the client side?
It depends on the type of application and the scale of the data. Each option has drawbacks.
1a) Separate databases + single instance of WCF/client. Keeping everything in sync will be a challenge. How do you upgrade X number of DB servers at the same time, what if one fails and is now out of sync and not compatible with the client/WCF layer?
1b) "Silos", separate DB/WCF/Client for each customer. You don't have the sync issue but you do have the overhead of managing many different instances of each layer. Also you will have to look at SQL licensing, I can't remember if separate instances of SQL are licensed separately ($$$). Even if you can install as many instances as you want, the overhead of multiple instances will not be trivial after a certain point.
3) Basically same issues as 1a/b except for licensing.
2) Best upgrade/management scenario. You are right that maintaining data isolation is a huge concern (1a technically shares this issue at a higher level). The other issue is that if your application is data-intensive you have to worry about data scalability. For example, if every customer is expected to have tens or hundreds of millions of rows of data, you will start to run into query-performance issues for individual customers because of the total customer base's volume. Clients are more forgiving of slowdowns caused by their own data volume; being told it's slow because the other 99 clients' data is large is generally a no-go.
Unless you know for a fact you will be dealing with huge data volumes from the start I would probably go with #2 for now, and begin looking at clustering or moving to 1a/b setup if needed in the future.
We also have a SaaS product and we use solution #2 (Shared DB / Shared Schema with TenantId). Some things to consider for shared DB / same schema for all:
As mentioned above, a high volume of data for one tenant may affect the performance of the other tenants if you're not careful; for starters, index your tables properly/carefully and never do queries that force a table scan. Monitor query performance and at least plan/design to be able to partition your DB later on, based on some criteria that make sense for your domain.
Data separation is very, very important: you don't want to end up showing a piece of data to a tenant that belongs to another tenant. Every query must have a WHERE TenantId = ... in it, and you should be able to verify/enforce this during dev (see the sketch after this list).
Extensibility of the schema is something that solutions 1 and 3 may give you, but you can work around it by designing a way to extend the fields associated with the documents/tables in your domain where it makes sense (i.e. metadata for tables, as the MSDN article mentions).
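A minimal sketch of the shared-schema pattern described above (table and column names are made up, not taken from the answer's product): TenantId leads the primary key so per-tenant queries are cheap, and every data-access call is shaped so the filter cannot be skipped.

```sql
-- Hypothetical shared-schema table: TenantId leads the primary key so that
-- per-tenant queries are cheap and a missing filter is easy to spot in plans.
CREATE TABLE Invoice (
    TenantId  INT           NOT NULL,
    InvoiceId INT           NOT NULL,
    Total     DECIMAL(18,2) NOT NULL,
    CreatedAt DATETIME      NOT NULL,
    PRIMARY KEY (TenantId, InvoiceId)
);

-- Every data-access call goes through queries shaped like this, with the
-- tenant id supplied by the WCF layer from the authenticated caller, never
-- by the Silverlight client itself.
DECLARE @TenantId INT = 42;
SELECT InvoiceId, Total, CreatedAt
FROM Invoice
WHERE TenantId = @TenantId;
```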
What about solutions that provide an out of the box architecture like Apprenda's SaaSGrid? They let you make database decisions at deploy and maintenance time and not at design time. It seems they actively transform and manage the data layer, as well as provide an upgrade engine.
I had a similar case, but my solution takes advantage of both approaches.
Where the data lives and how it is stored is a question tenants will ask. As a tenant, of course, I don't want my data to be shared; I want it isolated and secure, and I want to be able to get it any time I want.
Certain data can still be shared, e.g. a company list. So there is a global database plus one database per tenant; just make sure the tenant database schema is locked down in operation, and have a procedure to update all tenant databases at once.
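A rough sketch of such an "update all tenant databases at once" procedure (my own illustration, assuming SQL Server, a GlobalDb.dbo.Tenant table that records each tenant database's name, and the hypothetical Invoice table from the earlier sketch):

```sql
-- Hypothetical migration runner: loop over the tenant databases registered in
-- the global database and apply the same schema change to each of them.
DECLARE @db sysname, @sql nvarchar(max);

DECLARE tenant_cursor CURSOR FOR
    SELECT DatabaseName FROM GlobalDb.dbo.Tenant;

OPEN tenant_cursor;
FETCH NEXT FROM tenant_cursor INTO @db;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Switch context to the tenant database inside the dynamic batch and
    -- apply the same DDL everywhere.
    SET @sql = N'USE ' + QUOTENAME(@db) + N';
                 ALTER TABLE dbo.Invoice ADD Notes nvarchar(500) NULL;';
    EXEC sys.sp_executesql @sql;
    FETCH NEXT FROM tenant_cursor INTO @db;
END
CLOSE tenant_cursor;
DEALLOCATE tenant_cursor;
```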
In any case, in a SaaS model everything is delivered as a server/web service, so no matter where the database lives, the data reaches the client as a service and is only rendered by the client GUI.
Thanks
Existing answers are good. You should look deeply into the issue of upgrading and managing multiple databases. Without knowing the specific app, it might turn out easier to have multiple databases and not have to pay the extra cost of tracking the TenantID. This might not end up being the right decision, but you should certainly be wary of the dev cost of data sharing.