Our company has 4 business entities, so I created 4 separate databases, one per company, for our Human Resources service. Let's call them Group, Company1, Company2, and Company3. Although they are all different databases, the tables and stored procedures (SPs) are almost identical everywhere except in Group. Group is a database that receives summarized data from each company. I designed it this way for security purposes, but now it is hard to manage: when I change an SP for Company1, I have to make the same change for Company2 and Company3. All 4 databases have the same employee master table, so when someone joins one of the companies, that person also has to be added to the employee table of the Group DB by a trigger or SP, which wouldn't be necessary if everyone were in one table in a single database. (By the way, I try not to use views in the Group DB.) I have to keep adding SPs, triggers, and jobs just to keep the Group DB in sync with the other three company DBs.
Now I have to set up two more companies, so I need to decide whether to keep creating the same tables and SPs again in separate databases or not.
What is your opinion on this type of database design? Would you prefer one DB for all companies or 4 separate databases? I'd like to hear your opinion. Thanks.
Put all the common stuff, including sps, in one db, then have a db for whatever is specific to each company. You should never have to update anything in more than one place.
You should put it all into one database. Reasons for that are numerous, but you already bumped into one - shared employees table...
Security can be tailored so the specific groups of people can access specific tables, and so on.
Well, the road is long, take that first step :)
You have some very strong cases for using a single database to store all the data. From your description it sounds like you could design it in a way that would not only make much of your deployment work easier, but also simplify any aggregate reporting across the group.
You'll have to weigh those benefits against your information security requirements. I know that in certain environments separate databases are a necessity for security, dictated by law or company policy. If you don't have any such strict security requirements, I recommend the single-database approach.
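To make that concrete, here is a minimal sketch of what a consolidated employee master could look like (all table and column names here are illustrative, not taken from the actual schema):

CREATE TABLE Company (
    CompanyId   INT PRIMARY KEY,
    CompanyName VARCHAR(100) NOT NULL
);

CREATE TABLE Employee (
    EmployeeId INT PRIMARY KEY,
    CompanyId  INT NOT NULL REFERENCES Company(CompanyId),
    FullName   VARCHAR(100) NOT NULL,
    HireDate   DATE
);

-- The Group-level summary becomes an ordinary query instead of a
-- trigger/SP/job pipeline between databases:
SELECT CompanyId, COUNT(*) AS HeadCount
FROM Employee
GROUP BY CompanyId;

With this layout, an SP changed once applies to every company, and nothing has to be copied into a separate Group DB.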
I'm looking for a second opinion, ideally from someone with more experience with SQL.
So I have a database that looks like this:
Company has multiple Clients, which have multiple Projects, which have multiple Tasks, etc.
In my application a user is assigned to a company and cannot query information that isn't tied to it. So whenever a user tries to retrieve a Client/Project/Task/Punch, I need to make sure the query contains a clause like WHERE companyID = [user's company id]. This adds a lot of joins when I need to fetch a Punch, since I have to go up the chain to check that the company is the same as the user's.
Since a client/project/task/punch will never switch from one company to another, I was wondering if there are any red flags to adding a companyID field to project/task/punch in order to simplify the querying?
I'm using PostgreSQL
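For illustration, here is roughly what the two versions of the Punch query would look like (simplified, assumed table and column names):

-- Without the extra column: walk up the chain with joins.
SELECT p.*
FROM punch p
JOIN task t     ON t.id = p.task_id
JOIN project pr ON pr.id = t.project_id
JOIN client c   ON c.id = pr.client_id
WHERE c.company_id = 42;  -- the user's company id

-- With company_id copied down onto punch: a single filter.
SELECT p.*
FROM punch p
WHERE p.company_id = 42;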
If I understand correctly, what you are building is a multitenant system, where your companies are the tenants. If that is the case then there are no red flags - on the contrary, your main concern is to isolate data belonging to different companies in the most efficient and most secure way.
I find this old blog post to be a basic but clear introduction to multitenancy.
The recommended way to go was then, and is today, the third option: one DB, many schemas. I'm no Postgres expert, but I believe it supports that option quite well.
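A minimal sketch of that third option in PostgreSQL (schema and table names are illustrative):

CREATE SCHEMA company1;
CREATE SCHEMA company2;

CREATE TABLE company1.client (id SERIAL PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE company2.client (id SERIAL PRIMARY KEY, name TEXT NOT NULL);

-- Point a session at one tenant; unqualified table names then
-- resolve to that tenant's schema:
SET search_path TO company1;
SELECT * FROM client;  -- reads company1.client

Each tenant's data stays isolated in its own schema, yet everything lives in one database that can be administered and backed up as a unit.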
We have a need to know which database architecture makes more sense to use and why.
We have a list of customers who are all going to use the same table structure (with very few exceptions).
We would have about 10 thousand customers, each of whom might have about 50 thousand products.
The processing on products may not be the same for each customer and we would also want to provide a plan where customers could have API access to their data.
Our customers sell products, and their SQL table structure would all have columns such as:
Feed_ID
Product_ID
Product_Description
Price
Weight
etc...
The Feed_ID is used to differentiate the origin of these products and will be unique for each customer - of course.
The 3 choices of relational table structure that we have thought about:
Each customer has its own database and, in that database, 1 table per product feed.
All customers are hosted under 1 unique database in which each customer has 1 table per feed - in that case, 1 customer can have 2 tables if he has 2 different product feeds.
All customers are hosted under 1 unique database; HOWEVER, in this 3rd solution, we have only 1 table that hosts all product feeds of all customers.
Which solution would you use, and why do you think the solution you selected is better?
Thank you.
You haven't quite provided enough information. Under almost all circumstances (see below for exceptions), you want one set of tables for all customers. Here are some reasons:
Performance. A proliferation of tables means the data is spread through more data pages, so you have lots of partially filled data pages. The database is bigger and processing is slower.
Coding efficiency. If the tables for a customer all have different names, then all the code is dynamic SQL. That is harder to maintain.
Maintenance. Adding a column or index is very arduous when there are zillions of similar tables.
Analytics. When similar data is spread through tables, it is really hard to answer questions such as "Which client has the most products?".
Security. Granting access permissions on a single set of tables is less error prone than on zillions of tables.
And no doubt, I've missed a few reasons. You can see that it is almost a no-brainer to have a single database with a small number of tables.
There are situations where separate databases might be called for. I cannot think of a good reason to have separate tables for each client in a single database.
The number one reason would be security and isolation. There might be a business or even legal reason for storing data into "physically" separate databases, to further minimize the possibility of one client seeing another client's data (accidentally or through hacking).
Another reason would be if clients had bespoke solutions. That is, there are per-client customizations. I would still be inclined to try to put this into a single database solution, but that might not be possible.
Related to this would be an application that you intend to support both in the cloud and on premises. In that case, separate databases per client would probably simplify the application design.
But, in general, you would store the data in a pretty normalized single database, with one table per entity.
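Concretely, using the columns from the question plus a customer key (the key column itself is an assumption), the single-table option could look like this:

CREATE TABLE Product (
    Customer_ID         INT NOT NULL,
    Feed_ID             INT NOT NULL,
    Product_ID          INT NOT NULL,
    Product_Description VARCHAR(500),
    Price               DECIMAL(10, 2),
    Weight              DECIMAL(10, 3),
    PRIMARY KEY (Customer_ID, Feed_ID, Product_ID)
);

-- Per-customer access and cross-customer analytics are both plain queries,
-- e.g. "Which customer has the most products?":
SELECT Customer_ID, COUNT(*) AS ProductCount
FROM Product
GROUP BY Customer_ID
ORDER BY ProductCount DESC;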
I think having separate tables (or ideally schemas) for each customer is not that bad an idea. In addition to the benefits you mentioned, this way you can scale your database easily, and you can give customers full control of their data if they want it.
Regarding the drawbacks:
Managing it is more complicated but not as bad either - you can write a script to create columns/tables/indexes/etc. You don't have to do it manually.
It will be a challenge to perform analytics on 10K tables, although it's not the best idea to mix it with production anyway. I'd create a separate database (or server) for analytics, running some overnight job to update reporting tables.
Also, if your table is going to have hundreds of millions of rows (10K × 50K?), it's a good idea to split it into smaller pieces regardless of which option you choose. If not by customer, then by region or some other larger grouping (assuming you are building on an on-premises RDBMS).
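For instance, in PostgreSQL (chosen here only for concreteness) the big table could be hash-partitioned on the customer key while still looking like a single table to the application:

CREATE TABLE Product (
    Customer_ID INT NOT NULL,
    Feed_ID     INT NOT NULL,
    Product_ID  INT NOT NULL,
    Price       NUMERIC(10, 2),
    PRIMARY KEY (Customer_ID, Feed_ID, Product_ID)
) PARTITION BY HASH (Customer_ID);

CREATE TABLE Product_p0 PARTITION OF Product FOR VALUES WITH (MODULUS 4, REMAINDER 0);
CREATE TABLE Product_p1 PARTITION OF Product FOR VALUES WITH (MODULUS 4, REMAINDER 1);
CREATE TABLE Product_p2 PARTITION OF Product FOR VALUES WITH (MODULUS 4, REMAINDER 2);
CREATE TABLE Product_p3 PARTITION OF Product FOR VALUES WITH (MODULUS 4, REMAINDER 3);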
I'm starting a web application that will be used by a lot of companies (over 20K), and, most importantly, a lot of information will be recorded daily. I would like your advice on the following idea: create a database for each company and run SQL queries like this:
select * from enterprisedb1.tablename;
select * from enterprisedb2.tablename2 where enterprisedb2.tablename2.col='foo'
Please, I need your advice; I haven't found anything on Google.
If you are selling this to multiple clients then it might come down to separation of their data.
On the one hand everything for the app is in the one database for each client, and provided you get the connection string right you probably don't need to ever specify the company name again for the rest of the app. No more "where customer=123" on every single query.
Also means a client could be deleted, backed up, moved, audited, whatever in a completely independent manner.
It also means there is no risk of a developer or a query accidentally doing cross-client things. So you can even open up generic query access that still can't accidentally cross a client-to-client border. And security setup will be simpler.
But if you have a million clients you do end up with a lot of databases. How well this works will depend on all sorts of things, including your database of choice.
You also end up having multiple copies of reference data unless you create an additional database "common" or something like that.
It's going to be very much an "it depends" answer, but those are a few things to consider.
I suggest using common tables for all companies. It will be easier to manage and easier to understand.
Create one table for company data and use an integer reference to that key in the other metadata tables. For good performance, the indexes and queries must be well formed.
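A minimal sketch of that layout (all names invented for illustration), with an index so per-company queries stay fast:

CREATE TABLE enterprise (
    enterprise_id INT PRIMARY KEY,
    name          VARCHAR(100) NOT NULL
);

CREATE TABLE daily_record (
    record_id     BIGINT PRIMARY KEY,
    enterprise_id INT NOT NULL REFERENCES enterprise(enterprise_id),
    recorded_at   TIMESTAMP NOT NULL,
    details       VARCHAR(1000)
);

CREATE INDEX ix_daily_record_enterprise ON daily_record (enterprise_id, recorded_at);

-- One query shape for every company, instead of one database per company:
SELECT * FROM daily_record WHERE enterprise_id = 42;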
I have a scenario: my application is a SaaS-based app catering to multiple clients, and data integrity for each client is essential.
Is it better to keep my tables client-specific, OR to use shared relational tables?
For example: I have a mapping table with fields MapField1, MapField2, and I need this kind of data for each client.
Should I have per-client tables like MappingData_<ClientId>, or a single table mapped to the ClientId:
MappingData with fields MapField1, MapField2, ClientId?
I would have a separate database for each customer. (Multiple databases in a single SQL Server instance.)
This would allow you to design it once, with a single schema.
No dynamically named tables compromising test & development
Upgrades and maintenance can be designed and tested in one DB, then rolled out to all
A single customer's data can be backed-up, restored or dropped exceedingly simply
Bugs discovered/exploited in one DB won't comprise the integrity of other DBs
Data access (read and write) can be managed using SQL Logins (No re-inventing the wheel)
If there is a need for globally shared data, that would go in another database, with its own set of permissions for the different SQL Logins.
The use of a single database with all users in it is my next-best choice. You still have a single schema, but you don't get to partition the customers' data, you have to manage access rights and permissions yourself, and you take on a whole host of additional design and testing work.
I would never go near dynamically creating new tables for additional customers. A new table name means all your queries need to be updated with the new table name, plus a whole host of other maintenance headaches.
I'm pretty much of the opinion that if you want to create tables dynamically during the Business As Usual use of an application/service, you've designed it badly.
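For the database-per-customer route on SQL Server, provisioning a new customer is a small, repeatable script. The following is only a sketch with placeholder names:

CREATE DATABASE CustomerA;
GO
CREATE LOGIN CustomerA_Login WITH PASSWORD = 'choose-a-strong-password';
GO
USE CustomerA;
GO
CREATE USER CustomerA_User FOR LOGIN CustomerA_Login;
ALTER ROLE db_datareader ADD MEMBER CustomerA_User;
ALTER ROLE db_datawriter ADD MEMBER CustomerA_User;

A login scoped to one database cannot touch another customer's data, and backing up, restoring or dropping a single customer is a one-liner against that one database.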
SO has a tag for the thing you're describing: "multi-tenant".
Visualize the architecture for supporting a multi-tenant database application as a spectrum. At one extreme of the spectrum is "shared nothing", which means each tenant has its own database. At the other extreme of the spectrum is "shared everything", which means tenants share tables, and each row in each table belongs to one tenant. (Each row contains a tenant identifier.)
Terminology seems to overlap, so read carefully. What one writer means by shared schema might be identical to what another writer means by shared everything.
This SO answer, also written by me, describes the differences and the tradeoffs in terms of cost, data isolation and protection, maintenance, and disaster recovery. It also links to a fairly good introductory article.
We have a SQL server that has a database for each client, and we have hundreds of clients. So imagine the following: database001, database002, database003, ..., database999. We want to combine all of these databases into one database.
Our thoughts are to add a siteId column, 001, 002, 003, ..., 999.
We are exploring options to make this transition as smoothly as possible. And we would LOVE to hear any ideas you have. It's proving to be a VERY challenging problem.
I've heard of a technique that would create a view that would match and then filter.
Any ideas guys?
Create a client database id for each of the client databases. You will use this id to keep the data logically separated. This is the "site id" concept, but you can use a derived key (identity field) instead of manually creating these numbers. Create a table that has database name and id, with any other metadata you need.
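A minimal sketch of that metadata table in SQL Server (column names are placeholders):

CREATE TABLE SourceDatabase (
    DatabaseId   INT IDENTITY(1,1) PRIMARY KEY, -- the derived key / "site id"
    DatabaseName SYSNAME NOT NULL UNIQUE,
    MigratedAt   DATETIME NULL                  -- set when this DB has been folded in
);

INSERT INTO SourceDatabase (DatabaseName)
VALUES ('database001'), ('database002'), ('database003');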
The next step would be to create an SSIS package that gets the ID for the database in question and adds it to the tables that have to have their data separated out logically. You then can run that same package over each database with the lookup for ID for the database in question.
After every row carries its database's id and the data has been imported, you will have to alter your apps to fit the new schema (actually before, or you are pretty much screwed).
If you want to do this in steps, you can create views or functions in the different "databases" so the old client can still hit the client's data, even though it has been moved. This step may not be necessary if you deploy with some downtime.
The method I propose is fairly flexible and can be applied to one client at a time, depending on your client application deployment methodology.
Why do you want to do that?
You can read about Multi-Tenant Data Architecture and also listen to SO #19 (around 40-50 min) about this design.
The "site-id" solution is what's done.
Another possibility that may not work out as well (but is still appealing) is multiple schemas within a single database. You can pull common tables into a "common" schema, and leave the customer-specific stuff in customer-specific schemas. In some database products, however, each schema is -- effectively -- a separate database. In other products (Oracle and DB2, for example) you can easily write queries that work across multiple schemas.
Also note that -- as an optimization -- you may not need to add a siteId column to EVERY table.
Sometimes you have a "contains" relationship. It's a master-detail FK, often defined with a cascade delete so that detail cannot exist without the parent. In this case, the children don't need siteId because they don't have an independent existence.
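For example, with a hypothetical master-detail pair:

CREATE TABLE SiteOrder (
    OrderId INT PRIMARY KEY,
    SiteId  INT NOT NULL  -- the tenant key lives on the parent
);

CREATE TABLE SiteOrderLine (
    OrderId    INT NOT NULL REFERENCES SiteOrder(OrderId) ON DELETE CASCADE,
    LineNumber INT NOT NULL,
    Quantity   INT NOT NULL,
    PRIMARY KEY (OrderId, LineNumber)
    -- no SiteId here: a line cannot exist without its order,
    -- so the parent's SiteId already scopes it
);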
Your first step will be to determine if these databases even have the same structure. Even if you think they do, you need to compare them to make sure they do. Chances are there will be some that are customized or missed an upgrade cycle or two.
Now depending on the number of clients and the number of records per client, your tables may get huge. Are you sure this will not create a performance problem? At any rate you may need to take a fresh look at indexing. You may need a much more powerful set of servers and may also need to partition by client anyway for performance.
Next, yes each table will need a site id of some sort. Further, depending on your design, you may have primary keys that are now no longer unique. You may need to redefine all primary keys to include the siteid. Always index this field when you add it.
Now all your queries, stored procs, views, and UDFs will need to be rewritten to ensure that the siteid is part of them. Pay particular attention to any dynamic SQL. Otherwise you could be showing client A's information to client B. Clients don't tend to like that. We brought a client from a separate database into the main application one time (when they decided they no longer wanted to pay for a separate server). The developer missed just one place where client_id had to be added. Unfortunately, that sent emails to every client concerning this client's proprietary information, and to make matters worse, it was a nightly process that ran in the middle of the night, so it wasn't noticed until the next day. (The developer was very lucky not to get fired.) The point is: be very, very careful when you do this, and test, test, test, and then test some more. Make sure to test all the automated behind-the-scenes stuff as well as the UI.
What I was explaining in Florence towards the end of last year is what to do if you had to keep the database names and the logical layer of the database the same for the application. In that case you'd do the following:
Collapse all the data into consolidated tables in one master, consolidated database (hereafter referred to as the consolidated DB).
Those tables would have to have an identifier like SiteID.
Create the new databases with the existing names.
Create views with the old table names which use row-level security to query the tables in the consolidated DB, using the SiteID to filter (see the sketch after these steps).
Set up the databases for cross-database ownership chaining so that the service accounts can't "accidentally" query the base tables in the consolidated DB. Access must happen through the views or through stored procedures and other constructs that will enforce row-level security. Now, if it's the same service account for all sites, you can avoid the cross DB ownership chaining and assign the rights on the objects in the consolidated DB.
Rewrite the stored procedures to either handle the change (since they now refer to views and don't know to hit the base tables and include SiteID) or use INSTEAD OF triggers on the views to intercept update requests and write the appropriate site-specific information into the base tables.
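As a sketch of the view step (database, table, and column names are assumptions): in the database recreated for site 1, the old table name becomes a filtered view over the consolidated DB:

CREATE VIEW dbo.Employee
AS
SELECT EmployeeId, FullName, HireDate
FROM ConsolidatedDB.dbo.Employee
WHERE SiteID = 1;

The application keeps querying dbo.Employee by its old name, but it can only ever see site 1's rows.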
If the data is large you could look at using a partitioned view. This would simplify your access code, as all you'd have to maintain is the view; however, if the data is not large, just add a column to identify the customer.
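A minimal SQL Server-style partitioned-view sketch (names illustrative): each member table carries a CHECK constraint on the customer key, so the optimizer can skip members that cannot match a query's filter:

CREATE TABLE OrdersSite1 (
    SiteId  INT NOT NULL CHECK (SiteId = 1),
    OrderId INT NOT NULL,
    PRIMARY KEY (SiteId, OrderId)
);
CREATE TABLE OrdersSite2 (
    SiteId  INT NOT NULL CHECK (SiteId = 2),
    OrderId INT NOT NULL,
    PRIMARY KEY (SiteId, OrderId)
);
GO
CREATE VIEW Orders AS
SELECT SiteId, OrderId FROM OrdersSite1
UNION ALL
SELECT SiteId, OrderId FROM OrdersSite2;

A query such as SELECT * FROM Orders WHERE SiteId = 1 then reads only OrdersSite1.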
Depending on what the data is and your security requirements the threat of cross contamination may be a show stopper.
Assuming you have considered this and deem it "safe enough", you may need/want to create VIEWS or impose some other access control to prevent customers from seeing each other's data.
IIRC a product called "Trusted Oracle" had the ability to partition data based on such a key (about the time Oracle 7 or 8 was out). The idea was that any given query would automagically have "and sourceKey = #userSecurityKey" (or some such) appended. The feature may have been rolled into later versions of the popular commercial product.
To expand on Gregory's answer, you can also make a parent SSIS package that calls the package doing the actual moving inside a Foreach Loop container.
The parent package queries a config table and puts the result in an object variable. The Foreach Loop then uses this recordset to pass variables to the child package, such as the database name and any other details the package might need.
Your table could list all of your client databases and have a flag to mark when each one is ready to move. That way you are not sitting around running the SSIS package on 32,767 databases. I'm hooked on the Foreach Loop in SSIS.