Whether to Split Data into a Separate PostgreSQL Table

I am creating an app with a WPF frontend and a PostgreSQL database. The data includes patient addresses and supplier addresses. There is an average of about 3 contacts per mailing address listed. I'm estimating 10,000 - 15,000 contact records per database.
When designing the database structure, it occurred to me that rather than storing mailing addresses in a single "contacts" table, I could have one table storing names and other individual data, with a second table holding addresses. I could then create a relationship between the tables, to match addresses with contacts.
I have a pretty good idea how I can neatly organise situations such as changing the address of a single contact, where the other contacts are staying at the same address.
The question is: is it worth it? Can I expect to save much in the way of storage size? Will this adversely impact query speed? How about if I were using something other than PostgreSQL?
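For concreteness, here is a minimal sketch of the split being described, with made-up column names (PostgreSQL syntax; an illustration, not a prescribed schema):

-- Addresses are stored once and shared by any number of contacts.
CREATE TABLE address (
    address_id serial PRIMARY KEY,
    street     text NOT NULL,
    city       text NOT NULL,
    postcode   text NOT NULL
);

-- Each contact points at its mailing address by foreign key.
CREATE TABLE contact (
    contact_id serial PRIMARY KEY,
    name       text NOT NULL,
    address_id integer REFERENCES address (address_id)
);

Moving a single contact then means inserting a new address row and repointing that one contact's address_id, leaving the contacts who stay behind untouched.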

I would strongly suggest normalizing this. You never know what kind of trouble you will run into. LedgerSMB has a relatively decent entity/user/contact/location schema that creates a very flexible environment. You can see it here (starts at line 363):
http://ledger-smb.svn.sourceforge.net/viewvc/ledger-smb/trunk/sql/Pg-database.sql?revision=3042&view=markup

Unless you think a large number of your users will be sharing addresses, and those addresses will often be changing, I don't see the need to normalize out the address portion. In the various places I've worked and seen users tables, sometimes it is, sometimes it isn't - it never really seemed to cause much trouble one way or the other.
In terms of performance, with just 10-15k records and proper indexes, I can't imagine you'd notice too much difference one way or the other on modern hardware (although technically the separate table should be slower).

I agree with Joshua. Once it's set up properly (normalized) it's very easy to manage any changes in your app in the future.

Related

Is there a database design pattern name for reducing duplicate join table data?

I have two tables with a join table to allow a many-to-many relationship.
It's a very familiar design pattern. It indicates which Branches each Member has access to.
As the number of members and branches increases I end up with a lot of data in the join table that is duplicated across members. Members tend to have access to the same groups of Branches as other Members.
So I'm looking at normalizing my data by creating a MemberProfile table that is effectively immutable. Rather than creating MemberBranch records for every Member, I check for a matching MemberProfile, use it if it already exists, or create one if it doesn't.
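(The actual code isn't shown here; as a hypothetical sketch of that check-or-create step, with assumed table and column names, in PostgreSQL:)

-- Assumed schema: member(member_id, profile_id),
-- member_profile(profile_id), profile_branch(profile_id, branch_id).

-- Look for an existing profile granting exactly branches 3, 7 and 9.
SELECT profile_id
FROM profile_branch
GROUP BY profile_id
HAVING array_agg(branch_id ORDER BY branch_id) = ARRAY[3, 7, 9];

-- If no row comes back, insert a new member_profile and its
-- profile_branch rows, then point the member at that profile_id.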
The idea being that if I have a million Members with only a hundred access profiles, this will save me a lot of space in my database.
I'm happy that it all works and that the development effort is worth it.
My question is "Is this a standard database design pattern, and if so, what is it called?"
EDIT: It's been pointed out that this is compressing the data, not normalizing it - which is indeed the intent behind the design.
Unless your many-to-many table is always the join of particular other base tables, you are not normalizing, and you aren't normalizing here. Normalization does not introduce new column names; it just rearranges the current ones among different base tables.
You are just compressing/encoding your data. There is not necessarily any benefit in this, since now some queries and updates will be slower although your database is smaller. (You have reported that it is worth it in your case.)
I understand you'd like to put a label on that precise transformation, but unfortunately, there aren't many books that discuss database design or refactoring patterns. One of the few is Refactoring Databases by Scott Ambler and Pramod Sadalage, published in Martin Fowler's signature series (you may know Fowler for his work on analysis patterns; he also has a great blog, worth following!). That book presents a collection of refactoring patterns that can be applied to databases and puts names on common database transformations, including the one you have presented, which it calls Split Table.
Split Table. Vertically split (i.e. by columns) an existing table into one or more tables.
A catalog of the database refactorings presented in that book is available here.
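(As a hedged illustration of Split Table, with a made-up customer table:)

-- Before: customer(customer_id, name, street, city, postcode).
-- After the refactoring, the address columns move vertically into
-- their own table, keyed back to the original row:
CREATE TABLE customer_address (
    customer_id integer PRIMARY KEY REFERENCES customer (customer_id),
    street      text,
    city        text,
    postcode    text
);
ALTER TABLE customer DROP COLUMN street, DROP COLUMN city, DROP COLUMN postcode;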
Hi, I don't know a pattern name, but I've used the same principle before.
To keep this performing well, introduce a checksum on MemberProfile based upon the branches for the profile; this way a lookup for an existing profile is easy and fast.
But do remember that the checksum is not necessarily unique; in case of collisions you will still have to check the branches, but only for the profiles sharing the same checksum.
Cleanup can be a scheduled task and is nothing more than deleting the profiles without users.
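(A sketch of that idea in PostgreSQL, reusing the hypothetical profile_branch table from above and assuming a text checksum column on member_profile:)

-- Derive a checksum per profile from its sorted branch list.
SELECT profile_id,
       md5(string_agg(branch_id::text, ',' ORDER BY branch_id)) AS checksum
FROM profile_branch
GROUP BY profile_id;

-- Index the stored checksum so candidate profiles are found with a
-- cheap equality lookup; on a (rare) collision, compare branch rows.
CREATE INDEX member_profile_checksum_idx ON member_profile (checksum);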

Database design - Sharing data between two databases?

I am thinking about and exploring options for the database design of my new application. In general, I will have registered users and info about them. They will be able to do some things in the app, and that data will live in the same DB as the users' data (so I can have shared FKs and such).
But I also plan to have a second database that will be logically totally independent of the first, except that it will share userID as an FK.
I don't know whether I should even put that second set of logic in an extra DB, or have everything in the same database. I plan to have a subdomain in my app for the second part (it is like an app within an app), but what if I discover they should share more data? Will that cross-querying hurt my performance? And is that the way to go at all - is there a real reason to separate databases?
As soon as you have two databases you have potential complexity. You have not given any particular reason why you need two databases. So keep it simple until you have a reason.
An example of what folks do: have a "current" database, small, holding just the data needed right now. That might be where orders are taken and fulfilled. Once the data is no longer current, say some days or weeks after the order is filled, move it to a "historic" database. There, marketing and management folks can look at overall trends in the history without affecting performance of the "current" database, whose performance might be critical to keeping your customers happy.
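(As a hedged sketch of that archival step, using two schemas of one PostgreSQL database to stand in for the two databases, and a hypothetical orders table with a filled_at timestamp:)

-- Move orders filled more than 30 days ago into the historic schema,
-- then remove them from the live one, in a single transaction.
BEGIN;
INSERT INTO historic.orders
    SELECT * FROM live.orders
    WHERE filled_at < now() - interval '30 days';
DELETE FROM live.orders
    WHERE filled_at < now() - interval '30 days';
COMMIT;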
As an example of complexity: any time you have two databases you need to consider consistency between them, and this is much harder to ensure than it might appear. Databases do offer two-phase commit capabilities, or you can devise batch processes, but there are always subtleties that are hard to catch.
I would just keep it all in one database. Unless you have dozens of tables, there should be no real performance problems, imho. It will, however, make your life much easier, since you only have to work with one database connection and don't have to worry about merging information from two queries.
I also agree that unless the volume of your data is going to be huge (judging by the question, that doesn't seem to be the case here), you can use a single database to store your data without performance issues.
For "visual" separation of data structure, you can always create tables in two schemas of single database.

Permanent table, temp tables or php session?

My web app offers personalized recommendations. When a user starts using it, about 1000+ rows are inserted into one big recommendation table, correlating with other tables in the database. Every item the user votes for affects all of those 1000+ rows.
Since the recommendation info is only useful during the session, and since the recommendation table is getting huge, we'd like to switch to a more appropriate method. There's the possibility of deleting the relevant rows as soon as the user session is over. I guess a PHP session array or temp tables are better for this case?
One temp table per session will lead to catalog pollution, so not really recommended.
Have you considered actually keeping the data, so as to periodically mine it to improve the suggestions?
First: consider redesigning your data structure; I think it is not optimal.
Store a user's recommendations in a user-recommendeditem-score table: I don't see any need for a temp table or anything else.
Otherwise, you could start using sessions, but you should encapsulate the code carefully, making it easy to change if/when this solution is no longer maintainable.
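(A minimal sketch of that table, with assumed names and types:)

-- One row per (user, item) recommendation; votes update scores in
-- place instead of living in a session array or temp table.
CREATE TABLE recommendation (
    user_id integer NOT NULL,
    item_id integer NOT NULL,
    score   numeric NOT NULL,
    PRIMARY KEY (user_id, item_id)
);

-- End-of-session cleanup is then a single statement:
DELETE FROM recommendation WHERE user_id = 42;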
I suspect that the method is flawed - 1000+ recommendations per user? How many of them do they ever look at? If you don't know the answer to that question, then you need to spend some time thinking about why you don't know the answer.
Every item the user votes for affects all of those 1000+ rows
Are you sure your data is properly normalised?
But leaving that aside for the moment: the right place to generate and store that data is in the database. A relational database is explicitly designed for, and a lot more efficient at, generating and maintaining tabular sets of data than a conventional programming language.

Address book database design: denormalize?

I'm designing a contact manager/address book-like application but can't settle on the database design.
In my current setup I have a Contact, which has Addresses, Phonenumbers, Emails, and Organizations. All contact properties are currently separate tables with a fk to the Contact table. Needless to say a contact can have any number of these properties.
Now, I find myself joining all these tables together if I want to read contacts into the app. Since no filters, reverse lookups, sorts etc. are performed on the related tables, isn't it a better/simpler solution to just store the related fields as json-encoded lists on direct properties of the Contact table?
E.g., instead of a Contact with a fk to a phonenumber table with 3 entries, just encode all phonenumbers and store them into a field of the Contact table?
Any insights really appreciated! (fyi I'm using Django although that doesn't really matter)
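(For reference, a hedged sketch of the proposed denormalized variant, assuming PostgreSQL and made-up names; a jsonb column replaces the separate fk tables:)

-- Multi-valued properties live on the contact row itself.
CREATE TABLE contact (
    contact_id   serial PRIMARY KEY,
    name         text NOT NULL,
    phonenumbers jsonb NOT NULL DEFAULT '[]'  -- e.g. '["555-0101", "555-0102"]'
);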
Can you guarantee that your app will never grow to need these other functionalities? Do you really want to paint yourself into the corner such that you can't easily support all of this later?
Generally, denormalization happens only for performance reasons. And even then, a copy of the normalized data is still kept for live work, and the denormalized data is used for offline processing where having a static snapshot is fine.
Get used to writing joins. That's the way SQL works. Having to do so doesn't mean something is wrong.
I know I'm too late, but for anyone with the same issue: IMO, in this case metadata modeling is the way to go.
http://searchdatamanagement.techtarget.com/feature/Data-model-patterns-A-metadata-map
Sounds like you propose taking data currently modelled as five SQL tables and converting it to a common multi-valued type (does your SQL product have good support for this?). The only way I can see this would constitute 'denormalization' would be if you were proposing to violate 1NF, at which point you may as well abandon SQL as a data store because your data would no longer be relational! Otherwise, your data would still be normalized, but you would have lost the ability to query its attributes using SQL (unless your SQL product has extensions for querying multi-valued attributes). The deciding factor seems to be: do you need to query these attributes using SQL?

Help with setting up a Database

My site is going to have many products available, but they'll be categorised into completely different sites (domains).
My question is: am I better off lumping all products into one database and using an ID to distinguish between the sites, or should I set up a table and/or DB per site?
Here are my thoughts
SEPARATE DATABASES
Easier to read from a backend
Categorised better
Makes backups more difficult
If I need to make a change to the schema, it will need to be pushed out to all databases
SAME DATABASES
All in one place
Could get unwieldy
One database will have a massive file size and lookups could suffer
Can someone please offer me some advice on which way is best and why?
You didn't give too many details (which makes it difficult to provide a good answer), though the words you chose to use in your question lead me to believe that this is a single application with different "skins".
My site is going to have many products available, but they'll be categorised into completely different sites (domains).
My assumption is that you will have a single web store with several different store fronts: cool-widgets.com, awesome-sprockets.com, neato-things.com, etc. These will all be the same, save for maybe a CSS skin or something simple like that. The store admin stuff will all be done in some central system, and the domain name will simply act as a category name.
As such, splitting the same data into two different containers using an arbitrary criterion (category_name == 'cool-widgets.com') is data partitioning, which is an anti-pattern. Just as you wouldn't have two different user tables based on the user name ([Users$A-to-M] and [Users$N-to-Z]), it makes little sense to have two different tables (or databases) for category names.
There is, and will be, lots of code common among the categories: user management, admin, order processing, data import, etc. It will be far more difficult to aggregate the multiple datastores in the common code than it will be to segregate the categories in the store display code. Not only that, but the segregation bugs will be much more obvious (the price comparison page shows items from all three stores), while the aggregation bugs will be far less so (only three of the four stores were updated). This is why it's an anti-pattern.
Side note: yes, before you say that data partitioning has its uses (which it does), those uses come into play long after performance problems occur. Many serious database platforms allow behind-the-scenes partitioning so as not to create a goofy data model.
If data needs to be shared among all the sites, then sharing one database is recommended, since it eliminates data transfer and keeps the data more centralized.
If data does not need to be shared among the sites, it's good to split them up into one database per site. As for the difficulty of updating table structures, you can simply record the database changes you make for one site (saving the ALTER, UPDATE and DELETE queries in a SQL file) and update the other databases with the same SQL file.
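(A hedged example of such a change script, with a made-up file name and column, run once against each site's database in turn:)

-- 001-add-sku.sql: apply to every site database.
ALTER TABLE products ADD COLUMN sku varchar(32);
UPDATE products SET sku = CONCAT('LEGACY-', id) WHERE sku IS NULL;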
Storing the sites in different databases might also help with security. You can set different user permissions for each site, and if one gets compromised, you protect the other sites.
Also, you can more easily maintain and track each database when the databases are clearly split up.
As you already say, both options have their pros and cons. Since you're talking about two stores, it probably doesn't matter much.
However, a few questions you might want to ask yourself:
Will it really be two stores, or possibly more? If more, one database might be smarter.
Are the products really the same? If you would have to squeeze products of completely different kinds (e.g. cars vs. food, where the amount and nature of the details you want to store differ completely) into one general database, then don't; use two databases / tables instead.
The central question is: what is most likely to become more elaborate in the future: the stores, or the products?
I think separate databases will be easier. You can have a quick-start template database from which you build each new store database. You can even create a common database holding the shared tables and a list of the stores and their databases. After all, you can access any database within the same server using a qualified name - observe:
-- Qualified db.table names let one connection read across databases
-- (works in e.g. MySQL; PostgreSQL would need foreign data wrappers):
SELECT value FROM CommonDB.currencies WHERE type='euro';
SELECT price FROM OldTownDB.Products WHERE id=newtownprodid;