Does setting up proper relationships in a database help with anything other than data integrity?
Do they improve or hinder performance?
As long as you have the obvious indexes in place corresponding to the foreign keys, there should be no perceptible negative effect on performance. It's one of the more foolproof database features you have to work with.
I'd have to say that proper relationships will help people understand the data (or the intention of the data) better than if they were omitted, especially as the overall cost of maintaining them is quite low.
Their presence doesn't hinder performance except for the enforcement itself (as others have pointed out, the integrity checks that raise foreign key violations have some cost), but IMHO that is outweighed by the many benefits (if used correctly).
I know you weren't asking whether to use FKs or not, but I thought I'd just add a couple of viewpoints about why to use them (and have to deal with the consequences):
There are other considerations too: if you ever plan to use an ORM (perhaps later on), you'll want foreign keys in place. They can also be very helpful for ETL/data import and export, and later for reporting and data warehousing.
It's also helpful if other applications will make use of the schema, since foreign keys implement basic business rules. Your application (and any others) only needs to be aware of the relationships (and honour them), which keeps the data consistent and most likely reduces the number of data errors in any consuming applications.
Lastly, it gives you a pretty decent hint as to where to put indexes - since it's likely you'll lookup table data by an FK value.
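As a minimal sketch (all table and column names here are hypothetical), the FK column is the natural index candidate, since most child-table lookups filter on it:

    -- Hypothetical schema: orders referencing customers.
    CREATE TABLE customers (
        customer_id INT PRIMARY KEY,
        name        VARCHAR(100) NOT NULL
    );

    CREATE TABLE orders (
        order_id    INT PRIMARY KEY,
        customer_id INT NOT NULL,
        CONSTRAINT fk_orders_customer
            FOREIGN KEY (customer_id) REFERENCES customers (customer_id)
    );

    -- "All orders for this customer" filters on the FK column,
    -- so that is where the index belongs.
    CREATE INDEX ix_orders_customer_id ON orders (customer_id);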
It neither helps nor hurts performance in any significant way. The only hindrance is the check for integrity when inserting/updating/deleting.
Foreign keys are an important part of database design because they ensure consistency. You should use them because it offers the lowest level of protection against data screw ups that can wreck your applications. Another benefit is that database tools (visualization/analysis/code generation) use foreign keys to relate data.
Do relationships in databases improve or hinder performance?
Like any tool in your toolbox, the results you'll get depend on how you use it. Properly specified relationships and a well-designed logical database can be an enormous boon to performance -- consider the difference between searching through normalized and denormalized data, for example.
Depending on your database engine, relationships defined through foreign key constraints can benefit performance. The constraint allows the engine to make certain assumptions about the existence of data in tables on the parent side of the key.
A brief explanation for MS SQL Server can be found at http://www.microsoft.com/technet/abouttn/flash/tips/tips_122104.mspx. I don't know about other engines, but the concept would make sense on other platforms.
Relationships in the data exist whether you declare them or not. Declaring and enforcing the relationships via FK constraints will prevent certain kinds of errors in the data, at a small cost of checking data when inserts/updates/deletes occur.
Declaring cascading deletes via relationships helps prevent certain kinds of errors when deleting data.
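For instance, a small sketch with hypothetical parent/child tables: the cascade is declared on the constraint itself, so deleting a parent cleans up its children instead of leaving orphans:

    CREATE TABLE parent (
        id INT PRIMARY KEY
    );

    CREATE TABLE child (
        id        INT PRIMARY KEY,
        parent_id INT NOT NULL,
        FOREIGN KEY (parent_id) REFERENCES parent (id)
            ON DELETE CASCADE  -- deleting a parent removes its children
    );

    DELETE FROM parent WHERE id = 1;  -- rows in child with parent_id = 1 go too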
Knowing the relationships helps to make flexible and correct use of the data when forming queries.
Designing the tables well can make the relationships more obvious and more useful. Using relationships in the data is the primary power behind using relational databases in the first place.
About impact on performance: In my experience with MS Access 2003, if you have a multi-user application and use Relationships to enforce a lot of referential integrity, you can take a big hit in terms of response time for the end-user.
There are different ways to take care of enforcing referential integrity. I decided to take out some rules in Relationships, build more enforcement into the front-end and live with some loss of RI. Of course in the multi-user environment, you want to be very careful with that bit of liberty.
In my experience building performance-sensitive databases, foreign keys hurt performance pretty significantly, since they have to be checked every time the referring record is inserted/updated or the master record is deleted. If you need proof, just look at the execution plan.
I still keep them for documentation and for tools to use but I usually disable them, especially in high-performance systems where access to DB is only through the application layer.
I have read through some somewhat related questions, but did not find the specifics related to my question.
If I have a stable application that is not going to be changed and it has been thoroughly tested and used in the wild... one might consider removing referential integrity / foreign key constraints in the database schema, with the aim to improve performance.
Without discussing the cons of doing this, does anyone know how much of a performance benefit one might experience? Has anyone done this and experienced noticeable performance benefits?
From my experience with Oracle:
Foreign Keys provide information to the optimizer ("you're going to find exactly one match on this join"), so removing those might result in (not so) funny things happening to your execution plans.
Foreign keys do perform checks, which costs performance. I have seen them use up a big chunk of execution time in batch processing (hours, on jobs that run for large parts of a day), which led us to use deferred constraints.
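A sketch of the deferred-constraint approach in Oracle (table and constraint names are hypothetical): the constraint is validated once at COMMIT rather than row by row:

    -- Oracle: validate the FK at commit time instead of per statement.
    ALTER TABLE line_items
        ADD CONSTRAINT fk_line_items_order
        FOREIGN KEY (order_id) REFERENCES orders (order_id)
        DEFERRABLE INITIALLY IMMEDIATE;

    SET CONSTRAINT fk_line_items_order DEFERRED;
    -- ... bulk inserts / updates here ...
    COMMIT;  -- the constraint is checked here, once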
Since dropping foreign keys changes the semantic (think cascade, think the application relying on not being able to remove a master entry which gets referenced by something else, at least in the situation of concurrent access) I would only consider such a step when foreign keys are proven to dominate the performance in this application.
The benefits (however small) will be insignificant compared to the cons.
If performance is a problem check the indexes. Throw more hardware its way. There are a host of techniques to improve performance.
I know you said not to mention the cons - but you should consider them. The data is a very valuable asset and ensuring its validity keeps your business going. If the data becomes invalid you have a huge problem to fix it.
Trying to decide between performance and validity is like choosing which arm you'd rather live without. As others have pointed out, there's better ways to address performance concerns (like index optimization, hardware, query tuning). In any well-designed database system, the performance impacts of reduced referential integrity should be minimal.
It will vary from application to application. So "how much" will be a relative term.
The performance benefit will come when inserting or deleting records.
So if you have big insert or delete operations that are taking time, it might help. But I would not suggest dropping the constraints even if your application is stable, because in future development this might lead to big issues.
one might consider removing referential integrity / foreign key constraints in the database schema, with the aim to improve performance ... [does] anyone know how much of a performance benefit one might experience
You've given us no information about your database schema or how it's used, so I'll be conservative and estimate your performance benefit could be between ±∞% (give or take).
Removing foreign keys can improve performance from the point of view that they don't have to be checked.
Removing foreign keys can reduce performance from the point of view that the query plan generation can't trust them and can't take the same shortcuts it would if they were trusted. See Can you trust your constraints? for a SQL Server example.
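As a sketch of how to spot this on SQL Server (sys.foreign_keys is the real catalog view; nothing here is specific to your schema):

    -- SQL Server: FKs the optimizer will NOT use to simplify plans.
    SELECT name, is_not_trusted, is_disabled
    FROM   sys.foreign_keys
    WHERE  is_not_trusted = 1 OR is_disabled = 1;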
Foreign keys have more than just performance implications (e.g. ON DELETE CASCADE). So trying to remove them to improve performance without considering exactly what functionality you are removing is naïve at best.
It is not really a fair question in the context of "only speak to the performance gains and not the drawbacks" of this decision (or likely most/all decisions). Since you can't have the pros without the cons, you need to know the full extent of both in order to make a truly informed decision. And for this particular question, since there is at best only one benefit (I say "at best" because the performance gain is not as guaranteed as most people would like to believe), there is little to discuss if we can't talk about the drawbacks (but we can at least start out with the benefit :).
BENEFITS:
Performance: removing foreign keys could get you a performance gain on DML statements (INSERT, UPDATE, and DELETE), but the specifics of how much are highly dependent on the size of the tables in question, indexes, usage patterns (how often rows are updated and whether any of the FK fields are updatable; how often rows are inserted and/or deleted), etc. While some questions of best practice can be stated to "nearly always" have a performance gain, the effect of any change on performance can only be determined through testing. With regard to the typical performance gains from removing FKs, you are not likely to see them until you get into relatively large numbers of rows (millions or more) and bulk operations.
DRAWBACKS:
Performance: most people would not expect to see that performance could be negatively impacted by removing FKs, but it could very well happen. I am not sure how all RDBMS's work, but the Query Optimizer in Microsoft SQL Server uses the existence of FKs (that are both Enabled and Trusted) to short-cut certain operations between tables. Not having properly defined FKs prohibits the Optimizer from having that added insight, sometimes resulting in slower queries.
Data Integrity: the primary responsibility of the database is to ensure data integrity. Performance is secondary (even if a very close second). You should never sacrifice the main goal for a lower-priority goal, especially since performance gains can be achieved via other methods, such as: indexes, more/faster CPU, more/faster RAM, etc. Once your data is bad, you might not be able to correct it. With this in mind:
Don't ever trust that an application is "stable" or won't change. Unless the software is obsolete and nobody has the source code to make a change, it more than likely will change.
Don't trust that you have found all of the bugs in the code yet, no matter how "thoroughly tested" you believe it is. The app might appear stable now, but who is to say that a problem won't be discovered later. If you have more than 10 lines of code in your app, it is doubtful that it is 100% bug free.
Even if the app code doesn't change, can you guarantee that no other app code will be written against the DB? If this is software that leaves your control (i.e. is NOT SaaS), can you stop anyone who has installed it from writing their own custom code to add functionality that they want that was not provided in your app? It happens. And even in SaaS companies, other departments might try writing tools against the DB (such as Support who needs to do an operation to help customers). Anyone considering removing FKs is likely to not have set up permissions / security to prevent such a thing.
Ability to fix / update: The app might be "stable" now, but companies often change direction and decisions. FKs give guidance as to the rules by which the data lives. Even if no data integrity issues are happening now, if there comes a time when the app will have new features added (or bugs fixed), not having the FKs defined will make it more likely that bugs will be introduced due to lack of "documentation" that would have been provided by the FKs.
Referential integrity constraints may [in some databases, not SQL Server] automatically create indexes on those FK columns, delivering much better performance in queries that join or filter on them.
These indexes often help query performance, giving a potentially large boost to efficiency. They also provide additional information to the optimizer, enabling better query plans.
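MySQL/InnoDB is one such engine, for example. A sketch with hypothetical tables (if no usable index exists on the referencing column, InnoDB creates one when the constraint is added):

    -- MySQL/InnoDB: adding the FK creates an index on parent_id
    -- automatically if a usable one doesn't already exist.
    ALTER TABLE child
        ADD CONSTRAINT fk_child_parent
        FOREIGN KEY (parent_id) REFERENCES parent (id);

    SHOW CREATE TABLE child;  -- reveals the auto-generated KEY on parent_id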
If performance were an issue, there are many other things (caching, prepared statements, bulk inserts) I would look at before removing referential integrity. But if you had large numbers of active indexes and were reaching serious limits on insert speed, it might be considered as a last option.
After having worked at various employers I've noticed a trend of "bad" database design at some of these companies - primarily the omission of foreign key constraints. It has always bugged me that these transactional systems didn't have FKs, which would have promoted referential integrity.
Are there any scenarios, in transactional systems, whereby the omission of FK's would be beneficial?
Has anyone else experienced this, if so what was the outcome?
What should one do if they're presented with this scenario and they're asked to maintain/enhance the system?
I cannot think of any scenario where, if two columns have a dependency, they should not have a FK constraint set up between them. Removing referential integrity may certainly speed up database operations but there's a pretty high cost to pay for that.
I have experienced such systems, and the usual outcome is corrupted data, in the sense that records exist that shouldn't (or vice versa). These are the sort of systems where people believe they're okay because the application takes care of it, not caring that:
Every application has to take care of it, rather than one DB server.
It only takes one bug, or malignant app, to screw it up for everyone.
It is the responsibility of the database to protect itself! That is one of its best features.
As to what you should do, I simply put forward the possible things that can go wrong and how using FKs will prevent that (often with a cost/benefit analysis "skewed" toward my viewpoint, if necessary). Then let the company decide - it is their database, after all.
There is a school of thought that a well-written application does not need referential integrity. If the application does things right, the thinking goes, there's no need for constraints.
Such thinking is akin to not doing defensive programming because if you write the code correctly, you won't have bugs. While true, it simply won't happen. Not using appropriate constraints is asking for data corruption.
As for what you should do, you should encourage the company to add constraints at every opportunity. You don't want to push it to the point of getting in trouble or making a bad name for yourself, but as long as the environment is appropriate, keep pushing for it. Everyone's life will be better in the long run.
Personally, I have no problem with a database not having explicit declarations for foreign keys. But, it depends on how the database is being used.
Most of the databases that I work with are relatively static data derived from one or more transactional systems. I am not particularly concerned with rogue updates affecting the database, so an explicit definition of a foreign key relationship is not particularly important.
One thing that I do have is very consistent naming. Basically, every table has a first column called ID, which is exactly how the column is referred to in other tables (or sometimes with a prefix, when there are multiple relationships between two entities). I also try to insist that every column in such a database has a unique name that describes the attribute (so "CustomerStartDate" is different from "ProductStartDate").
If I were dealing with data that had more "cooks in the pot", then I would want to be more explicit about the foreign key relationships, and I would be more willing to accept the overhead of foreign key definitions.
This overhead arises in many places. When creating a new table, I may want to use "create table as" or "select into" and not worry about the particulars of constraints. When running update or insert queries, I may not want the database overhead of checking things that I know are OK. However, I must emphasize that consistent naming greatly increases my confidence that things are OK.
Clearly, my perspective is not that of a DBA but of a practitioner. However, invalid relationships between tables are something I -- or the rest of my team -- almost never have to deal with.
As long as there's a single point of entry into the database it ultimately doesn't matter which "layer" is maintaining referential integrity. Using the "built-in layer" of foreign key constraints seems to make the most sense, but if you have a rock solid service layer responsible for the same thing then it has freedom to break the rules if necessary.
Personally I use foreign key constraints and engineer my apps so they don't have to break the rules. Relational data with guaranteed referential integrity is just easier to work with.
The performance gained is probably equivalent to the performance lost from having to maintain integrity outside of the db.
In an OLTP database, the only reason I can think of is if you care about performance more than data integrity. Enforcing an FK when a row is inserted into the child table requires an index seek on the parent table, and I can imagine there may be extreme situations where even this relatively quick index seek is too much. For example, some kind of very intensive logging where you can live with incorrect log entries and the application doing the writing is simple and unlikely to have bugs.
That being said, if you can live with corrupt data, you can probably live without a database in the first place.
Defensive programming without foreign keys works if you primarily use stored procedures and every application uses those stored procedures instead of writing its own queries. Then you can control it quite easily, and more flexibly than with standard foreign keys.
One situation I can think of off the top of my head where foreign key constraints are not readily usable is a permissions module where permissions can be applied per user or per group, determined by a Boolean. Some of the records in the permissions table have a user id and others have a group id. If you still wanted foreign key constraints, you would have to have two different fields for the same mutually exclusive information and allow both to be null. That means adding another constraint saying that one of them may be null but not both, as well as requiring a combination of 3 fields to be unique instead of a combination of 2 (user/group id and permission id). The alternative is two separate tables containing the same data, meaning maintaining both tables separately.
But perhaps in that scenario it's best to separate the data. Anywhere you need the same field to connect to different tables based on other data in that record, you cannot use foreign key constraints, and it becomes best to keep the constraints in the stored procedures and views instead.
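For illustration, a sketch of the two-nullable-columns variant described above (all names are hypothetical; note that CHECK constraints are ignored by older MySQL versions, and NULL handling in unique indexes varies by engine):

    CREATE TABLE permissions (
        permission_id INT NOT NULL,
        user_id       INT NULL,
        group_id      INT NULL,
        FOREIGN KEY (user_id)  REFERENCES users  (user_id),
        FOREIGN KEY (group_id) REFERENCES groups (group_id),
        -- exactly one of user_id / group_id must be set
        CHECK (   (user_id IS NOT NULL AND group_id IS NULL)
               OR (user_id IS NULL     AND group_id IS NOT NULL) ),
        -- three columns must be unique instead of two
        UNIQUE (user_id, group_id, permission_id)
    );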
I'm building a Ruby on Rails 2.3.5 app. By default, Ruby on Rails doesn't provide foreign key constraints, so I have to do it manually. I was wondering if introducing foreign keys reduces query performance on the database side enough to make it not worth doing. Performance in this case is my first priority, as I can check for data consistency with code. What is your recommendation in general? Do you recommend using foreign keys? And how do you suggest I should measure this?
Assuming:
You are already using a storage engine that supports FKs (i.e. InnoDB)
You already have indexes on the columns involved
Then I would guess that you'll get better performance by having MySQL enforce integrity. Enforcing referential integrity is, after all, something that database engines are optimized to do. Writing your own code to manage integrity in Ruby is going to be slow in comparison.
If you need to move from MyISAM to InnoDB to get the FK functionality, you need to consider the tradeoffs in performance between the two engines.
If you don't already have indexes, you need to decide if you want them. Generally speaking, if you're doing more reads than writes, you want (need, even) the indexes.
Stacking an FK on top of stuff that is currently indexed should cause less of an overall performance hit than implementing those kinds of checks in your application code.
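For example, a minimal sketch of adding the constraint by hand in MySQL (hypothetical Rails-style table names; both tables must be InnoDB):

    -- Rails 2.3 migrations won't emit this, so do it in SQL:
    ALTER TABLE comments ENGINE = InnoDB;  -- FKs need InnoDB, not MyISAM

    ALTER TABLE comments
        ADD CONSTRAINT fk_comments_post
        FOREIGN KEY (post_id) REFERENCES posts (id);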
Generally speaking, more keys (foreign or otherwise) will reduce INSERT/UPDATE performance and increase SELECT performance.
The added benefit of data integrity is likely just about always worth the small performance decrease that comes with adding your foreign keys. What good is a fast app if the data within it is junk (missing parts, etc.)?
Found a similar query here: Does Foreign Key improve query performance?
You should define foreign keys. In general (though I do not know the specifics of MySQL), there is no effect on queries (and when there is an optimizer, like the cost-based optimizer in Oracle, it may even have a positive effect, since the optimizer can rely on the foreign key information to choose better access plans).
As for the effect on inserts and updates, there may be an impact, but the benefits that you get (referential integrity and data consistency) far outweigh the performance cost. Of course, you can design a system that will not perform at all, but the main reason will not be that you added the foreign keys. And the cost of maintaining your code when you decide to use some other language, or because the business rules have slightly changed, or because a new programmer joins your team, etc., is far more expensive than the performance impact.
My recommendation, then, is yes, go and define the foreign keys. Your end product will be more robust.
It is a good idea to use foreign keys because that assures you of data consistency (you do not want orphan rows and other inconsistent-data problems).
But at the same time, adding a foreign key does introduce some performance hit. Assuming you are using InnoDB as the storage engine, it uses a clustered index for PKs, where the data is essentially stored along with the PK. Accessing data through a secondary index requires a pass over the secondary index tree (whose nodes contain the PK) and then a second pass over the clustered index to actually fetch the data. So any DML on the parent table which involves the FK in question will require two passes over the index in the child table. Of course, the size of the performance hit depends on the amount of data, your disk performance, and your memory constraints (data/index caching). So it is best to measure it with your target system in mind, using your sample target data or at least some representative data. Then run some benchmarks with and without the FK constraints, with client-side scripts that generate the same load in both cases.
However, if you are checking the FK constraints manually anyway, I would recommend that you leave it up to MySQL and let MySQL handle it.
Two points:
1. Are you sure that checking integrity at the application level would be better in terms of performance?
2. Run your own test - testing whether FKs have a positive or negative influence on performance should be almost trivial (see the sketch below).
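A minimal sketch of such a test in MySQL (the staging_rows table and all other names are hypothetical; take timings from your client or the profiler):

    -- Two identical child tables, one with the FK and one without.
    CREATE TABLE child_fk (
        id        INT PRIMARY KEY,
        parent_id INT NOT NULL,
        FOREIGN KEY (parent_id) REFERENCES parent (id)
    ) ENGINE = InnoDB;

    CREATE TABLE child_nofk (
        id        INT PRIMARY KEY,
        parent_id INT NOT NULL
    ) ENGINE = InnoDB;

    -- Load the same rows into both and compare wall-clock time.
    INSERT INTO child_fk   (id, parent_id) SELECT id, parent_id FROM staging_rows;
    INSERT INTO child_nofk (id, parent_id) SELECT id, parent_id FROM staging_rows;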
I'm trying my best to persuade my boss into letting us use foreign keys in our databases - so far without luck.
He claims it costs a significant amount of performance, and says we'll just have jobs to clean up the invalid references now and then.
Obviously this doesn't work in practice, and the database is flooded with invalid references.
Does anyone know of a comparison, benchmark or similar which proves there's no significant performance hit to using foreign keys? (Which I hope will convince him)
There is a tiny performance hit on inserts, updates and deletes because the FK has to be checked. For an individual record this would normally be so slight as to be unnoticeable unless you start having a ridiculous number of FKs associated to the table (Clearly it takes longer to check 100 other tables than 2). This is a good thing not a bad thing as databases without integrity are untrustworthy and thus useless. You should not trade integrity for speed. That performance hit is usually offset by the better ability to optimize execution plans.
We have a medium-sized database with around 9 million records and FKs everywhere they should be, and we rarely notice a performance hit (except on one badly designed table that has well over 100 foreign keys; it is a bit slow to delete records from it, as all the keys must be checked). Almost every DBA I know of who deals with large, terabyte-sized databases and a true need for high performance on large data sets insists on foreign key constraints, because integrity is key to any database. If the people with terabyte-sized databases can afford the very small performance hit, then so can you.
FKs are not automatically indexed and if they are not indexed this can cause performance problems.
Honestly, I'd take a copy of your database, add properly indexed FKs and show the time difference to insert, delete, update and select from those tables in comparison with the same operations on your database without the FKs. Show that you won't be causing a performance hit. Then show the results of queries that find orphaned records that no longer have meaning because the PK they relate to no longer exists. It is especially effective to show this for tables which contain financial information ("We have 2700 orders that we can't associate with a customer" will make management sit up and take notice).
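The orphan check itself is a one-line sketch (hypothetical names):

    -- Orders whose customer no longer exists - impossible with an FK in place.
    SELECT o.order_id
    FROM   orders o
           LEFT JOIN customers c ON c.customer_id = o.customer_id
    WHERE  c.customer_id IS NULL;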
From Microsoft Patterns and Practices: Chapter 14 Improving SQL Server Performance:
When primary and foreign keys are defined as constraints in the database schema, the server can use that information to create optimal execution plans.
This is more of a political issue than a technical one. If your project management doesn't see any value in maintaining the integrity of your data, you need to be on a different project.
If your boss doesn't already know or care that you have thousands of invalid references, he isn't going to start caring just because you tell him about it. I sympathize with the other posters here who are trying to urge you to do the "right thing" by fighting the good fight, but I've tried it many times before and in actual practice it doesn't work. The story of David and Goliath makes good reading, but in real life it's a losing proposition.
It is OK to be concerned about performance, but making paranoid decisions is not.
You can easily write benchmark code to show results yourself, but first you'll need to find out what performance your boss is concerned about and detail exactly those metrics.
As far as the invalid references are concerned, if you don't allow nulls on your foreign keys, you won't get invalid references. The database will raise an exception if you try to assign a foreign key value that does not exist. If you need "nulls", assign a key such as "UNDEFINED" and make that the default key.
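A sketch of that sentinel-row idea (hypothetical names; the default-setting syntax varies by engine):

    -- A sentinel parent row stands in for "no value yet".
    INSERT INTO customers (customer_id, name) VALUES (0, 'UNDEFINED');

    -- Point the default at the sentinel instead of allowing NULL.
    ALTER TABLE orders ALTER COLUMN customer_id SET DEFAULT 0;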
Finally, explain database normalisation issues to your boss, because I think you will quickly find that this issue will be more of a problem than foreign key performance ever will.
Does anyone know of a comparison, benchmark or similar which proves there's no significant performance hit to using foreign keys? (Which I hope will convince him)
I think you're going about this the wrong way. Benchmarks never convince anyone.
What you should do is first uncover the problems that result from not using foreign key constraints. Try to quantify how much work it costs to "clean out invalid references". In addition, try to gauge how many errors those invalid references cause in the business process. If you can attach a dollar amount to that - even better.
Now for a benchmark: you should try to get insight into your workload and identify which types of operations are done most often. Then set up a testing environment and replay those operations with foreign keys in place. Then compare.
Personally, I would not claim right away, without knowledge of the applications running on the database, that foreign keys don't cost performance. Especially if you have cascading deletes and/or updates in combination with composite natural primary keys, I personally would have some fear of performance issues, especially timed-out or deadlocked transactions due to side-effects of cascading operations.
But no-one can tell you - you have to test it yourself, with your data, your workload, your number of concurrent users, your hardware, and your applications.
A significant factor in the cost would be the size of the index the foreign key references: if it's small and frequently used, the performance impact will be negligible; large and less frequently used indexes will have more impact. But if your foreign key is against a clustered index, it still shouldn't be a huge hit. @Ronald Bouman is right, though - you need to test to be sure.
I know that this is a decade-old post.
But database primitives are always in demand.
I will refer to my own experience.
In one of the projects I worked on, I had to deal with a telecommunication switch database. It had been developed with no FKs because they wanted the fastest inserts they could get. Since the system itself had to handle live calls, that made some sense.
Before, there was no need for any intensive queries, and if you wanted a report you could use the GUI software of the switch. After some time you could get some basic reports.
But when I got involved, they wanted to develop an AI, to be able to create smart reports and have something like automatic troubleshooting.
It was a complete nightmare: with millions of records, you couldn't execute any long query without facing SQL Server timeouts. And don't even think about using Entity Framework.
Having to face a situation like this is very different from having it described to you.
My advice is to be very specific in your design and to have a very good reason for not using FKs.
I understand the need to have referential integrity for limiting specific values on entry or possibly preventing them from removal upon a request of deletion. However, I am unclear as to a valid use case which would exclude this mechanism from always being used.
I guess this would fall into several sub-questions:
When is referential integrity not appropriate?
Is it appropriate to have fields containing multiple and/or possibly incomplete subsets of a foreign key's list?
Typically, should this be a schema structure design decision or an interface design decision? (Or possibly neither or both)
Thoughts?
When is referential integrity not appropriate?
Referential integrity is typically not used in data warehouses, where the data is a read-only copy of a transactional database. Another example of when you wouldn't need RI is when you want to log information which includes row ids; maintaining referential integrity for a read-only log table is a waste of database overhead.
Is it appropriate to have fields containing multiple and/or possibly incomplete subsets of a foreign key's list?
Sometimes you care more about capturing data than about data quality. Imagine you are aggregating a large amount of data from disparate systems, each of which suffers from data quality issues in its own right. Sometimes you are after the greater good of data quality, and having everything in one place, even with broken keys etc., represents a starting point for moving towards true data quality. It's not ideal, but it does happen, as the benefits can outweigh the tradeoffs.
Typically, should this be a schema structure design decision or an interface design decision? (Or possibly neither or both)
Everything about systems development is centered around information security, and a key element of that is data integrity. The database structure should lean towards enforcing these things when possible; however, you often are not dealing with modern database systems. Sometimes your data source is an old-school AS400 with long-antiquated apps. Sometimes you have to build a data and business layer which provides for data integrity.
Just my thoughts.
The only case I have heard of is if you are going to load a vast amount of data into your database; in that case, it may make sense to turn referential integrity off, as long as you know for certain that the data is valid. Once your loading/migration is complete, referential integrity should be turned back on.
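For example (SQL Server syntax; the table and constraint names are hypothetical), a sketch of turning a constraint off for the load and back on afterwards:

    -- Disable checking for the duration of the load...
    ALTER TABLE dbo.orders NOCHECK CONSTRAINT fk_orders_customer;

    -- ... bulk load / migration here ...

    -- ...then re-enable AND re-validate the existing rows (WITH CHECK),
    -- so the constraint is trusted by the optimizer again afterwards.
    ALTER TABLE dbo.orders WITH CHECK CHECK CONSTRAINT fk_orders_customer;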
There are arguments about putting data validation rules in programming code vs. the database, and I think it depends on the use cases of your software. If a single application is the only path to the database, you could put validation into the program itself and probably be alright. But if several different programs are using the database at the same time (e.g. your application and your friend's application), you'll want business rules in the database so that your data is always valid.
By 'validation rules', I am talking about rules such as 'items in cart > 0'. You may or may not want validation rules. But I think that primary/foreign keys are always important (or you could find later on that you wish you had them). I think they are required if you want to do replication at some point.
When is referential integrity not appropriate?
Sometimes when you are copying lots of records in bulk, or restoring data from some sort of backup, it is convenient to temporarily turn off the constraints of referential integrity.
Is it appropriate to have fields containing multiple and/or possibly incomplete subsets of a foreign key's list?
Duplicating data in this way goes against the concept of normalization. There are advantages and disadvantages to this approach.
Typically, should this be a schema structure design decision or an interface design decision? (Or possibly neither or both)
I would consider it a schema design decision. Think about the best way to model your problem in relational terms. Use the database in the way it was intended.
Referential integrity would always be appropriate if it didn't come at the cost of performance, scalability, and/or other features.
In some applications, referential integrity may be traded for something more important than the quality of the data.
Never, though a few people in the NoSQL, multi-value, and OO-DB realms will feel differently. Don't listen to them; they're wrong.
Yes. For example, if a vehicle is identified uniquely as (lotid, vin), then lotid is a foreign key to the lot table. If you want to find all pictures for a lot, you can join the vehicle_pictures table straight to the lot table, using a subset of the vehicle_pictures key (lotid in (lotid, vin)). Or am I not understanding you?
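A sketch of that model (hypothetical names and column types):

    CREATE TABLE lot (
        lotid INT PRIMARY KEY
    );

    CREATE TABLE vehicle (
        lotid INT NOT NULL,
        vin   VARCHAR(17) NOT NULL,
        PRIMARY KEY (lotid, vin),
        FOREIGN KEY (lotid) REFERENCES lot (lotid)  -- FK on part of the key
    );

    CREATE TABLE vehicle_pictures (
        lotid  INT NOT NULL,
        vin    VARCHAR(17) NOT NULL,
        pic_id INT NOT NULL,
        PRIMARY KEY (lotid, vin, pic_id),
        FOREIGN KEY (lotid, vin) REFERENCES vehicle (lotid, vin)
    );

    -- All pictures for a lot: join straight to lot using only the
    -- lotid subset of vehicle_pictures' composite key.
    SELECT p.*
    FROM   lot l
           JOIN vehicle_pictures p ON p.lotid = l.lotid
    WHERE  l.lotid = 42;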
Schema; the interface comes second. If the schema is bad, having a nice interface won't help in the long term.