When is referential integrity not appropriate? - sql

I understand the need for referential integrity to restrict values on entry, or to prevent rows from being removed when deletion is requested. However, I am unclear on a valid use case that would exclude this mechanism from always being used.
I guess this would fall into several sub-questions:
When is referential integrity not appropriate?
Is it appropriate to have fields containing multiple and/or possibly incomplete subsets of a foreign key's list?
Typically, should this be a schema structure design decision or an interface design decision? (Or possibly neither or both)
Thoughts?

When is referential integrity not appropriate?
Referential integrity is typically not used in data warehouses, where the data is a read-only copy of a transactional database. Another example of when you would not need RI is when you want to log information that includes row IDs; maintaining referential integrity for a read-only log table is a waste of database overhead.
Is it appropriate to have fields containing multiple and/or possibly incomplete subsets of a foreign key's list?
Sometimes you care more about capturing data than about data quality. Imagine you are aggregating a large amount of data from disparate systems, each of which in its own right suffers from data quality issues. Sometimes you are after the greater good of data quality, and having everything in one place, even with broken keys etc., represents a starting point for moving towards true data quality. It's not ideal, but it does happen, as the benefits can outweigh the tradeoffs.
Typically, should this be a schema structure design decision or an interface design decision? (Or possibly neither or both)
Everything about systems development is centered around information security, and a key element of that is data integrity. The database structure should lean towards enforcing these things when possible; however, you are often not dealing with modern database systems. Sometimes your data source is an old-school AS/400 running long-antiquated apps. Sometimes you have to build a data and business layer that provide for data integrity.
Just my thoughts.

The only case I have heard of is if you are going to load a vast amount of data into your database; in that case, it may make sense to turn referential integrity off, as long as you know for certain that the data is valid. Once your loading/migration is complete, referential integrity should be turned back on.
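As a minimal sketch of that load window in SQL Server (the table and constraint names here are assumptions), a foreign key can be disabled for the load and then re-enabled with re-validation:

-- Disable the FK for the duration of the bulk load (hypothetical names):
ALTER TABLE dbo.Orders NOCHECK CONSTRAINT FK_Orders_Customers;

-- ... perform the bulk load/migration here ...

-- Re-enable it; WITH CHECK re-validates existing rows so the constraint is trusted again:
ALTER TABLE dbo.Orders WITH CHECK CHECK CONSTRAINT FK_Orders_Customers;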
There are arguments about putting data validation rules in programming code vs. the database, and I think it depends on the use cases of your software. If a single application is the only path to the database, you could put validation into the program itself and probably be alright. But if several different programs are using the database at the same time (e.g. your application and your friend's application), you'll want business rules in the database so that your data is always valid.
By 'validation rules', I am talking about rules such as 'items in cart > 0'. You may or may not want validation rules. But I think that primary/foreign keys are always important (or you could find later on that you wish you had them). I think they are required if you want to do replication at some point.
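As a sketch of how such a rule could live in the database itself (the table and column names are assumptions), a CHECK constraint expresses 'items in cart > 0' directly:

-- Hypothetical cart table; the CHECK rejects non-positive quantities at write time:
ALTER TABLE cart_items
    ADD CONSTRAINT chk_cart_items_quantity CHECK (quantity > 0);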

When is referential integrity not appropriate?
Sometimes when you are copying lots of records in bulk, or restoring data from some sort of backup, it is convenient to temporarily turn off the constraints of referential integrity.
Is it appropriate to have fields containing multiple and/or possibly incomplete subsets of a foreign key's list?
Duplicating data in this way goes against the concept of normalization. There are advantages and disadvantages to this approach.
Typically, should this be a schema structure design decision or an interface design decision? (Or possibly neither or both)
I would consider it a schema design decision. Think about the best way to model your problem in relational terms. Use the database in the way it was intended.

Referential integrity would always be appropriate if it didn't come at the cost of performance, scalability, and/or other features.
In some applications, referential integrity may be traded for something more important than the quality of the data.

Never, though a few people in the NoSQL, multi-value, and OO-DB realms will feel differently. Don't listen to them; they're wrong.
Yes. For example, if a vehicle is identified uniquely by (lotid, vin), then lotid is a foreign key to the lot table. If you want to find all pictures for a lot, you can join the vehicle_pictures table straight to the lot table by using a subset of the vehicle_pictures key (lotid in (lotid, vin)). Or am I not understanding you?
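A minimal sketch of that composite-key layout (all definitions beyond the names mentioned above are assumptions):

CREATE TABLE lot (
    lotid INT PRIMARY KEY
);

CREATE TABLE vehicle (
    lotid INT NOT NULL REFERENCES lot (lotid),
    vin VARCHAR(17) NOT NULL,
    PRIMARY KEY (lotid, vin)
);

CREATE TABLE vehicle_pictures (
    lotid INT NOT NULL,
    vin VARCHAR(17) NOT NULL,
    picture_id INT NOT NULL,
    PRIMARY KEY (lotid, vin, picture_id),
    FOREIGN KEY (lotid, vin) REFERENCES vehicle (lotid, vin)
);

-- All pictures for a lot, joining on just the lotid part of the vehicle_pictures key:
SELECT p.*
FROM lot l
JOIN vehicle_pictures p ON p.lotid = l.lotid
WHERE l.lotid = 42;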
Schema; the interface comes second. If the schema is bad, having a nice interface is not a long-term solution.

Related

How to create a dynamic database

I'm working with .NET Core. I want to create a database for stock so that the user can add a new type of product with unknown features, and can also add features to an existing product.
I really need help with the design of the database.
Databases have schemas. This is a rigid structure that defines both the characteristics and constraints of the data that can be placed in it. You cannot do something like dynamically add columns without fundamentally impacting the database's integrity.
In true relational databases (SQL Server, MySQL, PostgreSQL, etc.), such changes are flat-out disallowed. However, some less rigid NoSQL solutions are either schema-less or have malleable schemas and will allow you to just start tracking some new data point without first altering the structure of the database. Even then, though, data integrity becomes a serious issue, and you can end up borking your entire dataset if you do this kind of thing willy-nilly.
Long and short, there's really no "dynamic" where databases are concerned. Even in NoSQL solutions, you're largely expected to plan out your data structure beforehand, and failure to do so results in inconsistencies in the data that can negate its usefulness entirely.
Your best bet for something like the described requirement is to have an actual Features table. In its simplest form, it might just have a string column for the name and a foreign key (or simply an ID-referencing column, depending on whether the database is relational or not) back to the product it's associated with. You'll need a primary key as well, which could either be a composite of the name and product ID (essentially making the combination unique) or an actual identity-type column.
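A minimal sketch of that table, assuming a relational engine and hypothetical names:

CREATE TABLE Product (
    product_id INT PRIMARY KEY,
    name VARCHAR(100) NOT NULL
);

CREATE TABLE Feature (
    product_id INT NOT NULL REFERENCES Product (product_id),
    name VARCHAR(100) NOT NULL,
    value VARCHAR(255),
    -- the composite key makes each feature name unique per product:
    PRIMARY KEY (product_id, name)
);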
The key with data, in general, is to generalize. Nothing is completely unique; most things are just variations of other things. Boil your data down to least common denominators to determine your actual schema. Then, where there are outliers, you can take a less rigid strategy like the one described above.

Is CHECK better than invalid data?

I'm writing code for a database. I have a table with a log of machine activity, looking like:
CREATE TABLE Work(
    id SERIAL PRIMARY KEY,
    machine_ID integer NOT NULL DEFAULT 0,
    start_work timestamp,
    etc...
);
I know machine_ID can only be in the range 1 to 5.
Here my question comes:
Are there any benefits of using CHECK(machine_ID >= 1 AND machine_ID <=5)?
Wouldn't it be better to accept polluted data in order to repair possible bugs later, or even to use it as an opportunity to clean up the data?
Perhaps this is more appropriate as a comment.
But a column called machine_id should not be validated using a check constraint. Instead, you should have a table -- say machines -- that is a reference table for machines.
Your code should use a foreign key constraint. This is a special type of data validation called referential integrity.
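A minimal sketch of that approach against the Work table above (the reference table name is an assumption):

-- Reference table listing the valid machines:
CREATE TABLE Machines (
    machine_ID integer PRIMARY KEY
);

INSERT INTO Machines (machine_ID) VALUES (1), (2), (3), (4), (5);

-- Work.machine_ID now has to match a row in Machines:
ALTER TABLE Work
    ADD CONSTRAINT fk_work_machine
    FOREIGN KEY (machine_ID) REFERENCES Machines (machine_ID);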
As for your question. In my experience, it is usually better to capture errors when they go into the database. Bad data in the database usually leads to problems further down the road -- problems that could have been avoided.
That will be okay if you want to limit which IDs can be in your database, but nothing else.
This is massively subjective, with different approaches and opinions, and is highly dependent on your business case.
One principle is to keep things simple, but even what that means is subjective.
Reject Early
In a pre-processing layer, apply rules to the data being received. As soon as possible reject any row/file/source and request re-submission.
This makes it clear what has been accepted, what hasn't been accepted, how to deal with such scenarios, etc.
Consume Everything
Where failures can be complicated and varied, a database can often be the best place to do analysis to understand the source, cause, magnitude and/or impact of failures. In which case ingesting everything can be beneficial.
But that can have a potentially uncontrolled impact. As such, my general experience of this is to have a separate staging area in the database, essentially as unstructured as possible, to allow as much data in as possible. But even that's not always helpful. If you use strings for all fields, you can accept data that would normally have to be rejected. But even then, what if you have a file with 9 columns being supplied for a table of 8 columns? You could accept the whole row as a single string, but analyzing that for meaningful results will be close to impossible.
What this means in your case depends on the project you are working on, and that's what you haven't described.
My personal default position is to re-categorise certain source data failures as predicted/business-as-usual inconsistencies. You can then structure your staging area to handle these, for reporting, reconciliation, remedial action, etc. And, more importantly, explicitly build them into all related business processes. Doing so introduces a cost, which can be estimated against the benefit of accommodating them, with the intention of only accommodating inconsistencies where there is actually a material benefit. (Rather than just being a hoarder who keeps everything just in case it might help someone someday, maybe.)
Whatever direction that goes, however, the operational database itself (not the staging area) is heavily structured, with integrity constraints in place to prevent bugs, unexpected inputs, etc., from corrupting existing data or causing unexpected, unmonitored consequences.
If you truly believe that your operational database (be that transactional, analytical, or anything else) would benefit from being absent some/many/all of these data integrity tools, then a SQL Relational Database is probably the wrong tool for you. Instead consider the enormous number of unstructured data storage and processing platforms.
There are a bunch of different considerations.
Firstly, I prefer not to use check constraints for business rules that are likely to change - I don't want to roll out a database schema change (and a check constraint is a schema change) in response to a predictable business event. Adding or removing machines feels like a predictable business event; so as #gordonLinoff suggests, I'd recommend using a foreign key to the "machines" table.
Secondly, I don't like "hidden code" - from a development and maintenance point of view, I like to keep all the validation as intelligible as possible. Check constraints and triggers are relatively "hidden", hard to document, hard to debug, hard to test for non-trivial use cases; foreign keys, on the other hand, are explicit and developers expect them.

Database Design Without Foreign Keys

After having worked at various employers I've noticed a trend of "bad" database design with some of these companies - primarily the exclusion of Foreign Keys Constraints. It has always bugged me that these transactional systems didn't have FK's, which would've promoted referential integrity.
Are there any scenarios, in transactional systems, whereby the omission of FK's would be beneficial?
Has anyone else experienced this, if so what was the outcome?
What should one do if they're presented with this scenario and they're asked to maintain/enhance the system?
I cannot think of any scenario where, if two columns have a dependency, they should not have a FK constraint set up between them. Removing referential integrity may certainly speed up database operations but there's a pretty high cost to pay for that.
I have experienced such systems, and the usual outcome is corrupted data, in the sense that records exist that shouldn't (or vice versa). These are the sort of systems where people believe they're okay because the application takes care of it, not caring that:
Every application has to take care of it, rather than one DB server.
It only takes one bug, or malignant app, to screw it up for everyone.
It is the responsibility of the database to protect itself! That is one of its best features.
As to what you should do, I simply put forward the possible things that can go wrong and how using FKs will prevent that (often with a cost/benefit analysis "skewed" toward my viewpoint, if necessary). Then let the company decide - it is their database, after all.
There is a school of thought that a well-written application does not need referential integrity. If the application does things right, the thinking goes, there's no need for constraints.
Such thinking is akin to not doing defensive programming because if you write the code correctly, you won't have bugs. While true, it simply won't happen. Not using appropriate constraints is asking for data corruption.
As for what you should do, you should encourage the company to add constraints at every opportunity. You don't want to push it to the point of getting in trouble or making a bad name for yourself, but as long as the environment is appropriate, keep pushing for it. Everyone's life will be better in the long run.
Personally, I have no problem with a database not having explicit declarations for foreign keys. But, it depends on how the database is being used.
Most of the databases that I work with are relatively static data derived from one or more transactional systems. I am not particularly concerned with rogue updates affecting the database, so an explicit definition of a foreign key relationship is not particularly important.
One thing that I do have is very consistent naming. Basically, every table has a first column called ID, which is exactly how the column is referred to in other tables (or sometimes with a prefix, when there are multiple relationships between two entities). I also try to insist that every column in such a database has a unique name that describes the attribute (so "CustomerStartDate" is different from "ProductStartDate").
If I were dealing with data that had more "cooks in the pot", then I would want to be more explicit about the foreign key relationships, and I would then be more willing to have the overhead of foreign key definitions.
This overhead arises in many places. When creating a new table, I may want to use "create table as" or "select into" and not worry about the particulars of constraints. When running update or insert queries, I may not want the database overhead of checking things that I know are okay. However, I must emphasize that consistent naming greatly increases my confidence that things are okay.
Clearly, my perspective is not one of a DBA but of a practitioner. However, invalid relationships between tables are something I -- or the rest of my team -- almost never have to deal with.
As long as there's a single point of entry into the database it ultimately doesn't matter which "layer" is maintaining referential integrity. Using the "built-in layer" of foreign key constraints seems to make the most sense, but if you have a rock solid service layer responsible for the same thing then it has freedom to break the rules if necessary.
Personally I use foreign key constraints and engineer my apps so they don't have to break the rules. Relational data with guaranteed referential integrity is just easier to work with.
The performance gained is probably equivalent to the performance lost from having to maintain integrity outside of the db.
In an OLTP database, the only reason I can think of is if you care about performance more than data integrity. Enforcing an FK when a row is inserted into the child table requires an index seek on the parent table, and I can imagine there may be extreme situations where even this relatively quick index seek is too much. For example, some kind of very intensive logging where you can live with incorrect log entries and the application doing the writing is simple and unlikely to have bugs.
That being said, if you can live with corrupt data, you can probably live without a database in the first place.
Defensive programming without foreign keys works if you primarily use stored procedures and every application uses those stored procedures instead of writing its own queries. Then you can control it quite easily, and more flexibly than with standard foreign keys.
One situation I can think of off the top of my head where foreign key constraints are not readily usable is a permissions module where permissions can be applied per user or per group, determined by a Boolean. So some of the records in the permissions table have a user ID and others have a group ID. If you still wanted foreign key constraints, you would have to have two different fields for the same mutually exclusive information and allow them both to be nullable, which means adding another constraint saying that one may be null but both can't be, as well as requiring a combination of 3 fields to be unique instead of 2 (user/group ID and permission ID). The alternative is two separate tables containing the same data, meaning maintaining both tables separately.
But perhaps in that scenario it's best to separate the data. Anywhere you need the same field to connect to different tables based on other data in that record, you cannot use foreign key constraints, and it becomes best to keep the constraints in the stored procedures and views instead.
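A minimal, PostgreSQL-flavored sketch of the single-table variant described above, with two nullable foreign keys and a CHECK enforcing that exactly one is set (all names are assumptions):

CREATE TABLE users (user_id INT PRIMARY KEY);
CREATE TABLE groups (group_id INT PRIMARY KEY);

CREATE TABLE permissions (
    permission_id INT NOT NULL,
    user_id INT NULL REFERENCES users (user_id),
    group_id INT NULL REFERENCES groups (group_id),
    -- exactly one of user_id / group_id must be non-null:
    CHECK ((user_id IS NULL) <> (group_id IS NULL))
);

Note that a foreign key on a nullable column is simply not checked when the column is null, which is what makes this pattern work.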

Best practice: should I use FK on DB using nHibernate/FluentNhibernate?

So far I have always enforced my DB with FK relationships. Things changed yesterday while mapping some classes with FluentNhibernate. My mapping didn't work, and I discovered that the issue was caused by the order in which FluentNhibernate created the queries.
Now a question arises: should I keep enforcing data integrity with FKs, or is it better to avoid them since I focus on domain classes instead of SQL queries?
Thanks
To my knowledge, it is far better to keep your database consistent, because you may not be the only one who works on this DB in the future. Someone else may have access to the DB and do something that corrupts your data consistency, and as a result your application will no longer behave the way you expect, because conditions it assumed no longer hold.
Letting Fluent/NH create your database during development is fine, but when it goes into production you really should check all the foreign keys, indexes, etc., and from then on make only scripted changes.
Keep your database consistent, maintain referential integrity.
If a tool you are using breaks as a result, there is bound to be a workaround. However, if you lose referential integrity in order to use NHibernate, what happens if you decide to use a different ORM? You will have a dodgy database, and who's to say that the next ORM in line will like that?
It's like a separation-of-concerns question: each chunk of your application should be designed to be robust enough to survive if another chunk is changed or removed. So don't change good database practice simply to make a product that is layered above it play nicely.
Using a domain-driven or model-oriented approach, where the DB is merely seen as an 'implementation detail', does not mean that you should ignore the integrity of your data.
I see no reason why you should drop foreign-key (and other) constraints from your database.
The database is more than just storage for your data. Its task is also to guard the integrity of that data.
It is perfectly possible to combine the two worlds (domain-driven and relational database) with NHibernate. Make sure that the two areas focus on what they're best at; the database is best at storing data and making sure that the data remains valid and intact.

database relationships

Does setting up proper relationships in a database help with anything other than data integrity?
Do they improve or hinder performance?
As long as you have the obvious indexes in place corresponding to the foreign keys, there should be no perceptible negative effect on performance. It's one of the more foolproof database features you have to work with.
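For example (a hedged sketch; the table and column names are assumptions), engines such as SQL Server and PostgreSQL do not automatically index the referencing column, so that index is yours to add:

-- Supports joins on the FK and the integrity checks when parent rows are deleted/updated:
CREATE INDEX ix_orders_customer_id ON orders (customer_id);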
I'd have to say that proper relationships will help people to understand the data (or the intention of the data) better than if omitting them, especially as the overall cost is quite low in maintaining them.
Their presence doesn't hinder performance except in terms of architecture (as others have pointed out, integrity enforcement will occasionally raise foreign key violations, which may have some effect), but IMHO this is outweighed by the many benefits (if used correctly).
I know you weren't asking whether to use FKs or not, but I thought I'd just add a couple of viewpoints about why to use them (and have to deal with the consequences):
There are other considerations too, such as if you ever plan to use an ORM (perhaps later on) you'll require foreign keys. They can also be very helpful for ETL/Data Import and Export and later for reporting and data warehousing.
It's also helpful if other applications will make use of the schema - since Foreign Keys implement a basic business logic. So your application (and any others) only need to be aware of the relationships (and honour them). It'll keep the data consistent and most likely reduce the number of data errors in any consuming applications.
Lastly, it gives you a pretty decent hint as to where to put indexes, since it's likely you'll look up table data by an FK value.
It neither helps nor hurts performance in any significant way. The only hindrance is the check for integrity when inserting/updating/deleting.
Foreign keys are an important part of database design because they ensure consistency. You should use them because it offers the lowest level of protection against data screw ups that can wreck your applications. Another benefit is that database tools (visualization/analysis/code generation) use foreign keys to relate data.
Do relationships in databases improve or hinder performance?
Like any tool in your toolbox, the results you'll get depend on how you use it. Properly specified relationships and a well-designed logical database can be an enormous boon to performance -- consider the difference between searching through normalized and denormalized data, for example.
Depending on your database engine, relationships defined through foreign key constraints can benefit performance. The constraint allows the engine to make certain assumptions about the existence of data in tables on the parent side of the key.
A brief explanation for MS SQL Server can be found at http://www.microsoft.com/technet/abouttn/flash/tips/tips_122104.mspx. I don't know about other engines, but the concept would make sense in other platforms.
Relationships in the data exist whether you declare them or not. Declaring and enforcing the relationships via FK constraints will prevent certain kinds of errors in the data, at a small cost of checking data when inserts/updates/deletes occur.
Declaring cascading deletes via relationships helps prevent certain kinds of errors when deleting data.
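A minimal sketch of a cascading delete (the table names are assumptions):

CREATE TABLE orders (
    order_id INT PRIMARY KEY
);

CREATE TABLE order_items (
    order_id INT NOT NULL REFERENCES orders (order_id) ON DELETE CASCADE,
    line_no INT NOT NULL,
    PRIMARY KEY (order_id, line_no)
);

-- Deleting an order now removes its items as well, so no orphaned rows are left behind:
DELETE FROM orders WHERE order_id = 1;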
Knowing the relationships helps to make flexible and correct use of the data when forming queries.
Designing the tables well can make the relationships more obvious and more useful. Using relationships in the data is the primary power behind using relational databases in the first place.
About impact on performance: In my experience with MS Access 2003, if you have a multi-user application and use Relationships to enforce a lot of referential integrity, you can take a big hit in terms of response time for the end-user.
There are different ways to take care of enforcing referential integrity. I decided to take out some rules in Relationships, build more enforcement into the front-end and live with some loss of RI. Of course in the multi-user environment, you want to be very careful with that bit of liberty.
In my experience building performance-sensitive databases, foreign keys hurt performance pretty significantly, since they have to be checked every time the referring record is inserted/updated or the master record is deleted. If you need proof, just look at the execution plan.
I still keep them for documentation and for tools to use, but I usually disable them, especially in high-performance systems where access to the DB is only through the application layer.
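In SQL Server, for example, a disabled foreign key stays in the metadata, where documentation and tooling can still see it, but is no longer enforced (the table and constraint names are assumptions):

-- Stop enforcing the FK while keeping its definition:
ALTER TABLE dbo.OrderItems NOCHECK CONSTRAINT FK_OrderItems_Orders;

-- The definition remains queryable for tools and documentation:
SELECT name, is_disabled FROM sys.foreign_keys;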