How would you make a Temporal Many-to-Many Relationship in SQL?

How would you represent a temporal many-to-many relation in SQL? Under non-temporal circumstances one would use a junction table (aka link/bridge/map) to connect the two sides.
Is adding temporal tracking as simple as adding ValidStart and ValidEnd columns to the junction table? If you have done this, what issues (if any) did you run into? Is there a better method for keeping track of changes over time in this kind of relation?
If it helps at all, in my case I'm specifically using SQL Server 2008, and the temporal data is not bitemporal, as I'm only tracking valid time.

I am working on a project (for some years now) that uses both temporal data and temporal many-to-many relations. Each table has ValidFrom and ValidUntil columns (storing dates only).
First you have to define the semantics of the Valid* columns, i.e. whether ValidUntil is included or excluded from the validity range. You also need to specify whether NULL dates are valid and what their meaning is.
Next you need a couple of functions, such as dbo.Overlaps2() and dbo.Overlaps3(), which receive 2 and 3 date ranges respectively and return 1 if the date ranges overlap and 0 otherwise.
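A minimal sketch of what dbo.Overlaps2() might look like in T-SQL, assuming half-open intervals (ValidFrom inclusive, ValidUntil exclusive) and a NULL ValidUntil meaning "valid indefinitely"; dbo.Overlaps3() follows the same pattern with a third range:

CREATE FUNCTION dbo.Overlaps2
(
    @From1 date, @Until1 date,
    @From2 date, @Until2 date
)
RETURNS bit
AS
BEGIN
    -- Assumption: ValidFrom is inclusive, ValidUntil is exclusive,
    -- and a NULL ValidUntil means "valid indefinitely".
    RETURN CASE
        WHEN @From1 < COALESCE(@Until2, '9999-12-31')
         AND @From2 < COALESCE(@Until1, '9999-12-31')
        THEN 1 ELSE 0
    END;
END;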
On top of that, I defined views for the many-to-many relationships that filter on dbo.Overlaps3(...) = 1.
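For example, with hypothetical Person and Company tables joined through a PersonCompany junction table (each carrying its own ValidFrom/ValidUntil), such a view might look like this:

CREATE VIEW dbo.PersonCompanyOverlapping
AS
SELECT p.PersonId, c.CompanyId, pc.ValidFrom, pc.ValidUntil
FROM dbo.Person p
JOIN dbo.PersonCompany pc ON pc.PersonId = p.PersonId
JOIN dbo.Company c ON c.CompanyId = pc.CompanyId
WHERE dbo.Overlaps3(p.ValidFrom, p.ValidUntil,
                    pc.ValidFrom, pc.ValidUntil,
                    c.ValidFrom, c.ValidUntil) = 1;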
One further point is to have a set of functions which calculate the effective validity range based on dates in 2 or 3 related tables.
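For two tables, such a function might be sketched as an inline table-valued function that returns the intersection of the two ranges (same NULL conventions as above; names are illustrative):

CREATE FUNCTION dbo.EffectiveRange2
(
    @From1 date, @Until1 date,
    @From2 date, @Until2 date
)
RETURNS TABLE
AS RETURN
(
    SELECT
        -- the latest start wins...
        CASE WHEN @From1 > @From2 THEN @From1 ELSE @From2 END AS EffectiveFrom,
        -- ...and the earliest end, with NULL again meaning "indefinitely"
        CASE WHEN COALESCE(@Until1, '9999-12-31') < COALESCE(@Until2, '9999-12-31')
             THEN @Until1 ELSE @Until2 END AS EffectiveUntil
);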
Recently I had to add functionality to allow a user to display either all available data or only currently valid data. I save this setting in a users table, associate the SPID with the user when opening a connection, and filter the records in another set of views.
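A sketch of one such filtering view, reusing the hypothetical PersonCompany junction table from above and assuming a hypothetical UserSessions table that maps the SPID to the user, plus a ShowAllData flag on the users table:

CREATE VIEW dbo.PersonCompanyFiltered
AS
SELECT pc.*
FROM dbo.PersonCompany pc
JOIN dbo.UserSessions s ON s.Spid = @@SPID   -- hypothetical SPID-to-user mapping
JOIN dbo.Users u ON u.UserId = s.UserId
WHERE u.ShowAllData = 1                      -- user wants everything
   OR (pc.ValidFrom <= CAST(GETDATE() AS date)
       AND (pc.ValidUntil IS NULL
            OR pc.ValidUntil > CAST(GETDATE() AS date)));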


Is it better to have more or fewer tables in SQL?

I have a large database, one of the tables is called Users.
There are two kinds of users in the database - basic users and advanced users. In the users table, there are 29 columns. However, only 12 of these are applicable to the basic users - the other 17 columns are only used for advanced users, and for basic users they all just contain null.
Is this an OK setup? Would it be more efficient to, say, split the two kinds of users into two different tables, or put all the extra fields that advanced users have in a separate table?
It's better to have the right amount of tables - this may be more or less, depending on your needs.
To your specific case, you should always start with third normal form and only revert to lesser forms when absolutely necessary (such as for performance) and only when you understand the consequences.
An attribute (column) belongs in a table if it is dependent on the key, the whole key and nothing but the key (so help me, Codd).
It's arguable whether your other 17 columns depend on the key in your user table, but I would separate them anyway, just for the space saving.
Have your basic user table with the twelve columns (including a unique key of some sort) and your advanced user table with the other columns, and also that key so you can tie the rows from each together.
You could go even further and have a one to many relationship if your use case is that users can have any of the 17 attributes independent of each other but that doesn't seem to be what you've described.
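A sketch of that two-table split, with illustrative names:

CREATE TABLE Users (
    UserId INT PRIMARY KEY,
    UserName VARCHAR(100) NOT NULL
    -- plus the other basic-user columns
);

CREATE TABLE AdvancedUserDetails (
    UserId INT PRIMARY KEY REFERENCES Users(UserId),
    AdvancedSetting VARCHAR(100) -- plus the rest of the advanced-only columns
);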
It depends:
If the number of columns is large, then it will be more efficient to create two tables as you describe, since you will not be reserving space for 17 columns that end up holding null.
You can always tack a view on the front which combines both tables, so your application code could be unaffected.
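For example, building on the sketch above:

CREATE VIEW AllUsers AS
SELECT u.UserId, u.UserName, a.AdvancedSetting
FROM Users u
LEFT JOIN AdvancedUserDetails a ON a.UserId = u.UserId;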
Yes, it's better to split up this table - but not into two; it's better to split it into three tables:
User - contains the properties common to both basic and advanced users:
UserID (PK)
UserName
BasicUser - contains the basic-user properties, with the User table's primary key as a foreign key:
UserID (FK to User)
BasicUserDetail
AdvancedUser - contains the advanced-user properties, keyed the same way:
UserID (FK to User)
AdvancedUserDetail
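A sketch of that three-table split ('User' is a reserved word in some databases, hence AppUser here; types are illustrative):

CREATE TABLE AppUser (
    UserID INT PRIMARY KEY,
    UserName VARCHAR(100) NOT NULL
);

CREATE TABLE BasicUser (
    UserID INT PRIMARY KEY REFERENCES AppUser(UserID),
    BasicUserDetail VARCHAR(255)
);

CREATE TABLE AdvancedUser (
    UserID INT PRIMARY KEY REFERENCES AppUser(UserID),
    AdvancedUserDetail VARCHAR(255)
);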
In this case, it's valid and more efficient to use 'single table per class hierarchy' in terms of speed of data retrieval, but if you insert a BasicUser, it will reserve 17 columns per tuple for nothing. This case is so frequent that it is catered for by ORMs such as Hibernate. Using this approach you avoid a join between tables, which may be expensive depending on the case.
The bad thing is that if your design needs to scale in terms of types of users, you will need to add additional columns, many of which will be empty.
Usually it won't matter much, but if you have very many users and only a few of them are advanced users, it might be better to split. To my knowledge there are no exact rules for when to split and when not to.

How to avoid creating a date island in QlikView?

I'm a beginner developer and I have a database which has several different dates.
Created Date
Converted Date
Lost Date
Changed Date
etc.
The data needs to be shown in one application and filtered on all dates. I am coding in QlikView, and I could create a date island and use its native set analysis to filter the data, but that is having a major impact on performance.
Anyone coding in QlikView come across a similar scenario?
Set analysis indeed has a major impact on performance. You are better off using the normal 'selection' functionality in QlikView.
For the answer below I am going to assume that you are familiar with the concept of Star Schema development. In short it means separating Dimensions (selection fields) from Fact fields (counter fields, summation fields, etc.) and connecting them via a link table.
There are two possible scenarios:
1. More than one date is related to the same fact.
For example, you have a 'sales transactions' table which has as a fact the amount of money involved in the sale, and there is not only the 'sale date' but also the 'payment date', and you want to select on both. In this case you want to have several independent date selections, since you cannot be sure whether the user wants to select on Converted date, Created date, etc. You need to duplicate your 'date island' with different key names and connect it to your transactions table twice. Both date pools will no longer be islands and are more properly called 'Calendar dimensions'.
2. Different dates are related to different facts.
In this case you can use one 'Calendar dimension' to accommodate all date fields. Simply create one AutoNumber key in your calendar and call it %DateKey. Make this field the connection between your calendar table and your link table. Now, for all fact tables that have a date which you want to make selectable with the calendar, make sure you connect them to the link table using a key that includes the date in the AutoNumber hash.
Having experienced this same issue, what I would recommend is creating what I call a Key Table, like the example below; it keeps the relationships intact and you don't have to use set analysis as much. Just make sure you put a table with all possible dates as one of the child tables, plus a %DateKey like littlegreen suggested.

Difference between a DB view and a lookup table

When I create a view I can base it on multiple columns from different tables.
When I want to create a lookup table I need information from one table, for example the foreign key of an order table, to get customer details from another table. I can create a view having parameters to make sure it will get all the data that I need. I could also - from what I have been reading - make a lookup table. What is the difference in this case, and when should I choose a lookup table? I hope this ain't a bad question, I'm not very into DBs yet ;).
Creating a view gives you a "live" representation of the data as it is at the time of querying. This comes at the cost of higher load on the server, because it has to determine the values for every query.
This can be expensive, depending on table sizes, database implementations and the complexity of the view definition.
A lookup table, on the other hand, is usually filled "manually", i.e. not every query against it will cause an expensive operation to fetch values from multiple tables. Instead, your program has to take care of updating the lookup table should the underlying data change.
Usually lookup tables lend themselves to things that change seldom but are read often. Views, on the other hand - while more expensive to execute - are more current.
I think your usage of "Lookup Table" is slightly awry. In normal parlance a lookup table is a code or reference data table. It might consist of a CODE and a DESCRIPTION or a code expansion. The purpose of such tables is to provide a list of permitted values for restricted columns, things like CUSTOMER_TYPE or PRIORITY_CODE. This category of table is often referred to as "standing data" because it changes very rarely, if at all. The value of defining this data in lookup tables is that they can be used in foreign keys and to populate dropdowns and lists of values.
What you are describing is a slightly different scenario:
I need information from one table, for example the foreign key of an order table, to get customer details from another table
Both these tables are application data tables. Customer and Order records are dynamic. Now it is obviously valid to retrieve additional data from the Customer table to display along side the Order data, and in that sense Customer is a "lookup table". More pertinently it is the parent table of Order, because it has the primary key referenced by the foreign key on Order.
By all means build a view to capture the joining logic between Order and Customer. Such views can be quite helpful when building an application that uses the same joined tables in several places.
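For example, a sketch with hypothetical table and column names:

CREATE VIEW OrderWithCustomer AS
SELECT o.OrderId, o.OrderDate, c.CustomerId, c.CustomerName
FROM Orders o
JOIN Customers c ON c.CustomerId = o.CustomerId;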
Here's an example of a lookup table. We have a system that tracks Jurors, one of the tables is JurorStatus. This table contains all the valid StatusCodes for Jurors:
Code: Value
WS : Will Serve
PP : Postponed
EM : Excuse Military
IF : Ineligible Felon
This is a lookup table for the valid codes.
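In DDL terms, such a lookup table might be sketched like this (column names are illustrative):

CREATE TABLE JurorStatus (
    StatusCode CHAR(2) PRIMARY KEY,
    Description VARCHAR(50) NOT NULL
);

INSERT INTO JurorStatus (StatusCode, Description)
VALUES ('WS', 'Will Serve'),
       ('PP', 'Postponed'),
       ('EM', 'Excuse Military'),
       ('IF', 'Ineligible Felon');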
A view is like a query.
Read this tutorial and you may find helpful info when a lookup table is needed:
SQL: Creating a Lookup Table
Just learn to write SQL queries to get exactly what you need. No need to create a view! Views are not good to use in many instances, especially if you start to base them on other views, which will kill performance. Do not use views just as a shorthand for query writing.

One MySQL table with many fields, or many (hundreds of) tables with fewer fields?

I am designing a system for a client, where he is able to create data forms for various products he sells himself.
The number of fields he will be using will not be more than 600-700 (worst-case scenario). As it looks now, he will probably be in the range of 400-500 (max).
I had 2 methods in mind for creating the database (using meta data):
a) Create a table for each product, which will hold only the fields necessary for that product; this will result in hundreds of tables, but each with only the necessary fields,
or
b) use one single table with all available form fields (anywhere from the current 300 to the max of 700), resulting in one table with MANY fields, of which only about 10% will be used for each product entry (a product should usually not use more than 50-80 fields).
Which solution is best, keeping in mind that table maintenance (creation, updates and changes) will be done using meta data, so I will not need to change the table(s) manually?
Thank you!
/**** UPDATE *****/
Just an update: even after all this time (and a lot of additional experience gathered), I need to mention that not normalizing your database is a terrible idea. What is more, a non-normalized database almost always (in my experience, just always) indicates a flawed application design as well.
I would have 3 tables (sketched in SQL below):
product
  id
  name
  whatever else you need
field
  id
  field_name
  anything else you might need
product_field
  id
  product_id
  field_id
  field_value
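A sketch of those three tables in SQL (types are illustrative):

CREATE TABLE product (
    id INT PRIMARY KEY,
    name VARCHAR(100) NOT NULL
    -- whatever else you need
);

CREATE TABLE field (
    id INT PRIMARY KEY,
    field_name VARCHAR(100) NOT NULL
    -- anything else you might need
);

CREATE TABLE product_field (
    id INT PRIMARY KEY,
    product_id INT NOT NULL,
    field_id INT NOT NULL,
    field_value VARCHAR(255),
    FOREIGN KEY (product_id) REFERENCES product(id),
    FOREIGN KEY (field_id) REFERENCES field(id)
);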
Your key deciding factor is whether normalization is required. Even though you are only adding data using an application, you'll still need to cater for anomalies, e.g. what happens if someone's phone number changes, and they insert multiple rows over the lifetime of the application? Which row contains the correct phone number?
As an example, you may find that you'll have repeating groups in your data, like one person with several phone numbers; rather than have three columns called "Phone1", "Phone2", "Phone3", you'd break that data into its own table.
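For instance, a sketch of breaking repeated phone numbers out into their own table:

CREATE TABLE person_phone (
    person_id INT NOT NULL,       -- FK back to the person table
    phone_number VARCHAR(20) NOT NULL,
    PRIMARY KEY (person_id, phone_number)
);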
There are other issues in normalisation, such as transitive or non-key dependencies. These concepts will hopefully lead you to a database table design without modification anomalies, as you should hope for!
pulegium's solution is a good way to go.
You do not want to go with the one-table-for-each-product solution, because the structure of your database should not have to change when you insert or delete a product. Only the rows of one or many tables should be inserted or deleted, not the tables themselves.
While it's possible that it may be necessary, having that many fields for something as simple as a product list sounds to me like you probably have a flawed design.
You need to analyze your potential table structures to ensure that each field contains no more than one piece of information (e.g., "2 hammers, 500 nails" in a single field is bad) and that each piece of information has no more than one field where it belongs (e.g., having phone1, phone2, phone3 fields is bad). Either of these situations indicates that you should move that information out into a separate, related table with a foreign key connecting it back to the original table. As pulegium has demonstrated, this technique can quickly break things down to three tables with only about a dozen fields total.

What is the preferred way to store custom fields in a SQL database?

My friend is building a product to be used by different independent medical units.
The database stores a vast collection of measurements taken at different times, like the temperature, blood pressure, etc...
Let us assume these are held in a table called exams with columns temperature, pressure, etc... (as well as id, patient_id and timestamp). Most of the measurements are stored as floats, but some are of other types (strings, integers...)
While many of these measurements are handled by their product, it needs to allow the different medical units to record and process other, custom measurements. A very nifty UI allows the administrator to edit these custom fields, specifying their name, type, possible range of values, etc.
He is unsure as to how to store these custom fields.
He is leaning towards a separate table (say a table custom_exam_data with fields like exam_id, custom_field_id, float_value, string_value, ...)
I worry that this will make searching both more difficult to achieve and less efficient.
I am leaning towards modifying the exam table directly (while avoiding conflicts on column names with some scheme like prefixing all custom fields with an underscore or naming them custom_1, ...)
He worries about modifying the database dynamically and having different schemas for each medical unit.
Hopefully some people with more experience can weigh in on this issue.
Notes:
he is using Ruby on Rails, but I think this question is pretty much framework agnostic, except for the fact that he is only looking for solutions in SQL databases.
I simplified the problem a bit, since the custom fields need to be available for more than one table, but I believe this doesn't really impact the direction to take.
(added) A very generic reporting module will need to search, sort, generate stats, etc. on this data, so it is required that this data be stored in columns of the appropriate type
(added) User inputs will be filtered, for the standard fields as well as for the custom fields. For example, numbers will be checked within a given range (can't have a temperature of -12 or +444), etc... Thus, conversion to the appropriate SQL type is not a problem.
I've had to deal with this situation many times over the years, and I agree with your initial idea of modifying the DB tables directly, and using dynamic SQL to generate statements.
Creating string UserAttribute or Key/Value columns sounds appealing at first, but it leads to the inner-platform effect where you end up having to re-implement foreign keys, data types, constraints, transactions, validation, sorting, grouping, calculations, et al. inside your RDBMS. You may as well just use flat files and not SQL at all.
SQL Server provides INFORMATION_SCHEMA views that let you inspect table schemas at runtime, and dynamic DDL lets you create and modify them. This way you get full type checking, constraints, transactions, calculations, and everything else you need already built in; don't reinvent it.
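As a hedged sketch, adding a custom column at runtime with dynamic DDL might look like this (assumes the exams table from the question and a column name that has already been validated against a whitelist):

-- Build and execute the ALTER TABLE statement dynamically.
DECLARE @col sysname = N'custom_fingerCount';  -- hypothetical, pre-validated name
DECLARE @sql nvarchar(max) =
    N'ALTER TABLE dbo.exams ADD ' + QUOTENAME(@col) + N' float NULL;';
EXEC sp_executesql @sql;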
It's strange that so many people come up with ad-hoc solutions for this when there's a well-documented pattern for it:
Entity-Attribute-Value (EAV) Model
Two alternatives are XML and Nested Sets. XML is easier to manage but generally slow. Nested Sets usually require some type of proprietary database extension to do without making a mess, like CLR types in SQL Server 2005+. They violate first-normal form, but are nevertheless the fastest-performing solution.
Microsoft Dynamics CRM achieves this by altering the database design each time a change is made. Nasty, I think.
I would say a better option would be to consider an attribute table. Even though these are often frowned upon, it gives you the flexibility you need, and you can always create views using dynamic SQL to pivot the data out again. Just make sure you always use LEFT JOINs and FKs when creating these views, so that the Query Optimizer can do its job better.
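As a sketch, using the custom_exam_data naming from the question (a production version would generate the CASE branches with dynamic SQL from the attribute definitions):

CREATE VIEW dbo.custom_exam_pivot AS
SELECT exam_id,
       -- one column per known custom field id
       MAX(CASE WHEN custom_field_id = 1 THEN float_value END)  AS custom_field_1,
       MAX(CASE WHEN custom_field_id = 2 THEN string_value END) AS custom_field_2
FROM dbo.custom_exam_data
GROUP BY exam_id;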
I have seen a use of your friend's idea in a commercial accounting package. The table was split into two: the first contained fields solely defined by the system, the second contained fields like USER_STRING1, USER_STRING2, USER_FLOAT1, etc. The tables were linked by identity value (when a record is inserted into the main table, a record with the same identity is inserted into the second one). Each table that needed user fields was split like that.
Well, whenever I need to store some unknown type in a database field, I usually store it as String, serializing it as needed, and also store the type of the data.
This way, you can have any kind of data, working with any type of database.
I would be inclined to store the measurement in the database as a string (varchar) with another column identifying the measurement type. My reasoning is that it will presumably come from the UI as a string, and casting to any other datatype may introduce corruption before the user input gets stored.
The downside is that when you go to filter result sets by some measurement metric you will still have to perform a cast, but at least the storage and persistence mechanism is not introducing corruption.
I can't tell you the best way but I can tell you how Drupal achieves a sort of schemaless structure while still using the standard RDBMSs available today.
The general idea is that there's a schema table with a list of fields. Each row really only has two columns, the 'table':String column and the 'column':String column. For each of these columns it actually defines a whole table with just an id and the actual data for that column.
The trick is that when you are working with the data, it's never more than one join away from the bundle table that lists all the possible columns, so you don't lose as much speed as you might otherwise think. This will also allow you to expand much further than just a few medical companies, unlike the custom_ prefix you were proposing.
MySQL is very fast at returning row data for short rows with few columns. In this way this scheme ends up fairly quick while allowing you lots of flexibility.
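A rough sketch of that layout (names are illustrative, not Drupal's actual schema):

-- One row per dynamically defined column
CREATE TABLE field_schema (
    table_name VARCHAR(64) NOT NULL,
    column_name VARCHAR(64) NOT NULL,
    PRIMARY KEY (table_name, column_name)
);

-- One narrow table per defined column, e.g. for a "temperature" field
CREATE TABLE field_data_temperature (
    entity_id INT PRIMARY KEY,
    value FLOAT
);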
As to search, my suggestion would be to index the page content instead of the database content. Use Solr to parse through rendered pages and hold links to the actual page instead of trying to search through the database using clever SQL.
Define two new tables: custom_exam_schema and custom_exam_data.
custom_exam_data has an exam_id column, plus an additional column for every custom attribute.
custom_exam_schema would have a row to describe how to interpret each of the columns of the custom_exam_data table. It would have columns like name, type, minValue, maxValue, etc.
So, for example, to create a custom field to track the number of fingers a person has, you would add ('fingerCount', 'number', 0, 10) to custom_exam_schema and then add a column named fingerCount to the custom_exam_data table.
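In SQL, that set-up step might be sketched as:

INSERT INTO custom_exam_schema (name, type, minValue, maxValue)
VALUES ('fingerCount', 'number', 0, 10);

ALTER TABLE custom_exam_data ADD fingerCount INT NULL;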
Someone might say it's bad to change the database schema at run time, but I'd argue that configuring these custom fields is part of set up and won't happen too often. Still, this method lets you handle changes at any time and doesn't risk messing around with your core table schemas.
Let's say that your friend's database has to store data values from multiple sources, such as demographic values, diagnoses, interventions, physiognomic values, physiologic exam values, hospitalisation values, etc.
He might as well have to define choices. Let's say his database is missing race, and the unit staff need the race of the patient (some diseases are more likely in certain races); they might want to use a drop-down with several choices.
I would propose using another table to hold these choices - call it a 'Custom_field_choices' table, which at some point is exactly the same thing under a different name.
Considering that the database:
- needs to be flexible
- needs data from multiple tables to be addable and customizable
- should keep the integrity of its main structure for distribution and uniformity purposes
- needs data to have limits, alarms and warnings
- needs data to have units (10 kg or 10 pounds?)
- needs data to allow a selection of choices
- needs data to carry different rights (from simple user to admin)
- may need this data to generate reports without modifying the code (automation)
- may need this data for cross-reference analysis within the system without modifying the code
the custom table would be my solution; modifying each table would end up being too risky.
I would store those custom fields in a table where each record (dataType, dataValue, dataUnit) occupies one row. So there would be a one-to-many relation from one sample to the data. You can also create a table to record all the kinds of custom types you would use. For example:
create table DataType
(
    id int primary key,
    name varchar(100) not null unique,
    description text,
    uri varchar(255) -- can be used for an ontology
);

create table DataRecord
(
    id int primary key,
    sample_id int not null,   -- reference to the sample
    dataType_id int not null, -- references DataType
    value varchar(100),       -- the value as a string
    unit varchar(50)          -- g, mg/ml, etc.; this could also be a link to a table describing the units, just like DataType
);