If I'm going to write a whole SQL script to create a database with tables (that has foreign keys) should I write the dependent tables first?
You have some options:
You can create all the tables first, and then use ALTER TABLE to add the Foreign Keys.
You can create the one to many relationships as the tables are created. In that case, the order of table creation will matter.
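A rough sketch of both options, using placeholder Customer/Orders tables (the names and columns are invented for illustration):

-- Option 1: create the tables in any order, then add the foreign keys afterwards
CREATE TABLE Customer (
    CustomerID INT PRIMARY KEY,
    Name       VARCHAR(100) NOT NULL
);
CREATE TABLE Orders (
    OrderID    INT PRIMARY KEY,
    CustomerID INT NOT NULL
);
ALTER TABLE Orders
    ADD CONSTRAINT FK_Orders_Customer
    FOREIGN KEY (CustomerID) REFERENCES Customer (CustomerID);

-- Option 2 (alternative): declare the foreign key inline;
-- Customer must then already exist when Orders is created
CREATE TABLE Orders (
    OrderID    INT PRIMARY KEY,
    CustomerID INT NOT NULL REFERENCES Customer (CustomerID)
);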
When you create such DBs you (more often than not) seed the tables with data as well.
Depending on how much data you insert, you may want to make a decision to either INSERT data first, or to enforce RI first. If you have small tables, the RI checks don't consume too many resources. If you have large tables, then you may want to first insert the data and then implement the RI - that way the check is not done one row at a time, but at one time for all rows. Since you're seeding the tables, you know your data - presumably you'll do clean inserts so as to not fail the downstream RI check.
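As a sketch of the "insert first, enforce RI afterwards" approach - the syntax here is SQL Server flavored and the table names are the same placeholders as above:

-- load the data while no foreign key is in place
INSERT INTO Orders (OrderID, CustomerID) VALUES (1, 1);
-- ... remaining bulk inserts ...

-- then add the constraint; WITH CHECK validates all existing rows in one pass
ALTER TABLE Orders WITH CHECK
    ADD CONSTRAINT FK_Orders_Customer
    FOREIGN KEY (CustomerID) REFERENCES Customer (CustomerID);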
I have different tables in my scheme with different columns, but I want to store data of when was the table modified or when was the data stored, so I added some columns to specify that.
I realized that I had to add the same "modification_date" and "modification_time" columns to all my tables, so I thought about making a new table called DATA_INFO so I won't need to do so, but every table has a different PRIMARY KEY and I don't know which one to add as FOREIGN KEY to the DATA_INFO table.
I don't know whether I have to add all of them as foreign keys, or whether there is another way to do what I need.
It's better to have the same "modification_datetime" column in all tables, rather than trying to keep that data in a central table.
That's what we have done at every shop I've worked in.
I want to emphasize that a separate table is not reasonable for this purpose. The lack of an obvious foreign key is a hint.
Unlike Tab Allerman, I find that the tables I create are much less likely to be updated, so I have three additional columns on most tables:
CreatedBy -- the user who created the row
CreatedAt -- when the row was created
CreatedOn -- the system where the row was created
The most important point is that this information can -- in many databases -- be implemented using default values rather than triggers. That is a big advantage of working within a single row. The fewer triggers, the better.
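For example, in SQL Server these can be plain default values (the Widget table here is just a placeholder):

CREATE TABLE Widget (
    WidgetID  INT IDENTITY PRIMARY KEY,
    Name      VARCHAR(100) NOT NULL,
    CreatedBy VARCHAR(128) NOT NULL DEFAULT SUSER_SNAME(),  -- the user who created the row
    CreatedAt DATETIME     NOT NULL DEFAULT GETDATE(),      -- when the row was created
    CreatedOn VARCHAR(128) NOT NULL DEFAULT @@SERVERNAME    -- the system where the row was created
);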
When I create a view I can base it on multiple columns from different tables.
When I want to create a lookup table I need information from one table, for example the foreign key of an order table, to get customer details from another table. I can create a view, giving it parameters to make sure it will get all the data that I need. I could also - from what I have been reading - make a lookup table. What is the difference in this case, and when should I choose a lookup table? I hope this isn't a bad question, I'm not very into DBs yet ;).
Creating a view gives you a "live" representation of the data as it is at the time of querying. This comes at the cost of higher load on the server, because it has to determine the values for every query.
This can be expensive, depending on table sizes, database implementations and the complexity of the view definition.
A lookup table, on the other hand, is usually filled "manually", i.e. not every query against it will cause an expensive operation to fetch values from multiple tables. Instead, your program has to take care of updating the lookup table should the underlying data change.
Usually lookup tables lend themselves to things that change seldom but are read often. Views, on the other hand - while more expensive to execute - always show the current data.
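A sketch of the "filled manually" part - the order_lookup/orders/customers names and columns are made up, and your program (or a scheduled job) would rerun the refresh whenever the source data changes:

-- the denormalized lookup table is just an ordinary table, so reads against it stay cheap
CREATE TABLE order_lookup (
    order_id      INT PRIMARY KEY,
    customer_name VARCHAR(100),
    order_total   DECIMAL(10,2)
);

-- refresh it from the source tables; until the next refresh it can go stale
DELETE FROM order_lookup;
INSERT INTO order_lookup (order_id, customer_name, order_total)
SELECT o.order_id, c.name, o.total
FROM orders o
JOIN customers c ON c.customer_id = o.customer_id;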
I think your usage of "Lookup Table" is slightly awry. In normal parlance a lookup table is a code or reference data table. It might consist of a CODE and a DESCRIPTION or a code expansion. The purpose of such tables is to provide a list of permitted values for restricted columns, things like CUSTOMER_TYPE or PRIORITY_CODE. This category of table is often referred to as "standing data" because it changes very rarely if at all. The value of defining this data in Lookup tables is that they can be used in foreign keys and to populate Dropdowns and Lists Of Values.
What you are describing is a slightly different scenario:
"I need information from one table, for example the foreign key of an order table, to get customer details from another table"
Both these tables are application data tables. Customer and Order records are dynamic. Now it is obviously valid to retrieve additional data from the Customer table to display alongside the Order data, and in that sense Customer is a "lookup table". More pertinently it is the parent table of Order, because it has the primary key referenced by the foreign key on Order.
By all means build a view to capture the joining logic between Order and Customer. Such views can be quite helpful when building an application that uses the same joined tables in several places.
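Something like this, for example - the column names are assumed for illustration:

CREATE VIEW order_with_customer AS
SELECT o.order_id,
       o.order_date,
       c.customer_id,
       c.name,
       c.city
FROM   orders o
JOIN   customers c ON c.customer_id = o.customer_id;

-- the application can then query the view wherever it needs the joined data
SELECT * FROM order_with_customer WHERE order_id = 42;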
Here's an example of a lookup table. We have a system that tracks Jurors; one of the tables is JurorStatus. This table contains all the valid StatusCodes for Jurors:
Code: Value
WS : Will Serve
PP : Postponed
EM : Excuse Military
IF : Ineligible Felon
This is a lookup table for the valid codes.
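In DDL terms it is just a small parent table that the main table references - the Juror table and its StatusCode column are assumed here:

CREATE TABLE JurorStatus (
    StatusCode  CHAR(2)     PRIMARY KEY,
    Description VARCHAR(50) NOT NULL
);

INSERT INTO JurorStatus (StatusCode, Description) VALUES ('WS', 'Will Serve');
INSERT INTO JurorStatus (StatusCode, Description) VALUES ('PP', 'Postponed');
INSERT INTO JurorStatus (StatusCode, Description) VALUES ('EM', 'Excuse Military');
INSERT INTO JurorStatus (StatusCode, Description) VALUES ('IF', 'Ineligible Felon');

-- the foreign key restricts Juror.StatusCode to the permitted values
ALTER TABLE Juror
    ADD CONSTRAINT FK_Juror_JurorStatus
    FOREIGN KEY (StatusCode) REFERENCES JurorStatus (StatusCode);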
A view is like a query.
Read this tutorial and you may find helpful info on when a lookup table is needed:
SQL: Creating a Lookup Table
Just learn to write SQL queries to get exactly what you need. No need to create a view! Views are not good to use in many instances, especially if you start to base them on other views, which will kill performance. Do not use views just as shorthand for query writing.
I have two databases in SQL Server, and there is one table common to both: an important, big table that holds the foreign key relationships with other tables. The problem is that the table is in DatabaseA, and I need to reference it with foreign keys from DatabaseB.
I know SQL Server doesn't support cross-database referential integrity, so what's the best way to achieve this? I am thinking of combining the two databases into a single database - it wouldn't matter aside from the increase in complexity.
Any suggestions?
I would avoid doing this if I could - can you just keep both tables in one database and use an FK?
Parent and Child Tables Are in Different Databases.
Although you cannot use a foreign key in this situation, there are workarounds – you can use either triggers or UDFs wrapped in check constraints. Either way, your data integrity is not completely watertight: if the database with your parent table crashes and you restore it from a backup, you may easily end up with orphans.
Parent-Child Relationship Is Enforced by Triggers.
There are quite a few situations when triggers do not fire, such as:
· A table is dropped.
· A table is truncated.
· Settings for nested and/or recursive triggers prevent a trigger from firing.
Also a trigger may be just incorrect. Either way, you may end up with orphans in your database.
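A sketch of the UDF-wrapped-in-a-check-constraint workaround in SQL Server - the database, table, and column names (DatabaseA, dbo.BigTable, BigTableID, dbo.ChildTable) are invented, and as noted above it is still not watertight:

-- in DatabaseB: a function that checks whether the parent row exists in DatabaseA
CREATE FUNCTION dbo.BigTableRowExists (@id INT)
RETURNS BIT
AS
BEGIN
    IF EXISTS (SELECT 1 FROM DatabaseA.dbo.BigTable WHERE BigTableID = @id)
        RETURN 1;
    RETURN 0;
END;
GO

-- the check constraint then stands in for the cross-database foreign key
-- (the IS NULL part mimics a real FK, which allows NULL values)
ALTER TABLE dbo.ChildTable
    ADD CONSTRAINT CK_ChildTable_BigTable
    CHECK (BigTableID IS NULL OR dbo.BigTableRowExists(BigTableID) = 1);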
Here's an article on how to use the SSIS Import / Export wizard:
http://www.databasejournal.com/features/mssql/article.php/3580216/SQL-Server-2005-Import--Export-Wizard.htm
The easiest way to do this is just to export one database (I'd use the smallest of the two) to whatever format is the most convenient for you, and then import it into the other. As long as the table names are all different, this shouldn't present any problem.
Triggers can be written to enforce referential integrity across different databases.
I have a table that contains a few columns and then 2 final (nullable) columns which are varbinary (actually, they are SQL 2008 geography types, but I want to keep this post database agnostic).
I've hit around 500 MB with around 200K rows. The varbinary is the problem - and I need the data.
So, I was wondering if it's bad if I do the following:
Create a separate FILEGROUP: SpatialData.mdf
Create a new table, assigned to that new filegroup.
Move all the spatial data (read: last two fields) out of the original table and into the new table. The new table has a foreign key against the original table.
Create a view representing both tables.
Now, the view will be a left outer join because the relationship is: the new table has a zero or one row relationship to the original table.
E.g.
Original Table
FooId INT PK NOT NULL IDENTITY
Blah VARCHAR(..) NOT NULL
Boo WHATEVER NOT NULL
New Table
FooID PK FK NOT NULL
Spatial_A VARBINARY(MAX)/GEOGRAPHY
Spatial_B VARBINARY(MAX)/GEOGRAPHY
The reason why I want to know if this is bad is because of the view and how the view is doing a join on the spatial table. I'll be using the view a lot. Currently, I'm just doing queries against the original table (because the new table doesn't exist just yet). By adding this join and the PK/FK relationship, will this impact performance?
Why split the data? I need to download the live DB to our dev servers now and then. We don't really care too much about those two spatial fields, so not having them is fine. Therefore, the size of the database to download will be much smaller.
Thoughts?
Instead of creating a second table, joining, and creating a view, a better solution that is possible with SQL Server 2005/2008 is to use table partitioning. To my recollection, you can vertically partition a table, and place some columns (i.e. your geospatial columns) in one file group, while putting the rest in another file group. SQL Server will handle the rest for you, you don't need to bother with a join, and you don't need a view.
The method that you've described is actually fairly common in my experience. Technically, if you were to normalize your database to the fullest extent you would have a lot of tables like that since one of the (usually not used) steps in normalization includes making sure that no columns have NULL values.
In practice it isn't usually carried out to that extent, but for a column (or columns) that is sparsely populated it's not a bad idea to separate it out. As long as the tables share the same primary key (which will of course be indexed), performance shouldn't be a problem.
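As a sketch of that layout using the asker's column names - the Foo/FooSpatial/FooWithSpatial names are made up, and GEOGRAPHY could just as well be VARBINARY(MAX):

-- the sparse spatial columns live in their own table, sharing the primary key;
-- an ON <filegroup> clause could place it on the separate filegroup
CREATE TABLE FooSpatial (
    FooId     INT PRIMARY KEY,
    Spatial_A GEOGRAPHY,
    Spatial_B GEOGRAPHY,
    CONSTRAINT FK_FooSpatial_Foo FOREIGN KEY (FooId) REFERENCES Foo (FooId)
);

-- the view hides the split; the LEFT JOIN keeps rows that have no spatial data
CREATE VIEW FooWithSpatial AS
SELECT f.FooId, f.Blah, f.Boo, s.Spatial_A, s.Spatial_B
FROM   Foo f
LEFT JOIN FooSpatial s ON s.FooId = f.FooId;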
I have a fairly huge database with a master table with a single column GUID (custom GUID like algorithm) as primary key and 8 child tables that have foreign key relationships with this GUID column. All the tables have approximately 3-8 million records. None of these tables have any BLOB/CLOB/TEXT or any other fancy data types just normal numbers, varchars, dates, and timestamps (about 15-45 columns in each table). No partitions or other indexes other than the primary and foreign keys.
Now, the custom GUID algorithm has changed and though there are no collisions I would like to migrate all the old data to use GUIDs generated using the new algorithm. No other columns need to be changed. Number one priority is data integrity and performance is secondary.
Some of the possible solutions that I could think of were (as you will probably notice they all revolve around one idea only)
add new column ngu_id and populate with new gu_id; disable constraints; update child tables with ngu_id as gu_id; rename ngu_id -> gu_id; re-enable constraints
read one master record and its dependent child records from child tables; insert into the same table with new gu_id; remove all records with old gu_ids
drop constraints; add a trigger to the master table such that all the child tables are updated; start updating old gu_ids with new gu_ids; re-enable constraints
add a trigger to the master table such that all the child tables are updated; start updating old gu_ids with new gu_ids
create new column ngu_ids on all master and child tables; create foreign key constraints on ngu_id columns; add update trigger to the master table to cascade values to child tables; insert new gu_id values into ngu_id column; remove old foreign key constraints based on gu_id; remove gu_id column and rename ngu_id to gu_id; recreate constraints if necessary;
use on update cascade if available?
My questions are:
Is there a better way? (I can't bury my head in the sand, I've got to do this)
What is the most suitable way to do this? (I have to do this in Oracle, SQL Server and MySQL 4, so vendor-specific hacks are welcome)
What are the typical points of failure for such an exercise and how to minimize them?
If you are with me so far, thank you and hope you can help :)
Your ideas should work. The first is probably the approach I would use. Some cautions and things to think about when doing this:
Do not do this unless you have a current backup.
I would leave both values in the main table. That way if you ever have to figure out from some old paperwork which record you need to access, you can do it.
Take the database down for maintenance while you do this and put it in single user mode. The very last thing you need while doing something like this is a user attempting to make changes while you are in midstream. Of course, the first action once in single user mode is the above-mentioned backup. You probably should schedule the downtime for some time when the usage is lightest.
Test on dev first! This should also give you an idea as to how long you will need to close production for. Also, you can try several methods to see which is the fastest.
Be sure to communicate in advance to users that the database will be going down at the scheduled time for maintenance and when they can expect to have it be available again. Make sure the timing is ok. It really makes people mad when they plan to stay late to run the quarterly reports and the database is not available and they didn't know it.
There is a fairly large number of records, so you might want to run the updates of the child tables in batches (one reason not to use cascading updates). This can be faster than trying to update 5 million records with one update. However, don't try to update one record at a time or you will still be here next year doing this task.
Drop indexes on the GUID field in all the tables and recreate after you are done. This should improve the performance of the change.
Create a new table with the old and the new pk values in it. Place unique constraints on both columns to ensure you haven't broken anything so far.
Disable constraints.
Run updates against all the tables to modify the old value to the new value.
Enable the PK, then enable the FKs.
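A rough, mostly vendor-neutral sketch of those steps - the GuidMap mapping table, the master/child table names, and the gu_id column are all assumed, and the constraint-handling syntax differs per RDBMS:

-- 1. the mapping table, with unique constraints on both columns
CREATE TABLE GuidMap (
    old_guid VARCHAR(36) NOT NULL UNIQUE,
    new_guid VARCHAR(36) NOT NULL UNIQUE
);

-- 2. disable (or drop) the foreign key constraints here

-- 3. rewrite the master table and each child table from the mapping
--    (every existing gu_id must have a row in GuidMap)
UPDATE master_table
SET gu_id = (SELECT m.new_guid FROM GuidMap m WHERE m.old_guid = master_table.gu_id);

UPDATE child_table_1
SET gu_id = (SELECT m.new_guid FROM GuidMap m WHERE m.old_guid = child_table_1.gu_id);
-- ...repeat for the remaining child tables...

-- 4. re-enable the primary key, then the foreign keys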
It's difficult to say what the "best" or "most suitable" approach is as you have not described what you are looking for in a solution. For example, do the tables need to be available for query while you are migrating to new IDs? Do they need to be available for concurrent modification? Is it important to complete the migration as fast as possible? Is it important to minimize the space used for migration?
Having said that, I would prefer #1 over your other ideas, assuming they all met your requirements.
Anything that involves a trigger to update the child tables seems error-prone and overcomplicated, and likely will not perform as well as #1.
Is it safe to assume that new IDs will never collide with old IDs? If not, solutions based on updating the IDs one at a time will have to worry about collisions -- this will get messy in a hurry.
Have you considered using CREATE TABLE AS SELECT (CTAS) to populate new tables with the new IDs? You'll be making a copy of your existing tables and this will require additional space, however it is likely to be faster than updating the existing tables in place. The idea is: (i) use CTAS to create new tables with new IDs in place of the old, (ii) create indexes and constraints as appropriate on the new tables, (iii) drop the old tables, (iv) rename the new tables to the old names.
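A sketch of the CTAS route in Oracle, assuming a GuidMap(old_guid, new_guid) mapping table and made-up table/column names:

-- (i) build the replacement table with the new IDs already substituted
CREATE TABLE master_table_new AS
SELECT m.new_guid AS gu_id, t.col1, t.col2   -- list the real columns explicitly
FROM   master_table t
JOIN   GuidMap m ON m.old_guid = t.gu_id;

-- (ii) add the primary key, indexes and other constraints to master_table_new
-- (iii) repeat (i) and (ii) for each child table, then drop the old tables
-- (iv) rename the new tables into place
ALTER TABLE master_table_new RENAME TO master_table;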
In fact, it depends on your RDBMS.
Using Oracle, the simplest choice is to make all of the foreign key constraints deferrable (checked at commit), perform the updates in a single transaction, then commit.
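A sketch of what that looks like in Oracle - the table, column, and constraint names are invented, GuidMap is the same hypothetical mapping table as above, and note that deferrability has to be declared when the constraint is created (so an existing non-deferrable FK must be dropped and recreated):

ALTER TABLE child_table DROP CONSTRAINT fk_child_master;
ALTER TABLE child_table ADD CONSTRAINT fk_child_master
    FOREIGN KEY (gu_id) REFERENCES master_table (gu_id)
    DEFERRABLE INITIALLY IMMEDIATE;

SET CONSTRAINTS ALL DEFERRED;    -- checks are postponed until COMMIT

UPDATE master_table mt
SET    gu_id = (SELECT m.new_guid FROM GuidMap m WHERE m.old_guid = mt.gu_id);

UPDATE child_table ct
SET    gu_id = (SELECT m.new_guid FROM GuidMap m WHERE m.old_guid = ct.gu_id);

COMMIT;                          -- all deferred constraints are checked here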